Lab 3 - Advanced Service Mesh Development

In this lab, we will explore advanced service mesh use cases. The lab showcases the following features:

  • Fault Injection

  • Traffic Shifting

  • Circuit Breaking

  • Rate Limiting

These features are important for any distributed application built on top of Kubernetes/OpenShift. We will use the Coolstore microservices (Catalog and Inventory) that you developed and deployed to the OpenShift cluster in Module 1 and/or Module 2.

Warning

If you have already deployed the inventory and catalog microservices from Module 1, you can skip this step and move to section "1. Enabling automatic sidecar injection".

If you haven’t done Module 1 or Module 2 today, or you didn’t quite complete them, deploy the Coolstore application and microservices by executing the following shell scripts in the VS Code Terminal.

The following script deploys both the inventory and catalog services:

sh $PROJECT_SOURCE/istio/scripts/deploy-inventory.sh {{ USER_ID }}  && \
sh $PROJECT_SOURCE/istio/scripts/deploy-catalog.sh {{ USER_ID }}
Warning

Creating a new build image can sometimes take a while due to network latency in OpenShift. If the catalog service fails to deploy with Error from server (NotFound): services "catalog-springboot" not found, try again with a delay by running the following command:

sh $PROJECT_SOURCE/istio/scripts/deploy-inventory.sh {{ USER_ID }}  && \
sh $PROJECT_SOURCE/istio/scripts/deploy-catalog.sh {{ USER_ID }} 3m

Wait for the commands to complete. This will build and deploy the inventory and catalog components into their own namespaces. They won’t automatically get Istio sidecar proxy containers yet, but you’ll add that in the next step!

1. Enabling automatic sidecar injection

Red Hat OpenShift Service Mesh relies on a proxy sidecar within the application’s pod to provide Service Mesh capabilities to the application. You can enable automatic sidecar injection or manage it manually. Red Hat recommends automatic injection using the annotation with no need to label projects. This ensures that your application contains the appropriate configuration for the Service Mesh upon deployment. This method requires fewer privileges and does not conflict with other OpenShift capabilities such as builder pods.

Note

The upstream version of Istio injects the sidecar by default if you have labeled the project. Red Hat OpenShift Service Mesh requires you to opt in to having the sidecar automatically injected to a deployment, so you are not required to label the project. This avoids injecting a sidecar if it is not wanted (for example, in build or deploy pods).

The webhook checks the configuration of pods deploying into all projects to see if they are opting in to injection with the appropriate annotation.
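For reference, the opt-in is just an annotation on the pod template. Here is a minimal sketch (the Deployment name and image are hypothetical; you will patch your real services in a moment):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app   # hypothetical name, for illustration only
spec:
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
      annotations:
        sidecar.istio.io/inject: "true"   # the opt-in flag the webhook checks for
    spec:
      containers:
        - name: example-app
          image: quay.io/example/app:latest   # hypothetical image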

Confirm inventory and catalog services are running

First, confirm that the catalog service and its associated database are running with the following command:

oc get pods -n {{USER_ID}}-catalog --field-selector status.phase=Running

You should see two pods running (one for the service, and one for its database):

NAME                         READY   STATUS    RESTARTS   AGE
catalog-database-1-xnrmz     1/1     Running   0          2m13s
catalog-springboot-1-sqbfq   1/1     Running   0          59s

Do the same for inventory:

oc get pods -n {{USER_ID}}-inventory --field-selector status.phase=Running

You should again see two pods running (one for the service, and one for its database):

NAME                         READY   STATUS    RESTARTS   AGE
inventory-1-hx4nk            1/1     Running   0          3m44s
inventory-database-1-rh59m   1/1     Running   0          4m26s

Add sidecars

By default, OpenShift Service Mesh requires applications to "opt in" to being part of a service mesh. To opt in an app, you add an annotation that flags Istio to attach a sidecar and bring the app into the mesh.

Rather than manually editing each deployment to add the necessary annotations, run the following commands, which will trigger a sidecar to be injected into our inventory and catalog microservices, as well as their associated databases.

First, do the databases and wait for them to be re-deployed:

oc patch deployment/inventory-database -n {{USER_ID}}-inventory --type='json' -p '[{"op":"add","path":"/spec/template/metadata/annotations", "value": {"sidecar.istio.io/inject": "'"true"'"}}]' && \
oc patch dc/catalog-database -n {{USER_ID}}-catalog --type='json' -p '[{"op":"add","path":"/spec/template/metadata/annotations", "value": {"sidecar.istio.io/inject": "'"true"'"}}]' && \
oc rollout status -w deployment/inventory-database -n {{USER_ID}}-inventory && \
oc rollout status -w dc/catalog-database -n {{USER_ID}}-catalog

This should take about 1 minute to finish.

Note

The complex-looking command above uses oc patch to programmatically edit the Kubernetes objects. You could just as easily have edited the YAML in an editor, but YAML can sometimes be tricky, so we made it easy for you!

Next, let’s add sidecars to our services and wait for them to be re-deployed:

oc patch deployment/inventory -n {{USER_ID}}-inventory --type='json' -p '[{"op":"add","path":"/spec/template/metadata/annotations", "value": {"sidecar.istio.io/inject": "'"true"'"}}]' && \
oc patch dc/catalog-springboot -n {{USER_ID}}-catalog --type='json' -p '[{"op":"add","path":"/spec/template/metadata/annotations", "value": {"sidecar.istio.io/inject": "'"true"'"}}]' && \
oc rollout latest dc/catalog-springboot -n {{USER_ID}}-catalog && \
oc rollout status -w deployment/inventory -n {{USER_ID}}-inventory && \
oc rollout status -w dc/catalog-springboot -n {{USER_ID}}-catalog

This should also take about 1 minute to finish. When it’s done, verify that the inventory pods each show two containers ready (2/2 in the READY column) with this command:

oc get pods -n {{USER_ID}}-inventory --field-selector="status.phase=Running"

It should show:

NAME                         READY   STATUS    RESTARTS      AGE
inventory-2-nx8qp            2/2     Running   2 (33s ago)   40s
inventory-database-2-jfw99   2/2     Running   0             62s

Do the same for the catalog and confirm its pods also show 2/2 in the READY column:

oc get pods -n {{USER_ID}}-catalog --field-selector="status.phase=Running"
NAME                         READY   STATUS    RESTARTS   AGE
catalog-database-2-8q9ws     2/2     Running   0          81s
catalog-springboot-1-sqbfq   2/2     Running   0          2m52s
Warning

It may take a minute or two before the inventory and catalog services are recognized and brought into the mesh.

Next, let’s create a VirtualService to send incoming traffic to the catalog. In VS Code, open the empty catalog-default.yaml file in the catalog/rules directory and copy the following VirtualService into it:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalog-default
spec:
  hosts:
  - "istio-ingressgateway-{{ USER_ID }}-istio-system.{{ ROUTE_SUBDOMAIN }}"
  gateways:
  - {{USER_ID}}-bookinfo/bookinfo-gateway
  http:
    - match:
        - uri:
            exact: /services/products
        - uri:
            exact: /services/product
        - uri:
            exact: /
      route:
        - destination:
            host: catalog-springboot
            port:
              number: 8080

Execute the following command in VS Code Terminal:

oc create -f $PROJECT_SOURCE/catalog/rules/catalog-default.yaml -n {{ USER_ID }}-catalog
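The VirtualService above references the bookinfo-gateway Gateway, which is assumed to already exist from a previous lab. If you want to confirm it is there, you can list it:

oc get gateway bookinfo-gateway -n {{ USER_ID }}-bookinfo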

Access the http://istio-ingressgateway-{{ USER_ID }}-istio-system.{{ ROUTE_SUBDOMAIN }}[Catalog Service Page^] and ensure it looks something like this:

catalog
Note

It takes a few seconds to reconcile the Istio ingress with the gateway and virtual service. Leave this page open; the Catalog UI generates traffic between the services (a request every 2 seconds), which is useful for testing.

Ensure that the sidecars were injected into each pod. Access the https://kiali-{{ USER_ID }}-istio-system.{{ ROUTE_SUBDOMAIN }}/console/graph/namespaces/?edges=noEdgeLabels&graphType=versionedApp&namespaces={{ USER_ID }}-catalog%2C{{ USER_ID }}-inventory&unusedNodes=false&injectServiceNodes=true&duration=60&pi=15000&layout=dagre[Kiali Graph page^], verify that {{ USER_ID }}-inventory and {{ USER_ID }}-catalog are the selected Namespaces, then enable Traffic Animation in the Display drop-down to see animated traffic flowing from the Catalog service to the Inventory service:

istio

You can see the incoming traffic to the catalog service, along with traffic going to both the catalog and inventory databases along each branch. This mirrors what we would expect: when you access the catalog frontend, a call is made to the catalog backend, which in turn accesses the inventory service, combines its data with catalog data, and returns the result for display.

Note

You may occasionally see unknown or PassthroughCluster elements in the graph. These are due to the Istio configuration changes we are making in real time, and they would disappear if you waited long enough; you can ignore them for this lab.

2. Fault Injection

This step will walk you through how to use Fault Injection to test the end-to-end failure recovery capability of the application as a whole. An incorrect configuration of the failure recovery policies could result in unavailability of critical services. Examples of incorrect configurations include incompatible or restrictive timeouts across service calls.

Istio provides a set of failure recovery features that can be taken advantage of by the services in an application. Features include:

  • Timeouts to minimize wait times for slow services

  • Bounded retries with timeout budgets and variable jitter between retries

  • Limits on number of concurrent connections and requests to upstream services

  • Active (periodic) health checks on each member of the load balancing pool

  • Fine-grained circuit breakers (passive health checks) – applied per instance in the load balancing pool

These features can be dynamically configured at runtime through Istio’s traffic management rules.

A combination of active and passive health checks minimizes the chances of accessing an unhealthy service. When combined with platform-level health checks (such as readiness/liveness probes in OpenShift), applications can ensure that unhealthy pods/containers/VMs can be quickly weeded out of the service mesh, minimizing the request failures and impact on latency.

Together, these features enable the service mesh to tolerate failing nodes and prevent localized failures from cascading into instability across other nodes.

Istio enables protocol-specific fault injection into the network, rather than killing pods or delaying/corrupting packets at the TCP layer.

Two types of faults can be injected:

  • Delays are timing failures. They mimic increased network latency or an overloaded upstream service.

  • Aborts are crash failures. They mimic failures in upstream services. Aborts usually manifest in the form of HTTP error codes or TCP connection failures.

To test our application microservices for resiliency, we will inject a failure in 50% of the requests to the inventory service, causing the service to appear to fail (and return HTTP 5xx errors) half of the time.

Open the empty inventory-default.yaml file in the inventory/rules directory and copy the following into the file:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: inventory-default
spec:
  hosts:
  - "istio-ingressgateway-{{ USER_ID }}-istio-system.{{ ROUTE_SUBDOMAIN }}"
  gateways:
  - {{USER_ID}}-bookinfo/bookinfo-gateway
  http:
    - match:
        - uri:
            exact: /services/inventory
        - uri:
            exact: /
      route:
        - destination:
            host: inventory
            port:
              number: 80

Delete the VirtualService that routes gateway traffic to the catalog, which you set up earlier:

oc delete -f $PROJECT_SOURCE/catalog/rules/catalog-default.yaml -n {{ USER_ID }}-catalog

Create the new VirtualService to direct traffic to the inventory service by running the following command via VS Code Terminal:

oc create -f $PROJECT_SOURCE/inventory/rules/inventory-default.yaml -n {{ USER_ID }}-inventory

Now, test that the inventory service works correctly by accessing the http://istio-ingressgateway-{{ USER_ID }}-istio-system.{{ ROUTE_SUBDOMAIN }}[CoolStore Inventory page^]. If you still see the Coolstore Catalog, force-reload the page with CTRL+F5 (or Command+Shift+R on macOS) to see the Coolstore Inventory.

fault-injection

Let’s inject a failure (HTTP 500 status) in 50% of requests to the inventory microservice. Open the empty inventory-vs-fault.yaml file in the inventory/rules directory and copy the following code into it:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: inventory-fault
spec:
  hosts:
  - "istio-ingressgateway-{{ USER_ID }}-istio-system.{{ ROUTE_SUBDOMAIN }}"
  gateways:
  - {{USER_ID}}-bookinfo/bookinfo-gateway
  http:
    - fault:
         abort:
           httpStatus: 500
           percentage:
             value: 50
      route:
        - destination:
            host: inventory
            port:
              number: 80

Before creating the new inventory-fault VirtualService, we need to delete the existing inventory-default VirtualService. Run the following command via the VS Code Terminal:

oc delete virtualservice/inventory-default -n {{ USER_ID }}-inventory

Then create a new VirtualService with this command:

oc create -f $PROJECT_SOURCE/inventory/rules/inventory-vs-fault.yaml -n {{ USER_ID }}-inventory

Let’s find out if the fault injection works correctly by accessing the http://istio-ingressgateway-{{ USER_ID }}-istio-system.{{ ROUTE_SUBDOMAIN }}[CoolStore Inventory page^] once again. You will see the Status of CoolStore Inventory flip between DEAD and OK:

fault-injection
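If you prefer the terminal, you can also sample the failure rate with a quick loop (an optional sketch); roughly half of the responses should come back as 500:

for i in $(seq 1 10); do
  # print only the HTTP status code of each request
  curl -s -o /dev/null -w "%{http_code}\n" \
    http://istio-ingressgateway-{{ USER_ID }}-istio-system.{{ ROUTE_SUBDOMAIN }}/services/inventory
done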

Back on the https://kiali-{{ USER_ID }}-istio-system.{{ ROUTE_SUBDOMAIN }}/console/graph/namespaces/?edges=noEdgeLabels&graphType=versionedApp&namespaces={{ USER_ID }}-catalog%2C{{ USER_ID }}-inventory&unusedNodes=false&injectServiceNodes=true&duration=60&pi=15000&layout=dagre[Kiali Graph page^], you will see red traffic from istio-ingressgateway, with around 50% of requests displayed as 5xx under HTTP Traffic on the right side. It may not be exactly 50%, since some traffic comes from the catalog and the ingress gateway at the same time, but it will approach 50% over time.

Warning

Kiali "looks back" and records/displays the last minute of traffic, so if you’re quick you may see some of the prior traffic flows from earlier in the lab. Within 1 minute the graph should clear up and only show what you are looking for!

fault-injection

Let’s now add a 5 second delay for the inventory service.

Open the empty inventory-vs-fault-delay.yaml file in inventory/rules directory and copy the following code into it:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: inventory-fault-delay
spec:
  hosts:
  - "istio-ingressgateway-{{ USER_ID }}-istio-system.{{ ROUTE_SUBDOMAIN }}"
  gateways:
  - {{USER_ID}}-bookinfo/bookinfo-gateway
  http:
    - fault:
         delay:
           fixedDelay: 5s
           percentage:
             value: 100
      route:
        - destination:
            host: inventory
            port:
              number: 80

Delete the existing inventory-fault VirtualService in VS Code Terminal:

oc delete virtualservice/inventory-fault -n {{ USER_ID }}-inventory

Then create a new virtualservice:

oc create -f $PROJECT_SOURCE/inventory/rules/inventory-vs-fault-delay.yaml -n {{ USER_ID }}-inventory

Go to the Kiali Graph you opened earlier and you will see that the green traffic from istio-ingressgateway to the inventory service is delayed. Note that you need to check Traffic Animation in the Display drop-down.

Note

You may still see "red" traffic from our previous fault injections, but those will disappear after the 1 minute time window (the default lookback period) of the graph elapses.

fault-injection

Click on the "edge" (the line between istio-ingressgateway and inventory) and then scroll to the bottom of the right-side graph showing the HTTP Request Response Time. Hover over the black average data point to confirm that the average response time is about 5000ms (5 seconds) as expected:

delay

If the Inventory’s front page is set up to correctly handle delays, we expect it to load in approximately 5 seconds. To see the web page response times, open the Developer Tools in IE, Chrome, or Firefox (typically CTRL+SHIFT+I, or CMD+ALT+I on a Mac), select the Network tab, and reload the inventory web page.

You will see and feel that the webpage loads in about 5 seconds:

Delay
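You can also confirm the delay from the VS Code Terminal with curl's built-in timing (an optional check); the reported total should be just over 5 seconds:

curl -s -o /dev/null -w "total: %{time_total}s\n" \
  http://istio-ingressgateway-{{ USER_ID }}-istio-system.{{ ROUTE_SUBDOMAIN }}/services/inventory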

Before moving on to the next step, clean up the fault injection and restore the default virtual service using these commands in a Terminal:

oc delete virtualservice/inventory-fault-delay -n {{ USER_ID }}-inventory && \
oc create -f $PROJECT_SOURCE/inventory/rules/inventory-default.yaml -n {{ USER_ID }}-inventory

Also, close the browser tabs for the Inventory and Catalog services to avoid unnecessary load, and stop the endless for loop you started at the beginning of this lab by closing the VS Code Terminal window that was running it.

3. Enable Circuit Breaker

In this step, you will configure a circuit breaker to protect calls to the Inventory service. If the Inventory service gets overloaded due to call volume, Istio will limit future calls to the service instances to allow them to recover.

Circuit breaking is a critical component of distributed systems. It’s nearly always better to fail quickly and apply back pressure upstream as soon as possible. Istio enforces circuit breaking limits at the network level as opposed to having to configure and code each application independently.

Istio supports various types of conditions that can trigger a circuit break:

  • Cluster maximum connections: The maximum number of connections that Istio will establish to all hosts in a cluster.

  • Cluster maximum pending requests: The maximum number of requests that will be queued while waiting for a ready connection pool connection.

  • Cluster maximum requests: The maximum number of requests that can be outstanding to all hosts in a cluster at any given time. In practice this is applicable to HTTP/2 clusters since HTTP/1.1 clusters are governed by the maximum connections circuit breaker.

  • Cluster maximum active retries: The maximum number of retries that can be outstanding to all hosts in a cluster at any given time. In general Istio recommends aggressively circuit breaking retries so that retries for sporadic failures are allowed but the overall retry volume cannot explode and cause large scale cascading failure.

Note

HTTP/2 uses a single connection and never queues (it always multiplexes), so the maximum connections and maximum pending requests limits are not applicable.

Each circuit breaking limit is configurable and tracked on a per upstream cluster and per priority basis. This allows different components of the distributed system to be tuned independently and have different limits. See Envoy’s circuit breaker documentation for more details.

Let’s add a circuit breaker to the calls to the Inventory service. Instead of using a VirtualService object, circuit breakers in Istio are defined as DestinationRule objects. DestinationRule defines policies that apply to traffic intended for a service after routing has occurred. These rules specify configuration for load balancing, connection pool size from the sidecar, and outlier detection settings to detect and evict unhealthy hosts from the load balancing pool.

Open the empty inventory-cb.yaml file in inventory/rules directory and add this code to the file to enable circuit breaking when calling the Inventory service:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: inventory-cb
spec:
  host: inventory
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      # eject hosts that return 5xx errors from the pool for 15 minutes,
      # as described below
      consecutive5xxErrors: 1
      interval: 1s
      baseEjectionTime: 15m
      maxEjectionPercent: 100

Run the following command via the VS Code Terminal to create the rule:

oc create -f $PROJECT_SOURCE/inventory/rules/inventory-cb.yaml -n {{ USER_ID }}-inventory

We set the Inventory service’s maximum connections to 1 and maximum pending requests to 1. Thus, if we send more than 2 requests within a short period of time to the inventory service, 1 will go through, 1 will be pending, and any additional requests will be denied until the pending request is processed. The outlier detection settings also detect any host that returns a server error (HTTP 5xx) and eject that pod from the load balancing pool for 15 minutes. Check the Istio DestinationRule reference for more details on what each configuration parameter does.

4. Overload the service

We’ll use a utility called siege to send multiple concurrent requests to our application, and witness the circuit breaker kicking in and opening the circuit.

Execute this in the VS Code Terminal to simulate a number of users attempting to access the gateway URL simultaneously:

siege --verbose --time=1M --concurrent=10 'http://istio-ingressgateway-{{ USER_ID }}-istio-system.{{ ROUTE_SUBDOMAIN }}'

This will run for 1 minute, and you’ll likely see errors like [error] Failed to make an SSL connection: 5, which indicates that the circuit breaker is tripping and stopping the flood of requests from reaching the service.
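If siege is not available in your environment, a rough equivalent using curl and xargs (a sketch) will also trip the breaker:

# 100 requests from 10 parallel workers; expect a mix of 200s and 503s
seq 1 100 | xargs -P 10 -I{} curl -s -o /dev/null -w "%{http_code}\n" \
  'http://istio-ingressgateway-{{ USER_ID }}-istio-system.{{ ROUTE_SUBDOMAIN }}'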

To see this, open the https://grafana-{{ USER_ID }}-istio-system.{{ ROUTE_SUBDOMAIN }}/d/LJ_uJAvmk/istio-service-dashboard?orgId=1&refresh=10s&var-service=inventory.{{ USER_ID }}-inventory.svc.cluster.local&var-srcns=All&var-srcwl=All&var-dstns=All&var-dstwl=All[Istio Service Dashboard^] in Grafana and confirm that the Client Success Rate (non-5xx responses) of the inventory service is no longer at 100%:

Note

It may take 10-20 seconds before evidence of the circuit breaker is visible in the Grafana dashboard, due to the not-quite-realtime nature of Prometheus metrics, Grafana refresh periods, and general network latency. You can also re-run the siege command to force more failures.

circuit-breaker

That’s the circuit breaker in action, limiting the number of requests to the service. In practice your limits would be much higher.

You can also see the Circuit Breaker triggering HTTP 503 errors in the animation:

circuit-breaker

In practice, these 503 errors would trigger upstream fallbacks while the overloaded service is given a chance to recover.

Before we move on to the next step, clean up the existing DestinationRule and VirtualService and restore the default catalog routing with the following commands:

oc delete destinationrule/inventory-cb -n {{ USER_ID }}-inventory && \
oc delete virtualservice/inventory-default -n {{ USER_ID }}-inventory && \
oc create -f $PROJECT_SOURCE/catalog/rules/catalog-default.yaml -n {{ USER_ID }}-catalog

5. Enable Authentication using Single Sign-on

In this step, you will learn how to enable authentication and secure the Catalog endpoint. We will use JWT with Red Hat Single Sign-On, which is part of Red Hat Runtimes.


Let’s deploy Red Hat Single Sign-On (RH-SSO) that enables service authentication for traffic in the service mesh.

Red Hat Single Sign-On (RH-SSO) is based on the Keycloak project and enables you to secure your web applications by providing Web single sign-on (SSO) capabilities based on popular standards such as SAML 2.0, OpenID Connect and OAuth 2.0. The RH-SSO server can act as a SAML or OpenID Connect-based Identity Provider, mediating with your enterprise user directory or 3rd-party SSO provider for identity information and your applications via standards-based tokens. The major features include:

  • Authentication Server - Acts as a standalone SAML or OpenID Connect-based Identity Provider.

  • User Federation - Certified with LDAP servers and Microsoft Active Directory as sources for user information.

  • Identity Brokering - Integrates with 3rd-party Identity Providers including leading social networks as identity source.

  • REST APIs and Administration GUI - Specify user federation, role mapping, and client applications with easy-to-use Administration GUI and REST APIs.

We will deploy RH-SSO in a new project. Go to the {{ CONSOLE_URL }}/topology/ns/{{ USER_ID }}-catalog[Topology View^], click on Create Project:

rhsso

Type the following name then click on Create:

  • Name: {{ USER_ID}}-rhsso

rhsso

Click Start building your application in the Topology view:

create_new

In the search box, type in ccn, choose CCN + Red Hat Single Sign-On 7.4 on OpenJDK + PostgreSQL, and then click Instantiate Template.

rhsso

Enter the following values, leaving the others at their defaults, then click Create:

  • RH-SSO Administrator Username: admin

  • RH-SSO Administrator Password: admin

  • RH-SSO Realm: istio

  • RH-SSO Service Username: auth{{ USER_ID}}

  • RH-SSO Service Password: {{ OPENSHIFT_USER_PASSWORD }}

rhsso

Add the following labels and annotation in the VS Code Terminal:

oc project {{ USER_ID}}-rhsso && \
oc label dc/sso app.openshift.io/runtime=sso && \
oc label dc/sso-postgresql app.openshift.io/runtime=postgresql --overwrite && \
oc label dc/sso-postgresql app.kubernetes.io/part-of=sso --overwrite && \
oc label dc/sso app.kubernetes.io/part-of=sso --overwrite && \
oc annotate dc/sso-postgresql app.openshift.io/connects-to=sso --overwrite

Go back to the {{ CONSOLE_URL }}/topology/ns/{{ USER_ID }}-rhsso[Topology View^]:

sso

Once this finishes (it may take a minute or two), click on https://secure-sso-{{ USER_ID }}-rhsso.{{ ROUTE_SUBDOMAIN}}[Secure SSO Route^] to access the RH-SSO web console, shown below:

sso

Click on Administration Console to configure the Istio Realm, then input the username and password that you used earlier:

  • Username or email: admin

  • Password: admin

sso

You will see general information about the Istio Realm. Click on the Login tab, switch off Require SSL by setting it to none, then click Save.

sso
Note

Red Hat Single Sign-On generates a self-signed certificate the first time it runs. Note that self-signed certificates do not work for authentication by Istio, so we disable SSL for testing Istio authentication.

Next, create a new RH-SSO client, which represents trusted browser apps and web services in our Istio realm. Go to Clients in the left menu, then click Create.

sso

Input ccn-cli in the Client ID field and click Save.

sso

On the next screen, you will see details in the Settings tab. The only thing you need to do is input the Valid Redirect URIs that can be used after a successful login or logout:

http://istio-ingressgateway-{{ USER_ID }}-istio-system.{{ ROUTE_SUBDOMAIN }}/*
sso

Don’t forget to click Save!
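If you prefer the command line to the admin console, RH-SSO also ships the kcadm.sh admin client. The following is only a sketch of the equivalent client creation, assuming you open a shell in the sso pod (in RH-SSO 7.x images the client lives under /opt/eap/bin):

# first: oc rsh dc/sso -n {{ USER_ID }}-rhsso
# log in to the admin REST API
/opt/eap/bin/kcadm.sh config credentials --server http://localhost:8080/auth \
  --realm master --user admin --password admin
# create the public ccn-cli client with its redirect URI
/opt/eap/bin/kcadm.sh create clients -r istio \
  -s clientId=ccn-cli -s enabled=true -s publicClient=true \
  -s 'redirectUris=["http://istio-ingressgateway-{{ USER_ID }}-istio-system.{{ ROUTE_SUBDOMAIN }}/*"]'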

Now, let’s define a role that will be assigned to your credentials: a simple role called ccn_auth. Go to Roles in the left menu, then click Add Role.

sso

Input ccn_auth in Role Name field and click on Save.

sso

Next, let’s update the password policy for our auth{{ USER_ID }} user.

Go to the Users menu on the left, then click View all users.

sso

If you click on the auth{{ USER_ID }} ID, you will find more information such as Details, Attributes, Credentials, Role Mappings, Groups, Consents, and Sessions. You don’t need to update any details in this step.

sso

Go to the Credentials tab and input the following values:

  • New Password: {{ OPENSHIFT_USER_PASSWORD }}

  • Password Confirmation: {{ OPENSHIFT_USER_PASSWORD }}

  • Temporary: OFF

Make sure to turn off the Temporary flag, unless you want auth{{ USER_ID }} to have to change their password the first time they authenticate.

Click on Reset Password.

sso

Then click on Change password in the popup window.

sso

Now proceed to the Role Mappings tab and assign the ccn_auth role by clicking Add selected >.

sso

You will see the ccn_auth role in the Assigned Roles box.

sso

Well done! You have configured RH-SSO with a custom realm, user, and role.

Rename services

In upcoming versions of OpenShift Service Mesh, newer versions of Istio will auto-detect protocols like HTTP. For now, though, we must explicitly include the protocol name in our Kubernetes service port names so that we can do advanced things like apply authentication and authorization policies. To do that, run the following command to name the catalog service's ports:

oc patch -n {{ USER_ID }}-catalog svc/catalog-springboot -p '{"spec": {"ports":[{"port": 8080, "name": "http"}, {"port": 8443, "name": "https"}]}}'
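The inventory service's ports may already be named; you can check with oc get svc/inventory -n {{ USER_ID }}-inventory -o yaml. If they are not, the analogous patch (a sketch, assuming the service exposes port 80 as in the VirtualService used earlier) would be:

oc patch -n {{ USER_ID }}-inventory svc/inventory -p '{"spec": {"ports":[{"port": 80, "name": "http"}]}}'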

Turning back to Istio, let’s create a user-facing authentication policy using JSON Web Tokens (JWTs) and the OIDC authentication flow.

In VS Code, open the empty ccn-auth-config.yaml file in the catalog/rules directory and copy in the following RequestAuthentication and AuthorizationPolicy:

apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: catalog-req-auth
  namespace: {{ USER_ID }}-catalog
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: catalog-springboot
  jwtRules:
  - issuer: http://sso-{{ USER_ID }}-rhsso.{{ ROUTE_SUBDOMAIN }}/auth/realms/istio
    jwksUri: http://sso-{{ USER_ID }}-rhsso.{{ ROUTE_SUBDOMAIN }}/auth/realms/istio/protocol/openid-connect/certs
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: catalog-auth-policy
  namespace: {{ USER_ID }}-catalog
spec:
  rules:
  - from:
    - source:
        requestPrincipals: ["*"]
    when:
        - key: 'request.auth.claims[iss]'
          values:
            - >-
              http://sso-{{ USER_ID }}-rhsso.{{ ROUTE_SUBDOMAIN }}/auth/realms/istio
  selector:
    matchLabels:
      app.kubernetes.io/name: catalog-springboot

The following fields are used above to create the RequestAuthentication in Istio:

  • issuer - Identifies the issuer that issued the JWT, usually a URL or an email address.

  • jwksUri - URL of the provider’s public key set to validate signature of the JWT.
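You can fetch that jwksUri yourself to see the public keys Istio will use to validate token signatures:

curl -s http://sso-{{ USER_ID }}-rhsso.{{ ROUTE_SUBDOMAIN }}/auth/realms/istio/protocol/openid-connect/certs | jq .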

Then execute the following oc command in the VS Code Terminal to create these objects:

oc create -f $PROJECT_SOURCE/catalog/rules/ccn-auth-config.yaml -n {{ USER_ID }}-catalog

Now you can’t access the catalog service without authenticating via RH-SSO. Confirm this using the following curl command in the VS Code Terminal:

curl -i http://istio-ingressgateway-{{ USER_ID }}-istio-system.{{ ROUTE_SUBDOMAIN }}

You should get an HTTP/1.1 403 Forbidden response with an RBAC: access denied message.

This response is expected because the request does not carry a valid JWT token from RH-SSO. It normally takes 5-10 seconds for the authentication policy to be initialized in Istio. After that, things go quickly, as policies are cached for some period of time.

sso

In order to generate a valid token, run the next curl request in the VS Code Terminal. This command stores the authorization token returned by RH-SSO in an environment variable called TOKEN.

export TOKEN=$( curl -s -X POST 'http://sso-{{ USER_ID }}-rhsso.{{ ROUTE_SUBDOMAIN }}/auth/realms/istio/protocol/openid-connect/token' \
 -H "Content-Type: application/x-www-form-urlencoded" \
 -d "username=auth{{ USER_ID }}" \
 -d 'password={{ OPENSHIFT_USER_PASSWORD }}' \
 -d 'grant_type=password' \
 -d 'client_id=ccn-cli' | jq -r '.access_token')  && echo $TOKEN;
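If you are curious what is inside the token (optional), you can decode its payload segment. JWTs are base64url-encoded, so this sketch maps the URL-safe characters back to standard base64 and re-adds padding before decoding:

# the payload is the second dot-separated segment of the JWT
echo "$TOKEN" | cut -d. -f2 | tr '_-' '/+' \
  | awk '{ l=length($0)%4; if (l==2) $0=$0"=="; else if (l==3) $0=$0"="; print }' \
  | base64 -d | jq .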

Once you have generated the token, re-run the curl command below with the token included, in the VS Code Terminal:

curl -s -H "Authorization: Bearer $TOKEN" http://istio-ingressgateway-{{ USER_ID }}-istio-system.{{ ROUTE_SUBDOMAIN }}/services/products | jq

You should see the following expected output:

...
 {
    "itemId": "444435",
    "name": "Quarkus twill cap",
    "desc": "",
    "price": 13,
    "quantity": 600
  },
  {
    "itemId": "444437",
    "name": "Nanobloc Universal Webcam Cover",
    "desc": "",
    "price": 2.75,
    "quantity": 230
  }
]

Congratulations! You’ve integrated RH-SSO with Istio to protect service mesh traffic to the catalog service, without having to change the application at all. Istio can use Keycloak to authenticate service-to-service calls (also called "east-west" traffic).

For "north-south" traffic, such as traffic coming in from a frontend web application, RH-SSO provides various adapters for apps like Spring Boot, JBoss EAP and others to configure your apps to authenticate against RH-SSO. Quarkus also provides MicroProfile JWT and Keycloak adapters for those types of apps. See the Quarkus Guides for more detail.

Red Hat also offers the Red Hat build of Quarkus (RHBQ), which provides support and maintenance over stated time periods for the major versions of Quarkus. In this workshop, we use RHBQ to develop cloud-native microservices. Learn more about RHBQ, one of the cloud-native runtimes included in Red Hat Runtimes.

When combining Red Hat SSO with Istio, you can ensure that traffic within the service mesh, as well as traffic entering and leaving the mesh, is properly authenticated.

Summary

In this scenario you used Istio to implement many of the features needed in modern, distributed applications.

Istio provides an easy way to create a network of deployed services with load balancing, service-to-service authentication, monitoring, and more without requiring any changes in service code. You add Istio support to services by deploying a special sidecar proxy throughout your environment that intercepts all network communication between microservices, configured and managed using Istio’s control plane functionality.

Technologies like containers and container orchestration platforms like OpenShift solve the deployment of our distributed applications quite well, but are still catching up to addressing the service communication necessary to fully take advantage of distributed microservice applications. With Istio you can solve many of these issues outside of your business logic, freeing you as a developer from concerns that belong in the infrastructure.

Congratulations!