Thursday, March 19, 2020

Monitoring Quarkus microservices using Jaeger, Prometheus and Grafana in OpenShift 4


In this post I'll describe how you can easily create a Quarkus microservice using standard Java MicroProfile APIs, trace and monitor it with Jaeger, Prometheus and Grafana, and deploy all of the components in OpenShift 4.

1. First we'll build a Quarkus microservice container based on the following example source code and deploy it to OpenShift 4.

If you don't want to build the Quarkus microservice container image yourself, you can deploy the container image I've built in advance:
$ oc new-project quarkus-msa-demo

$ oc new-app quay.io/jstakun/hello-quarkus:0.2 --name=hello-quarkus

$ oc expose svc hello-quarkus
Now you can jump to section 1.3
1.1 Compile native Quarkus microservice application
$ git clone https://github.com/jstakun/quarkus-tracing.git

$ cd ./quarkus-tracing
In order to build a native Quarkus image you'll need to set up your machine as described in the Quarkus documentation.
$ ./mvnw package -Pnative
1.2 Build and deploy Quarkus microservice container image to OpenShift
$ oc login -u developer

$ oc new-project quarkus-msa-demo

$ cd ./target

$ oc new-build --name=hello-quarkus --dockerfile=$'FROM registry.access.redhat.com/ubi8/ubi-minimal:latest\nCOPY *-runner /application\nRUN chgrp 0 /application && chmod +x /application\nCMD /application\nEXPOSE 8080'

$ oc start-build hello-quarkus --from-file=./tracing-example-1.0-SNAPSHOT-runner

$ oc new-app hello-quarkus

$ oc expose svc hello-quarkus
1.3 Now you can call your Quarkus microservice and check what endpoints are exposed:
$ ROUTE=$(oc get route | grep hello-quarkus | awk '{print $2}') && echo $ROUTE
$ curl $ROUTE/hello
$ curl $ROUTE/bonjour
$ curl $ROUTE/conversation
$ curl -H "Accept: application/json" $ROUTE/metrics/application
$ curl $ROUTE/metrics (in newer Quarkus use /q/metrics)
$ curl $ROUTE/health/live (in newer Quarkus use /q/health/live)
$ curl $ROUTE/health/ready (in newer Quarkus use /q/health/ready)
1.4 Optionally you can define readiness and liveness probes for your Quarkus microservice container using the /health endpoints:
$ oc edit dc hello-quarkus

   spec:
     containers:
       - image:
         ...
         readinessProbe:
           httpGet:
              path: /health/ready
             port: 8080
             scheme: HTTP
           initialDelaySeconds: 5
           timeoutSeconds: 2
           periodSeconds: 5
           successThreshold: 1
           failureThreshold: 3
         livenessProbe:
           httpGet:
              path: /health/live
             port: 8080
             scheme: HTTP
           initialDelaySeconds: 5
           timeoutSeconds: 2
           periodSeconds: 5
           successThreshold: 1
           failureThreshold: 3
Please refer to the ReadinessHealthCheck and SimpleHealthCheck Java classes in the example source code for sample implementations of health checks.
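For reference, a minimal MicroProfile Health check could look like the sketch below (class and check names here are illustrative and not necessarily identical to the ones in the repository):

import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Readiness;

// Annotating with @Readiness exposes the check under /health/ready;
// use @Liveness instead to expose it under /health/live.
@Readiness
@ApplicationScoped
public class ExampleReadinessCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        // a real check would verify downstream dependencies (database, queue, etc.)
        return HealthCheckResponse.up("example-readiness-check");
    }
}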

2. Now let's configure tracing for the Quarkus microservice with Jaeger, followed by monitoring with Prometheus and Grafana

2.1 First you'll need to install the Jaeger, Prometheus and Grafana Operators.

Please refer to the OpenShift documentation on how to install operators using either the OpenShift Web Console or the oc command line interface.
In essence you'll need to log in to the OpenShift cluster with cluster-admin credentials. The Jaeger Operator can be installed cluster-wide, while the Prometheus and Grafana Operators should be installed in the project where the Quarkus microservice has been deployed.

2.2 Let's deploy a Jaeger instance and enable tracing for our Quarkus microservice

Go to the list of installed operators in your project


Click on Jaeger Operator


Click on the Create Instance link in the Jaeger box


Click on the Create button at the bottom and wait a while until the Jaeger pod is up and running.

The Operator will create a Jaeger Collector service (exposing port 14268) which needs to be called by the Quarkus microservice. The Jaeger Collector endpoint is defined in the application.properties configuration file. You can override its value with the QUARKUS_JAEGER_ENDPOINT environment variable in the Quarkus microservice deployment config.
$ COLLECTOR=http://$(oc get svc | grep collector | grep -v headless | awk '{print $1}'):14268/api/traces && echo $COLLECTOR
$ oc set env dc/hello-quarkus QUARKUS_JAEGER_ENDPOINT=$COLLECTOR
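For reference, the tracing settings in application.properties could look roughly like the sketch below (the values shown are assumptions and may differ from the repository; the QUARKUS_JAEGER_ENDPOINT environment variable set above overrides the endpoint at runtime):

# Jaeger tracing configuration used by the quarkus-jaeger / SmallRye OpenTracing extensions
quarkus.jaeger.service-name=hello-quarkus
quarkus.jaeger.sampler-type=const
quarkus.jaeger.sampler-param=1
# overridden in OpenShift by the QUARKUS_JAEGER_ENDPOINT environment variable
quarkus.jaeger.endpoint=http://localhost:14268/api/traces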
If you want to access the Jaeger web UI you'll need to expose the Jaeger query service using a secure Route with Re-encrypt TLS Termination (you don't need to add custom certificates to the route definition).

If you experience the following error during Jaeger web UI authentication: "The authorization server encountered an unexpected condition that prevented it from fulfilling the request.", make sure to name the Jaeger route the same as the route name specified in the jaeger-ui-proxy service account, which you can check with the following command:
$ oc describe sa jaeger-ui-proxy | grep OAuthRedirectReference
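Putting this together, a command line sketch for exposing the UI could look like this (the route name jaeger-ui is a placeholder, so use the name reported by the command above if it differs, and add --port if the query service exposes more than one port):

$ QUERY_SVC=$(oc get svc -o name | grep query | cut -d/ -f2) && echo $QUERY_SVC
$ oc create route reencrypt jaeger-ui --service=$QUERY_SVC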

Now call the Quarkus microservice a couple of times, e.g. with curl $ROUTE/conversation, and you should see the traces collected in the Jaeger web UI.
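With the SmallRye OpenTracing extension, Quarkus traces incoming JAX-RS requests automatically; additional CDI beans can be included in the traces with the MicroProfile @Traced annotation. A minimal sketch (the class below is illustrative and not taken from the example repository):

import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.opentracing.Traced;

@Traced
@ApplicationScoped
public class GreetingService {

    // calls to this method show up as child spans of the incoming JAX-RS request trace
    public String greet(String name) {
        return "Hello " + name;
    }
}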






2.3 Now let's configure Prometheus to collect metrics generated by our Quarkus microservice

Go to the list of installed operators in the project and click on the Prometheus Operator. Next click on the Create Instance link in the Prometheus box


Change the namespace in the alertmanagers settings to the namespace where Prometheus will be deployed. We'll configure Alertmanager later, in section 2.5


Wait until the 2 Prometheus pods are up and running, then come back to the Prometheus Operator page. Click on Create Instance in the Service Monitor box


Modify the spec configuration as per the example below, making sure the selector and port name are configured properly; a complete ServiceMonitor resource is sketched after this snippet.
spec:
  endpoints:
    - interval: 5s
      port: 8080-tcp
  selector:
    matchLabels:
      app: hello-quarkus
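For reference, the full ServiceMonitor resource could look roughly like this (the metadata name and labels are assumptions; the labels must match the serviceMonitorSelector configured in your Prometheus resource, and the port name must match the port name of the hello-quarkus service, which you can check with oc get svc hello-quarkus -o yaml):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: hello-quarkus-monitor
  labels:
    # must match spec.serviceMonitorSelector.matchLabels in the Prometheus resource
    team: quarkus-msa-demo
spec:
  endpoints:
    - interval: 5s
      # name of the port in the hello-quarkus service created by oc new-app
      port: 8080-tcp
  selector:
    matchLabels:
      app: hello-quarkus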


In order to access the Prometheus web UI you'll need to expose the Prometheus service
$ oc expose svc prometheus-operated
Now call the Quarkus microservice a couple of times, e.g. curl $ROUTE/conversation, and you should see the metrics scraped by Prometheus in its web UI. For example, enter the following metric name into the query text area: application_org_acme_quickstart_ConversationService_performedTalk_total


Please refer to the Quarkus documentation and the ConversationService Java class source code for more details on how to enable metrics in Quarkus microservices.
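To illustrate, a counter like the performedTalk metric queried above can be produced with a MicroProfile Metrics annotation along these lines (a sketch only, not the exact code from the repository):

package org.acme.quickstart;

import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.metrics.annotation.Counted;

@ApplicationScoped
public class ConversationService {

    // exposed by Quarkus as application_org_acme_quickstart_ConversationService_performedTalk_total
    @Counted(name = "performedTalk", description = "How many talks have been performed.")
    public String talk() {
        return "How are you?";
    }
}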

2.4 Finally let's configure Grafana to visualize metrics collected by Prometheus

Go to the list of installed operators in the project and click on the Grafana Operator. Next click on the Create Instance link in the Grafana box


You can keep the default settings and click on the Create button


Wait until the Grafana pod is up and running, then come back to the list of installed operators and click on the Grafana Operator.
In my case the Grafana Operator failed to provision the Grafana instance with the following error: no matches for kind "Route" in version "route.openshift.io/v1". In this case you need to set the Grafana instance ingress to false in the yaml configuration and expose the Grafana service manually:
$ oc expose svc grafana-service
Next click on the Create Instance link in the Grafana Data Source box


You only need to change the url to point to the Prometheus service created earlier, then click the Create button
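For reference, the datasources section of the GrafanaDataSource resource could look roughly like this (the resource name is an assumption, and the url assumes the default prometheus-operated service listening on port 9090; note that the data source name Prometheus is what the sample dashboard below expects):

apiVersion: integreatly.org/v1alpha1
kind: GrafanaDataSource
metadata:
  name: prometheus-datasource
spec:
  name: prometheus-datasource.yaml
  datasources:
    - name: Prometheus
      type: prometheus
      access: proxy
      url: 'http://prometheus-operated:9090'
      isDefault: true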


Now you can either log in to Grafana and create your own dashboard, or you can use the sample dashboard I've created for you.

Come back to the list of installed operators, click on the Grafana Operator, and then click on the Create Instance link in the Grafana Dashboard box


Copy & paste the example dashboard yaml file content. This dashboard expects a Prometheus data source named "Prometheus", so make sure at this point your Prometheus data source name is correct, then click the Create button.


Finally you can open the Conversation Dashboard in Grafana
 



2.5 Optionally we can also configure Prometheus Alertmanager to manage alerts

Go to the list of installed operators in the project and click on the Prometheus Operator. Next click on the Create Instance link in the Alertmanager box.


For testing purposes you can change the number of replicas to 1 and click the Create button


In order to run the Alertmanager pod you'll need to create an alertmanager secret as described in the Prometheus Operator documentation. Check the events in the project to verify the expected secret name.
$ oc create secret generic alertmanager-example --from-file=alertmanager.yaml
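A minimal alertmanager.yaml for testing could look like the sketch below (the webhook receiver URL is a placeholder; replace it with your real notification target):

global:
  resolve_timeout: 5m
route:
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 12h
  receiver: 'webhook'
receivers:
  - name: 'webhook'
    webhook_configs:
      # placeholder endpoint that simply receives the alert payload over HTTP POST
      - url: 'http://example.com/alert-hook'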
At this point you need to make sure the alertmanagers section of the Prometheus configuration matches the Alertmanager service name and port name.
$ oc get svc | grep alertmanager

$ oc edit prometheus
 
...
spec:
  alerting:
    alertmanagers:
    - name: alertmanager-operated
      namespace: quarkus-msa-demo
      port: web
When Prometheus and Alertmanager are connected you can create a sample alert rule. Go to the list of installed operators in the project, click on the Prometheus Operator, and then click on the Create Instance link in the Prometheus Rule box.


Copy & paste the example rule definition and click the Create button.


In order to get this rule to fire you'll need to call the $ROUTE/conversation endpoint at least 30 times and wait for 10 minutes.
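For reference, a PrometheusRule along those lines could look roughly like this (the names, labels and threshold are assumptions and may differ from the linked example; the metadata labels must match the ruleSelector configured in your Prometheus resource):

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: conversation-rules
  labels:
    # must match spec.ruleSelector.matchLabels in the Prometheus resource
    role: alert-rules
spec:
  groups:
    - name: conversation.rules
      rules:
        - alert: TooManyConversations
          expr: application_org_acme_quickstart_ConversationService_performedTalk_total > 30
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: The conversation endpoint has been called more than 30 times.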

In the meantime you can find this rule in the Prometheus web UI


Finally, after the rule fires, you should see it in the Alertmanager web UI (assuming you exposed it with oc expose svc)


Congratulations! You've successfully configured Quarkus microservice monitoring with Jaeger, Prometheus and Grafana.
