This post is a follow-up to my previous post about Quarkus microservices monitoring. This time I'll use the built-in OpenShift Monitoring stack (Prometheus and Alertmanager) instead of installing a project-specific monitoring stack.
1. First we'll deploy to OpenShift 4 a Quarkus microservice container, which is based on the following example source code.
$ oc login -u developer
$ oc new-project quarkus-msa-demo
$ oc new-app quay.io/jstakun/hello-quarkus:0.2 --name=hello-quarkus
$ oc expose svc hello-quarkus
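Before wiring up monitoring you can quickly confirm the microservice is up. A minimal sketch of the check, assuming the app label set by oc new-app and the /conversation endpoint used later in this post:
$ oc get pods -l app=hello-quarkus
$ curl http://$(oc get route hello-quarkus -o jsonpath='{.spec.host}')/conversation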
2. In order to monitor your own application services, we need to enable user workload monitoring in OpenShift
2.1 Login as cluster admin
$ oc login -u kubeadmin
$ echo "---
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    techPreviewUserWorkload:
      enabled: true
" | oc create -f -
2.2 Verify if the user workload monitoring stack is running
$ oc get pod -n openshift-user-workload-monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
prometheus-operator-744fc6d6b6-pkqnd   1/1     Running   0          2m38s
prometheus-user-workload-0             5/5     Running   1          83s
prometheus-user-workload-1             5/5     Running   1          2m27s
$ echo "---4. Create Prometheus Service Monitor which instructs Prometheus operator to deploy exporter scarping metrics from Quarkus microservice
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: monitor-crd-edit
rules:
- apiGroups: ["monitoring.coreos.com"]
  resources: ["prometheusrules", "servicemonitors", "podmonitors"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
" | oc create -f -
$ oc adm policy add-cluster-role-to-user monitor-crd-edit developer
$ oc login -u developer
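Optionally, now that you are logged in as developer, you can verify the grant took effect (a quick sanity check using the fully-qualified resource name):
$ oc auth can-i create servicemonitors.monitoring.coreos.com -n quarkus-msa-demo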
$ echo "---4.1 Now we'll call Quarkus microservice to quickly check if metrics are collected by Prometheus
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: prometheus-example-monitor
  name: prometheus-example-monitor
  namespace: quarkus-msa-demo
spec:
  endpoints:
  - interval: 5s
    port: 8080-tcp
  selector:
    matchLabels:
      app: hello-quarkus
" | oc create -f -
4.1 Now we'll call the Quarkus microservice to quickly check whether metrics are collected by Prometheus
$ ROUTE=$(oc get route -n quarkus-msa-demo | grep hello-quarkus | awk '{print $2}') && echo $ROUTE
$ while true; do
    curl $ROUTE/conversation
    echo
    sleep .5
done
4.2 You can open the OpenShift Web Console, login as the developer user and go to the Metrics tab in the Developer perspective under the Monitoring view. To verify that metrics are collected, run the following example custom query:
application_org_acme_quickstart_ConversationService_performedTalk_total
Alternatively, you can also use the command line (the grafana-operator service account and its cluster-monitoring-view binding referenced below are created in step 5.1, so either create them first or use a token of another account that is allowed to query cluster monitoring):
$ TOKEN=$(oc sa get-token grafana-operator -n quarkus-msa-demo) && echo $TOKEN
$ URL=$(oc get route thanos-querier --template='{{.spec.host}}' -n openshift-monitoring) && echo $URL
$ METRIC=application_org_acme_quickstart_ConversationService_performedTalk_total
$ curl -k -H "Authorization: Bearer $TOKEN" https://$URL/api/v1/query?query=$METRIC
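If you have jq installed you can extract just the counter value from the JSON response (purely a convenience; the field path follows the Prometheus query API format and assumes the query returns at least one series):
$ curl -sk -H "Authorization: Bearer $TOKEN" "https://$URL/api/v1/query?query=$METRIC" | jq '.data.result[0].value[1]'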
4.3 At this point you can also create an example alert rule
$ oc apply -f conversation-service-highload-rule.yaml
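The rule file itself isn't reproduced in this post; a minimal sketch of what conversation-service-highload-rule.yaml could contain, assuming the metric from step 4.2 and an arbitrary threshold of 1 request per second:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: conversation-service-highload-rule
  namespace: quarkus-msa-demo
spec:
  groups:
  - name: conversation-service
    rules:
    - alert: ConversationHighLoad
      expr: rate(application_org_acme_quickstart_ConversationService_performedTalk_total[1m]) > 1
      for: 5m
      labels:
        severity: warning
      annotations:
        message: ConversationService has been handling more than 1 request per second for 5 minutes.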
After some time (5 minutes in this example rule) you can check whether the alert is firing in the OpenShift Web Console using the following custom query:
ALERTS{alertname="ConversationHighLoad", alertstate="firing"}
You should also be able to find this alert in the Alertmanager UI.
5. Finally, let's install Grafana and create a Quarkus microservice Grafana Dashboard to visualize the metrics stored in Prometheus
5.1 We must install Grafana Operator version 3.2+. As of this writing, Grafana Operator 3.2+ is not available in the OpenShift OperatorHub, so we'll install the operator manually from GitHub:
$ oc login -u kubeadmin
$ oc project quarkus-msa-demo
$ oc create sa grafana-operator
$ oc adm policy add-cluster-role-to-user cluster-monitoring-view -z grafana-operator
$ oc create -f https://raw.githubusercontent.com/integr8ly/grafana-operator/master/deploy/roles/role.yaml
$ oc create -f https://raw.githubusercontent.com/integr8ly/grafana-operator/master/deploy/roles/role_binding.yaml
$ oc create -f https://raw.githubusercontent.com/integr8ly/grafana-operator/master/deploy/crds/Grafana.yaml
$ oc create -f https://raw.githubusercontent.com/integr8ly/grafana-operator/master/deploy/crds/GrafanaDataSource.yaml
$ oc create -f https://raw.githubusercontent.com/integr8ly/grafana-operator/master/deploy/crds/GrafanaDashboard.yaml
$ wget https://raw.githubusercontent.com/integr8ly/grafana-operator/master/deploy/operator.yaml
$ sed -i "s/grafana-operator:latest/grafana-operator:v3.3.0/g" operator.yaml
$ oc create -f operator.yaml
$ oc get pods
NAME                                READY   STATUS    RESTARTS   AGE
grafana-operator-6d54bc7bfc-tl5vw   1/1     Running   0          5m29s
5.2 Create example Grafana deployment
$ oc create -f grafana-deployment
$ oc expose svc grafana-service
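The grafana-deployment file isn't shown in this post; a minimal sketch of a Grafana custom resource the operator accepts (field names follow the integr8ly v1alpha1 CRD, and the admin credentials match the defaults mentioned in step 5.5):
apiVersion: integreatly.org/v1alpha1
kind: Grafana
metadata:
  name: example-grafana
  namespace: quarkus-msa-demo
spec:
  config:
    security:
      admin_user: admin
      admin_password: admin
    auth:
      disable_signout_menu: true
  dashboardLabelSelector:
    - matchExpressions:
        - key: app
          operator: In
          values:
            - grafana
  ingress:
    enabled: false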
5.3 Create Grafana DataSource
$ TOKEN=$(oc sa get-token grafana-operator) && echo $TOKEN
$ echo "---
apiVersion: integreatly.org/v1alpha1
kind: GrafanaDataSource
metadata:
  name: prom-grafanadatasource
  namespace: quarkus-msa-demo
spec:
  datasources:
    - access: proxy
      editable: true
      jsonData:
        httpHeaderName1: Authorization
        timeInterval: 5s
        tlsSkipVerify: true
      name: Prometheus
      secureJsonData:
        httpHeaderValue1: >-
          Bearer $TOKEN
      type: prometheus
      url: >-
        https://thanos-querier.openshift-monitoring.svc:9091
      isDefault: true
      version: 1
  name: my-prom-datasources.yaml
" | oc create -f -
5.4 Create example Grafana Dashboard
$ oc create -f grafana-dashboard.yaml
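Likewise, grafana-dashboard.yaml isn't included here; a minimal sketch of a GrafanaDashboard resource wired to the metric used throughout this post (the app: grafana label is assumed to match the dashboardLabelSelector of your Grafana resource, and the dashboard JSON is deliberately stripped down):
apiVersion: integreatly.org/v1alpha1
kind: GrafanaDashboard
metadata:
  name: conversation-service
  namespace: quarkus-msa-demo
  labels:
    app: grafana
spec:
  name: conversation-service.json
  json: |
    {
      "title": "Conversation Service",
      "panels": [
        {
          "type": "graph",
          "title": "Conversations per second",
          "targets": [
            { "expr": "rate(application_org_acme_quickstart_ConversationService_performedTalk_total[1m])" }
          ]
        }
      ]
    }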
5.5 Test Grafana Dashboard
Get the Grafana route URL:
$ echo http://$(oc get route | grep grafana | awk '{print $2}')
Login with the default credentials admin/admin and navigate to the Conversation Service dashboard, where you should see the metrics collected from the Quarkus microservice visualized.