Tuesday, September 10, 2024

Using certificates signed by a custom CA in SSL communication in Python applications

If you are dealing with AI these days, like me, you are probably deploying a lot of Python applications in containers. If your applications reference AI model API endpoints, you are most likely also dealing with SSL communication configuration.

In this post I'll quickly explain how to validate certificates signed by a custom CA in SSL communication in Python applications (using the requests package). To validate such certificates the application needs access to the CA certificate chain used to sign the certificate presented by the secured service. Here is a guideline for how this can be achieved in OpenShift or any other Kubernetes flavour.

1. First we need to download the certificate chain used by the secured service:

$ openssl s_client -showcerts -connect my-service.my-domain.local:443 < /dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > certificate_chain.pem

If any CA certificates are missing from the output, you must append them to certificate_chain.pem manually.

We can quickly check the content of the file:

$ cat certificate_chain.pem | openssl crl2pkcs7 -nocrl -certfile /dev/stdin | openssl pkcs7 -print_certs | grep subject | head
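If you want to rehearse the whole validation flow locally, the following self-contained sketch (all file names here are hypothetical) creates a throwaway CA, signs a leaf certificate with it, and verifies the leaf against the CA — the same validation requests will perform once it trusts your chain:

```shell
# Create a throwaway CA (demo only)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
    -subj "/CN=demo-ca" -days 1
# Create a key and CSR for a pretend service
openssl req -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.csr \
    -subj "/CN=my-service.my-domain.local"
# Sign the CSR with the throwaway CA
openssl x509 -req -in leaf.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
    -out leaf.pem -days 1
# Validate the leaf against the CA; prints "leaf.pem: OK" on success
openssl verify -CAfile ca.pem leaf.pem
```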

2. Next, let's create a secret containing these certificates:

$ oc create secret generic ca-certs --from-file=cacerts.crt=certificate_chain.pem

3. Mount the secret into the Python application's Kubernetes deployment:

$ oc set volume deployment my-python-app --add --type secret --mount-path /var/secrets --secret-name ca-certs --read-only

4. Add the REQUESTS_CA_BUNDLE environment variable to the deployment, pointing to the path where the secret containing the certificates has been mounted:

$ oc set env deployment my-python-app REQUESTS_CA_BUNDLE=/var/secrets/cacerts.crt --overwrite=true
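Steps 3 and 4 together result in a Deployment spec fragment roughly like the following (the container and volume names are illustrative; oc may generate a different volume name):

```yaml
spec:
  template:
    spec:
      containers:
      - name: my-python-app
        env:
        - name: REQUESTS_CA_BUNDLE
          value: /var/secrets/cacerts.crt
        volumeMounts:
        - name: ca-certs
          mountPath: /var/secrets
          readOnly: true
      volumes:
      - name: ca-certs
        secret:
          secretName: ca-certs
```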

From now on the requests package will use these certificates to validate the certificates presented by the secured services referenced by the Python application.
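Under the hood, requests consults the REQUESTS_CA_BUNDLE environment variable (and then CURL_CA_BUNDLE) before falling back to the CA bundle shipped with certifi. A minimal sketch of that resolution order (the helper name is mine, not part of the requests API):

```python
import os

def ca_bundle_path(default=None):
    # Resolution order used by requests for server certificate verification:
    # REQUESTS_CA_BUNDLE first, then CURL_CA_BUNDLE, then the default
    # (requests itself falls back to the bundled certifi store).
    return (os.environ.get("REQUESTS_CA_BUNDLE")
            or os.environ.get("CURL_CA_BUNDLE")
            or default)

os.environ["REQUESTS_CA_BUNDLE"] = "/var/secrets/cacerts.crt"
print(ca_bundle_path("certifi-default"))  # -> /var/secrets/cacerts.crt
```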


Wednesday, March 15, 2023

Sustainable Computing in OpenShift

As per this blog, sustainable computing concerns the consumption of computing resources in a way that has a net zero impact on the environment, a broad concept that includes energy, ecosystems, pollution and natural resources.

Can we do sustainable computing in OpenShift?

Yes, we can! 

Meet the Kepler project. Kepler exposes a variety of metrics about the energy consumption of Kubernetes components such as Pods and Nodes. 

In this blog I'll describe my initial experience with deploying and using Kepler on top of OpenShift clusters.

I've installed Kepler using the Helm chart; however, the community is also actively working on the Kepler Operator, which will most likely become the preferred installation method on OpenShift sooner or later.

$ git clone https://github.com/sustainable-computing-io/kepler-helm-chart
$ cd kepler-helm-chart/

At this point it makes sense to review and modify values.yaml and adjust the configuration to your needs. I did some minor changes which you can review here.

$ helm install kepler . --values values.yaml  --create-namespace  --namespace kepler

Next you'll need to grant the kepler service account the necessary SCC permissions and bind it to the kepler-exporter daemon set. These commands must be executed using a cluster-admin account.

$ oc adm policy add-scc-to-user privileged -z kepler

$ oc patch ds/kepler-exporter --patch '{"spec":{"template":{"spec":{"serviceAccountName": "kepler"}}}}'

Optionally you can create a dedicated SCC using this example and add it to the kepler service account as I did above.

Now wait until the Kepler exporter pods are running on every node, as per the daemon set configuration.

$ oc get pods -n kepler
NAME                    READY   STATUS    RESTARTS   AGE
kepler-exporter-2k5cx   1/1     Running   0          14h
kepler-exporter-8ctd5   1/1     Running   0          17h
kepler-exporter-cqq9d   1/1     Running   0          17h

By default the Kepler exporter pods expose Prometheus metrics at the /metrics URI. You can learn more about Kepler metrics here. In OpenShift, to allow Prometheus to scrape these metrics you must first enable user workload monitoring as per the documentation. Next you can configure a ServiceMonitor in the kepler project. Just remember to put the kepler project name at the bottom. Once that is done you can query the metrics using PromQL.
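For reference, a minimal ServiceMonitor created in the kepler project might look as follows (the endpoint port name and the label selector are assumptions; match them against the Service created by the Helm chart):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kepler-exporter
  namespace: kepler
spec:
  endpoints:
  - port: http
    path: /metrics
    interval: 30s
  selector:
    matchLabels:
      app.kubernetes.io/name: kepler-exporter
```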

For example, this query will show the top power-consuming pods in your cluster:

topk(10, kepler_container_joules_total)

Returned values are measured in joules, which can be converted to watts. Since 1 watt = 1 joule per second, you can use the rate() or irate() function, which returns the per-second average increase of the counter, to obtain power in watts. Therefore, to get the container energy consumption in watts you can use the following query:

sum by (pod_name, container_name, container_namespace, node) (irate(kepler_container_joules_total{}[1m]))
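The conversion can be illustrated with a tiny Python helper (a sketch of what rate()/irate() computes, not Kepler code):

```python
def average_watts(joules_start, joules_end, seconds):
    # Power in watts from a cumulative joules counter over an interval;
    # this mirrors PromQL's rate(): per-second increase, and 1 W = 1 J/s.
    return (joules_end - joules_start) / seconds

# e.g. a container counter growing from 1200 J to 1320 J over 60 s
print(average_watts(1200, 1320, 60))  # -> 2.0 (watts)
```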

Enjoy!


Thursday, February 9, 2023

Managing local accounts in OpenShift GitOps

OpenShift GitOps is based on the ArgoCD upstream project and provides Kubernetes operator-based automation for the lifecycle management of ArgoCD instances on top of OpenShift. By default it is integrated with OpenShift identity management and RBAC, which provides OpenShift users and roles integration with ArgoCD. This is great for managing user access, but you might also need to grant access to ArgoCD for some external applications.

The solution might be to create local ArgoCD accounts with limited permissions tailored to your needs, which can act as a "service account" used by external applications to automate their integration with ArgoCD.

Local ArgoCD accounts can be configured when creating the ArgoCD custom resource; you can also edit an existing ArgoCD custom resource.

spec:
  rbac:
    policy: |
      g, system:cluster-admins, role:admin
      g, cluster-admins, role:admin
      p, tekton, applications, get, */*, allow
      p, tekton, applications, sync, */*, allow
  extraConfig:
    accounts.tekton: 'apiKey'

In the above example I have created a local account called tekton with the applications get and sync permissions granted for all (*/*) applications. Please have a look at the ArgoCD RBAC docs for more details.
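To make the policy semantics concrete, here is a toy Python sketch of how a p rule of the form p, subject, resource, action, object is matched (ArgoCD actually evaluates these with the Casbin library, whose glob semantics are richer; this is only an illustration):

```python
from fnmatch import fnmatch

# The two "p" lines from the ArgoCD policy above, as tuples of
# (subject, resource, action, object):
POLICY = [
    ("tekton", "applications", "get", "*/*"),
    ("tekton", "applications", "sync", "*/*"),
]

def allowed(subject, resource, action, obj):
    # An application object such as "default/my-app" matches the "*/*" glob.
    return any(
        s == subject and r == resource and a == action and fnmatch(obj, o)
        for s, r, a, o in POLICY
    )

print(allowed("tekton", "applications", "sync", "default/my-app"))   # True
print(allowed("tekton", "applications", "delete", "default/my-app")) # False
```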

In order to be able to generate a token for this account I have also enabled the apiKey capability. For more details about local accounts please have a look at the ArgoCD local accounts docs. Please note this account has no login capability, hence it won't be able to log in to the ArgoCD UI or via the argocd CLI.

Once this is done you can always check the current ArgoCD RBAC configuration in the argocd-rbac-cm config map in the project/namespace where your ArgoCD instance is deployed.

Next you can log in to the ArgoCD UI or use the argocd CLI to generate access tokens for the account.
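With the apiKey capability in place, token generation via the argocd CLI looks roughly like this (a sketch; it assumes you are already logged in to the ArgoCD API server with an admin account):

```shell
# Generate an API token for the tekton local account
argocd account generate-token --account tekton
```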

Remember to copy the new token as it won't be shown again after you close the dialog; if you lose it you'll need to generate a new one.

One of the use cases for access tokens is integration with Tekton Pipelines. Have a look at the following Tekton Hub task, where access-token-based authentication can be used.


Wednesday, December 7, 2022

Checking Security Context Constraints permissions

Similar to the way that RBAC resources control user access, administrators can use security context constraints (SCCs) to control permissions for pods. These permissions include actions that a pod can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with to be accepted into the system.

Security context constraints allow an administrator to control:

  • Whether a pod can run privileged containers with the allowPrivilegedContainer flag.

  • Whether a pod is constrained with the allowPrivilegeEscalation flag.

  • The capabilities that a container can request

  • The use of host directories as volumes

  • The SELinux context of the container

  • The container user ID

  • The use of host namespaces and networking

  • The allocation of an FSGroup that owns the pod volumes

  • The configuration of allowable supplemental groups

  • Whether a container requires write access to its root file system

  • The usage of volume types

  • The configuration of allowable seccomp profiles

By default the cluster contains several security context constraints (SCCs) with different sets of permissions and privileges, as per the documentation.

You can specify SCCs as resources that are handled by RBAC. This allows you to scope access to your SCCs to a certain project or to the entire cluster. Assigning users, groups, or service accounts directly to an SCC retains cluster-wide scope.

For example, when you assign the anyuid SCC to the service account my-sa

$ oc adm policy add-scc-to-user anyuid -z my-sa 

a corresponding cluster role will be created and bound to the service account with cluster-wide scope:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:openshift:scc:anyuid
rules:
- apiGroups:
  - security.openshift.io
  resourceNames:
  - anyuid
  resources:
  - securitycontextconstraints
  verbs:
  - use
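The accompanying binding then grants the service account the use verb on that SCC. A rough sketch of what it might look like (the kind of binding and the namespace depend on your cluster version and current project, so treat this as illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: system:openshift:scc:anyuid
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:anyuid
subjects:
- kind: ServiceAccount
  name: my-sa
  namespace: default
```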

As a cluster admin, using the oc CLI you can check who has permission to use a specific SCC:

$ oc adm policy who-can use scc anyuid

resourceaccessreviewresponse.authorization.openshift.io/<unknown>

Namespace: default
Verb:      use
Resource:  securitycontextconstraints.security.openshift.io

Users:  system:admin
        system:serviceaccount:apps-mlapps:my-sa
        system:serviceaccount:apps-sealed-secrets:secrets-controller
        ...

Groups: system:cluster-admins
        system:masters


If you are using Advanced Cluster Security for Kubernetes you can also check who has these permissions using ACS Central web UI:

In the menu on the left-hand side click Configuration Management, then at the top right open the RBAC Visibility & Configuration dropdown list and select Roles. Finally, type "Role: system:openshift:scc" in the filter.

Click any available link in the Users & Groups or Service Account columns to reveal the list of users, groups or service accounts bound to the selected SCC.


Friday, July 29, 2022

Configure timezone in your OpenShift cluster

You can configure the timezone on your OpenShift RHEL CoreOS nodes using the following machine config:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: worker-custom-timezone-configuration
spec:
  config:
    ignition:
      config: {}
      security:
        tls: {}
      timeouts: {}
      version: 2.2.0
    networkd: {}
    passwd: {}
    storage: {}
    systemd:
      units:
      - contents: |
          [Unit]
          Description=set timezone
          After=network-online.target

          [Service]
          Type=oneshot
          ExecStart=timedatectl set-timezone Europe/London

          [Install]
          WantedBy=multi-user.target
        enabled: true
        name: custom-timezone.service
  osImageURL: "" 

Containers don't typically inherit the host timezone configuration, as container images often set their own timezone (usually UTC). It is possible to change the timezone in pods (except the OCP platform pods) using one of the following methods:

1. Set an environment variable. This sets TZ in all containers to the specified timezone.

$ oc get deployments
$ oc set env deployments/dc_name TZ=Europe/London 
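The effect of the TZ variable can be verified with a small Python check inside the container (assuming the tzdata package is present in the image):

```python
import os
import time

# Point TZ at the desired zone and make the C library re-read it
os.environ["TZ"] = "Europe/London"
time.tzset()

# Standard-time UTC offset for Europe/London is 0 seconds (GMT)
print(time.timezone)        # 0
print(time.strftime("%Z"))  # GMT or BST, depending on the date
```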

2. Mount /etc/localtime to use the timezone stored in a configmap

$ oc create configmap tz-london --from-file=localtime=/usr/share/zoneinfo/Europe/London
$ oc set volumes deployments/dc_name --add \
    --type=configmap --name=tz --configmap-name=tz-london \
    --mount-path=/etc/localtime --sub-path=localtime 

If you prefer the first method and you are using Red Hat Universal Base Image (UBI) minimal images, you'll need to reinstall the tzdata package to populate /usr/share/zoneinfo:

FROM registry.redhat.io/ubi8-minimal
RUN microdnf reinstall tzdata -y 

Thursday, May 5, 2022

Multi-tenant metrics collection from the OpenShift built-in Prometheus

In one of my previous posts I've described 3 ways to collect metrics stored in the OpenShift built-in Prometheus metrics database. Now I'd like to show you how to limit access to metrics per tenant project (namespace).

The built-in Thanos Querier exposes a dedicated tenancy port which requires a namespace parameter and returns only metrics of objects belonging to that project. If you want to expose this port outside of the cluster you'll need to create a custom route (ingress):

kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: multitenant-thanos-querier
  namespace: openshift-monitoring
spec:
  host: ROUTE_HOSTNAME
  path: /api
  to:
    kind: Service
    name: thanos-querier
    weight: 100
  port:
    targetPort: tenancy
  tls:
    termination: reencrypt
  wildcardPolicy: None

Next you can execute the following commands to prepare a service account for querying metrics from the selected namespace:

PROJECT=sample-app-prod
SA=querier
oc project $PROJECT
oc create sa $SA
TOKEN=$(oc sa get-token $SA)
URL=$(oc get route -n openshift-monitoring | grep multitenant | awk '{print $2}')

You don't need to assign the cluster-monitoring-view cluster role to the service account, only the view role in the project where you want to query the metrics:

oc adm policy add-role-to-user view -z $SA

In the Thanos querier query you must pass the namespace parameter to specify which namespace's metrics you want to get:

curl -v -k -H "Authorization: Bearer $TOKEN" "https://$URL/api/v1/query?namespace=$PROJECT&query=kube_pod_status_ready"

In the results you will only see the values of metrics related to the selected project:

{"status":"success","data":{"resultType":"vector","result":[{"metric":{"__name__":"kube_pod_status_ready","condition":"false","container":"kube-rbac-proxy-main","endpoint":"https-main","job":"kube-state-metrics","namespace":"sample-app-prod","pod":"hello-quarkus-5859859f9f-vbhjh","prometheus":"openshift-monitoring/k8s","service":"kube-state-metrics"},"value":[1651068513.646,"0"]}]}} 
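The JSON response can then be consumed programmatically; for example, this small Python sketch parses the (abridged) response body shown above:

```python
import json

# Abridged response body from the tenancy query above
raw = ('{"status":"success","data":{"resultType":"vector","result":'
       '[{"metric":{"__name__":"kube_pod_status_ready","condition":"false",'
       '"namespace":"sample-app-prod","pod":"hello-quarkus-5859859f9f-vbhjh"},'
       '"value":[1651068513.646,"0"]}]}}')

response = json.loads(raw)
for result in response["data"]["result"]:
    metric = result["metric"]
    _, value = result["value"]  # [timestamp, value-as-string]
    print(metric["namespace"], metric["pod"], "ready:", value == "1")
# -> sample-app-prod hello-quarkus-5859859f9f-vbhjh ready: False
```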

That's it. Now you only have access to metrics from the projects where you have the view role.