Wednesday, March 3, 2021

Detect failed login events in OpenShift 4

In this blog post I'll try to answer the question of how to detect failed login attempts in OpenShift 4. Typically you'll look for two major pieces of information when analyzing failed login events: the user login being used and the source IP of the request.

There are a number of authentication flows available in OpenShift 4. In this post I'll focus on the three most common: web console authentication using identity provider credentials, oc command line interface authentication using identity provider credentials, and oc command line interface authentication using a bearer token.

1. OpenShift web console authentication flow

User authentication is performed by the oauth-openshift pods located in the openshift-authentication project. In order to log failed login attempts, you must first change the log level to Debug for these pods in the authentication operator custom resource:

oc edit authentications.operator.openshift.io
...
spec:
  logLevel: Debug
... 
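
If you prefer a one-liner over editing the resource interactively, the same change can be applied with oc patch (assuming the operator resource is named cluster, which is the default):

oc patch authentications.operator.openshift.io cluster --type=merge -p '{"spec":{"logLevel":"Debug"}}'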

 
This should let you observe the following log entries whenever a failed login attempt occurs:
 
oc -n openshift-authentication logs deployment.apps/oauth-openshift
 
I0303 09:44:55.129099 1 login.go:177] Login with provider "htpasswd_provider" failed for "webhacker"
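
Note that oc logs against a deployment only returns the logs of a single pod, and there are usually several oauth-openshift replicas. A small loop lets you grep them all (a minimal sketch, assuming the pods carry the app=oauth-openshift label):

for pod in $(oc get pods -n openshift-authentication -l app=oauth-openshift -o name); do
  oc -n openshift-authentication logs "$pod" | grep 'failed for'
done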

This entry won't give you information about the source IP of the failed login request, hence you must configure ingress access logging following the documentation, or simply use the following command:

oc edit ingresscontroller -n openshift-ingress-operator
...
spec:
  logging:
    access:
      destination:
        type: Container
...
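
If you prefer not to edit the resource interactively, the equivalent change can be applied with oc patch (assuming the IngressController is named default, which it is in a standard installation):

oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"logging":{"access":{"destination":{"type":"Container"}}}}}'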

Once this is done you can search for the following entries in the router access log to identify the source IP of failed authentication requests:

oc -n openshift-ingress logs deployment.apps/router-default -c logs

2021-03-03T09:44:55.609880+00:00 router-default-6fd8d9cdc7-qn8b5 router-default-6fd8d9cdc7-qn8b5 haproxy[204]: 39.68.32.67:47621 [03/Mar/2021:09:44:42.604] fe_sni~ be_secure:openshift-console:console/pod:console-8f4c45fb5-hqh4w:console:https:10.129.0.20:8443 0/0/0/5/5 303 931 - - --VN 88/43/36/21/0 0/0 "GET /auth/login HTTP/1.1"

I recommend leveraging the built-in OpenShift Logging stack to easily discover these logs using the built-in Kibana UI. Remember to first create the infra index pattern in the Kibana UI; then you can use the following filters to discover the above-mentioned log entries:

kubernetes.namespace_name:openshift-authentication AND message:"*failed for*"


kubernetes.namespace_name:openshift-ingress AND kubernetes.container_name:logs AND message:"*/auth/login*"


2. OC command line interface authentication flow

In this flow authentication is also performed by the oauth-openshift pods located in the openshift-authentication project, the same way as in the web console authentication flow, hence you should search for the same failed login attempt logs as above:

oc -n openshift-authentication logs deployment.apps/oauth-openshift
 
I0303 10:31:58.977373 1 basicauth.go:50] Login with provider "htpasswd_provider" failed for login "clihacker"

The tricky part is discovering the source IP of the request. This time you'll need to look for access log entries for requests sent from the haproxy public_ssl frontend to the oauth-openshift backend pod at the time of the failed login attempt:

oc -n openshift-ingress logs deployment.apps/router-default -c logs

2021-03-03T10:31:59.057027+00:00 router-default-6fd8d9cdc7-v49zg router-default-6fd8d9cdc7-v49zg haproxy[176]: 39.68.32.67:47771 [03/Mar/2021:10:31:57.507] public_ssl be_tcp:openshift-authentication:oauth-openshift/pod:oauth-openshift-586c689c67-2vggm:oauth-openshift:https:10.128.0.25:6443 4/1/1549 4139 -- 4/3/2/0/0 0/0

Due to the fact that login requests in this flow are passed encrypted directly to the oauth-openshift pod, the haproxy ingress controller will only log TCP log entries instead of full HTTP log entries.
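
A quick way to narrow the router access log down to these TCP entries is to grep for the oauth-openshift backend:

oc -n openshift-ingress logs deployment.apps/router-default -c logs | grep 'be_tcp:openshift-authentication:oauth-openshift'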

If you are using the Kibana UI you can use the same filter as before:


kubernetes.namespace_name:openshift-authentication AND message:"*failed for*"

 

In the Kibana UI you can easily check which exact oauth-openshift pod was handling the request by looking at the kubernetes.pod_name field, and then generate a query filter:

# paste kubernetes.pod_name as the POD value
POD=oauth-openshift-586c689c67-2vggm
IP=$(oc get pod/$POD -n openshift-authentication -o template --template '{{.status.podIP}}')

# generate the query filter for Kibana
echo "kubernetes.namespace_name:openshift-ingress AND kubernetes.container_name:logs AND message:\"*public_ssl be_tcp:openshift-authentication:oauth-openshift/pod:$POD:oauth-openshift:https:$IP:6443*\""

Copy and paste this query filter into the Kibana UI.
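
For reference, with the pod name and pod IP from the access log entry above, the generated filter looks like this:

kubernetes.namespace_name:openshift-ingress AND kubernetes.container_name:logs AND message:"*public_ssl be_tcp:openshift-authentication:oauth-openshift/pod:oauth-openshift-586c689c67-2vggm:oauth-openshift:https:10.128.0.25:6443*"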


3. OC command line interface authentication using bearer token flow

This authentication flow is completely different from the two above. The authentication request is handled directly by the API endpoint and not by the oauth authentication pods as before. The API audit log is enabled by default in OpenShift 4 and you can access it as described in the documentation.
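
For a quick look without the logging stack you can pull the audit logs straight from the control plane nodes with oc adm node-logs, for example (the node name below is a placeholder for one of your masters):

# list the available kube-apiserver audit log files on the control plane nodes
oc adm node-logs --role=master --path=kube-apiserver/

# fetch one of them from a specific node and look for failed token lookups
oc adm node-logs <master-node-name> --path=kube-apiserver/audit.log | grep 'users/~'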

I found it complex to analyze API audit logs this way, hence I decided to leverage the built-in OpenShift Logging stack again. First, in order to aggregate API audit logs using the built-in OpenShift Logging stack, you should create the following log forwarder configuration:

echo "---
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  namespace: openshift-logging
  name: instance
spec:
  pipelines:
    - name: enable-default-log-store
      inputRefs:
        - application
        - infrastructure
        - audit
      outputRefs:
        - default
" | oc create -f - -n openshift-logging

Next you should create the audit index pattern in the Kibana UI and search for failed login attempts using the following filter:

requestURI:"/apis/user.openshift.io/v1/users/~" AND responseStatus.code:401
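
If you want to generate a test event, a failed bearer token login from a workstation should produce exactly such an audit entry (the API URL below is only a placeholder for your cluster's endpoint):

# attempt a login with a deliberately invalid bearer token
oc login --token=sha256~invalid-token --server=https://api.<cluster-domain>:6443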


The API audit log contains the source IP, which should show you where the failed login request came from. Of course there is no user login, as the bearer token was not resolved to any existing OpenShift user identity.

Remember that in OpenShift, API requests arrive via a load balancer, hence the load balancer should retain the client IP when proxying requests to the API server.

If you would like to automate the detection of failed logins in your OpenShift clusters, you should leverage third-party solutions which can automatically filter the logs described above. One option you can try is my elastalert-ocp project; however, it has some drawbacks, as described in its README file.
