
Camel Kafka - Spring Boot example

Abstract

An example that shows how to authenticate against a Kafka cluster using the OpenShift OAuth server, via the custom resources managed by the Streams for Apache Kafka operator.

Introduction

OpenShift ships an embedded OAuth server, so this scenario shows how to integrate the OCP OAuth server with a Kafka cluster: the listener is configured to use the OAuth mechanism with the embedded server and OCP service account tokens (see the Documentation).

Trying out the example on OpenShift

Prerequisites

First, create a new OpenShift project:

oc new-project kafka-oauth-ocp

Install the Streams for Apache Kafka operator in the created namespace using OperatorHub (see the Documentation).
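If you prefer the CLI over the OperatorHub UI, a sketch of the equivalent resources follows; the package name, channel, and catalog source are assumptions that depend on your cluster's catalog, so verify them before applying:

```yaml
# Hypothetical CLI alternative to the OperatorHub UI install.
# The package name (amq-streams), channel, and source below depend on
# your catalog; check them with `oc get packagemanifests`.
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: kafka-oauth-ocp-og
  namespace: kafka-oauth-ocp
spec:
  targetNamespaces:
    - kafka-oauth-ocp
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: amq-streams
  namespace: kafka-oauth-ocp
spec:
  channel: stable
  name: amq-streams
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```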

Deploy Kafka cluster

Deploy the Kafka cluster in KRaft mode, with an OAuth 2.0 listener configuration that uses the OCP API server as the authorization server:

cat << EOF | oc apply -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: multirole
  labels:
    strimzi.io/cluster: example-kafka-cluster
  namespace: kafka-oauth-ocp
spec:
  roles:
    - controller
    - broker
  storage:
    type: ephemeral
    kraftMetadata: shared
  replicas: 1
---
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: example-kafka-cluster
  namespace: kafka-oauth-ocp
  annotations:
    strimzi.io/kraft: enabled
    strimzi.io/node-pools: enabled
spec:
  kafka:
    logging:
      type: inline
      loggers:
        kafka.root.logger.level: INFO
        log4j.logger.kafka.request.logger: INFO
        log4j.logger.io.strimzi.kafka.oauth: DEBUG
        log4j.logger.kafka.authorizer.logger: DEBUG
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      default.replication.factor: 1
      min.insync.replicas: 1
      transaction.state.log.min.isr: 1
    template:
      kafkaContainer:
        env:
          - name: OAUTH_SSL_TRUSTSTORE_LOCATION
            value: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          - name: OAUTH_SSL_TRUSTSTORE_TYPE
            value: PEM
    authorization:
      type: simple
    storage:
      type: ephemeral
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: oauth
          validIssuerUri: 'https://kubernetes.default.svc'
          jwksEndpointUri: 'https://kubernetes.default.svc.cluster.local/openid/v1/jwks'
          serverBearerTokenLocation: /var/run/secrets/kubernetes.io/serviceaccount/token
          checkAccessTokenType: false
          includeAcceptHeader: false
          userNameClaim: >-
            ['kubernetes.io'].['serviceaccount'].['name']
          maxSecondsWithoutReauthentication: 3600
          customClaimCheck: >-
            @.['kubernetes.io'] && @.['kubernetes.io'].['namespace'] in
            ['kafka-oauth-ocp']
  entityOperator:
    topicOperator: {}
    userOperator: {}
EOF
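Before moving on, you can wait for the operator to reconcile the cluster; a sketch using `oc wait` (the timeout is an arbitrary choice):

```shell
# Block until the Strimzi operator reports the Kafka cluster as Ready.
oc wait kafka/example-kafka-cluster \
  --for=condition=Ready --timeout=300s \
  -n kafka-oauth-ocp
```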

Create the KafkaUser that holds the ACLs for the service account (the name of the KafkaUser must match the service account name):

cat << EOF | oc apply -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  labels:
    strimzi.io/cluster: example-kafka-cluster
  name: kafka-client-sa
  namespace: kafka-oauth-ocp
spec:
  authorization:
    type: simple
    acls:
      - host: '*'
        operations:
          - Read
          - Describe
          - Write
          - Create
        resource:
          name: helloWorld
          patternType: literal
          type: topic
      - resource:
          type: group
          name: '*'
          patternType: literal
        operations:
          - Read
        host: '*'
EOF

How to run

The application authenticates with a dedicated service account, which needs to be created first:

oc create sa kafka-client-sa

The application is deployed using the openshift-maven-plugin that takes care of creating all the necessary OpenShift resources.

Simply use the following command to deploy the application:

mvn clean install -Popenshift

Once the application pod reaches the Ready state, the pod log shows the expected output: the messages received by the Kafka consumer.
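To follow the consumer output, tail the pod log; the deployment name below is hypothetical (it comes from the Maven artifactId, so adjust it to whatever openshift-maven-plugin created in your project):

```shell
# Stream the application log; replace the deployment name with yours.
oc logs -f deployment/camel-example-kafka-oauth -n kafka-oauth-ocp
```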

How it works

The Kafka cluster exposes a listener that requires a valid client Bearer token for authentication, and the broker verifies that token by querying the OAuth server (through the jwksEndpointUri property). The broker itself authenticates to the OAuth server with the local service account token mounted in the Kafka cluster pod, configured via the serverBearerTokenLocation property. Note that the issuer can differ (e.g. on a managed, non on-prem cloud), so make sure validIssuerUri matches the iss claim of the client token. The TLS certificates exposed by the OCP API server are validated against the CA PEM file mounted by default in the pod, and the Kafka OAuth properties are injected as environment variables using the pod template.

The application, in turn, authenticates to the Kafka cluster using its own service account token, configured in the camel.component.kafka.sasl-jaas-config property. To trust the TLS certificates of the Kafka cluster, the generated cluster CA secret is mounted in the application pod, and the Camel properties are injected as environment variables in src/main/jkube/deployment.yaml.
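A minimal sketch of what that client-side configuration could look like, assuming the Strimzi OAuth client callback handler and the default service account token mount path; the broker address and exact values in this example's own application.properties may differ:

```properties
# Sketch of a Camel Kafka OAuth client configuration (assumed values).
camel.component.kafka.brokers=example-kafka-cluster-kafka-bootstrap:9093
camel.component.kafka.security-protocol=SASL_SSL
camel.component.kafka.sasl-mechanism=OAUTHBEARER
camel.component.kafka.sasl-jaas-config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  oauth.access.token.location="/var/run/secrets/kubernetes.io/serviceaccount/token";
camel.component.kafka.additional-properties[sasl.login.callback.handler.class]=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
```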

The server completes the login by identifying the user as defined by the userNameClaim property, i.e. the token claim used to determine the user to be authorized (in this example, the username is the name of the service account).
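For illustration, a truncated, hypothetical payload of a bound service account token, showing the nested claim that userNameClaim points at and the namespace claim checked by customClaimCheck:

```json
{
  "iss": "https://kubernetes.default.svc",
  "sub": "system:serviceaccount:kafka-oauth-ocp:kafka-client-sa",
  "kubernetes.io": {
    "namespace": "kafka-oauth-ocp",
    "serviceaccount": {
      "name": "kafka-client-sa"
    }
  }
}
```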

Once authentication has completed, the Kafka cluster verifies the permissions on the resources through the KafkaUser custom resource whose name matches the authenticated username.

(overview diagram)

Help and contributions

If you hit any problem using Camel or have some feedback, please let us know.

We also love contributors, so get involved :-)

The Camel riders!