TLS@Kafka: What’s not in the manual

Gernot Pfanner
5 min read · Apr 15, 2021

Securing a Kafka connection with SSL encryption is not such a fun job. In this short guide, I will give a brief outline of the necessary tasks as well as my personal experiences. Concretely, we will consider a setup of Dockerized Kafka brokers with Spring Boot clients, including the case where the key material is managed in the Azure cloud.

Server configuration

To start, you have to configure the brokers to accept SSL connections from the clients. The necessary steps are described very nicely in the official Confluent documentation [1, 2]. First you create a keystore and a truststore for the broker and the client, respectively. Loosely speaking, a keystore certifies your own identity, whereas a truststore does the same for other parties, i.e. it specifies whom to trust [3]. In the second step, you generate a self-signed root certificate, which you then import into the broker and client truststores.
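Condensed from the tutorial [2], the whole procedure boils down to a handful of keytool/openssl calls. The following is a minimal sketch; file names, aliases and the validity period are placeholders:

# 1. Create a keystore with the broker's key pair
keytool -keystore kafka.server.keystore.jks -alias localhost -validity 365 -genkey -keyalg RSA

# 2. Generate a self-signed root certificate (CA)
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365

# 3. Import the CA into the broker and client truststores
keytool -keystore kafka.server.truststore.jks -alias CARoot -import -file ca-cert
keytool -keystore kafka.client.truststore.jks -alias CARoot -import -file ca-cert

# 4. Sign the broker certificate with the CA and import the chain into the keystore
keytool -keystore kafka.server.keystore.jks -alias localhost -certreq -file cert-file
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial
keytool -keystore kafka.server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore kafka.server.keystore.jks -alias localhost -import -file cert-signed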

In doing so, you should consider the following things:

  • The process is rather tedious and may have to be repeated later on (e.g. if the root certificate expires). Consequently, it is a good idea to automate these steps with a (shell/Python) script.
  • In a multi-broker scenario, it is convenient to use a common DNS alias for all brokers (keytool -keystore server.keystore.jks -alias <DNS-Alias> ...), which you can then use as a reference in your client applications.
  • In a Docker environment, injecting the SSL settings through environment variables can be really painful. First, the names of these parameters do not match their counterparts in the configuration file. Moreover, the keystore and truststore have to be located in /etc/kafka/secrets, with the password being specified in a text file. For example, if you mount them from an external directory (e.g. ./keystores) to /etc/kafka/secrets, the blueprint for the docker-compose.yml reads as follows [4]:
  kafka:
    image: confluentinc/cp-kafka
    container_name: kafka
    hostname: kafka
    volumes:
      ...
      - ./keystores/:/etc/kafka/secrets
    environment:
      ...
      KAFKA_LISTENERS: PLAINTEXT://kafka:9092,SSL://kafka:9094
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,SSL://kafka:9094
      KAFKA_SSL_KEYSTORE_FILENAME: kafka.server.keystore
      KAFKA_SSL_KEYSTORE_CREDENTIALS: password.txt
      KAFKA_SSL_KEY_CREDENTIALS: password.txt
      KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.server.truststore
      KAFKA_SSL_TRUSTSTORE_CREDENTIALS: password.txt
      KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM:
      KAFKA_SSL_CLIENT_AUTH: requested
      KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SSL
    depends_on:
      - zookeeper
    networks:
      - testnet

Please note that

  • Removing the PLAINTEXT listener as well as requiring SSL client authentication (KAFKA_SSL_CLIENT_AUTH: required) will block unencrypted connections. However, the unencrypted listener can be useful in some migration scenarios (e.g. if not all of your clients are set up for SSL encryption) or as a fallback option (during the evaluation phase).
  • Leaving the “identification algorithm” setting blank turns off hostname verification. This is recommended as an initial SSL setting but should be reconsidered later on to achieve an advanced level of security (see the sketch below).
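Regarding the last point: once you enable hostname verification, the broker certificate has to contain the DNS names (or the common alias from above) under which your clients connect. A minimal sketch using keytool's SAN extension; the host names are hypothetical:

keytool -keystore kafka.server.keystore.jks -alias kafka -validity 365 -genkey -keyalg RSA \
  -dname "CN=kafka.example.com" \
  -ext SAN=DNS:kafka.example.com,DNS:broker1.example.com,DNS:broker2.example.com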

Client configuration

The basic client configuration for the kafka-console-producer/kafka-console-consumer is described in the official tutorial [2]. These tools are also very useful for testing your truststores and keystores.
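For a quick smoke test against the SSL listener (port 9094 as configured above), you can write the client settings into a properties file and pass it to the console tools. Paths and passwords are placeholders:

# client-ssl.properties
security.protocol=SSL
ssl.truststore.location=/app/kafka_client.truststore
ssl.truststore.password=mypassword
ssl.keystore.location=/app/kafka_client.keystore
ssl.keystore.password=mypassword
ssl.key.password=mypassword
ssl.endpoint.identification.algorithm=

# produce and consume over the SSL listener
kafka-console-producer --bootstrap-server kafka:9094 --topic test --producer.config client-ssl.properties
kafka-console-consumer --bootstrap-server kafka:9094 --topic test --consumer.config client-ssl.properties --from-beginning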

For the configuration of your Spring Boot applications, you have to consider the following two scenarios separately:

  • The consumer/producer is configured and managed by Spring Boot.
  • There is an explicit configuration class for the consumer/producer (annotated with @Configuration).

In the first case, you only have to specify the right set of parameters, which read (again in the notation of the injected environment variables):

SPRING_KAFKA_PROPERTIES_SSL_TRUSTSTORE_PASSWORD mypassword
SPRING_KAFKA_PROPERTIES_SSL_TRUSTSTORE_LOCATION /app/kafka_client.truststore
SPRING_KAFKA_PROPERTIES_SSL_KEYSTORE_PASSWORD mypassword
SPRING_KAFKA_PROPERTIES_SSL_KEYSTORE_LOCATION /app/kafka_client.keystore
SPRING_KAFKA_PROPERTIES_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM
SPRING_KAFKA_PROPERTIES_SECURITY_PROTOCOL SSL

On the other hand, if you have implemented an explicit configuration for the consumer/producer, you also need to adapt the corresponding class so that the SSL parameters are passed on to the underlying consumer/producer factories [5].

Debugging

The real challenge with SSL connections comes when things do not work. First, please note that only a Spring Boot Kafka consumer will show connection errors on startup (so whenever possible, have a look at those first). Secondly, most of the time you will get cryptic error messages, and debugging can be quite cumbersome. To gain more insight into the actual problem, you can start the application with special debug flags [6]:

java -Djavax.net.debug=ssl,handshake \
  -Djavax.net.ssl.keyStore=/app/kafka_client.keystore \
  -Djavax.net.ssl.keyStorePassword=<password> \
  -Djavax.net.ssl.trustStore=/app/kafka_client.truststore \
  -Djavax.net.ssl.trustStorePassword=<password> \
  -Djavax.net.ssl.HostnameVerifier=false \
  -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true <program>

Additionally, it is often a good idea to check the key material itself by printing the contents of the keystore/truststore [7]:

keytool -list -v -keystore keystore.jks
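Another sanity check that works independently of the JVM is to inspect the certificate chain the broker actually presents on its SSL listener (again port 9094 from the compose file above):

openssl s_client -connect kafka:9094 -showcerts < /dev/null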

Key- and Truststores in the Azure cloud

There are several ways to use Kafka in the Azure cloud. For example, you can natively use Confluent Cloud [8], which has just released Azure Private Link [9] for securing network connectivity. Alternatively, if you decide to go the Docker way, you have to consider the location from which you retrieve your keystores and truststores. In the case of Azure, that would be the Key Vault [10]. However, you cannot store files in the vault, only secrets (key/value pairs). This means that you have to transform your key-/truststore into a base64-encoded string, which can readily be done with the following Linux command:

base64 /app/kafka_client.truststore | tr -d "\n"; echo

To use your key-/truststore in your application, you have to retrieve the corresponding strings from the Key Vault (e.g. as a $keyvalue environment variable) and then decode them back into a keystore and truststore file respectively upon application start:

echo -n "$keyvalue" | base64 --decode > /app/kafka_client.truststore
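Putting both steps together with the Azure CLI, the round trip looks roughly as follows (the vault and secret names are hypothetical):

# upload the base64-encoded truststore as a secret
az keyvault secret set --vault-name my-vault --name kafka-client-truststore \
  --value "$(base64 /app/kafka_client.truststore | tr -d '\n')"

# on application start: retrieve and decode it again
keyvalue=$(az keyvault secret show --vault-name my-vault --name kafka-client-truststore --query value -o tsv)
echo -n "$keyvalue" | base64 --decode > /app/kafka_client.truststore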

However, if for some reason your files are larger than 25 KB, this approach will not work because of the size limit of Key Vault objects [11]. There are some funny workarounds (such as concatenating multiple Key Vault objects), but they are also a little bit risky. So please use them only if you need the thrill. There are also some more reasonable solutions, but they come with drawbacks as well. For example, you can ship your trust-/keystores along with the Docker container. That is certainly convenient; however, maintenance can become tedious if you have dozens of (microservice) containers (and e.g. want to replace these files). So the overall moral here is that it is worth putting some thought into a scalable, resilient mechanism.

References

[1] https://docs.confluent.io/platform/current/kafka/authentication_ssl.html

[2] https://docs.confluent.io/platform/current/security/security_tutorial.html#generating-keys-certs

[3] https://www.baeldung.com/java-keystore-truststore-difference

[4] https://stackoverflow.com/questions/53968949/keystore-jks-exists-failed-exited-with-code-1-662-confluent-kafka

[5] https://codingnconcepts.com/spring-boot/configure-kafka-producer-and-consumer/

[6] https://access.redhat.com/solutions/973783

[7] https://www.sslshopper.com/article-most-common-java-keytool-keystore-commands.html

[8] https://www.confluent.de/confluent-cloud/

[9] https://www.confluent.io/blog/how-to-set-up-secure-networking-in-confluent-with-azure-private-link/

[10] https://azure.microsoft.com/de-de/services/key-vault/

[11] https://social.technet.microsoft.com/wiki/contents/articles/52480.azure-key-vault-overview.aspx
