Setting Up a Single-Node Kafka Cluster using KRaft Mode – No More Zookeeper Dependency

Learn how to set up a Kafka cluster with a single node that acts as both controller and broker using Docker. This single-node cluster will be configured using Apache Kafka version 3.7.1 in the new Kafka Raft (KRaft) mode, without the traditional Zookeeper dependency, which is being phased out starting with Kafka version 4.0.

While a single-node setup is not ideal for production due to the lack of fault tolerance and scalability, it is a convenient and simpler option for development, testing, and experimentation.

If you are looking for a multi-node setup, check out my other blog, which describes setting up a 6-node Kafka cluster with 3 dedicated controllers and 3 dedicated brokers – Setting Up a Multi-Node KRaft-based Kafka Cluster – A Practical Guide

Introduction to Kafka Raft (KRaft)

Apache Kafka is a powerful distributed event streaming platform that can handle high throughput and scalability. One of the newer features in the Kafka ecosystem is KRaft (Kafka Raft Consensus Protocol), which simplifies metadata and cluster management without relying on Zookeeper.

Overview of the Cluster

This cluster will consist of a single node that functions as both the controller and the broker.

The cluster at its simplest will look like this –

Things to Note

  • Simple setup with minimum configuration – This blog focuses on setting up a working Kafka cluster with KRaft mode and the minimum configuration it requires. The goal is to help readers get a functional cluster up and running quickly, without diving into advanced configurations or handling all potential corner cases.
  • No SSL, Authentication, Authorization – This setup does not include SSL, authentication, or authorization configurations to keep the focus on getting a functional Kafka cluster running in KRaft mode.
  • No persistent storage – This setup does not include persistent storage, meaning data will not survive a container restart.
  • Host network – This setup uses the host network for simplicity, which allows all containers to share the same network stack as the host machine. While this simplifies configuration, it is generally recommended to use a dedicated network setup for better isolation and security in production environments.
  • Listener ports – In this setup, the single node will have 2 listeners – 1 for controller traffic, listening on port 29091, and the other for broker/external client traffic, listening on port 29092.

Prerequisites for Creating the Cluster

  1. Docker – Before diving into the setup, ensure you have Docker installed on your machine. Familiarity with Docker commands will be advantageous.
  2. Apache Kafka Docker Image – This setup uses Docker for deploying all controller and broker nodes of the cluster. It uses the Apache Kafka Docker image, version 3.7.1, which can be pulled from Docker Hub using the following command –
    • docker pull apache/kafka:3.7.1

Step-by-Step Guide to Deploying the Kafka Cluster

Step 1 – Generate a Cluster ID

The Cluster ID serves as a globally unique identifier for a Kafka cluster, distinguishing it from other clusters. All controllers and brokers must share the same Cluster ID to function as part of the same cluster.

Generate a cluster ID using the kafka-storage.sh script with the random-uuid argument as follows –

docker run --rm \
apache/kafka:3.7.1 \
/opt/kafka/bin/kafka-storage.sh random-uuid

This will return a random UUID such as “IrzuggcFT-mWom7mj7PgtA”. We will use this UUID as the cluster ID in the rest of the steps.
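If you are curious about the format, the generated ID is a URL-safe, base64-encoded 16-byte value (22 characters). As an illustration only (use kafka-storage.sh for real clusters), an ID of the same shape can be produced locally:

```shell
# Illustration only: build a 22-character, URL-safe base64 ID from 16 random
# bytes – the same shape as the output of kafka-storage.sh random-uuid.
CLUSTER_ID=$(head -c 16 /dev/urandom | base64 | tr '+/' '-_' | tr -d '=\n')
echo "$CLUSTER_ID"
```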

Step 2 – Deploy the Controller cum Broker Node

Deploy the one and only node of this single-node cluster using the following Docker command –

docker run --name kafka_single_node \
--network host \
-e KAFKA_NODE_ID=1 \
-e CLUSTER_ID=IrzuggcFT-mWom7mj7PgtA \
-e KAFKA_PROCESS_ROLES=broker,controller \
-e KAFKA_CONTROLLER_LISTENER_NAMES=CONTROLLER \
-e KAFKA_INTER_BROKER_LISTENER_NAME=BROKER \
-e KAFKA_LISTENERS=CONTROLLER://localhost:29091,BROKER://localhost:29092 \
-e KAFKA_ADVERTISED_LISTENERS=BROKER://localhost:29092 \
-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,BROKER:PLAINTEXT \
-e KAFKA_CONTROLLER_QUORUM_VOTERS=1@localhost:29091 \
apache/kafka:3.7.1

Explanation of the environment variables –

  • KAFKA_NODE_ID – Unique ID for each node. See official documentation here.
  • CLUSTER_ID – Unique ID for the cluster. Note that this is mandatory when running Kafka in KRaft mode, even if there is just 1 node in the cluster.
  • KAFKA_PROCESS_ROLES – For a single-node setup, this should be “broker,controller” since the same node plays both these roles. See official documentation here.
  • KAFKA_CONTROLLER_LISTENER_NAMES – Name of the listener used for all controller traffic. This could be any name of your choice provided you use the same name in the dependent config parameters (such as KAFKA_LISTENERS). See official documentation here.
  • KAFKA_INTER_BROKER_LISTENER_NAME – Name of the listener that will be used for inter-broker communication such as during data replication. Note that in case of a single-node cluster, even though there is just 1 node, this configuration is mandatory. See official documentation here.
  • KAFKA_LISTENERS – The host and port on which the node will be listening for requests. Note that for the current setup, we have 2 listeners – 1 for controller and the other for broker. The word “CONTROLLER” here is the name of the listener and is defined in the config parameter KAFKA_CONTROLLER_LISTENER_NAMES. Similarly the word “BROKER” here is the name of the inter-broker listener and is defined in the config parameter KAFKA_INTER_BROKER_LISTENER_NAME. See official documentation here.
  • KAFKA_ADVERTISED_LISTENERS – For the current setup, this is the same as KAFKA_LISTENERS, however it can be different depending on the network setup. Note that for a single-node setup, we just need the BROKER listener here, since controller traffic is always only internal to the cluster, and is not advertised to the external world/clients. See official documentation here.
  • KAFKA_LISTENER_SECURITY_PROTOCOL_MAP – Security protocol to use for each listener. For this setup, we will be using PLAINTEXT (i.e. non-encrypted traffic) for both CONTROLLER and BROKER listeners. See official documentation here.
  • KAFKA_CONTROLLER_QUORUM_VOTERS – This property lists the nodes responsible for storing and managing the cluster’s metadata using the Raft consensus protocol. The nodes are listed using their node ID and their address in host/port format. See official documentation here.
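For reference, the KAFKA_* environment variables above map onto the following server.properties entries inside the container (a sketch of the resulting broker config – you do not need to create this file yourself for this setup; CLUSTER_ID is passed to the storage formatting step rather than appearing as a server.properties key):

```properties
node.id=1
process.roles=broker,controller
controller.listener.names=CONTROLLER
inter.broker.listener.name=BROKER
listeners=CONTROLLER://localhost:29091,BROKER://localhost:29092
advertised.listeners=BROKER://localhost:29092
listener.security.protocol.map=CONTROLLER:PLAINTEXT,BROKER:PLAINTEXT
controller.quorum.voters=1@localhost:29091
```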

That’s it for the cluster setup. Next you may want to verify the status of the cluster by following the next step.
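As an aside, if you prefer Docker Compose over a long docker run command, the same container can be described as a Compose file (a sketch; the service name and file layout are my own choices):

```yaml
services:
  kafka_single_node:
    image: apache/kafka:3.7.1
    network_mode: host
    environment:
      KAFKA_NODE_ID: 1
      CLUSTER_ID: IrzuggcFT-mWom7mj7PgtA
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_INTER_BROKER_LISTENER_NAME: BROKER
      KAFKA_LISTENERS: CONTROLLER://localhost:29091,BROKER://localhost:29092
      KAFKA_ADVERTISED_LISTENERS: BROKER://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,BROKER:PLAINTEXT
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@localhost:29091
```

Save it as docker-compose.yml and start the node with docker compose up.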

Step 3 – Verify Cluster Status

Verify the status of the cluster by running the kafka-metadata-quorum.sh script with the describe --status arguments as follows –

docker run --rm \
--network host \
apache/kafka:3.7.1 \
/opt/kafka/bin/kafka-metadata-quorum.sh \
--bootstrap-server localhost:29092 \
describe --status

Note that for the --bootstrap-server argument, we always provide the connection details of the BROKER (host and listener port).

Running the above command returns the current status of the cluster, which includes the Cluster ID, the ID of the current leader controller, the IDs of all the controller and broker nodes, etc., as follows –

ClusterId:               IrzuggcFT-mWom7mj7PgtA
LeaderId:                1
LeaderEpoch:             1
HighWatermark:           16
MaxFollowerLag:          0
MaxFollowerLagTimeMs:    0
CurrentVoters:           [1]
CurrentObservers:        []

Here, “CurrentVoters” refers to the quorum voters, i.e. controller nodes, and “CurrentObservers” refers to the broker nodes. Note that the “CurrentObservers” list is empty since there is only 1 node and that node is actively working as a voter.
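If you ever want to script against this output, for example in a simple health check, the fields are easy to extract with standard tools. A minimal sketch using a saved copy of the output above (the file path is arbitrary; in practice you would pipe the kafka-metadata-quorum.sh output into the file):

```shell
# Save a sample of the describe --status output shown above.
cat > /tmp/quorum_status.txt <<'EOF'
ClusterId:               IrzuggcFT-mWom7mj7PgtA
LeaderId:                1
CurrentVoters:           [1]
EOF

# Extract the node ID of the current quorum leader.
LEADER_ID=$(awk '/^LeaderId:/ {print $2}' /tmp/quorum_status.txt)
echo "Leader is node $LEADER_ID"
```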

Next, you might also want to run a console-consumer and a console-producer to test the cluster by following the optional steps below.

Optional Step 4 – Run a Console-Producer to Send Messages

Run a console-producer application using the following Docker command. Note that the same Docker image used for creating the cluster can be used for running clients such as the console-producer and console-consumer. No additional setup is required to run console clients.

docker run --rm -it \
--network host \
apache/kafka:3.7.1 \
/opt/kafka/bin/kafka-console-producer.sh --topic MY_FIRST_TOPIC \
--bootstrap-server localhost:29092

Once you run the above command, it returns a prompt where you can enter messages. Note the -it flag in the docker run command, which runs the container in interactive mode with a TTY. This is especially important for the console producer, since it prompts you and lets you enter new messages interactively.

Note that by default Kafka automatically creates topics when they are referenced by a producer or a consumer. This is governed by the config parameter auto.create.topics.enable, which is set to true by default.
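If you prefer to create topics explicitly instead, the automatic creation can be switched off via the corresponding broker setting (as an environment variable in the docker run command above, this would be KAFKA_AUTO_CREATE_TOPICS_ENABLE=false):

```properties
# Broker-side setting; defaults to true.
auto.create.topics.enable=false
```

Topics can then be created explicitly with the kafka-topics.sh script, which ships in the same Docker image.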

If you prefer a programmatic Kafka producer using Java instead of a console producer, you can find a step-by-step guide on my other blog here – How to Build Your First Kafka Producer: A Step-by-Step Tutorial.

Optional Step 5 – Run a Console-Consumer to Receive Messages

In another terminal, start a console-consumer client using the Docker command below –

docker run --rm -it \
--network host \
apache/kafka:3.7.1 \
/opt/kafka/bin/kafka-console-consumer.sh --topic MY_FIRST_TOPIC \
--bootstrap-server localhost:29092 --partition 0

If everything is okay, you should start seeing messages sent by the console-producer application.

If you followed the above steps, there are no errors in the logs, and the console-consumer is able to receive messages sent by the console-producer, congratulations – you have deployed a functional single-node Kafka cluster running in KRaft mode with the minimum required configuration, ready to handle basic workloads.


Thank You!

Thank you for reading! I hope this guide helped you set up your Kafka cluster with ease. If you have any questions, feedback, or suggestions, feel free to share your thoughts in the comments. I’d love to hear what you think!
