Dockerize Kafka Cluster for Debugging

Damindu Lakmal
2 min read · Nov 16, 2022


Setting up a Kafka cluster locally is a bit complicated because of maintainability, scalability, and networking concerns. Docker gives our Kafka setup a well-defined structure.

Kafka is a real-time data streaming platform that enables organisations to manage and scale their data while providing a reliable, high-performance system.

Docker enables you to separate your applications from your infrastructure, while still letting containers communicate easily with the host machine.

Advantages

  • Check the status of the running cluster
  • Uniquely identify each broker as a Docker container
  • Let the cluster communicate within the same network

Docker Setup

Install Docker on your local machine and check whether it is properly configured.

docker version

Docker Compose should be installed too, in order to execute the compose file that defines the services, networks, and volumes for our Docker application.

docker-compose version

Create Docker Network

The Kafka cluster is configured on a single network in the Docker environment. ZooKeeper and the brokers can communicate securely and easily while they are within the same network. Let’s create a network named kafka:

docker network create kafka

Check the Docker networks:

docker network ls
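Once the network exists, you can inspect it to confirm its driver and, later, which containers have joined it. A quick sketch, assuming the kafka network was created as above and a local Docker daemon is running:

```shell
# Show the network's name and driver (bridge by default)
docker network inspect kafka --format '{{.Name}} ({{.Driver}})'

# List every container attached to the network (empty until the cluster is up)
docker network inspect kafka --format '{{range .Containers}}{{.Name}} {{end}}'
```

After `docker-compose up`, the second command should list the zookeeper and broker containers.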

Create Compose File

version: "2"

services:
  zookeeper:
    image: bitnami/zookeeper:3.8.0-debian-11-r28
    container_name: "zookeeper"
    network_mode: kafka # network name
    ports:
      - "2181:2181"
    volumes:
      - "${ZOOKEEPER_DATA}:/bitnami" # ZooKeeper data host path, configured via ZOOKEEPER_DATA
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: docker.io/bitnami/kafka:3.1.1
    container_name: "kafka-01"
    hostname: "kafka-01"
    network_mode: kafka # network name
    ports:
      - "9092:9092"
    volumes:
      - "${KAFKA_DATA}:/bitnami" # Kafka broker 1 data host path, configured via KAFKA_DATA
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
  kafka-2:
    image: docker.io/bitnami/kafka:3.1.1
    container_name: "kafka-02"
    hostname: "kafka-02"
    network_mode: kafka # network name
    ports:
      - "9093:9093" # host port matches the listener port below
    volumes:
      - "${KAFKA_2_DATA}:/bitnami" # Kafka broker 2 data host path, configured via KAFKA_2_DATA
    environment:
      - KAFKA_BROKER_ID=2
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9093
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9093
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
      - kafka
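Once the cluster is up, you can smoke-test it from inside a broker container. This is a sketch assuming the Bitnami image’s Kafka CLI scripts under /opt/bitnami/kafka/bin and the container names from the compose file; the topic name debug-test is arbitrary:

```shell
# Create a single-partition test topic on broker 1 (container kafka-01)
docker exec kafka-01 /opt/bitnami/kafka/bin/kafka-topics.sh \
  --create --topic debug-test --partitions 1 --replication-factor 1 \
  --bootstrap-server 127.0.0.1:9092

# List topics to confirm the broker answers
docker exec kafka-01 /opt/bitnami/kafka/bin/kafka-topics.sh \
  --list --bootstrap-server 127.0.0.1:9092
```

A replication factor of 1 keeps the check independent of inter-broker connectivity; with the advertised listeners above, cross-broker replication only resolves from the host side.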

The compose file defines three services: one ZooKeeper node and two Kafka brokers.

Key areas,

  • ZOOKEEPER_DATA, KAFKA_DATA and KAFKA_2_DATA should be configured as environment variables (the cluster’s data will be saved under those paths)
  • ZooKeeper runs on port 2181
  • Kafka broker 1 runs on port 9092
  • Kafka broker 2 runs on port 9093
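The three path variables can live in an env file next to the compose file. A minimal sketch, with hypothetical host paths that you should adjust to your machine:

```shell
# Create the host data directories for ZooKeeper and both brokers
mkdir -p ./data/zookeeper ./data/kafka-01 ./data/kafka-02

# Write the three variables the compose file expects into config.env
cat > config.env <<'EOF'
ZOOKEEPER_DATA=./data/zookeeper
KAFKA_DATA=./data/kafka-01
KAFKA_2_DATA=./data/kafka-02
EOF

# Sanity check: three variables defined
grep -c '=' config.env
```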

Create the cluster from the compose file:

docker-compose up -d 

If you need to pass an env file along with the compose file:

docker-compose --env-file config.env up -d 
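To confirm everything came up, check the service status and tail a broker’s logs. A sketch assuming the container names from the compose file:

```shell
# Show state and port mappings of all three services
docker-compose ps

# Tail broker 1's startup logs; a healthy broker logs a "started" message
docker logs --tail 20 kafka-01
```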

Summary

This was only an overview of dockerization. I’d recommend reading the Docker documentation to understand the notation, operators and underlying architecture. Docker is helpful for simplifying day-to-day development.

Thank you for reading this article. I hope you enjoyed it!
