Two links worth sharing:
https://juejin.im/post/6844903495670169607
https://www.zhihu.com/question/53331259
Spring-Kafka tutorials:
http://www.lxweimin.com/c/0c9d83802b0c
Setting up Kafka with Docker
1. Pull the zookeeper image:
docker pull wurstmeister/zookeeper
2. Pull the kafka image:
docker pull wurstmeister/kafka
3. Create and start the zookeeper container from the image. Either of the following equivalent forms works (-t / -it only allocate a TTY and are optional for a detached container):
docker run -d --name zookeeper -p 2181:2181 -t wurstmeister/zookeeper
docker run -itd --name zookeeper -p 2181:2181 wurstmeister/zookeeper
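A quick way to confirm zookeeper is up before moving on (plain docker commands; the exact log wording may vary by image version):
docker ps --filter name=zookeeper   # the container should show as Up with 2181 published
docker logs zookeeper | tail -n 20  # expect a line about binding to port 2181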
4. Create and start the kafka1 container. The options, one at a time:
docker run -d --name kafka # name the container runs under
-p 9092:9092 # port mapping
# in a kafka cluster every broker identifies itself with a unique BROKER_ID
-e KAFKA_BROKER_ID=0
# zookeeper connection string under which this kafka is managed, e.g. 192.168.56.122:2181/kafka
-e KAFKA_ZOOKEEPER_CONNECT=192.168.56.122:2181/kafka
# address and port the broker registers in zookeeper for clients to reach it
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.56.122:9092
# listener the broker binds to inside the container
-e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092
# mounting /etc/localtime (used in the second full command below) keeps the container clock in sync with the host
Putting it all together as a single ready-to-run command (replace the IP with your own host IP):
docker run -d --name kafka -p 9092:9092 -e KAFKA_BROKER_ID=0 -e KAFKA_ZOOKEEPER_CONNECT=47.98.128.88:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://47.98.128.88:9092 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 -t wurstmeister/kafka
Or, equivalently, using --link plus KAFKA_ADVERTISED_HOST_NAME / KAFKA_ADVERTISED_PORT and mounting /etc/localtime (the link target is the zookeeper container created in step 3):
docker run -d --name kafka --publish 9092:9092 --link zookeeper --env KAFKA_ZOOKEEPER_CONNECT=47.98.128.88:2181 --env KAFKA_ADVERTISED_HOST_NAME=47.98.128.88 --env KAFKA_ADVERTISED_PORT=9092 --volume /etc/localtime:/etc/localtime wurstmeister/kafka:latest
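Before adding more brokers it is worth checking that this one registered correctly. A minimal sketch, assuming the kafka scripts are on the PATH inside the wurstmeister/kafka image (if not, they live under /opt/kafka/bin) and using the 47.98.128.88:2181 zookeeper address from the command above:
docker logs kafka | tail -n 20   # look for the broker reporting that it started
# ask zookeeper for the current topic list (empty output is fine on a fresh install)
docker exec -it kafka kafka-topics.sh --zookeeper 47.98.128.88:2181 --list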
5. Create and start the kafka2 container:
docker run -d --name kafka2 -p 9093:9093 -e KAFKA_BROKER_ID=1 -e KAFKA_ZOOKEEPER_CONNECT=<host ip>:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://<host ip>:9093 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9093 -t wurstmeister/kafka
(Note: create as many brokers as you need by following the same pattern; only the published port and the broker_id have to change, as in the sketch below.)
If kafka is deployed across several servers, the brokers started on the other hosts likewise point at the host IP where the zookeeper service is running.
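For example, a hypothetical third broker would only change the published port and the broker id (the kafka3 name, port 9094 and id 2 below are illustrative, not from the commands above):
docker run -d --name kafka3 -p 9094:9094 -e KAFKA_BROKER_ID=2 -e KAFKA_ZOOKEEPER_CONNECT=<host ip>:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://<host ip>:9094 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9094 -t wurstmeister/kafka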
Basic kafka operations:
Create a topic
kafka-topics.sh --create --zookeeper ip:2181 --replication-factor 3 --partitions 4 --topic test1
List topics
kafka-topics.sh --list --zookeeper ip:2181
Describe a specific topic
kafka-topics.sh --zookeeper ip:2181 --topic test1 --describe
Console producer
kafka-console-producer.sh --broker-list ip:9092 --topic test1
Console consumer
kafka-console-consumer.sh --bootstrap-server ip:9092 --topic test1 --from-beginning
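Putting those commands together, a minimal round trip run through one broker container. This sketch assumes the single-broker setup from step 4 (container name kafka, host 47.98.128.88) and uses replication factor 1 with one partition so it works with a single broker; adjust names and addresses to your own setup:
# create a small test topic
docker exec -it kafka kafka-topics.sh --create --zookeeper 47.98.128.88:2181 --replication-factor 1 --partitions 1 --topic test1
# produce one message (the console producer reads from stdin)
docker exec -i kafka bash -c 'echo hello-kafka | kafka-console-producer.sh --broker-list 47.98.128.88:9092 --topic test1'
# read it back and exit after the first message
docker exec -it kafka kafka-console-consumer.sh --bootstrap-server 47.98.128.88:9092 --topic test1 --from-beginning --max-messages 1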
Note: the docker-compose approach below requires the docker-compose command to be installed; otherwise it will not be recognized.
1. In a directory of your choice, create a file named docker-compose.yml with the content below. (Change the IP addresses in it, 47.98.128.88 for kafka1 and 192.168.0.101 for kafka2, to your own host IP.)
version: '2'
services:
  zoo1:
    image: wurstmeister/zookeeper
    restart: unless-stopped
    hostname: zoo1
    ports:
      - "2181:2181"
    container_name: zookeeper
  # kafka version: 1.1.0
  # scala version: 2.12
  kafka1:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 47.98.128.88
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CREATE_TOPICS: "stream-in:2:1,stream-out:2:1"
    depends_on:
      - zoo1
    container_name: kafka1
  kafka2:
    image: wurstmeister/kafka
    ports:
      - "9093:9093"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.0.101
      KAFKA_ADVERTISED_PORT: 9093
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 2
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    depends_on:
      - zoo1
    container_name: kafka2
2. Start the stack with docker-compose:
docker-compose up -d
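Once the stack is up, a quick check that both brokers joined and the auto-created topics exist. A sketch assuming the service and container names from the compose file above (zoo1 resolves from inside the containers via the compose network; stream-in and stream-out come from KAFKA_CREATE_TOPICS):
docker-compose ps   # zookeeper, kafka1 and kafka2 should all be Up
# list topics through broker kafka1; stream-in and stream-out should appear
docker exec -it kafka1 kafka-topics.sh --zookeeper zoo1:2181 --list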