Kafka Environment Setup

 

I. Versions Used

kafka_2.10-0.10.2.1.tar 

zookeeper-3.4.5.tar


II. Environment Setup

Kafka depends on ZooKeeper, so set up ZooKeeper first.

1. ZooKeeper setup

First create a directory to hold the archives, then extract ZooKeeper:

root@VM-0-3-ubuntu:~# cd /wingcloud
root@VM-0-3-ubuntu:/wingcloud# ls
kafka_2.10-0.10.2.1.tar  zookeeper-3.4.5.tar
root@VM-0-3-ubuntu:/wingcloud# tar -xvf zookeeper-3.4.5.tar

After extraction, move the directory to /usr/local:

root@VM-0-3-ubuntu:/wingcloud# mv zookeeper-3.4.5 /usr/local/zk

Go into /usr/local/zk/conf and run the following:

root@VM-0-3-ubuntu:/usr/local/zk/conf# cp zoo_sample.cfg zoo.cfg
root@VM-0-3-ubuntu:/usr/local/zk/conf# vim zoo.cfg

In zoo.cfg, the only setting that needs to change is dataDir. Set it as follows, then save and exit.

dataDir=/usr/local/zk/data/

Go back to the zk directory and create the data directory:

root@VM-0-3-ubuntu:/usr/local/zk# mkdir data
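
For reference, with that single change a minimal single-node zoo.cfg looks roughly like this (every value except dataDir is the stock zoo_sample.cfg default):

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zk/data/
clientPort=2181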

Add zk to the PATH via an environment variable:

root@VM-0-3-ubuntu:/usr/local/zk# cd ..
root@VM-0-3-ubuntu:/usr/local# vim /etc/profile

Append the following two lines to /etc/profile:

ZK_HOME=/usr/local/zk
PATH=$ZK_HOME/bin:$PATH

Save, exit, and reload the profile:

root@VM-0-3-ubuntu:/usr/local# source /etc/profile
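
To confirm the PATH change took effect, you can check that the zk scripts now resolve (this check is an extra step, not part of the original write-up):

root@VM-0-3-ubuntu:/usr/local# which zkServer.sh
/usr/local/zk/bin/zkServer.sh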

Finally, start zk and check that it started successfully:

root@VM-0-3-ubuntu:/usr/local# zkServer.sh start
JMX enabled by default
Using config: /usr/local/zk/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
root@VM-0-3-ubuntu:/usr/local# zkServer.sh status
JMX enabled by default
Using config: /usr/local/zk/bin/../conf/zoo.cfg
Mode: standalone
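
As an extra sanity check, ZooKeeper answers the four-letter command ruok with imok (optional, assuming netcat is installed; not part of the original steps):

root@VM-0-3-ubuntu:/usr/local# echo ruok | nc 127.0.0.1 2181
imok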

2. Kafka setup

Extract the Kafka archive and move it to /usr/local:

root@VM-0-3-ubuntu:/wingcloud# tar -xvf kafka_2.10-0.10.2.1.tar
root@VM-0-3-ubuntu:/wingcloud# ls
kafka_2.10-0.10.2.1  kafka_2.10-0.10.2.1.tar  zookeeper-3.4.5.tar
root@VM-0-3-ubuntu:/wingcloud# mv kafka_2.10-0.10.2.1 /usr/local
root@VM-0-3-ubuntu:/wingcloud# cd /usr/local
root@VM-0-3-ubuntu:/usr/local# ls
bin                  include              man     src
elasticsearch-2.4.6  kafka_2.10-0.10.2.1  qcloud  yd.socket.server
etc                  lib                  sbin    zk
games                logstash             share   zookeeper.out

Modify server.properties:

root@VM-0-3-ubuntu:/usr/local# cd kafka_2.10-0.10.2.1
root@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1# ls
bin  config  libs  LICENSE  NOTICE  site-docs
root@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1# cd config
root@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1/config# ls
connect-console-sink.properties    consumer.properties
connect-console-source.properties  log4j.properties
connect-distributed.properties     producer.properties
connect-file-sink.properties       server.properties
connect-file-source.properties     tools-log4j.properties
connect-log4j.properties           zookeeper.properties
connect-standalone.properties
root@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1/config# vim server.properties

In server.properties, find the Log Basics comment and change log.dirs below it to the following:

log.dirs=/usr/local/kafka_2.10-0.10.2.1/data/kafka-logs

The ZooKeeper connection (zookeeper.connect) can also be changed in the same file; since ZooKeeper runs on this machine, the default localhost:2181 is fine. Save and exit.
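
For reference, the settings that matter for this single-broker setup end up looking roughly like this (broker.id and zookeeper.connect are the shipped defaults; only log.dirs is changed):

broker.id=0
log.dirs=/usr/local/kafka_2.10-0.10.2.1/data/kafka-logs
zookeeper.connect=localhost:2181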

Start Kafka:

root@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1/config# cd ..
root@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1# ls
bin  config  libs  LICENSE  NOTICE  site-docs
root@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1# cd bin
root@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1/bin# ls
connect-distributed.sh               kafka-replica-verification.sh
connect-standalone.sh                kafka-run-class.sh
kafka-acls.sh                        kafka-server-start.sh
kafka-broker-api-versions.sh         kafka-server-stop.sh
kafka-configs.sh                     kafka-simple-consumer-shell.sh
kafka-console-consumer.sh            kafka-streams-application-reset.sh
kafka-console-producer.sh            kafka-topics.sh
kafka-consumer-groups.sh             kafka-verifiable-consumer.sh
kafka-consumer-offset-checker.sh     kafka-verifiable-producer.sh
kafka-consumer-perf-test.sh          windows
kafka-mirror-maker.sh                zookeeper-security-migration.sh
kafka-preferred-replica-election.sh  zookeeper-server-start.sh
kafka-producer-perf-test.sh          zookeeper-server-stop.sh
kafka-reassign-partitions.sh         zookeeper-shell.sh
kafka-replay-log-producer.sh
root@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1/bin# ./kafka-server-start.sh  ../config/server.properties
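
kafka-server-start.sh keeps the broker in the foreground of this terminal. If you prefer to run it in the background, the script also accepts a -daemon flag:

root@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1/bin# ./kafka-server-start.sh -daemon ../config/server.properties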

Once the broker is up, the installation is done. Next, test it.

Open another terminal and log in to the Linux server.

In the new terminal, create a topic. First go into kafka's bin directory.

A quick explanation of the options: --zookeeper 127.0.0.1:2181 points at zk; --partitions 1 uses a single partition, since this is a single machine; --replication-factor 1 keeps one replica; --topic wingcloud names the topic wingcloud.

ubuntu@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1/bin$ ./kafka-topics.sh --create --zookeeper 127.0.0.1:2181 --partitions 1  --replication-factor 1 --topic wingcloud
Created topic "wingcloud".
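
If you want to double-check the topic, kafka-topics.sh can also list and describe it (optional, not in the original steps):

ubuntu@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1/bin$ ./kafka-topics.sh --list --zookeeper 127.0.0.1:2181
wingcloud
ubuntu@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1/bin$ ./kafka-topics.sh --describe --zookeeper 127.0.0.1:2181 --topic wingcloud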

Then start a producer. --broker-list 127.0.0.1:9092 points at the Kafka broker; for a cluster, list multiple brokers separated by commas:

ubuntu@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1/bin$ ./kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic wingcloud

Once the producer is running, open one more terminal.

In the new terminal, start a consumer:

ubuntu@VM-0-3-ubuntu:~$ cd /usr/local
ubuntu@VM-0-3-ubuntu:/usr/local$ ls
bin                  include              man     src
elasticsearch-2.4.6  kafka_2.10-0.10.2.1  qcloud  yd.socket.server
etc                  lib                  sbin    zk
games                logstash             share   zookeeper.out
ubuntu@VM-0-3-ubuntu:/usr/local$ cd kafka_2.10-0.10.2.1
ubuntu@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1$ ls
bin  config  data  libs  LICENSE  logs  NOTICE  site-docs
ubuntu@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1$ cd bin
ubuntu@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1/bin$ ./kafka-console-consumer.sh --zookeeper 127.0.0.1:2181 --topic wingcloud
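
By default the console consumer only shows messages produced after it starts; adding --from-beginning replays everything already in the topic:

ubuntu@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1/bin$ ./kafka-console-consumer.sh --zookeeper 127.0.0.1:2181 --topic wingcloud --from-beginning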

With the consumer running, test with some data.

Type a few lines in the producer terminal; if they show up in the consumer terminal, the test succeeded, as in the sample session below.
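
A run might look like this (the messages are made up for illustration):

# producer terminal
> hello wingcloud
> kafka test

# consumer terminal
hello wingcloud
kafka test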

That's it, the Kafka setup is complete.
