JEESZ Kafka Cluster Installation

1. Create a kafka directory under the root directory (on service1, service2, and service3)

[root@localhost /]# mkdir kafka

2. Upload the tarball to the service1 server via Xshell: put kafka_2.9.2-0.8.1.1.tgz in the /software directory

3. Remote-copy /software/kafka_2.9.2-0.8.1.1.tgz from service1 to service2 and service3

[root@localhost software]# scp -r /software/kafka_2.9.2-0.8.1.1.tgz [email protected]:/software/

[root@localhost software]# scp -r /software/kafka_2.9.2-0.8.1.1.tgz [email protected]:/software/
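The two scp commands above can also be wrapped in a small loop. The sketch below is a hypothetical helper (not part of the original guide); the copy command is parameterized via COPY_CMD so it can be dry-run locally with cp instead of scp.

```shell
# Hypothetical helper: push one file to several destinations.
# COPY_CMD defaults to scp but can be overridden (e.g. COPY_CMD=cp for a local dry run).
distribute() {
  local src="$1"; shift
  local cmd="${COPY_CMD:-scp}"
  local dest
  for dest in "$@"; do
    "$cmd" "$src" "$dest" || return 1
  done
}

# e.g.: distribute /software/kafka_2.9.2-0.8.1.1.tgz \
#         [email protected]:/software/ [email protected]:/software/
```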

4. Copy /software/kafka_2.9.2-0.8.1.1.tgz to the /kafka/ directory (run on service1, service2, and service3)

[root@localhost software]# cp /software/kafka_2.9.2-0.8.1.1.tgz /kafka/

5. Extract kafka_2.9.2-0.8.1.1.tgz (run on service1, service2, and service3)

[root@localhost /]# cd /kafka/

[root@localhost kafka]# tar -zxvf kafka_2.9.2-0.8.1.1.tgz
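Before extracting on each node it is worth confirming the upload was not truncated. This optional sketch (the helper name is made up) simply lists the archive, which makes tar exit non-zero on a corrupt .tgz.

```shell
# Hypothetical check: list the archive contents; tar fails if the .tgz is corrupt.
verify_tgz() {
  local archive="$1"
  if tar -tzf "$archive" > /dev/null 2>&1; then
    echo "OK: $archive"
  else
    echo "CORRUPT: $archive" >&2
    return 1
  fi
}

# e.g.: verify_tgz /kafka/kafka_2.9.2-0.8.1.1.tgz && tar -zxvf /kafka/kafka_2.9.2-0.8.1.1.tgz
```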

6. Create the Kafka message log directory (on service1, service2, and service3)

[root@localhost kafka]# mkdir kafkaLogs

7. Edit the Kafka configuration file (on service1, service2, and service3)

[root@localhost /]# cd /kafka/kafka_2.9.2-0.8.1.1/

[root@localhost kafka_2.9.2-0.8.1.1]# cd config/

[root@localhost config]# ls

consumer.properties log4j.properties producer.properties server.properties test-log4j.properties tools-log4j.properties zookeeper.properties

[root@localhost config]# vi server.properties

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0 --unique identifier; use a different integer on each of service1, service2, and service3

############################# Socket Server Settings #############################

# The port the socket server listens on
port=19092 --the TCP port this broker exposes; the default is 9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=192.168.2.213 --commented out by default; enable it and set this node's IP. If a hostname is used and DNS resolution fails, file handles leak. Do not underestimate even a one-in-ten-thousand failure rate: Kafka is fast enough to handle 100,000+ messages per second per partition of a topic, so at that rate roughly 10 handles would leak per second, quickly exceeding the Linux open-file limit and causing errors. Configuring an IP avoids DNS resolution entirely

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
#advertised.host.name=

# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=

# The number of threads handling network requests
num.network.threads=2 --number of network-processing threads; usually left at the default

# The number of threads doing disk I/O
num.io.threads=8 --number of I/O threads; this must be at least the number of directories in log.dirs

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=1048576 --outgoing messages are buffered and sent in batches once the buffer fills

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=1048576 --incoming messages are buffered and written to disk in batches

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600 --maximum size of a request sent to Kafka; must not exceed the Java heap size

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/kafka/kafkaLogs --separate multiple directories with commas

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=2 --default number of partitions per topic

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever either of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

message.max.bytes=5048576 --maximum size of a single message the broker will accept

default.replication.factor=2 --default replication factor; out of the box each message has only one copy, which is not safe, so set it to 2: if one replica of a partition fails, the other can still serve

replica.fetch.max.bytes=5048576 --maximum bytes a replica fetch request will pull per partition; keep it at least as large as message.max.bytes

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=536870912 --maximum size of a single log segment file on disk

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=60000

# By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires.
# If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction.
log.cleaner.enable=false --log compaction disabled

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=192.168.2.211:2181,192.168.2.212:2181,192.168.2.213:2181 --ZooKeeper addresses

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=1000000
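Since only broker.id and host.name differ between the three nodes, one way to keep the files consistent is to stamp the per-node values into a shared template. The sketch below is an assumed workflow, not part of the original guide (the function name and template path are made up).

```shell
# Hypothetical generator: substitute the two per-node values into a shared
# server.properties template. broker.id must be a unique integer per broker.
gen_broker_config() {
  local template="$1" out="$2" id="$3" ip="$4"
  sed -e "s/^broker.id=.*/broker.id=$id/" \
      -e "s/^host.name=.*/host.name=$ip/" \
      "$template" > "$out"
}

# e.g.: gen_broker_config server.properties.tmpl server.properties 1 192.168.2.212
```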

8. Start the Kafka service (from /kafka/kafka_2.9.2-0.8.1.1/bin, on all three nodes)

[root@localhost bin]# ./kafka-server-start.sh -daemon ../config/server.properties

[root@localhost bin]# jps

27413 Kafka

27450 Jps

17884 QuorumPeerMain
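jps only shows that the JVM is alive; to confirm the broker is actually accepting connections on its configured port (19092 above), you can poll the socket. This sketch is an addition to the original steps and relies on bash's /dev/tcp pseudo-device.

```shell
# Hypothetical readiness check: retry a TCP connect until it succeeds or we give up.
wait_for_port() {
  local host="$1" port="$2" retries="${3:-30}"
  local i
  for ((i = 0; i < retries; i++)); do
    # /dev/tcp is a bash feature; the fd closes when the subshell exits.
    if (exec 3<> "/dev/tcp/$host/$port") 2>/dev/null; then
      echo "up"
      return 0
    fi
    sleep 1
  done
  echo "down"
  return 1
}

# e.g.: wait_for_port 192.168.2.213 19092 && echo "broker ready"
```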

9. Verify the Kafka cluster

[root@localhost bin]# ./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 1 --topic test

Created topic "test".
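To double-check that the replication factor of 2 actually took effect, `./kafka-topics.sh --describe --zookeeper localhost:2181 --topic test` prints one line per partition, including a `Replicas:` field. The helper below is hypothetical (plain text parsing, not a Kafka tool) and counts the replicas in that output.

```shell
# Hypothetical parser: read `kafka-topics.sh --describe` output on stdin and
# print the replica count of the first partition line.
replica_count() {
  awk -F 'Replicas: ' 'NF > 1 { split($2, f, "\t"); print split(f[1], a, ","); exit }'
}

# e.g.: ./kafka-topics.sh --describe --zookeeper localhost:2181 --topic test | replica_count
```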

10. Start the console producer on service1

[root@localhost bin]# ./kafka-console-producer.sh --broker-list 192.168.2.211:9092 --topic test

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".

SLF4J: Defaulting to no-operation (NOP) logger implementation

SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

11. Start the console consumer on service2

[root@localhost bin]# ./kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".

SLF4J: Defaulting to no-operation (NOP) logger implementation

SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

12. Send a message from the producer: hello jeesz

[root@localhost bin]# ./kafka-console-producer.sh --broker-list 192.168.2.211:9092 --topic test

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".

SLF4J: Defaulting to no-operation (NOP) logger implementation

SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

hello jeesz

13. The message is received by the consumer

[root@localhost bin]# ./kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".

SLF4J: Defaulting to no-operation (NOP) logger implementation

SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

hello jeesz
