Kafka Integration with Flume: An Example

Business flow:

The scenario is as follows: Flume collects Nginx's access.log and forwards the collected log entries to a designated Kafka topic, where downstream services consume them. In this example, to make the result easy to observe, we simply start a console Kafka consumer that prints the consumed log entries directly.
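
End to end, the pipeline looks like this:

Nginx access.log -> Flume exec source -> memory channel -> Kafka sink -> topic hello-mrpei -> console consumer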

1. Configure Flume

Create a Flume configuration file named exec-memory-kafka.conf with the following content:

# example.conf: A single-node Flume configuration

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -f /usr/local/nginx/logs/access.log
a1.sources.r1.shell = /bin/sh -c

# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.brokerList = 39.106.193.183:9093
a1.sinks.k1.topic = hello-mrpei
a1.sinks.k1.batchSize = 5
a1.sinks.k1.requiredAcks = 1

# Use a channel which buffers events in memory
a1.channels.c1.type = memory

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
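
Note that brokerList, topic, batchSize, and requiredAcks are the Kafka sink property names used by Flume 1.6, the release started later in this walkthrough; later Flume versions renamed them (e.g. kafka.bootstrap.servers). The memory channel above runs with Flume's default sizing; to bound the in-memory buffer explicitly, you could add something like the following (the values are illustrative, not part of the original setup):

# Optional, illustrative sizing for the memory channel
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100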

2. Start a Kafka console consumer

[root@hadoop001 kafka_2.11-0.9.0.0]# ./bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic hello-mrpei
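
If the broker is not configured to auto-create topics, hello-mrpei must exist before anything is produced to it. A sketch using the same Kafka 0.9 tooling, assuming a single-broker setup:

[root@hadoop001 kafka_2.11-0.9.0.0]# ./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic hello-mrpei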

3. Start the Flume agent

[root@VM_0_9_centos flume-1.6.0]# ./bin/flume-ng agent --conf /usr/server/flume-1.6.0/conf --conf-file /usr/server/flume-1.6.0/conf/exec-memory-kafka.conf --name a1 -Dflume.root.logger=INFO,console

4. Refresh the browser to send HTTP requests to Nginx
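
A browser is not strictly necessary; any HTTP request to Nginx appends a line to access.log. For example, from a shell on the Nginx host (assuming Nginx listens on the default port 80):

curl http://localhost/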

Check the Kafka consumer output: the new access.log entries should be printed to the console.
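
If nothing shows up in the consumer, you can take Nginx out of the picture and test the Flume-to-Kafka path directly by appending a line to the log file yourself; the exec source is tailing it, so the line should flow straight through (the test string is arbitrary):

echo "pipeline-test $(date)" >> /usr/local/nginx/logs/access.log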
