Flume + Kafka + ZooKeeper: real-time data collection on a single machine

When I was working on big data before, I never knew how data actually got uploaded to HDFS. I asked our architect, who said they used Flume. I had wanted to play with Flume for a while but never had the time. Today I finally did, so I looked up some material and set up log monitoring in a single-machine environment. All of the material comes from the web; I only did a simple integration.

First, install Flume.

1. Before installing Flume, make sure a JDK/JVM is installed.

2. Download Flume from http://mirror.bit.edu.cn/apache/flume/1.7.0/apache-flume-1.7.0-bin.tar.gz

3. Extract the archive and go into the conf directory. The default configuration files carry a .template suffix; remove that suffix, as shown below.
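A minimal way to do this, assuming the archive was extracted to apache-flume-1.7.0-bin:

cd apache-flume-1.7.0-bin/conf
cp flume-env.sh.template flume-env.sh
cp flume-conf.properties.template flume-conf.properties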

4. Edit flume-env.sh and set it to your JDK installation directory, as in the sketch below.
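A minimal sketch of the relevant line in flume-env.sh; the path below is only a placeholder, substitute your own JDK installation directory:

export JAVA_HOME=/usr/local/jdk1.8.0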

5. To verify that Flume installed successfully, go into the bin directory and run:

flume-ng version


If version information is printed, the installation was successful.

6. To test Flume's functionality, edit the configuration file flume-conf.properties:

# The spooldir source watches the configured directory for new files and reads their contents. Two caveats:
# 1) Files copied into the spool directory must not be opened and edited again afterwards.
# 2) The spool directory must not contain subdirectories.

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = spooldir
a1.sources.r1.channels = c1
a1.sources.r1.spoolDir = /Users/haomaiche/Downloads/apache-flume-1.7.0-bin/logs
a1.sources.r1.fileHeader = true

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
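
To try this spooldir configuration on its own, one way to start the agent from the bin directory (mirroring the start command used later; the agent name a1 matches the config above) is:

./flume-ng agent -c ../conf -f ../conf/flume-conf.properties -n a1 -Dflume.root.logger=INFO,console

Files dropped into the spoolDir should then be echoed by the logger sink.

The block below defines a second agent, logser, which can live in the same flume-conf.properties. It tails log files with a TAILDIR source and sends them to Kafka; this is the agent started in the integration step later.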



# logser can be seen as the name of this Flume agent; every agent consists of three parts: sources, channels and sinks
# sources are where the data comes from, channels are the intermediate buffer, and sinks are where the data ends up
logser.sources = src_launcherclick
logser.sinks = kfk_launcherclick
logser.channels = ch_launcherclick

# source
# the source type is TAILDIR, which monitors, in real time, log files that are written to in append mode
logser.sources.src_launcherclick.type = TAILDIR
# positionFile records the read position of every monitored file
logser.sources.src_launcherclick.positionFile = /Users/haomaiche/Downloads/apache-flume-1.7.0-bin/log1/taildir_position.json
# the file groups to monitor
logser.sources.src_launcherclick.filegroups = f1
# the concrete files in the file group, i.e. the files we monitor
logser.sources.src_launcherclick.filegroups.f1 = /Users/haomaiche/Downloads/apache-flume-1.7.0-bin/log/.*

# interceptors
# static interceptors add fixed headers to every event; the topic header tells the Kafka sink which topic to write to
logser.sources.src_launcherclick.interceptors = i1 i2
logser.sources.src_launcherclick.interceptors.i1.type=static
logser.sources.src_launcherclick.interceptors.i1.key = type
logser.sources.src_launcherclick.interceptors.i1.value = launcher_click
logser.sources.src_launcherclick.interceptors.i2.type=static
logser.sources.src_launcherclick.interceptors.i2.key = topic
logser.sources.src_launcherclick.interceptors.i2.value = launcher_click

# channel
logser.channels.ch_launcherclick.type = memory
logser.channels.ch_launcherclick.capacity = 10000
logser.channels.ch_launcherclick.transactionCapacity = 1000

# kfk sink
# the sink type is KafkaSink, meaning the log events are finally delivered to Kafka
logser.sinks.kfk_launcherclick.type = org.apache.flume.sink.kafka.KafkaSink
# Kafka broker list (point this at your own broker; on a single-machine setup localhost:9092 also works)
logser.sinks.kfk_launcherclick.brokerList = 10.0.5.203:9092

# Bind the source and sink to the channel
logser.sources.src_launcherclick.channels = ch_launcherclick
logser.sinks.kfk_launcherclick.channel = ch_launcherclick
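
Note: brokerList is the old property name for the Kafka sink; Flume 1.7 still accepts it, but as far as I know the 1.7 user guide documents kafka.bootstrap.servers and, for a fixed topic, kafka.topic as the current equivalents, roughly:

logser.sinks.kfk_launcherclick.kafka.bootstrap.servers = 10.0.5.203:9092
logser.sinks.kfk_launcherclick.kafka.topic = launcher_click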


Installing Kafka (I won't cover the ZooKeeper installation here; it is simple and there are plenty of examples online, e.g. http://blog.csdn.net/wo541075754/article/details/56483533):

1. Download Kafka from http://mirrors.cnnic.cn/apache/kafka/0.9.0.0/kafka_2.10-0.9.0.0.tgz

2. Extract it, then the broker can be started directly (make sure ZooKeeper is already running):

     bin/kafka-server-start.sh config/server.properties &
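If automatic topic creation is disabled on your broker, you may need to create the launcher_click topic first; for this Kafka version that would look roughly like (assuming ZooKeeper on localhost:2181):

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic launcher_click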
Next, the integration:

1. Start Flume (from the bin directory):

./flume-ng agent -c . -f ../conf/flume-conf.properties -n logser -Dflume.root.logger=INFO
2. Open a new terminal and start a Kafka console consumer listening on the launcher_click topic:

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic launcher_click --from-beginning

3. Open another terminal and append a line to a file under /Users/haomaiche/Downloads/apache-flume-1.7.0-bin/log (the directory monitored by the TAILDIR source):
echo "spool test2s nihaoi www.baidu.com" >> /Users/haomaiche/Downloads/apache-flume-1.7.0-bin/log/spool_text1sd.log
At this point, the Kafka consumer window prints:
$bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic launcher_click --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
spool test2s
spool test2s nihaoi www
spool test2s nihaoi www.baidu.com
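As the warning in that output notes, the ZooKeeper-based console consumer is deprecated in newer Kafka releases; there the equivalent command connects to the broker directly, roughly:

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic launcher_click --from-beginning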
Below is Java code that consumes and prints the data:
1. Maven dependency:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.9.0.1</version>
</dependency>

2. Code:
	
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.Arrays;
import java.util.Properties;

public class KafkaConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Kafka broker to connect to (single-machine setup, so localhost)
        props.put("bootstrap.servers", "localhost:9092");
        // consumer group id; offsets are tracked per group
        props.put("group.id", "test");
        // commit offsets automatically every second
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("session.timeout.ms", "30000");
        // the log lines are plain strings, so use String deserializers for key and value
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        // subscribe to the topic that the Flume agent writes to
        consumer.subscribe(Arrays.asList("launcher_click"));
        while (true) {
            // poll blocks for up to 100 ms waiting for new records
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records)
                System.out.println("value: " + record.value());
        }
    }
}
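With ZooKeeper, Kafka and the Flume agent running as above, start this class and append to a file in the monitored log directory (as in step 3); each appended line should be printed by the consumer.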
References:
Flume 1.5.0 getting started: installation, deployment, and examples: http://www.aboutyun.com/thread-8917-1-1.html
Installing Kafka: http://www.cnblogs.com/wangyangliuping/p/5546465.html
Real-time log analysis with Flume: http://itindex.net/detail/56956-flume-kafka-sparkstreaming
Source of the Kafka Java code: http://blog.csdn.net/lnho2015/article/details/51353936









