Flume test cases
Console print test (single-node Flume test)
# Define the names of the components in this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe and configure the source component: r1
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
# Describe and configure the sink component: k1
a1.sinks.k1.type = logger
# Describe and configure the channel component; an in-memory channel is used here
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Wire up the connections between source, channel and sink
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Flume listens on port 44444; start telnet to send data, which is then printed on the Flume console.
Command to start Flume:
bin/flume-ng agent -c conf -f agentconf/netcat-logger.properties -n a1 -Dflume.root.logger=INFO,console
Install telnet: yum install -y telnet
Start telnet: telnet localhost 44444
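A quick sanity check, assuming the agent above is running and telnet is connected: type a line in the telnet session and press Enter; the netcat source should reply with OK, and the logger sink should print the event (headers plus body) on the Flume console.
# In the telnet session, type a line and press Enter
hello flume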
Collecting files from a directory to HDFS (spooldir source)
# Define the names of the three components
agent1.sources = source1
agent1.sinks = sink1
agent1.channels = channel1
# Configure the source component
agent1.sources.source1.type = spooldir
agent1.sources.source1.spoolDir = /home/hadoop/logs/
agent1.sources.source1.fileHeader = false
# Configure the interceptor
agent1.sources.source1.interceptors = i1
agent1.sources.source1.interceptors.i1.type = host
agent1.sources.source1.interceptors.i1.hostHeader = hostname
# Configure the sink component
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path=/test/flume_log/%y-%m-%d/%H-%M
agent1.sinks.sink1.hdfs.filePrefix = events
agent1.sinks.sink1.hdfs.maxOpenFiles = 5000
agent1.sinks.sink1.hdfs.batchSize= 100
agent1.sinks.sink1.hdfs.fileType = DataStream
agent1.sinks.sink1.hdfs.writeFormat =Text
agent1.sinks.sink1.hdfs.rollSize = 102400
agent1.sinks.sink1.hdfs.rollCount = 1000000
agent1.sinks.sink1.hdfs.rollInterval = 60
#agent1.sinks.sink1.hdfs.round = true
#agent1.sinks.sink1.hdfs.roundValue = 10
#agent1.sinks.sink1.hdfs.roundUnit = minute
agent1.sinks.sink1.hdfs.useLocalTimeStamp = true
# Use a channel which buffers events in memory
agent1.channels.channel1.type = memory
agent1.channels.channel1.keep-alive = 120
agent1.channels.channel1.capacity = 500000
agent1.channels.channel1.transactionCapacity = 600
# Bind the source and sink to the channel
agent1.sources.source1.channels = channel1
agent1.sinks.sink1.channel = channel1
Start Flume:
bin/flume-ng agent -c conf -f agentconf/spooldir-hdfs.properties -n agent1
Testing:
1. If the HDFS cluster is a high-availability cluster, core-site.xml and hdfs-site.xml must be placed in the $FLUME_HOME/conf directory.
2. Check whether the files in the monitored /home/hadoop/logs directory are correctly uploaded to HDFS.
3. Create a file in that directory, or move files into it from another directory, and verify that the newly added files are automatically uploaded to HDFS.
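A minimal verification sketch, assuming the agent above is running (the file name below is a placeholder): drop a file into the spooled directory, wait roughly one roll interval (60 s), then check the HDFS output path.
# Create a test file in the monitored directory (hypothetical file name)
echo "spooldir test $(date)" > /home/hadoop/logs/test_$(date +%s).log
# The spooldir source renames processed files (by default with a .COMPLETED suffix)
ls /home/hadoop/logs/
# Check the HDFS output path configured above
hdfs dfs -ls -R /test/flume_log/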
Collecting appended (tailed) log data to HDFS
agent1.sources = source1
agent1.sinks = sink1
agent1.channels = channel1
# Describe/configure tail -F source1
agent1.sources.source1.type = exec
agent1.sources.source1.command = tail -F /home/hadoop/logs/catalina.out
agent1.sources.source1.channels = channel1
#configure host for source
agent1.sources.source1.interceptors = i1
agent1.sources.source1.interceptors.i1.type = host
agent1.sources.source1.interceptors.i1.hostHeader = hostname
# Describe sink1
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path =hdfs://myha01/weblog/flume-event/%y-%m-%d/%H-%M
agent1.sinks.sink1.hdfs.filePrefix = tomcat_
agent1.sinks.sink1.hdfs.maxOpenFiles = 5000
agent1.sinks.sink1.hdfs.batchSize= 100
agent1.sinks.sink1.hdfs.fileType = DataStream
agent1.sinks.sink1.hdfs.writeFormat =Text
agent1.sinks.sink1.hdfs.rollSize = 102400
agent1.sinks.sink1.hdfs.rollCount = 1000000
agent1.sinks.sink1.hdfs.rollInterval = 60
agent1.sinks.sink1.hdfs.round = true
agent1.sinks.sink1.hdfs.roundValue = 10
agent1.sinks.sink1.hdfs.roundUnit = minute
agent1.sinks.sink1.hdfs.useLocalTimeStamp = true
# Use a channel which buffers events in memory
agent1.channels.channel1.type = memory
agent1.channels.channel1.keep-alive = 120
agent1.channels.channel1.capacity = 500000
agent1.channels.channel1.transactionCapacity = 600
# Bind the source and sink to the channel
agent1.sources.source1.channels = channel1
agent1.sinks.sink1.channel = channel1
Start Flume:
bin/flume-ng agent -c conf -f agentconf/tail-hdfs.properties -n agent1
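A minimal test sketch, assuming the agent above is running and /home/hadoop/logs/catalina.out exists: append a line to the tailed file and confirm that it lands under the configured HDFS path.
# Append a test line to the tailed file
echo "tail-hdfs test $(date)" >> /home/hadoop/logs/catalina.out
# After the 60 s roll interval, list the output path for the current time bucket
hdfs dfs -ls -R hdfs://myha01/weblog/flume-event/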
Collecting data to Kafka
Flume tails a file for appended data and collects it into Kafka.
Kafka-related operations
Common Kafka shell commands:
1. Start Kafka on each node
nohup kafka-server-start.sh /home/hadoop/kafka_2.12-2.2.2/config/server.properties >/home/hadoop/logs/kafka_logs/out.log 2>&1 &
2. Create a topic
kafka-topics.sh --create --zookeeper hadoop01:2181,hadoop02:2181,hadoop03:2181 --replication-factor 3 --partitions 10 --topic kafka_test
3. List all existing Kafka topics
kafka-topics.sh --list --zookeeper hadoop01:2181,hadoop02:2181,hadoop03:2181
4. Show details of a specific Kafka topic
kafka-topics.sh --zookeeper hadoop01:2181,hadoop02:2181,hadoop03:2181 --describe --topic kafka_test
5. Start a console producer to simulate producing data
kafka-console-producer.sh --broker-list hadoop01:9092,hadoop02:9092,hadoop03:9092 --topic kafka_test
6. Start a console consumer to simulate consuming data
kafka-console-consumer.sh --bootstrap-server hadoop01:9092,hadoop02:9092,hadoop03:9092 --from-beginning --topic kafka_test
7. Check a partition's offsets for a topic (--time -1 returns the latest offset; use --time -2 for the earliest)
kafka-run-class.sh kafka.tools.GetOffsetShell --topic kafka_test --time -1 --broker-list hadoop01:9092,hadoop02:9092,hadoop03:9092 --partitions 1
8. Increase the number of partitions of a topic
kafka-topics.sh --alter --zookeeper hadoop01:2181,hadoop02:2181,hadoop03:2181 --topic kafka_test --partitions 20
Note: the replication factor cannot be changed with kafka-topics.sh --alter; it requires a partition reassignment (see the sketch after this list).
9. Delete a topic
kafka-topics.sh --delete --zookeeper hadoop01:2181,hadoop02:2181,hadoop03:2181 --topic kafka_test
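As noted in item 8, changing a topic's replication factor is done with kafka-reassign-partitions.sh. A minimal sketch, assuming broker ids 0, 1 and 2 and a hypothetical plan file named increase-rf.json that lists every partition of the topic with its desired replica set:
# increase-rf.json (hypothetical content):
# {"version":1,"partitions":[
#   {"topic":"kafka_test","partition":0,"replicas":[0,1]},
#   {"topic":"kafka_test","partition":1,"replicas":[1,2]}
# ]}
kafka-reassign-partitions.sh --zookeeper hadoop01:2181,hadoop02:2181,hadoop03:2181 --reassignment-json-file increase-rf.json --execute
# Check progress of the reassignment
kafka-reassign-partitions.sh --zookeeper hadoop01:2181,hadoop02:2181,hadoop03:2181 --reassignment-json-file increase-rf.json --verify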
Flume config file: exec-kafka.conf
agent1.sources = r1
agent1.channels = c1
agent1.sinks = k1
#define sources
agent1.sources.r1.type = exec
agent1.sources.r1.command = tail -F /home/hadoop/logs/flume.log
#define channels
agent1.channels.c1.type = memory
agent1.channels.c1.capacity = 1000
agent1.channels.c1.transactionCapacity = 100
#define sink
agent1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
agent1.sinks.k1.brokerList = hadoop01:9092,hadoop02:9092,hadoop03:9092
agent1.sinks.k1.topic = flume-kafka
agent1.sinks.k1.batchSize = 4
agent1.sinks.k1.requiredAcks = 1
#bind sources and sink to channel
agent1.sources.r1.channels = c1
agent1.sinks.k1.channel = c1
Command to start Flume:
/home/hadoop/flume-1.8.0/bin/flume-ng agent --conf /home/hadoop/flume-1.8.0/conf/ --name agent1 --conf-file /home/hadoop/flume-1.8.0/agentconf/exec-kafka.conf -Dflume.root.logger=DEBUG,console
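An end-to-end check sketch, assuming the flume-kafka topic already exists (create it with the command from step 2 above if broker auto-creation is disabled) and the agent was started with the command above:
# Terminal 1: consume the target topic
kafka-console-consumer.sh --bootstrap-server hadoop01:9092,hadoop02:9092,hadoop03:9092 --topic flume-kafka
# Terminal 2: append a line to the tailed file
echo "flume-to-kafka test $(date)" >> /home/hadoop/logs/flume.log
# The line should show up in the consumer within a few seconds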