Prerequisite: start ZooKeeper first.
tar -zxvf apache-storm-0.9.4.tar.gz
mv apache-storm-0.9.4 /home/
cd /home/apache-storm-0.9.4
mkdir logs
vi conf/storm.yaml
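The file is edited here but its contents are not shown. A minimal storm.yaml sketch for this four-node layout (the ZooKeeper host list and the local state directory are assumptions — adjust them to your cluster):

```yaml
# Hosts running the ZooKeeper ensemble (assumed; match your zoo.cfg)
storm.zookeeper.servers:
    - "hadoop1"
    - "hadoop2"
    - "hadoop3"
# Nimbus master host (hadoop1, per the steps below); 0.9.x uses nimbus.host
nimbus.host: "hadoop1"
# Local state directory for nimbus/supervisor (assumed path)
storm.local.dir: "/home/apache-storm-0.9.4/data"
# One worker slot per port on each supervisor node
supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703
```

storm.yaml is ordinary YAML, so a space is required after each colon.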
Copy the installation to the other nodes (repeat for each node); below is the sequence of operations each time Storm is started.
cd /home
scp -r apache-storm-0.9.4 root@hadoop2:/home/
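The scp above covers a single node; a small helper can push the directory to every node in one go. The hostnames hadoop2-hadoop4 come from this guide, and `push_storm_dir` is a hypothetical name; SCP is overridable so the loop can be dry-run:

```shell
# Hypothetical helper: copy the Storm install to each host given as an
# argument. SCP defaults to scp but can be overridden (e.g. SCP=echo
# for a dry run).
SCP=${SCP:-scp}
push_storm_dir() {
    for host in "$@"; do
        $SCP -r /home/apache-storm-0.9.4 "root@$host:/home/"
    done
}
# Usage, matching the scp line above:
# push_storm_dir hadoop2 hadoop3 hadoop4
```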
Start nimbus; run the following command on hadoop1:
./bin/storm nimbus >> ./logs/nimbus.out 2>&1 &
Start the supervisor cluster; run the following command on each of hadoop1, hadoop2, hadoop3, and hadoop4:
./bin/storm supervisor >> ./logs/supervisor.out 2>&1 &
Start the UI on the hadoop1 node only:
./bin/storm ui >> ./logs/ui.out 2>&1 &
A process named "core" (the UI, as shown by jps) is also a sign that Storm has started.
Start the logviewer:
./bin/storm logviewer >> logs/logviewer.out 2>&1 &
tail -f logs/logviewer.log
Start a topology (used when submitting a job):
./bin/storm jar examples/storm-starter/storm-starter-topologies-0.9.4.jar storm.starter.WordCountTopology wordcount
Start it the same way each time.
Open hadoop1:8080 to view the Storm UI.
Configure the DRPC service; every Storm node must have this configuration:
vi conf/storm.yaml
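Again the edited contents are not shown. For DRPC, each node's storm.yaml additionally needs the DRPC server list (hadoop1 is an assumption here, matching the node that runs the UI):

```yaml
# Hosts on which the drpc daemon will run (assumed: hadoop1)
drpc.servers:
    - "hadoop1"
```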
Start the DRPC server (only if needed). The drpc daemon must be started on exactly the hosts listed in the config file:
./bin/storm drpc >> logs/drpc.out 2>&1 &
tail -f logs/drpc.log
1. Submit the DRPC topology:
bin/storm jar examples/storm-starter/storm-starter-topologies-0.9.4.jar storm.starter.BasicDRPCTopology basicDrpc
2. The submitted topology can be seen on the monitoring page.
Startup order after setup is complete
1. Start the daemons
cd /home/apache-storm-0.9.4/
a) Start nimbus on hadoop1:
./bin/storm nimbus >> logs/nimbus.out 2>&1 &
b) Start a supervisor on each worker node, hadoop1-hadoop4:
./bin/storm supervisor >> logs/supervisor.out 2>&1 &
c) Start the Storm UI on hadoop1: ./bin/storm ui >> logs/ui.out 2>&1 &
Start the DRPC service as needed:
./bin/storm drpc >> logs/drpc.out 2>&1 &
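The per-daemon commands above all follow one pattern. A hypothetical helper (start_daemons is not part of Storm; STORM_CMD is overridable, e.g. for a dry run) sketches the whole startup sequence:

```shell
# Hypothetical helper, not part of Storm: start each named daemon in the
# background from the Storm home dir, appending output to logs/<daemon>.out.
STORM_CMD=${STORM_CMD:-./bin/storm}
start_daemons() {
    mkdir -p logs
    for d in "$@"; do
        $STORM_CMD "$d" >> "logs/$d.out" 2>&1 &
        echo "started $d (pid $!)"
    done
}
# On hadoop1:                start_daemons nimbus ui
# On every supervisor node:  start_daemons supervisor
# Where drpc is configured:  start_daemons drpc
```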