JStorm installation and configuration


  • Preface
  • Download
  • Configuration and startup

Preface

JStorm introduction

JStorm is a system similar to Hadoop MapReduce: the user implements a task against a specified interface and submits it to the JStorm system, which runs it 24/7. If a worker fails along the way, the scheduler immediately assigns a new worker to replace it.
So from the application's point of view, a JStorm application is a distributed application that follows a certain programming convention. From the system's point of view, JStorm is a MapReduce-like scheduling system. From the data's point of view, JStorm is a pipeline-based message-processing mechanism.
Real-time computing is currently one of the hottest areas of big data, because expectations for data keep rising and latency requirements keep tightening. Traditional Hadoop MapReduce increasingly cannot keep up, so demand in this area continues to grow.

Comparison of Storm and Hadoop components

                        Storm         Hadoop
Roles                   Nimbus        JobTracker
                        Supervisor    TaskTracker
                        Worker        Child
Application name        Topology      Job
Programming interface   Spout/Bolt    Mapper/Reducer

Advantages

Before Storm and JStorm appeared there were many real-time computation engines on the market, but since these two arrived they have essentially dominated the field, thanks to the following advantages:
- Rapid development: the interfaces are simple and easy to pick up. As long as you follow the Topology, Spout, and Bolt programming conventions, you can build a highly scalable application without having to think about the underlying RPC, redundancy between workers, or data partitioning.
- Excellent scalability: simply configure the parallelism and performance scales linearly.
- Strong robustness: when a worker or a machine fails, a new worker is automatically assigned to replace it.
- Data accuracy: the ack mechanism guarantees that no data is lost; if higher precision is required, the transaction mechanism guarantees correctness.

Use cases

JStorm processes data as a pipeline of messages, which makes it particularly well suited to stateless computation: everything a compute unit depends on can be found in the message it receives, and ideally one data stream does not depend on another.
JStorm is therefore commonly used for:
- Log analysis: extract specific data from logs and write the results to external storage, such as a database.
- Pipelines: move data from one system to another, for example syncing data from a database to Hadoop.
- Message transformation: convert incoming messages to some format and store them in another system, such as a message queue.
- Statistical analysis: extract a field from logs or messages, compute a count or sum over it, and write the aggregate to external storage.

Installation environment

Environment: Ubuntu 14 64-bit, JDK 1.7, Tomcat 7, zookeeper-3.4.6, jstorm-2.1.0
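
Before starting, a quick optional sanity check that the JDK is in place and on the PATH:

java -version    # expect a 1.7.x version string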

Download

JStorm download

Download the latest JStorm (2.1.0 at the time of writing); the file is named jstorm-2.1.0.tar.bz2.

ZooKeeper download

Download zookeeper-3.4.6.

Configuration and startup

ZooKeeper installation and configuration

Extracting the archive

cd into the directory containing the zookeeper-3.4.6 archive and extract it:

sudo tar -zxvf zookeeper-3.4.6.tar.gz

Configuring environment variables

vim /etc/profile

Add the following lines:

export ZOOKEEPER_HOME=/home/yyp/developTools/jstorm/zookeeper-3.4.6
export PATH=$PATH:$ZOOKEEPER_HOME/bin    # $JSTORM_HOME/bin is added later in ~/.bashrc
export CLASSPATH=$CLASSPATH:$ZOOKEEPER_HOME/lib
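
After saving, reload the profile so the new variables take effect in the current shell, and verify:

source /etc/profile
echo $ZOOKEEPER_HOME    # should print the path set above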

Enter the $ZOOKEEPER_HOME/conf directory and rename zoo_sample.cfg to zoo.cfg:

cd $ZOOKEEPER_HOME/conf
mv zoo_sample.cfg zoo.cfg

Compare your zoo.cfg against the following and make the contents match:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=yourIP:2888:3888
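
One caveat on the server.1 line: with a single server.N entry, ZooKeeper 3.4.x falls back to standalone mode, and the id file only becomes mandatory once you list several servers (replicated mode). Creating it does no harm and saves a step later; a minimal sketch matching the dataDir above:

mkdir -p /tmp/zookeeper       # the dataDir configured in zoo.cfg
echo 1 > /tmp/zookeeper/myid  # must match the N in your server.N line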

Starting ZooKeeper

cd $ZOOKEEPER_HOME/bin
./zkServer.sh start
./zkServer.sh status

If zkServer.sh status reports a running server (Mode: standalone for this single-node setup), ZooKeeper started successfully.
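
As an additional liveness check, ZooKeeper 3.4 answers the four-letter-word commands on its client port (assuming netcat is installed):

echo ruok | nc localhost 2181    # prints "imok" when the server is serving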

JStorm installation

Extracting the archive

Enter the directory containing the jstorm-2.1.0 archive and extract it into the directory where you want JStorm installed:

tar -jxvf jstorm-2.1.0.tar.bz2 -C /the/place/you/want/to/unzip

Configuring environment variables

Open the file ~/.bashrc:

vim ~/.bashrc

Add the following:

export JSTORM_HOME=/home/yyp/developTools/jstorm/jstorm2.1.0/deploy/jstorm
export PATH=$PATH:$JSTORM_HOME/bin:$ZOOKEEPER_HOME/bin
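
Reload the file and confirm that the jstorm launcher is now on the PATH:

source ~/.bashrc
which jstorm    # should resolve to $JSTORM_HOME/bin/jstorm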

The JStorm configuration file

Enter the conf directory under $JSTORM_HOME:

cd $JSTORM_HOME/conf
vim storm.yaml

On my machine the contents are as follows:

########### These MUST be filled in for a storm configuration
 storm.zookeeper.servers:
     - "localhost"

 storm.zookeeper.root: "/jstorm"

# cluster.name: "default"

 #nimbus.host/nimbus.host.start.supervisor is being used by $JSTORM_HOME/bin/start.sh
 #it only support IP, please don't set hostname
 # For example
 # nimbus.host: "10.132.168.10, 10.132.168.45"
 #nimbus.host: "localhost"
 #nimbus.host.start.supervisor: false

# %JSTORM_HOME% is the jstorm home directory
 storm.local.dir: "%JSTORM_HOME%/data"
 # please set absolute path, default path is JSTORM_HOME/logs
# jstorm.log.dir: "absolute path"

# java.library.path: "/usr/local/lib:/opt/local/lib:/usr/lib"



# if supervisor.slots.ports is null, 
# the port list will be generated by cpu cores and system memory size 
# for example, 
# there are cpu_num = system_physical_cpu_num/supervisor.slots.port.cpu.weight
# there are mem_num = system_physical_memory_size/(worker.memory.size * supervisor.slots.port.mem.weight) 
# The final port number is min(cpu_num, mem_num)
# supervisor.slots.ports.base: 6800
# supervisor.slots.port.cpu.weight: 1.2
# supervisor.slots.port.mem.weight: 0.7
# supervisor.slots.ports: null
# supervisor.slots.ports:
#    - 6800
#    - 6801
#    - 6802
#    - 6803

# Default disable user-define classloader
# If there are jar conflict between jstorm and application, 
# please enable it 
# topology.enable.classloader: false

# enable supervisor use cgroup to make resource isolation
# Before enable it, you should make sure:
#   1. Linux version (>= 2.6.18)
#   2. Have installed cgroup (check the file's existence:/proc/cgroups)
#   3. You should start your supervisor on root
# You can get more about cgroup:
#   http://t.cn/8s7nexU
# supervisor.enable.cgroup: false


### Netty will send multiple messages in one batch  
### Setting true will improve throughput, but more latency
# storm.messaging.netty.transfer.async.batch: true

### if this setting  is true, it will use disruptor as internal queue, which size is limited
### otherwise, it will use LinkedBlockingDeque as internal queue , which size is unlimited
### generally when this setting is true, the topology will be more stable,
### but when there is a data loop flow, for example A -> B -> C -> A
### and the data flow occur blocking, please set this as false
# topology.buffer.size.limited: true

### default worker memory size, unit is byte
# worker.memory.size: 2147483648

# Metrics Monitor
# topology.performance.metrics: it is the switch flag for performance 
# purpose. When it is disabled, the data of timer and histogram metrics 
# will not be collected.
# topology.alimonitor.metrics.post: If it is disable, metrics data
# will only be printed to log. If it is enabled, the metrics data will be
# posted to alimonitor besides printing to log.
# topology.performance.metrics: true
# topology.alimonitor.metrics.post: false

# UI MultiCluster
# Following is an example of multicluster UI configuration
# ui.clusters:
#     - {
#         name: "jstorm",
#         zkRoot: "/jstorm",
#         zkServers:
#             [ "localhost"],
#         zkPort: 2181,
#       }
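
YAML is whitespace-sensitive, and a malformed storm.yaml is an easy way to get puzzling startup failures. If Python with PyYAML happens to be installed, a quick optional parse check:

python -c "import yaml; yaml.safe_load(open('$JSTORM_HOME/conf/storm.yaml'))" && echo "storm.yaml parses OK"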

Overview of the storm.yaml settings

  • storm.zookeeper.servers: the ZooKeeper addresses.
  • nimbus.host: the Nimbus address.
  • storm.zookeeper.root: JStorm's root directory in ZooKeeper. Set this when several JStorm clusters share one ZooKeeper; the default is "/jstorm".
  • storm.local.dir: the directory for JStorm's temporary data; the JStorm process must have write permission to it.
  • java.library.path: the install directories of zeromq and the Java zeromq library; the default is "/usr/local/lib:/opt/local/lib:/usr/lib".
  • supervisor.slots.ports: the list of port slots a supervisor offers. Take care that they do not clash with other ports; JStorm defaults to 68xx, whereas Storm uses 67xx.
  • supervisor.disk.slot: provides data directories. When a machine has several disks, it can offer disk read/write slots, which helps applications with heavy IO.
  • topology.enable.classloader: false; the classloader is disabled by default. Enable it if the application's jars conflict with JStorm's dependencies, for example when the application uses thrift9 while JStorm uses thrift7.
  • nimbus.groupfile.path: to isolate resources between groups (say, how much the data warehouse, the tech department, and the wireless department may each use), enable grouping by setting this to the absolute path of a configuration file in the format of group_file.ini in the source tree.
  • storm.local.dir: JStorm's local temporary directory. If a machine runs both Storm and JStorm, they must not share this directory; keep the two separate.

Installing the JStorm Web UI

On the node from which topology jars are submitted (I am running on a single machine here, so that is this computer; in cluster mode, run these commands on the machine where the Web UI is installed), execute:

mkdir ~/.jstorm
cp $JSTORM_HOME/conf/storm.yaml ~/.jstorm

Then enter Tomcat's webapps directory and deploy the UI war:

cp $JSTORM_HOME/jstorm-2.1.0.war ./
mv ROOT ROOT.old
ln -s jstorm-2.1.0 ROOT    # note: link the expanded directory jstorm-2.1.0, not jstorm-2.1.0.war
cd ../bin
./startup.sh
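
Tomcat expands the war when it starts; you can watch the deployment from the bin directory we just changed into:

tail -f ../logs/catalina.out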

Starting JStorm

On the nimbus node, run:

jstorm nimbus &

On the supervisor node, run:

jstorm supervisor &
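
Both daemons are plain Java processes, so jps gives a quick confirmation that they came up; the logs land under $JSTORM_HOME/logs by default (see the jstorm.log.dir comment in storm.yaml):

jps                                # in JStorm 2.1 the daemons show up as NimbusServer and Supervisor
tail $JSTORM_HOME/logs/nimbus.log  # check for startup errors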

Visit localhost:8080; if the JStorm UI page loads, everything started successfully.
