Integrating Flume + Kafka + Storm

This article is reposted from http://shiyanjun.cn/archives/934.html. Thanks to the original author for sharing.

In many Hadoop-based application scenarios we need both offline and real-time analysis of the data. Offline analysis is easy to do with Hive, but Hive is not suitable for real-time requirements. For real-time scenarios we can use Storm, a real-time processing system that provides a computation model for real-time applications and is straightforward to program against. To unify offline and real-time computation, we usually want a single data source feeding both pipelines: the data is collected once and then flows into the real-time system and the offline analysis system separately. To achieve this, the data source (for example, logs collected with Flume) can be connected directly to a message broker such as Kafka. Flume and Kafka are integrated so that Flume acts as the message Producer, publishing the data it collects (log data, business request data, and so on) to Kafka; a Storm Topology then acts as the message Consumer via subscription, and the Storm cluster handles the following two scenarios (a rough Flume configuration sketch follows the list below):

  • Use a Storm Topology directly to analyze and process the data in real time
  • Integrate Storm with HDFS, writing the processed messages to HDFS for offline analysis
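
The Flume side is not configured in this article, so the following is only a rough sketch under stated assumptions: it assumes Flume 1.6+ (which ships a built-in Kafka sink; older releases need a third-party Kafka sink plugin), a hypothetical log file path, and an agent named "agent". Such an agent, tailing an application log and publishing to the topic used later in this article, might look like:

agent.sources = r1
agent.channels = c1
agent.sinks = k1

# hypothetical source: tail an application log file
agent.sources.r1.type = exec
agent.sources.r1.command = tail -F /var/log/app/app.log
agent.sources.r1.channels = c1

agent.channels.c1.type = memory
agent.channels.c1.capacity = 10000

# built-in Kafka sink (Flume 1.6+); property names differ in other Flume versions
agent.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.k1.topic = my-replicated-topic5
agent.sinks.k1.brokerList = h1:9092,h2:9092,h3:9092
agent.sinks.k1.batchSize = 100
agent.sinks.k1.channel = c1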

For the real-time path you only need to develop a Topology that meets your business requirements, so it is not discussed in much detail here. Instead, this article focuses on installing and configuring Kafka and Storm, and then on integrating Kafka + Storm, Storm + HDFS, and finally Kafka + Storm + HDFS, to satisfy the requirements above. The software packages used in this walkthrough are:

  • zookeeper-3.4.5.tar.gz
  • kafka_2.9.2-0.8.1.1.tgz
  • apache-storm-0.9.2-incubating.tar.gz
  • hadoop-2.2.0.tar.gz

Everything below was configured and run on CentOS 5.11.

Installing and Configuring Kafka

We use three machines to build the Kafka cluster:

192.168.4.142   h1
192.168.4.143   h2
192.168.4.144   h3

Before installing the Kafka cluster, note that we do not use the ZooKeeper bundled with Kafka; instead, a standalone ZooKeeper cluster was installed on these same three machines, and you should make sure that ZooKeeper cluster is running properly. First, prepare the Kafka installation files on h1 by running the following commands:

cd /usr/local/
wget http://mirror.bit.edu.cn/apache/kafka/0.8.1.1/kafka_2.9.2-0.8.1.1.tgz
tar xvzf kafka_2.9.2-0.8.1.1.tgz
ln -s /usr/local/kafka_2.9.2-0.8.1.1 /usr/local/kafka
chown -R kafka:kafka /usr/local/kafka_2.9.2-0.8.1.1 /usr/local/kafka

Edit the configuration file /usr/local/kafka/config/server.properties and change the following settings:

broker.id=0
zookeeper.connect=h1:2181,h2:2181,h3:2181

Then sync the configured installation directory to the other nodes, h2 and h3:

scp -r /usr/local/kafka_2.9.2-0.8.1.1/ h2:/usr/local/
scp -r /usr/local/kafka_2.9.2-0.8.1.1/ h3:/usr/local/

Finally, configure the h2 and h3 nodes by running the following commands:

cd /usr/local/
ln -s /usr/local/kafka_2.9.2-0.8.1.1 /usr/local/kafka
chown -R kafka:kafka /usr/local/kafka_2.9.2-0.8.1.1 /usr/local/kafka

and change /usr/local/kafka/config/server.properties on each of them as follows:

broker.id=1  # change on h2

broker.id=2  # change on h3

The broker.id of each Broker must be unique across the whole Kafka cluster, which is why this setting has to be adjusted on every node (if you run everything on a single machine, you can simulate a distributed Kafka cluster by starting several Broker processes; each Broker still needs a unique id, and some directory-related settings also have to be changed; see the sketch after the start command below). Then start Kafka on each of the three nodes h1, h2 and h3 by running the following command:

bin/kafka-server-start.sh /usr/local/kafka/config/server.properties &
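
As an aside, for the single-machine simulation mentioned above, each Broker process would need its own copy of server.properties with at least a distinct id, listening port and log directory, roughly like this (illustrative values, not part of the cluster setup used here):

# config/server-1.properties (a second broker on the same machine)
broker.id=1
port=9093
log.dirs=/tmp/kafka-logs-1

bin/kafka-server-start.sh config/server-1.properties &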

Check the logs or the process status to make sure the Kafka cluster started successfully. Next, create a Topic named my-replicated-topic5 with 5 partitions and a replication factor of 3 by running:

bin/kafka-topics.sh --create --zookeeper h1:2181,h2:2181,h3:2181 --replication-factor 3 --partitions 5 --topic my-replicated-topic5

To inspect the Topic we just created, run:

bin/kafka-topics.sh --describe --zookeeper h1:2181,h2:2181,h3:2181 --topic my-replicated-topic5

The output looks like this:

Topic:my-replicated-topic5	PartitionCount:5	ReplicationFactor:3	Configs:
  Topic: my-replicated-topic5	Partition: 0	Leader: 0	Replicas: 0,2,1	Isr: 0,2,1
  Topic: my-replicated-topic5	Partition: 1	Leader: 0	Replicas: 1,0,2	Isr: 0,2,1
  Topic: my-replicated-topic5	Partition: 2	Leader: 2	Replicas: 2,1,0	Isr: 2,0,1
  Topic: my-replicated-topic5	Partition: 3	Leader: 0	Replicas: 0,1,2	Isr: 0,2,1
  Topic: my-replicated-topic5	Partition: 4	Leader: 2	Replicas: 1,2,0	Isr: 2,0,1

The meanings of Leader, Replicas and Isr above are:

Partition: the partition number
Leader   : the node responsible for reads and writes of the given partition
Replicas : the list of nodes that replicate the log of this partition
Isr      : the "in-sync" replicas, i.e. the subset of replicas that are currently alive and may become the Leader

We can use the bin/kafka-console-producer.sh and bin/kafka-console-consumer.sh scripts that ship with Kafka to demonstrate how messages are published and consumed. In one terminal, start a Producer and produce messages to the my-replicated-topic5 Topic created above by running:

bin/kafka-console-producer.sh --broker-list h1:9092,h2:9092,h3:9092 --topic my-replicated-topic5

In another terminal, start a Consumer and subscribe to the messages produced to the my-replicated-topic5 Topic by running:

bin/kafka-console-consumer.sh --zookeeper h1:2181,h2:2181,h3:2181 --from-beginning --topic my-replicated-topic5

Type a line of text in the Producer terminal and press Enter, and you will see the message show up in the Consumer terminal. You can also use Kafka's Producer and Consumer Java APIs to implement the producing and consuming logic in code.
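
As a minimal sketch of the producer side of that Java API (class and property names are those of the old Scala client bundled with kafka_2.9.2-0.8.1.1; the class name SimpleKafkaProducer is hypothetical and this code is not from the original article), producing a few lines to the topic created above might look like:

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class SimpleKafkaProducer {

  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("metadata.broker.list", "h1:9092,h2:9092,h3:9092"); // Kafka brokers, not ZooKeeper
    props.put("serializer.class", "kafka.serializer.StringEncoder");
    props.put("request.required.acks", "1"); // wait for the partition leader to acknowledge

    Producer<String, String> producer = new Producer<String, String>(new ProducerConfig(props));
    try {
      for (int i = 0; i < 10; i++) {
        // publish to the topic created above
        producer.send(new KeyedMessage<String, String>("my-replicated-topic5", "test message " + i));
      }
    } finally {
      producer.close();
    }
  }
}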

Installing and Configuring Storm

A Storm cluster also depends on a ZooKeeper cluster, so make sure ZooKeeper is running properly. Installing and configuring Storm is fairly simple; we again use the same three machines:

192.168.4.142   h1
192.168.4.143   h2
192.168.4.144   h3

First, install it on the h1 node by running:

cd /usr/local/
wget http://mirror.bit.edu.cn/apache/incubator/storm/apache-storm-0.9.2-incubating/apache-storm-0.9.2-incubating.tar.gz
tar xvzf apache-storm-0.9.2-incubating.tar.gz
ln -s /usr/local/apache-storm-0.9.2-incubating /usr/local/storm
chown -R storm:storm /usr/local/apache-storm-0.9.2-incubating /usr/local/storm

Then edit the configuration file conf/storm.yaml as follows:

storm.zookeeper.servers:
  - "h1"
  - "h2"
  - "h3"
storm.zookeeper.port: 2181
#
nimbus.host: "h1"

supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703

storm.local.dir: "/tmp/storm"

Distribute the configured installation directory to the other nodes:

scp -r /usr/local/apache-storm-0.9.2-incubating/ h2:/usr/local/
scp -r /usr/local/apache-storm-0.9.2-incubating/ h3:/usr/local/

Finally, configure the h2 and h3 nodes by running:

cd /usr/local/
ln -s /usr/local/apache-storm-0.9.2-incubating /usr/local/storm
chown -R storm:storm /usr/local/apache-storm-0.9.2-incubating /usr/local/storm

In a Storm cluster the master node runs Nimbus and the worker nodes run Supervisors. Start the Nimbus service on h1 and the Supervisor service on the worker nodes h2 and h3:

bin/storm nimbus &
bin/storm supervisor &

For easier monitoring you can start the Storm UI, which lets you watch the state of running Topologies from a web page. For example, start it on h2:

bin/storm ui &

You can then visit http://h2:8080/ to check how the Topologies are running.

Integrating Kafka + Storm

Messages can reach the Kafka broker in many ways, for example by collecting log data with Flume; Kafka then routes and buffers the messages, and the real-time computation program in Storm performs the analysis. For that we need a Storm Spout that reads the messages from Kafka and hands them to the concrete Bolt components for processing. In fact, the apache-storm-0.9.2-incubating release already ships with storm-kafka, an external module that integrates with Kafka and can be used directly. The Maven dependencies I use are shown below:

<dependency>
  <groupId>org.apache.storm</groupId>
  <artifactId>storm-core</artifactId>
  <version>0.9.2-incubating</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.storm</groupId>
  <artifactId>storm-kafka</artifactId>
  <version>0.9.2-incubating</version>
</dependency>
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.9.2</artifactId>
  <version>0.8.1.1</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
    </exclusion>
    <exclusion>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
    </exclusion>
  </exclusions>
</dependency>

Below we develop a simple WordCount example that reads the subscribed message lines from Kafka, splits them into words on whitespace, and then counts word frequencies. The code of the Topology is shown below:

package org.shirdrn.storm.examples;

import java.util.Arrays;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Map.Entry;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

import storm.kafka.BrokerHosts;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.StringScheme;
import storm.kafka.ZkHosts;
import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.StormSubmitter;
import backtype.storm.generated.AlreadyAliveException;
import backtype.storm.generated.InvalidTopologyException;
import backtype.storm.spout.SchemeAsMultiScheme;
import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

public class MyKafkaTopology {

  public static class KafkaWordSplitter extends BaseRichBolt {

    private static final Log LOG = LogFactory.getLog(KafkaWordSplitter.class);
    private static final long serialVersionUID = 886149197481637894L;
    private OutputCollector collector;
      
    @Override
    public void prepare(Map stormConf, TopologyContext context,
        OutputCollector collector) {
      this.collector = collector;		    
    }

    @Override
    public void execute(Tuple input) {
      String line = input.getString(0);
      LOG.info("RECV[kafka -> splitter] " + line);
      String[] words = line.split("\\s+");
      for(String word : words) {
        LOG.info("EMIT[splitter -> counter] " + word);
        collector.emit(input, new Values(word, 1));
      }
      collector.ack(input);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
      declarer.declare(new Fields("word", "count"));	    
    }
      
  }
    
  public static class WordCounter extends BaseRichBolt {

    private static final Log LOG = LogFactory.getLog(WordCounter.class);
    private static final long serialVersionUID = 886149197481637894L;
    private OutputCollector collector;
    private Map<String, AtomicInteger> counterMap;
      
    @Override
    public void prepare(Map stormConf, TopologyContext context,
        OutputCollector collector) {
      this.collector = collector;    
      this.counterMap = new HashMap<String, AtomicInteger>();
    }

    @Override
    public void execute(Tuple input) {
      String word = input.getString(0);
      int count = input.getInteger(1);
      LOG.info("RECV[splitter -> counter] " + word + " : " + count);
      AtomicInteger ai = this.counterMap.get(word);
      if(ai == null) {
        ai = new AtomicInteger();
        this.counterMap.put(word, ai);
      }
      ai.addAndGet(count);
      collector.ack(input);
      LOG.info("CHECK statistics map: " + this.counterMap);
    }

    @Override
    public void cleanup() {
      LOG.info("The final result:");
      Iterator<Entry<String, AtomicInteger>> iter = this.counterMap.entrySet().iterator();
      while(iter.hasNext()) {
        Entry<String, AtomicInteger> entry = iter.next();
        LOG.info(entry.getKey() + "\t:\t" + entry.getValue().get());
      }
        
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
      declarer.declare(new Fields("word", "count"));	    
    }
  }
    
  public static void main(String[] args) throws AlreadyAliveException, InvalidTopologyException, InterruptedException {
    String zks = "h1:2181,h2:2181,h3:2181";
    String topic = "my-replicated-topic5";
    String zkRoot = "/storm"; // default zookeeper root configuration for storm
    String id = "word";
      
    BrokerHosts brokerHosts = new ZkHosts(zks);
    SpoutConfig spoutConf = new SpoutConfig(brokerHosts, topic, zkRoot, id);
    spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
    spoutConf.forceFromStart = true;
    spoutConf.zkServers = Arrays.asList(new String[] {"h1", "h2", "h3"});
    spoutConf.zkPort = 2181;
      
    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("kafka-reader", new KafkaSpout(spoutConf), 5); // Kafka我們創建了一個5分區的Topic,這裏並行度設置爲5
    builder.setBolt("word-splitter", new KafkaWordSplitter(), 2).shuffleGrouping("kafka-reader");
    builder.setBolt("word-counter", new WordCounter()).fieldsGrouping("word-splitter", new Fields("word"));
      
    Config conf = new Config();
      
    String name = MyKafkaTopology.class.getSimpleName();
    if (args != null && args.length > 0) {
      // Nimbus host name passed from command line
      conf.put(Config.NIMBUS_HOST, args[0]);
      conf.setNumWorkers(3);
      StormSubmitter.submitTopologyWithProgressBar(name, conf, builder.createTopology());
    } else {
      conf.setMaxTaskParallelism(3);
      LocalCluster cluster = new LocalCluster();
      cluster.submitTopology(name, conf, builder.createTopology());
      Thread.sleep(60000);
      cluster.shutdown();
    }
  }
}

When debugging locally (with LocalCluster) the program above needs no arguments; when submitting it to a real cluster you must pass one argument, the host name of Nimbus. Build it with Maven into a single jar containing its dependencies (do not bundle Storm's own dependencies), e.g. storm-examples-0.0.1-SNAPSHOT.jar. Because the Topology uses Kafka, before submitting it to the Storm cluster you also need to copy the following dependency jars into the lib directory of Storm in the cluster:

cp /usr/local/kafka/libs/kafka_2.9.2-0.8.1.1.jar /usr/local/storm/lib/
cp /usr/local/kafka/libs/scala-library-2.9.2.jar /usr/local/storm/lib/
cp /usr/local/kafka/libs/metrics-core-2.2.0.jar /usr/local/storm/lib/
cp /usr/local/kafka/libs/snappy-java-1.0.5.jar /usr/local/storm/lib/
cp /usr/local/kafka/libs/zkclient-0.3.jar /usr/local/storm/lib/
cp /usr/local/kafka/libs/log4j-1.2.15.jar /usr/local/storm/lib/
cp /usr/local/kafka/libs/slf4j-api-1.7.2.jar /usr/local/storm/lib/
cp /usr/local/kafka/libs/jopt-simple-3.2.jar /usr/local/storm/lib/

Then submit the Topology we developed:

bin/storm jar /home/storm/storm-examples-0.0.1-SNAPSHOT.jar org.shirdrn.storm.examples.MyKafkaTopology h1

You can monitor the Topology through the log files (under the logs/ directory) or through the Storm UI. If the program has no errors, use the Kafka Producer from earlier to generate messages and you will see our Storm Topology receive and process them in real time.
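
For example, on one of the supervisor nodes the output of the worker processes (including the LOG.info statements in the Bolts above) can be followed with something like the command below; the exact file name depends on which slot port the worker was assigned and on where Storm was installed:

tail -f /usr/local/storm/logs/worker-6700.log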

Integrating Storm + HDFS

The Storm real-time cluster consumes messages from the Kafka broker; messages with real-time requirements go through the real-time processing path, while those that need offline analysis can be written to HDFS and analyzed there. The following Topology implements the latter; the code is shown below:

package org.shirdrn.storm.examples;

import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Map;
import java.util.Random;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.storm.hdfs.bolt.HdfsBolt;
import org.apache.storm.hdfs.bolt.format.DefaultFileNameFormat;
import org.apache.storm.hdfs.bolt.format.DelimitedRecordFormat;
import org.apache.storm.hdfs.bolt.format.FileNameFormat;
import org.apache.storm.hdfs.bolt.format.RecordFormat;
import org.apache.storm.hdfs.bolt.rotation.FileRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.TimedRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.TimedRotationPolicy.TimeUnit;
import org.apache.storm.hdfs.bolt.sync.CountSyncPolicy;
import org.apache.storm.hdfs.bolt.sync.SyncPolicy;

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.StormSubmitter;
import backtype.storm.generated.AlreadyAliveException;
import backtype.storm.generated.InvalidTopologyException;
import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;
import backtype.storm.utils.Utils;

public class StormToHDFSTopology {

     public static class EventSpout extends BaseRichSpout {

  private static final Log LOG = LogFactory.getLog(EventSpout.class);
  private static final long serialVersionUID = 886149197481637894L;
  private SpoutOutputCollector collector;
  private Random rand;
  private String[] records;
         
  @Override
  public void open(Map conf, TopologyContext context,
    SpoutOutputCollector collector) {
       this.collector = collector;    
       rand = new Random();
       records = new String[] {
         "10001     ef2da82d4c8b49c44199655dc14f39f6     4.2.1     HUAWEI G610-U00     HUAWEI     2     70:72:3c:73:8b:22     2014-10-13 12:36:35",
         "10001     ffb52739a29348a67952e47c12da54ef     4.3     GT-I9300     samsung     2     50:CC:F8:E4:22:E2     2014-10-13 12:36:02",
         "10001     ef2da82d4c8b49c44199655dc14f39f6     4.2.1     HUAWEI G610-U00     HUAWEI     2     70:72:3c:73:8b:22     2014-10-13 12:36:35"
       };
  }


  @Override
  public void nextTuple() {
       Utils.sleep(1000);
       DateFormat df = new SimpleDateFormat("yyyy-MM-dd_HH-mm-ss");
       Date d = new Date(System.currentTimeMillis());
       String minute = df.format(d);
       String record = records[rand.nextInt(records.length)];
       LOG.info("EMIT[spout -> hdfs] " + minute + " : " + record);
       collector.emit(new Values(minute, record));
  }

  @Override
  public void declareOutputFields(OutputFieldsDeclarer declarer) {
       declarer.declare(new Fields("minute", "record"));         
  }


     }
    
     public static void main(String[] args) throws AlreadyAliveException, InvalidTopologyException, InterruptedException {
  // use "|" instead of "," for field delimiter
  RecordFormat format = new DelimitedRecordFormat()
          .withFieldDelimiter(" : ");

  // sync the filesystem after every 1k tuples
  SyncPolicy syncPolicy = new CountSyncPolicy(1000);

  // rotate files 
  FileRotationPolicy rotationPolicy = new TimedRotationPolicy(1.0f, TimeUnit.MINUTES);

  FileNameFormat fileNameFormat = new DefaultFileNameFormat()
          .withPath("/storm/").withPrefix("app_").withExtension(".log");

  HdfsBolt hdfsBolt = new HdfsBolt()
          .withFsUrl("hdfs://h1:8020")
          .withFileNameFormat(fileNameFormat)
          .withRecordFormat(format)
          .withRotationPolicy(rotationPolicy)
          .withSyncPolicy(syncPolicy);
         
  TopologyBuilder builder = new TopologyBuilder();
  builder.setSpout("event-spout", new EventSpout(), 3);
  builder.setBolt("hdfs-bolt", hdfsBolt, 2).fieldsGrouping("event-spout", new Fields("minute"));
         
  Config conf = new Config();
         
  String name = StormToHDFSTopology.class.getSimpleName();
  if (args != null && args.length > 0) {
       conf.put(Config.NIMBUS_HOST, args[0]);
       conf.setNumWorkers(3);
       StormSubmitter.submitTopologyWithProgressBar(name, conf, builder.createTopology());
  } else {
       conf.setMaxTaskParallelism(3);
       LocalCluster cluster = new LocalCluster();
       cluster.submitTopology(name, conf, builder.createTopology());
       Thread.sleep(60000);
       cluster.shutdown();
  }
     }

}

In the logic above, the HdfsBolt can be configured in more detail through FileNameFormat, SyncPolicy and FileRotationPolicy (the rotation policy controls when a new log file is started, e.g. after a given amount of time, or once the current file reaches a given size; a size-based sketch is shown right after the plugin configuration below). For more options, refer to the storm-hdfs project. When packaging the code above, note that the Maven packaging configuration that ships with storm-starter may cause errors when the Topology is deployed and run; you can use the maven-shade-plugin instead, configured as follows:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>1.4</version>
  <configuration>
    <createDependencyReducedPom>true</createDependencyReducedPom>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <transformer
              implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
          <transformer
              implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <mainClass></mainClass>
          </transformer>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
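
Coming back to the rotation policy mentioned above: besides rotating by time, storm-hdfs also provides a size-based policy (FileSizeRotationPolicy). A sketch of swapping it into the code above, where only the rotationPolicy line changes and the 128 MB threshold is just an illustrative value, could look like:

import org.apache.storm.hdfs.bolt.rotation.FileRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy.Units;

// rotate to a new file whenever the current one reaches about 128 MB
FileRotationPolicy rotationPolicy = new FileSizeRotationPolicy(128.0f, Units.MB);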

Integrating Kafka + Storm + HDFS

We have now tried Kafka + Storm and Storm + HDFS separately. By replacing the Spout of the latter with the KafkaSpout of the former, we can consume messages from Kafka, do some simple processing in Storm, write the data to HDFS, and finally analyze it offline on the Hadoop platform. Below is a simple example that consumes messages from Kafka, processes them in Storm, and writes them to HDFS; the code is shown below:

package org.shirdrn.storm.examples;

import java.util.Arrays;
import java.util.Map;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.storm.hdfs.bolt.HdfsBolt;
import org.apache.storm.hdfs.bolt.format.DefaultFileNameFormat;
import org.apache.storm.hdfs.bolt.format.DelimitedRecordFormat;
import org.apache.storm.hdfs.bolt.format.FileNameFormat;
import org.apache.storm.hdfs.bolt.format.RecordFormat;
import org.apache.storm.hdfs.bolt.rotation.FileRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.TimedRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.TimedRotationPolicy.TimeUnit;
import org.apache.storm.hdfs.bolt.sync.CountSyncPolicy;
import org.apache.storm.hdfs.bolt.sync.SyncPolicy;

import storm.kafka.BrokerHosts;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.StringScheme;
import storm.kafka.ZkHosts;
import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.StormSubmitter;
import backtype.storm.generated.AlreadyAliveException;
import backtype.storm.generated.InvalidTopologyException;
import backtype.storm.spout.SchemeAsMultiScheme;
import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

public class DistributeWordTopology {
    
     public static class KafkaWordToUpperCase extends BaseRichBolt {

  private static final Log LOG = LogFactory.getLog(KafkaWordToUpperCase.class);
  private static final long serialVersionUID = -5207232012035109026L;
  private OutputCollector collector;
         
  @Override
  public void prepare(Map stormConf, TopologyContext context,
    OutputCollector collector) {
       this.collector = collector;	    
  }

  @Override
  public void execute(Tuple input) {
       String line = input.getString(0).trim();
       LOG.info("RECV[kafka -> splitter] " + line);
       if(!line.isEmpty()) {
    String upperLine = line.toUpperCase();
    LOG.info("EMIT[splitter -> counter] " + upperLine);
    collector.emit(input, new Values(upperLine, upperLine.length()));
       }
       collector.ack(input);
  }

  @Override
  public void declareOutputFields(OutputFieldsDeclarer declarer) {
       declarer.declare(new Fields("line", "len"));         
  }
         
     }
    
     public static class RealtimeBolt extends BaseRichBolt {

  private static final Log LOG = LogFactory.getLog(RealtimeBolt.class);
  private static final long serialVersionUID = -4115132557403913367L;
  private OutputCollector collector;
         
  @Override
  public void prepare(Map stormConf, TopologyContext context,
    OutputCollector collector) {
       this.collector = collector;	    
  }

  @Override
  public void execute(Tuple input) {
       String line = input.getString(0).trim();
       LOG.info("REALTIME: " + line);
       collector.ack(input);
  }

  @Override
  public void declareOutputFields(OutputFieldsDeclarer declarer) {
      
  }

     }

     public static void main(String[] args) throws AlreadyAliveException, InvalidTopologyException, InterruptedException {

  // Configure Kafka
  String zks = "h1:2181,h2:2181,h3:2181";
  String topic = "my-replicated-topic5";
  String zkRoot = "/storm"; // default zookeeper root configuration for storm
  String id = "word";
  BrokerHosts brokerHosts = new ZkHosts(zks);
  SpoutConfig spoutConf = new SpoutConfig(brokerHosts, topic, zkRoot, id);
  spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
  spoutConf.forceFromStart = true;
  spoutConf.zkServers = Arrays.asList(new String[] {"h1", "h2", "h3"});
  spoutConf.zkPort = 2181;
         
  // Configure HDFS bolt
  RecordFormat format = new DelimitedRecordFormat()
          .withFieldDelimiter("\t"); // use "\t" instead of "," for field delimiter
  SyncPolicy syncPolicy = new CountSyncPolicy(1000); // sync the filesystem after every 1k tuples
  FileRotationPolicy rotationPolicy = new TimedRotationPolicy(1.0f, TimeUnit.MINUTES); // rotate files
  FileNameFormat fileNameFormat = new DefaultFileNameFormat()
          .withPath("/storm/").withPrefix("app_").withExtension(".log"); // set file name format
  HdfsBolt hdfsBolt = new HdfsBolt()
          .withFsUrl("hdfs://h1:8020")
          .withFileNameFormat(fileNameFormat)
          .withRecordFormat(format)
          .withRotationPolicy(rotationPolicy)
          .withSyncPolicy(syncPolicy);
         
  // configure & build topology
  TopologyBuilder builder = new TopologyBuilder();
  builder.setSpout("kafka-reader", new KafkaSpout(spoutConf), 5);
  builder.setBolt("to-upper", new KafkaWordToUpperCase(), 3).shuffleGrouping("kafka-reader");
  builder.setBolt("hdfs-bolt", hdfsBolt, 2).shuffleGrouping("to-upper");
  builder.setBolt("realtime", new RealtimeBolt(), 2).shuffleGrouping("to-upper");
         
  // submit topology
  Config conf = new Config();
  String name = DistributeWordTopology.class.getSimpleName();
  if (args != null && args.length > 0) {
       String nimbus = args[0];
       conf.put(Config.NIMBUS_HOST, nimbus);
       conf.setNumWorkers(3);
       StormSubmitter.submitTopologyWithProgressBar(name, conf, builder.createTopology());
  } else {
       conf.setMaxTaskParallelism(3);
       LocalCluster cluster = new LocalCluster();
       cluster.submitTopology(name, conf, builder.createTopology());
       Thread.sleep(60000);
       cluster.shutdown();
  }
     }

}

In the code above, the Bolt named to-upper converts each received line to upper case and then sends a copy of the processed data to each of the two downstream Bolts, hdfs-bolt and realtime, which handle it independently according to their purpose (real-time vs. offline). After packaging, deploy and run this Topology on the Storm cluster:

bin/storm jar ~/storm-examples-0.0.1-SNAPSHOT.jar org.shirdrn.storm.examples.DistributeWordTopology h1

You can check the Topology's status in the Storm UI and inspect the data generated on HDFS.
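
For example, the files written by the HdfsBolt under the /storm/ path configured above can be listed and sampled with the standard HDFS shell; the actual file names are generated by DefaultFileNameFormat, so the name below is only a placeholder:

hdfs dfs -ls /storm/
hdfs dfs -cat /storm/app_<generated-name>.log | head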
