Installing Snappy with Hadoop 2.2.0 and HBase 0.98

1. Install the required dependency packages and software

The dependency packages needed are:

gcc, g++, autoconf, automake, libtool

The companion software needed is:

Java 6 and Maven

For the dependency packages above: on Ubuntu, install them with sudo apt-get install <package>; on CentOS, use sudo yum install <package>.
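For example, the whole set can be installed in one command per platform (a sketch; exact package names can vary between distributions, and on CentOS the C++ compiler package is named gcc-c++):

$ sudo apt-get install gcc g++ autoconf automake libtool    # Ubuntu
$ sudo yum install gcc gcc-c++ autoconf automake libtool    # CentOS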

For installing Java and Maven, see the post 《Linux下Java、Maven、Tomcat的安裝》 (Installing Java, Maven, and Tomcat on Linux).

2. Download snappy-1.1.2

Download links:

Link 1: https://code.google.com/p/snappy/wiki/Downloads?tm=2

Link 2: http://download.csdn.net/detail/iam333/7725883

3. Build and install the shared library

After downloading, extract the archive somewhere; here we assume it was extracted into the home directory. Then run the following commands:

$ cd ~/snappy-1.1.2
$ ./configure
$ make
$ sudo make install
Then run the following commands to check whether the installation succeeded:

$ cd /usr/local/lib
$ ll libsnappy.*
-rw-r--r-- 1 root root 233506 Aug 7 11:56 libsnappy.a
-rwxr-xr-x 1 root root    953 Aug 7 11:56 libsnappy.la
lrwxrwxrwx 1 root root     18 Aug 7 11:56 libsnappy.so -> libsnappy.so.1.2.1
lrwxrwxrwx 1 root root     18 Aug 7 11:56 libsnappy.so.1 -> libsnappy.so.1.2.1
-rwxr-xr-x 1 root root 147758 Aug 7 11:56 libsnappy.so.1.2.1
If no errors occurred during installation and the files above are present under /usr/local/lib, snappy is installed successfully.

4. Build hadoop-snappy from source

1) Download the source; there are two ways: check it out with svn (steps a and b below), or download it directly (see the link after them).

a. Install svn: on Ubuntu, run sudo apt-get install subversion; on CentOS, run sudo yum install subversion.

b. Check the source out of Google's svn repository with the following command:

$ svn checkout http://hadoop-snappy.googlecode.com/svn/trunk/ hadoop-snappy
This checks the hadoop-snappy source out into a hadoop-snappy directory under the current working directory.

However, since Google's services are often unreachable from mainland China, the second way is to download the source directly: http://download.csdn.net/detail/iam333/7726023

2) Build the hadoop-snappy source

Change into the hadoop-snappy source directory and run one of the following commands:

a. If snappy was installed to the default path above, run:

mvn package
b. If snappy was installed to a custom path, run:

mvn package [-Dsnappy.prefix=SNAPPY_INSTALLATION_DIR]
where SNAPPY_INSTALLATION_DIR is the snappy installation prefix.
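For example, if snappy had been configured with a hypothetical prefix of /opt/snappy (i.e. ./configure --prefix=/opt/snappy), the build command would be:

$ mvn package -Dsnappy.prefix=/opt/snappy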

Problems that may come up during the build:

a) /root/modules/hadoop-snappy/maven/build-compilenative.xml:62: Execute failed: java.io.IOException: Cannot run program "autoreconf" (in directory "/root/modules/hadoop-snappy/target/native-src"): java.io.IOException: error=2, No such file or directory

Solution: the message suggests a missing file, but that file lives under target and is generated during the build, so it would not exist beforehand anyway. The root cause is not a missing file: Hadoop Snappy has build prerequisites (autoreconf among them). Install the dependency packages listed at the top of this post.

b) The following error appears:

[exec] make: *** [src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.lo] Error 1
 
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (compile) on project hadoop-snappy: An Ant BuildException has occured: The following error occurred while executing this line:
[ERROR] /home/ngc/Char/snap/hadoop-snappy/hadoop-snappy-read-only/maven/build-compilenative.xml:75: exec returned: 
Solution: the official Hadoop Snappy documentation only lists gcc as a requirement, without specifying a version. In practice, Hadoop Snappy needs gcc 4.4; if your gcc is newer than 4.4, the build fails with this error.

Assuming the system is CentOS, run the following commands (note: on Ubuntu, replace sudo yum install with sudo apt-get install):

$ sudo yum install gcc-4.4
$ sudo rm /usr/bin/gcc
$ sudo ln -s /usr/bin/gcc-4.4 /usr/bin/gcc
Check that the switch took effect:

$ gcc --version
gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-3)
Copyright (C) 2010 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
c) The following error appears:

[exec] /bin/bash ./libtool --tag=CC   --mode=link gcc -g -Wall -fPIC -O2 -m64 -g -O2 -version-info 0:1:0 -L/usr/local//lib -o libhadoopsnappy.la -rpath /usr/local/lib src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.lo src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.lo  -ljvm -ldl 
[exec] /usr/bin/ld: cannot find -ljvm
[exec] collect2: ld returned 1 exit status
[exec] make: *** [libhadoopsnappy.la] Error 1
[exec] libtool: link: gcc -shared  -fPIC -DPIC  src/org/apache/hadoop/io/compress/snappy/.libs/SnappyCompressor.o src/org/apache/hadoop/io/compress/snappy/.libs/SnappyDecompressor.o   -L/usr/local//lib -ljvm -ldl  -O2 -m64 -O2   -Wl,-soname -Wl,libhadoopsnappy.so.0 -o .libs/libhadoopsnappy.so.0.0.1
This happens because libjvm.so from the installed JVM has not been symlinked into /usr/local/lib. On a 64-bit system, libjvm.so lives under the JDK's jre/lib/amd64/server/ directory (e.g. /root/bin/jdk1.6.0_37/jre/lib/amd64/server/). Create the link as follows:

$ sudo ln -s /usr/local/jdk1.6.0_45/jre/lib/amd64/server/libjvm.so /usr/local/lib/
That resolves the problem.


5. Configure snappy for Hadoop 2.2.0

Once hadoop-snappy builds successfully, a number of files appear under hadoop-snappy/target, among them hadoop-snappy-0.0.1-SNAPSHOT.tar.gz.

1) Extract hadoop-snappy-0.0.1-SNAPSHOT.tar.gz under target, then copy the native lib files:

$ sudo cp -r ~/hadoop-snappy/target/hadoop-snappy-0.0.1-SNAPSHOT/lib/native/Linux-amd64-64/* $HADOOP_HOME/lib/native/Linux-amd64-64/
2) Copy hadoop-snappy-0.0.1-SNAPSHOT.jar from target into $HADOOP_HOME/lib.
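For example (assuming, as above, that the source was built under ~/hadoop-snappy):

$ sudo cp ~/hadoop-snappy/target/hadoop-snappy-0.0.1-SNAPSHOT.jar $HADOOP_HOME/lib/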

3) Edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh and add:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native/Linux-amd64-64/:/usr/local/lib/
4) Configure $HADOOP_HOME/etc/hadoop/mapred-site.xml. The compression-related options in this file are:
<property>
  <name>mapred.output.compress</name>
  <value>false</value>
  <description>Should the job outputs be compressed?
  </description>
</property>
 
<property>
  <name>mapred.output.compression.type</name>
  <value>RECORD</value>
  <description>If the job outputs are to compressed as SequenceFiles, how should
               they be compressed? Should be one of NONE, RECORD or BLOCK.
  </description>
</property>
 
<property>
  <name>mapred.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec</value>
  <description>If the job outputs are compressed, how should they be compressed?
  </description>
</property>
 
<property>
  <name>mapred.compress.map.output</name>
  <value>false</value>
  <description>Should the outputs of the maps be compressed before being
               sent across the network. Uses SequenceFile compression.
  </description>
</property>
 
<property>
  <name>mapred.map.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec</value>
  <description>If the map outputs are compressed, how should they be 
               compressed?
  </description>
</property>
Configure these to suit your needs. The codec types are as follows:

<property>
    <name>io.compression.codecs</name>
    <value>
      org.apache.hadoop.io.compress.GzipCodec,
      org.apache.hadoop.io.compress.DefaultCodec,
      org.apache.hadoop.io.compress.BZip2Codec,
      org.apache.hadoop.io.compress.SnappyCodec
    </value>
  </property>
SnappyCodec is the codec that provides snappy compression.
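For example, to compress map outputs with snappy, a minimal sketch (using only the properties listed above; adapt the values to your own jobs) would add to mapred-site.xml:

<property>
  <name>mapred.compress.map.output</name>
  <value>true</value>
</property>

<property>
  <name>mapred.map.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>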

5) Once everything is configured, restart the Hadoop cluster.
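A sketch of the restart, assuming the standard sbin scripts of a Hadoop 2.x tarball install:

$ $HADOOP_HOME/sbin/stop-yarn.sh && $HADOOP_HOME/sbin/stop-dfs.sh
$ $HADOOP_HOME/sbin/start-dfs.sh && $HADOOP_HOME/sbin/start-yarn.sh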

6. Configure snappy for HBase 0.98

1) Set up the lib files under HBase's lib/native/Linux-amd64-64/. For simplicity, just copy all the lib files under $HADOOP_HOME/lib/native/Linux-amd64-64/ into the corresponding HBase directory:

$ sudo cp -r $HADOOP_HOME/lib/native/Linux-amd64-64/* $HBASE_HOME/lib/native/Linux-amd64-64/
2) Configure the HBase environment variables in hbase-env.sh:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native/Linux-amd64-64/:/usr/local/lib/
export HBASE_LIBRARY_PATH=$HBASE_LIBRARY_PATH:$HBASE_HOME/lib/native/Linux-amd64-64/:/usr/local/lib/
export CLASSPATH=$CLASSPATH:$HBASE_LIBRARY_PATH
Note: don't forget to set HADOOP_HOME and HBASE_HOME at the top of hbase-env.sh.

3) Once configured, restart HBase.
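For example, with the scripts shipped in $HBASE_HOME/bin:

$ $HBASE_HOME/bin/stop-hbase.sh
$ $HBASE_HOME/bin/start-hbase.sh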

4) Verify the installation

From the HBase installation directory, run:

$ bin/hbase shell
2014-08-07 15:11:35,874 INFO  [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.98.2-hadoop2, r1591526, Wed Apr 30 20:17:33 PDT 2014

hbase(main):001:0> 
Then create a table that uses snappy compression:

hbase(main):001:0> create 'test_snappy', {NAME => 'cf', COMPRESSION => 'SNAPPY'}
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/q/hbase/hbase-0.98.2-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/q/hadoop2x/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
0 row(s) in 1.2580 seconds

=> Hbase::Table - test_snappy
hbase(main):002:0> 
Describe the newly created test_snappy table:

hbase(main):002:0> describe 'test_snappy'
DESCRIPTION                                                                                                                               ENABLED                                                                    
 'test_snappy', {NAME => 'cf', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSIO true                                                                       
 N => 'SNAPPY', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOC                                                                            
 KCACHE => 'true'}                                                                                                                                                                                                   
1 row(s) in 0.0420 seconds
As you can see, COMPRESSION => 'SNAPPY'.

Next, try inserting some data:

hbase(main):003:0> put 'test_snappy', 'key1', 'cf:q1', 'value1'
0 row(s) in 0.0790 seconds

hbase(main):004:0>
And scan the test_snappy table:

hbase(main):004:0> scan 'test_snappy'
ROW                                                    COLUMN+CELL                                                                                                                                                   
 key1                                                  column=cf:q1, timestamp=1407395814255, value=value1                                                                                                           
1 row(s) in 0.0170 seconds

hbase(main):005:0> 
If all of the above executes correctly, the configuration is correct.

Troubleshooting:

a) After configuring, the following exception appears when HBase starts:

WARN [main] util.CompressionTest: Can't instantiate codec: snappy
java.io.IOException: java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:96)
at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:62)
at org.apache.hadoop.hbase.regionserver.HRegionServer.checkCodecs(HRegionServer.java:660)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:538)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
This means the configuration is still not right; carefully check the settings in hbase-env.sh.
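A quick way to test the codec directly is HBase's CompressionTest utility, the same class that appears in the stack trace above (the file path below is just an arbitrary test location); if everything is wired up correctly, the run should end with SUCCESS:

$ bin/hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/snappy-test.txt snappy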

Please credit the source when reposting: http://blog.csdn.net/iAm333
