Installing Hue and integrating it with HBase

Hue ships as a component of CDH, but it can also be installed standalone (more work). Hue is a web UI for Hadoop: with it you can operate and browse the applications of the Hadoop ecosystem intuitively. If you install Cloudera Manager, the Hue admin interface normally comes with it automatically.

Step 1: install the build dependencies

yum install -y maven git npm cyrus-sasl-devel cyrus-sasl-gssapi cyrus-sasl-plain krb5-devel libffi-devel \
    libxml2-devel libxslt-devel make mysql mysql-devel openldap-devel python-devel sqlite-devel gmp-devel

 

Step 2: download, build, and install

wget https://github.com/cloudera/hue/archive/master.zip
unzip master.zip
cd hue-master
make apps

Keep going until `make apps` completes without obvious errors.

Step 3: configuration

Configuration file: desktop/conf/pseudo-distributed.ini. The main settings:

[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      fs_defaultfs=hdfs://192.168.42.135:9000
  [[yarn_clusters]]
    [[[default]]]
      resourcemanager_host=192.168.42.135
      resourcemanager_port=8011
      resourcemanager_api_url=http://192.168.42.135:8088

[beeswax]
  hive_server_host=192.168.42.135
  hive_server_port=10000
  hive_conf_dir=/data/hive/conf

 

Start Hue: build/env/bin/hue runserver 0.0.0.0:8000

Once Hue is running, open http://192.168.42.135:8000/ in a browser. On first visit it prompts you to create a user; enter any credentials and you should reach the main page. If the configuration above is wrong, loading Hive tables or browsing HDFS files will fail.
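A quick scripted way to confirm the Hue UI is reachable is a plain HTTP probe. This is only a sketch: a local stub server stands in for Hue here so it runs without a cluster; against a real install you would point it at your Hue URL instead.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubHandler(BaseHTTPRequestHandler):
    """Stand-in for the Hue login page so this sketch runs standalone."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hue login page")
    def log_message(self, *args):
        pass  # silence request logging

def hue_is_up(url, timeout=5):
    """Return True if the Hue UI answers an HTTP GET with status 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

server = HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
# On the cluster from this walkthrough the URL would be http://192.168.42.135:8000/
url = "http://127.0.0.1:%d/" % server.server_port
print(hue_is_up(url))  # → True
```

If this returns False against your real Hue host, check that `runserver` is still running and that port 8000 is not blocked by a firewall.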

If something fails, search for the error message; most problems have well-documented fixes.

 

Integrating Hue with HBase

First, HBase's Thrift service must be running; in a multi-node cluster, starting it on a single node is enough.

Note that this must be the Thrift (v1) service, not Thrift2:

/usr/hdp/current/hbase-master/bin/hbase-daemon.sh start thrift

It listens on port 9090 by default.
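After starting the daemon, it is worth verifying that something is actually listening on port 9090. A small TCP probe does the job; a throwaway local listener is used below so the sketch runs without a cluster.

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Throwaway listener on an ephemeral port; against the real cluster you
# would check port_open("192.168.42.135", 9090) instead.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
probe_port = listener.getsockname()[1]
print(port_open("127.0.0.1", probe_port))  # → True
```

A successful connection only proves the port is open, not which Thrift version answers; if Hue's HBase app later fails with protocol errors, double-check that it was `start thrift` and not `start thrift2`.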

The steps below mainly follow https://blog.csdn.net/zhangshenghang/article/details/85776134

 

Add to hbase-site.xml (this can also be done through the cluster's web UI):

<property>
  <name>hbase.thrift.support.proxyuser</name>
  <value>true</value>
</property>
 
<property>
  <name>hbase.regionserver.thrift.http</name>
  <value>true</value>
</property>

Add to core-site.xml (also possible through the web UI):

<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hbase.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hbase.groups</name>
  <value>*</value>
</property>
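A missing or mistyped proxyuser property is a common cause of "User: hue is not allowed to impersonate" errors, so it can help to script a sanity check of core-site.xml. The sketch below parses an inline copy of the fragment above; against a real install you would parse the file itself (the `/etc/hadoop/conf/core-site.xml` path is the usual location and may differ on your setup).

```python
import xml.etree.ElementTree as ET

# Inline copy of the fragment added above, so the sketch runs standalone.
CORE_SITE = """<?xml version="1.0"?>
<configuration>
  <property><name>hadoop.proxyuser.hue.hosts</name><value>*</value></property>
  <property><name>hadoop.proxyuser.hue.groups</name><value>*</value></property>
  <property><name>hadoop.proxyuser.hbase.hosts</name><value>*</value></property>
  <property><name>hadoop.proxyuser.hbase.groups</name><value>*</value></property>
</configuration>"""

def proxy_props(xml_text):
    """Collect hadoop.proxyuser.* name/value pairs from a core-site.xml body."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value")
            for p in root.iter("property")
            if p.findtext("name", "").startswith("hadoop.proxyuser.")}

# For a real file: proxy_props(open("/etc/hadoop/conf/core-site.xml").read())
props = proxy_props(CORE_SITE)
print(props)
```

All four entries should come back with value `*`; anything missing means the edit did not land on the node you checked.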

Then check in the web UI that the properties were actually applied.

Next, edit the HBase-related settings in desktop/conf/pseudo-distributed.ini.

The relevant sections of the file are attached below:

[hadoop]

  # Configuration for HDFS NameNode
  # ------------------------------------------------------------------------
  [[hdfs_clusters]]
    # HA support by using HttpFs

    [[[default]]]
      # Enter the filesystem uri
      fs_defaultfs=hdfs://localhost:8020

      # NameNode logical name.
      ## logical_name=

      # Use WebHdfs/HttpFs as the communication mechanism.
      # Domain should be the NameNode or HttpFs host.
      # Default port is 14000 for HttpFs.
      webhdfs_url=http://localhost:50070/webhdfs/v1

      # Change this if your HDFS cluster is Kerberos-secured
      ## security_enabled=false

      # In secure mode (HTTPS), if SSL certificates from YARN Rest APIs
      # have to be verified against certificate authority
      ## ssl_cert_ca_verify=True

      # Directory of the Hadoop configuration
       hadoop_conf_dir=/etc/hadoop/conf

  # Configuration for YARN (MR2)
  # ------------------------------------------------------------------------
  [[yarn_clusters]]

    [[[default]]]
      # Enter the host on which you are running the ResourceManager
       resourcemanager_host=localhost

      # The port where the ResourceManager IPC listens on
       resourcemanager_port=8032

      # Whether to submit jobs to this cluster
      submit_to=True

      # Resource Manager logical name (required for HA)
      ## logical_name=

      # Change this if your YARN cluster is Kerberos-secured
      ## security_enabled=false

      # URL of the ResourceManager API
       resourcemanager_api_url=http://localhost:8088

      # URL of the ProxyServer API
       proxy_api_url=http://localhost:8088

      # URL of the HistoryServer API
       history_server_api_url=http://localhost:19888

      # URL of the Spark History Server
      ## spark_history_server_url=http://localhost:18088

      # Change this if your Spark History Server is Kerberos-secured
      ## spark_history_server_security_enabled=false

      # In secure mode (HTTPS), if SSL certificates from YARN Rest APIs
      # have to be verified against certificate authority
      ## ssl_cert_ca_verify=True

    # HA support by specifying multiple clusters.
    # Redefine different properties there.
    # e.g.
 

 

 

[hbase]

  # Comma-separated list of HBase Thrift servers for clusters in the format of '(name|host:port)'.

  # Use full hostname. If hbase.thrift.ssl.enabled in hbase-site is set to true, https will be used otherwise it will use http

  # If using Kerberos we assume GSSAPI SASL, not PLAIN.

  hbase_clusters=(Cluster|localhost:9090)

 

  # HBase configuration directory, where hbase-site.xml is located.

  hbase_conf_dir=/etc/hbase/conf

 

  # Hard limit of rows or columns per row fetched before truncating.

  truncate_limit = 500

 

  # Should come from hbase-site.xml, do not set. 'framed' is used to chunk up responses, used with the nonblocking server in Thrift but is not supported in Hue.

  # 'buffered' used to be the default of the HBase Thrift Server. Default is buffered when not set in hbase-site.xml.

   thrift_transport=buffered

 

  # Choose whether Hue should validate certificates received from the server.

   ssl_cert_ca_verify=true
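The `thrift_transport` setting matters because, per the comments above, Hue does not support the framed transport. The difference is simple: the framed transport wraps every Thrift message in a 4-byte big-endian length prefix, while buffered sends the raw bytes, so client and server must agree. A sketch of the framing itself:

```python
import struct

def frame(payload):
    """Wrap a message the way Thrift's framed transport does: 4-byte
    big-endian length prefix followed by the payload."""
    return struct.pack(">I", len(payload)) + payload

def unframe(data):
    """Strip the 4-byte length prefix and return the payload."""
    (length,) = struct.unpack(">I", data[:4])
    return data[4:4 + length]

msg = b"scannerOpen"  # hypothetical message body, just for illustration
framed = frame(msg)
print(len(framed) - len(msg))   # → 4
print(unframe(framed) == msg)   # → True
```

If the HBase Thrift server is configured for framed transport, a buffered client like Hue misreads that length prefix as message content and the connection fails, so keep both sides on buffered.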

 

Restart the affected services and Hue, and the HBase browser will appear. The restart takes a while, so be patient.
