Background
We have been running Hive Server 1 for a long time: users' ad-hoc queries, hive-web, wormhole, and our operations tools all submit statements through it. But the server is extremely unstable and frequently hangs for no apparent reason, blocking every client connection. We ended up running a crontab watchdog script that periodically issues "show tables" to detect a hung server; when it hangs, the only remedy is to kill the daemon and restart it.

Hive Server 1 also handles concurrency poorly. Session state is bound to a Thrift worker thread, so if a user sets some environment variables on a connection and then disconnects, the next user's connection may be assigned the same worker thread and silently inherit the previous configuration. This happens because Thrift cannot detect that a client has disconnected, so the server never gets a chance to clean up the session state; binding sessions to worker threads also makes HA hard to build. Hive Server 2 supports sessions properly: every client RPC call carries a SessionID, which the server maps to the SessionState holding that session's information, so any worker thread can execute statements for any session instead of being pinned to one.
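To make the session mechanism concrete, here is a minimal sketch against the raw Thrift API (the generated classes live in org.apache.hive.service.cli.thrift in the hive-service jar). It assumes an unsecured server (hive.server2.authentication=NOSASL) so a plain binary transport works; against the Kerberos setup below, the JDBC driver performs a GSSAPI SASL handshake instead.

import org.apache.hive.service.cli.thrift.TCLIService;
import org.apache.hive.service.cli.thrift.TCloseSessionReq;
import org.apache.hive.service.cli.thrift.TExecuteStatementReq;
import org.apache.hive.service.cli.thrift.TOpenSessionReq;
import org.apache.hive.service.cli.thrift.TOpenSessionResp;
import org.apache.hive.service.cli.thrift.TSessionHandle;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class SessionSketch {
    public static void main(String[] args) throws Exception {
        TTransport transport = new TSocket("test84.hadoop", 10000);
        transport.open();
        TCLIService.Client client = new TCLIService.Client(new TBinaryProtocol(transport));

        // OpenSession returns a TSessionHandle; the server keeps the
        // corresponding session state on its side.
        TOpenSessionResp open = client.OpenSession(new TOpenSessionReq());
        TSessionHandle session = open.getSessionHandle();

        // Every later RPC carries the handle, so any worker thread can
        // look the session up -- nothing is pinned to one thread.
        client.ExecuteStatement(new TExecuteStatementReq(session, "show tables"));

        client.CloseSession(new TCloseSessionReq(session));
        transport.close();
    }
}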
Hive 0.11 ships both Hive Server 1 and Hive Server 2; version 1 is kept only for backward compatibility. In the long run, Hive Server 2 is the preferred choice.
Configuration
The relevant settings in hive-site.xml:
<property>
  <name>hive.server2.thrift.port</name>
  <value>10000</value>
</property>
<property>
  <name>hive.server2.thrift.bind.host</name>
  <value>test84.hadoop</value>
</property>
<property>
  <name>hive.server2.authentication</name>
  <value>KERBEROS</value>
  <description>
    Client authentication types.
    NONE: no authentication check
    LDAP: LDAP/AD based authentication
    KERBEROS: Kerberos/GSSAPI authentication
    CUSTOM: Custom authentication provider
            (Use with property hive.server2.custom.authentication.class)
  </description>
</property>
<property>
  <name>hive.server2.authentication.kerberos.principal</name>
  <value>hadoop/[email protected]</value>
</property>
<property>
  <name>hive.server2.authentication.kerberos.keytab</name>
  <value>/etc/hadoop.keytab</value>
</property>
<property>
  <name>hive.server2.enable.doAs</name>
  <value>true</value>
</property>
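Note that hive.server2.enable.doAs=true makes Hive Server 2 run queries as the connecting user rather than as the hadoop server principal. For that to work, Hadoop itself has to trust the server principal as a proxy user. A sketch of the matching core-site.xml entries (the wide-open values here are assumptions for illustration; lock them down in production):

<property>
  <name>hadoop.proxyuser.hadoop.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hadoop.groups</name>
  <value>*</value>
</property>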
Running $HIVE_HOME/bin/hive --service hiveserver2 or $HIVE_HOME/bin/hiveserver2 invokes the main method of org.apache.hive.service.server.HiveServer2 to start the server:
2013-09-17 14:59:21,081 INFO server.HiveServer2 (HiveStringUtils.java:startupShutdownMessage(604)) - STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting HiveServer2
STARTUP_MSG: host = test84.hadoop/10.1.77.84
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.11.0
STARTUP_MSG: classpath = (omitted).................
2013-09-17 14:59:21,957 INFO security.UserGroupInformation (UserGroupInformation.java:loginUserFromKeytab(633)) - Login successful for user hadoop/[email protected] using keytab file /etc/hadoop.keytab
2013-09-17 14:59:21,958 INFO service.AbstractService (AbstractService.java:init(89)) - Service:OperationManager is inited.
2013-09-17 14:59:21,958 INFO service.AbstractService (AbstractService.java:init(89)) - Service:SessionManager is inited.
2013-09-17 14:59:21,958 INFO service.AbstractService (AbstractService.java:init(89)) - Service:CLIService is inited.
2013-09-17 14:59:21,959 INFO service.AbstractService (AbstractService.java:init(89)) - Service:ThriftCLIService is inited.
2013-09-17 14:59:21,959 INFO service.AbstractService (AbstractService.java:init(89)) - Service:HiveServer2 is inited.
2013-09-17 14:59:21,959 INFO service.AbstractService (AbstractService.java:start(104)) - Service:OperationManager is started.
2013-09-17 14:59:21,960 INFO service.AbstractService (AbstractService.java:start(104)) - Service:SessionManager is started.
2013-09-17 14:59:21,960 INFO service.AbstractService (AbstractService.java:start(104)) - Service:CLIService is started.
2013-09-17 14:59:22,007 INFO metastore.HiveMetaStore (HiveMetaStore.java:newRawStore(409)) - 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
2013-09-17 14:59:22,032 INFO metastore.ObjectStore (ObjectStore.java:initialize(222)) - ObjectStore, initialize called
2013-09-17 14:59:22,955 INFO metastore.ObjectStore (ObjectStore.java:getPMF(267)) - Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
2013-09-17 14:59:23,000 INFO metastore.ObjectStore (ObjectStore.java:setConf(205)) - Initialized ObjectStore
2013-09-17 14:59:23,909 INFO metastore.HiveMetaStore (HiveMetaStore.java:logInfo(452)) - 0: get_databases: default
2013-09-17 14:59:23,912 INFO HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(238)) - ugi=hadoop/[email protected] ip=unknown-ip-addr cmd=get_databases: default
2013-09-17 14:59:23,933 INFO service.AbstractService (AbstractService.java:start(104)) - Service:ThriftCLIService is started.
2013-09-17 14:59:23,948 INFO service.AbstractService (AbstractService.java:start(104)) - Service:HiveServer2 is started.
2013-09-17 14:59:24,025 INFO security.UserGroupInformation (UserGroupInformation.java:loginUserFromKeytab(633)) - Login successful for user hadoop/[email protected] using keytab file /etc/hadoop.keytab
2013-09-17 14:59:24,047 INFO thrift.ThriftCLIService (ThriftCLIService.java:run(435)) - ThriftCLIService listening on test84.hadoop/10.1.77.84:10000
When impersonation (hive.server2.enable.doAs) is enabled, it is advisable to disable the Hadoop FileSystem caches at startup, since cached FileSystem objects are keyed per user and can accumulate across many connections:

$HIVE_HOME/bin/hive --service hiveserver2 --hiveconf fs.hdfs.impl.disable.cache=true --hiveconf fs.file.impl.disable.cache=true
Hive Server 2 ships with Beeline, a JDBC-based command-line client; with Kerberos, the server principal is appended to the connection URL:

dpsh-3.2$ bin/beeline
Beeline version 0.11.0 by Apache Hive
beeline> !connect jdbc:hive2://test84.hadoop:10000/default;principal=hadoop/[email protected]
scan complete in 2ms
Connecting to jdbc:hive2://test84.hadoop:10000/default;principal=hadoop/[email protected]
Enter username for jdbc:hive2://test84.hadoop:10000/default;principal=hadoop/[email protected]:
Enter password for jdbc:hive2://test84.hadoop:10000/default;principal=hadoop/[email protected]:
Connected to: Hive (version 0.11.0)
Driver: Hive (version 0.11.0)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://test84.hadoop:10000/default> select count(1) from abc;
+------+
| _c0  |
+------+
| 0    |
+------+
1 row selected (29.277 seconds)
0: jdbc:hive2://test84.hadoop:10000/default> !q
Closing: org.apache.hive.jdbc.HiveConnection
Programs can also connect through the JDBC driver, org.apache.hive.jdbc.HiveDriver:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;

public class HiveTest {
    public static void main(String[] args) throws SQLException {
        // Register the Hive Server 2 JDBC driver.
        try {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            return;
        }
        // With Kerberos, the server principal goes in the URL and the
        // username/password arguments are left empty.
        Connection conn = DriverManager.getConnection(
                "jdbc:hive2://test84.hadoop:10000/default;principal=hadoop/[email protected]",
                "", "");
        Statement stmt = conn.createStatement();
        String sql = "select * from abc";
        System.out.println("Running: " + sql);
        ResultSet res = stmt.executeQuery(sql);
        // Print each column's type and name from the result metadata.
        ResultSetMetaData rsmd = res.getMetaData();
        int columnCount = rsmd.getColumnCount();
        for (int i = 1; i <= columnCount; i++) {
            System.out.println(rsmd.getColumnTypeName(i) + ":"
                    + rsmd.getColumnName(i));
        }
        // Print each row; assumes abc's first two columns are int and string.
        while (res.next()) {
            System.out.println(String.valueOf(res.getInt(1)) + "\t"
                    + res.getString(2));
        }
        res.close();
        stmt.close();
        conn.close();
    }
}
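To compile and run this against a 0.11 server, the classpath needs, in our experience, at least hive-jdbc, hive-service, hive-exec, libthrift, and the Hadoop common and slf4j jars (the exact set may vary by build), plus a valid Kerberos ticket (kinit) in the environment.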