We recently upgraded Hive in production (CDH 4.2.0 Hive 0.10 -> Apache Hive 0.11 with our patches) and rolled out Shark, hitting quite a few pitfalls along the way. Let me start with a Hiveserver issue.
After connecting with beeline, any query immediately fails with:
USER xxx don't have write privilegs under /tmp/hive-hdfs
That's odd: impersonation is already enabled, so why is it still writing temporary files into the scratchdir under HDFS? Reading the code shows that CDH 4.2's Hive and Apache Hive 0.11 behave differently at this check:
Apache Hive 0.11: a per-user hive-xxx scratchdir is used only when Kerberos authentication is enabled (together with doAs); otherwise the scratchdir of the user who started the hiveserver is used:
```java
if (cliService.getHiveConf().getVar(ConfVars.HIVE_SERVER2_AUTHENTICATION)
      .equals(HiveAuthFactory.AuthTypes.KERBEROS.toString()) &&
    cliService.getHiveConf().getBoolVar(ConfVars.HIVE_SERVER2_ENABLE_DOAS)) {
  String delegationTokenStr = null;
  try {
    delegationTokenStr = cliService.getDelegationTokenFromMetaStore(userName);
  } catch (UnsupportedOperationException e) {
    // The delegation token is not applicable in the given deployment mode
  }
  sessionHandle = cliService.openSessionWithImpersonation(userName,
      req.getPassword(), req.getConfiguration(), delegationTokenStr);
} else {
  sessionHandle = cliService.openSession(userName, req.getPassword(),
      req.getConfiguration());
}
```
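So under 0.11 semantics, per-user scratchdirs only kick in with both settings below enabled. A sketch of the relevant hive-site.xml fragment (the property names correspond to the ConfVars in the code above; values are illustrative):

```xml
<!-- hive-site.xml: both settings are required for Hive 0.11 to use a
     per-user scratchdir; with any other auth mode the branch above falls
     through to openSession() and the server start user's scratchdir. -->
<property>
  <name>hive.server2.authentication</name>
  <value>KERBEROS</value>
</property>
<property>
  <name>hive.server2.enable.doAs</name>
  <value>true</value>
</property>
```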
Cloudera 4.2.0's Hive 0.10, by contrast, uses a separate scratchdir whenever impersonation is enabled:
```java
if (cliService.getHiveConf()
      .getBoolVar(HiveConf.ConfVars.HIVE_SERVER2_KERBEROS_IMPERSONATION)) {
  String delegationTokenStr = null;
  try {
    delegationTokenStr = cliService.getDelegationTokenFromMetaStore(userName);
  } catch (UnsupportedOperationException e) {
    // The delegation token is not applicable in the given deployment mode
  }
  sessionHandle = cliService.openSessionWithImpersonation(userName,
      req.getPassword(), req.getConfiguration(), delegationTokenStr);
} else {
  sessionHandle = cliService.openSession(userName, req.getPassword(),
      req.getConfiguration());
}
```
This was acknowledged as a Hiveserver bug and fixed in 0.13: https://issues.apache.org/jira/browse/HIVE-5486
The workaround is simple: just chmod /tmp/hive-hdfs to 777. =.= What a pain.
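For reference, the workaround looks like this (hedged: the exact path depends on your hive.exec.scratchdir setting, and the HDFS command must be run by a user allowed to change /tmp/hive-hdfs):

```shell
# The real fix targets HDFS; shown commented because it needs a live cluster:
#   hadoop fs -chmod 777 /tmp/hive-hdfs
#
# The same permission change, demonstrated on a local directory:
scratch=$(mktemp -d)
chmod 777 "$scratch"
stat -c '%a' "$scratch"   # prints 777
```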