0032 - How to Install and Use Sentry on a Kerberos-Enabled CDH Cluster (Part 2)

5. Sentry Column-Level Permission Management


1. Add the fayson_r user on all cluster nodes

[root@ip-172-31-6-148 cdh-shell-bak]# useradd fayson_r
[root@ip-172-31-6-148 cdh-shell-bak]# id fayson_r
uid=504(fayson_r) gid=504(fayson_r) groups=504(fayson_r)
[root@ip-172-31-6-148 cdh-shell-bak]# 
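
The transcript above covers a single node, while the step requires the user (and its group) to exist on every cluster node so that group membership resolves consistently. A minimal sketch for repeating it across the cluster, assuming passwordless SSH as root and a hypothetical hosts.txt listing one cluster hostname per line:

# hosts.txt is a hypothetical file listing every cluster hostname, one per line
while read host; do
  # create fayson_r only if it does not already exist, then show its uid/gid
  ssh root@"$host" 'id fayson_r >/dev/null 2>&1 || useradd fayson_r; id fayson_r'
done < hosts.txt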

2. Create the Kerberos principal fayson_r

[root@ip-172-31-6-148 ~]# kadmin.local
Authenticating as principal hive/[email protected] with password.
kadmin.local:  addprinc [email protected]
WARNING: no policy specified for [email protected]; defaulting to no policy
Enter password for principal "[email protected]": 
Re-enter password for principal "[email protected]": 
Principal "[email protected]" created.
kadmin.local:  

3. Log in to Kerberos as the hive user

Connect to HiveServer2 with beeline, create the columnread role, grant it read permission on column s1 of the test table, and then grant the columnread role to the fayson_r user group.

[root@ip-172-31-6-148 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hive/[email protected]

Valid starting     Expires            Service principal
09/07/17 15:27:58  09/08/17 15:27:58  krbtgt/[email protected]
        renew until 09/12/17 15:27:58
[root@ip-172-31-6-148 ~]# beeline 
Beeline version 1.1.0-cdh5.12.1 by Apache Hive
beeline> create role columnread;
No current connection
beeline> !connect jdbc:hive2://localhost:10000/;principal=hive/[email protected]
...
0: jdbc:hive2://localhost:10000/> create role columnread;
...
INFO  : OK
No rows affected (0.183 seconds)
0: jdbc:hive2://localhost:10000/> grant select(s1) on table test to role columnread;
...
INFO  : OK
No rows affected (0.105 seconds)
0: jdbc:hive2://localhost:10000/> grant role columnread to group fayson_r;
...
INFO  : OK
No rows affected (0.105 seconds)
0: jdbc:hive2://localhost:10000/> 

4. Test as the fayson_r user

Log in to Kerberos as fayson_r and connect to HiveServer2 through beeline.

[root@ip-172-31-6-148 ~]# kdestroy
[root@ip-172-31-6-148 ~]# kinit fayson_r
Password for [email protected]:
[root@ip-172-31-6-148 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: [email protected]

Valid starting     Expires            Service principal
09/08/17 03:16:47  09/09/17 03:16:47  krbtgt/[email protected]
        renew until 09/15/17 03:16:47
[root@ip-172-31-6-148 ~]# beeline
Beeline version 1.1.0-cdh5.12.1 by Apache Hive
beeline> !connect jdbc:hive2://localhost:10000/;principal=hive/[email protected]
scan complete in 2ms
Connecting to jdbc:hive2://localhost:10000/;principal=hive/[email protected]
Connected to: Apache Hive (version 1.1.0-cdh5.12.1)
Driver: Hive JDBC (version 1.1.0-cdh5.12.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10000/> show databases;
...
INFO  : OK
+----------------+--+
| database_name  |
+----------------+--+
| default        |
+----------------+--+
1 row selected (0.336 seconds)
0: jdbc:hive2://localhost:10000/> show tables;
...
INFO  : OK
+-----------+--+
| tab_name  |
+-----------+--+
| test      |
+-----------+--+
1 row selected (0.202 seconds)
0: jdbc:hive2://localhost:10000/> select * from test;
Error: Error while compiling statement: FAILED: SemanticException No valid privileges
 User fayson_r does not have privileges for QUERY
 The required privileges: Server=server1->Db=default->Table=test->Column=s2->action=select; (state=42000,code=40000)
0: jdbc:hive2://localhost:10000/> select s1 from test;
...
INFO  : OK
+---------+--+
|   s1    |
+---------+--+
| a       |
| 1       |
| 111     |
| a       |
| 1       |
| 2       |
| testaa  |
| 1       |
| 2       |
| 3       |
| 222     |
+---------+--+
11 rows selected (0.433 seconds)
0: jdbc:hive2://localhost:10000/> select count(*) from test;
Error: Error while compiling statement: FAILED: SemanticException No valid privileges
 User fayson_r does not have privileges for QUERY
 The required privileges: Server=server1->Db=default->Table=test->action=select; (state=42000,code=40000)
0: jdbc:hive2://localhost:10000/> select count(s1) from test;
...
INFO  : OK
+------+--+
| _c0  |
+------+--+
| 11   |
+------+--+
1 row selected (33.012 seconds)
0: jdbc:hive2://localhost:10000/>

5. Browse the HDFS directories

[root@ip-172-31-6-148 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: [email protected]

Valid starting     Expires            Service principal
09/08/17 03:16:47  09/09/17 03:16:47  krbtgt/[email protected]
        renew until 09/15/17 03:16:47
[root@ip-172-31-6-148 ~]# hadoop fs -ls /user/hive/warehouse
ls: Permission denied: user=fayson_r, access=READ_EXECUTE, inode="/user/hive/warehouse":hive:hive:drwxrwx--x
[root@ip-172-31-6-148 ~]# hadoop fs -ls /user/hive/warehouse/test
ls: Permission denied: user=fayson_r, access=READ_EXECUTE, inode="/user/hive/warehouse/test":hive:hive:drwxrwx--x
[root@ip-172-31-6-148 ~]# 

6. Log in to Hue as the admin user and create the fayson_r user

Log in to Hue as the fayson_r user and verify the following:

• Querying all columns of the test table is denied.

• Querying only column s1 of the test table succeeds.

• The test table's data directory /user/hive/warehouse/test cannot be browsed through the File Browser.

Test summary:

The fayson_r user belongs to the fayson_r group, which has read permission only on column s1 of the test table. As a result, SELECT and COUNT can only be run against the s1 column, and fayson_r has no permission to browse any directory under /user/hive/warehouse. In Hue the behavior is the same: the user can only SELECT and COUNT column s1 of the test table and cannot browse the /user/hive/warehouse directory or any of its subdirectories.

Note: Sentry supports column-level grants only for SELECT; column-level grants cannot be used for INSERT or ALL.
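
For reference, a minimal sketch of the distinction, using the role, table, and column names from the example above; the commented-out statements illustrate grant types that Sentry does not accept at column granularity:

-- Supported: column-level SELECT grant (as used in step 3 above)
GRANT SELECT(s1) ON TABLE test TO ROLE columnread;

-- Not supported by Sentry: column-level INSERT or ALL grants
-- GRANT INSERT(s1) ON TABLE test TO ROLE columnread;
-- GRANT ALL(s1) ON TABLE test TO ROLE columnread;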

6. Remarks


After the Sentry service is enabled on the cluster, it is recommended to disable the Hive CLI, because Sentry does not support permission management for the Hive CLI.

1. How to restrict users from using the Hive CLI

Go to the Hive service and modify the hadoop.proxyuser.hive.groups setting. This setting overrides the hive proxy user group configuration in the HDFS service; its default value is empty, in which case the hive proxy user configuration from the HDFS service is inherited.
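
The original post shows this setting as a screenshot, which is not reproduced here. Assuming the hue, hive, and impala groups are the ones to be allowed (as described below), the override is equivalent to the following Hadoop property:

<property>
  <name>hadoop.proxyuser.hive.groups</name>
  <value>hive,hue,impala</value>
</property>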

The configuration above means that the hue, hive, and impala user groups are allowed to use the Hive CLI. After making the change, restart Hive and its dependent services.

Note: setting this configuration to an empty value blocks all user groups. Be aware that an empty value will make Hue unusable and prevent the hive user from accessing Hive through beeline or the Hive CLI.

2. Verify that the configuration takes effect

• Log in to Kerberos as the hive user, then operate through the Hive CLI

[root@ip-172-31-6-148 251-hive-HIVEMETASTORE]# kinit -kt hive.keytab hive/[email protected]
[root@ip-172-31-6-148 251-hive-HIVEMETASTORE]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hive/[email protected]

Valid starting     Expires            Service principal
09/07/17 13:33:21  09/08/17 13:33:21  krbtgt/[email protected]
        renew until 09/12/17 13:33:21
[root@ip-172-31-6-148 251-hive-HIVEMETASTORE]# 

Launch the Hive CLI and run SQL statements:

[root@ip-172-31-6-148 251-hive-HIVEMETASTORE]# hive
...
hive> show databases;
OK
default
Time taken: 1.881 seconds, Fetched: 1 row(s)
hive> show tables;
OK
test
test_hive_delimiter
test_table
Time taken: 0.034 seconds, Fetched: 3 row(s)
hive> select * from test;
OK
a       b
1       2
111     222
a       b
1       2
2       333
testaa  testbbb
1       test
2       fayson
3       zhangsan
222     2323
Time taken: 0.477 seconds, Fetched: 11 row(s)
hive> select count(*) from test;
...
OK
11
Time taken: 31.143 seconds, Fetched: 1 row(s)
hive> 

• Log in to Kerberos as the hue user for testing
[root@ip-172-31-6-148 259-hue-HUE_SERVER]# kdestroy
[root@ip-172-31-6-148 259-hue-HUE_SERVER]# kinit -kt hue.keytab hue/[email protected]
[root@ip-172-31-6-148 259-hue-HUE_SERVER]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hue/[email protected]

Valid starting     Expires            Service principal
09/07/17 13:37:22  09/08/17 13:37:22  krbtgt/[email protected]
        renew until 09/12/17 13:37:22
[root@ip-172-31-6-148 259-hue-HUE_SERVER]# 

Operate through the Hive CLI:

[root@ip-172-31-6-148 259-hue-HUE_SERVER]# hive
...
hive> show databases;
OK
default
Time taken: 1.892 seconds, Fetched: 1 row(s)
hive> show tables;
OK
test
test_hive_delimiter
test_table
Time taken: 0.036 seconds, Fetched: 3 row(s)
hive> select * from test;
FAILED: SemanticException Unable to determine if hdfs://ip-172-31-6-148.fayson.com:8020/user/hive/warehouse/test is encrypted: org.apache.hadoop.security.AccessControlException: Permission denied: user=hue, access=READ, inode="/user/hive/warehouse/test":hive:hive:drwxrwx--x
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkAccessAcl(DefaultAuthorizationProvider.java:363)
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:256)
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:168)
        at org.apache.sentry.hdfs.SentryAuthorizationProvider.checkPermission(SentryAuthorizationProvider.java:178)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3530)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3513)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:3484)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6624)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getEZForPath(FSNamesystem.java:9267)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getEZForPath(NameNodeRpcServer.java:1637)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getEZForPath(AuthorizationProviderProxyClientProtocol.java:928)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getEZForPath(ClientNamenodeProtocolServerSideTranslatorPB.java:1360)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211)

hive> 

• Log in to Kerberos as the impala user for testing
[root@ip-172-31-6-148 253-impala-STATESTORE]# kdestroy
[root@ip-172-31-6-148 253-impala-STATESTORE]# kinit -kt impala.keytab impala/[email protected]
[root@ip-172-31-6-148 253-impala-STATESTORE]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: impala/[email protected]

Valid starting     Expires            Service principal
09/07/17 13:41:25  09/08/17 13:41:25  krbtgt/[email protected]
        renew until 09/12/17 13:41:25
[root@ip-172-31-6-148 253-impala-STATESTORE]# 

Operate through the Hive CLI:

[root@ip-172-31-6-148 253-impala-STATESTORE]# hive
...
hive> show databases;
OK
default
Time taken: 1.941 seconds, Fetched: 1 row(s)
hive> show tables;
OK
test
test_hive_delimiter
test_table
Time taken: 0.037 seconds, Fetched: 3 row(s)
hive> select * from test;
OK
a       b
1       2
111     222
a       b
1       2
2       333
testaa  testbbb
1       test
2       fayson
3       zhangsan
222     2323
Time taken: 0.523 seconds, Fetched: 11 row(s)
hive> 

• Test with the fayson user, which is blocked from using the Hive CLI

Log in to Kerberos as the fayson user:

[root@ip-172-31-6-148 ~]# kdestroy
[root@ip-172-31-6-148 ~]# kinit fayson
Password for [email protected]: 
[root@ip-172-31-6-148 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: [email protected]

Valid starting     Expires            Service principal
09/07/17 13:44:29  09/08/17 13:44:29  krbtgt/[email protected]
        renew until 09/14/17 13:44:29
[root@ip-172-31-6-148 ~]# 

Operate through the Hive CLI:

[root@ip-172-31-6-148 ~]# hive
...
hive> show databases;
FAILED: SemanticException org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.transport.TTransportException: java.net.SocketException: Connection reset
hive> show tables;
FAILED: SemanticException org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.transport.TTransportException: java.net.SocketException: Connection reset
hive> 

Test summary:

The Hive configuration hadoop.proxyuser.hive.groups can be used to restrict which user groups may access Hive through the Hive CLI; user groups not included in the configuration (such as the fayson user's group) cannot access Hive through the Hive CLI. The tests show that the impala and hive users can access Hive tables through the Hive CLI with no Sentry permission control at all, while the hue user can only run SHOW DATABASES and SHOW TABLES and cannot SELECT from the table, failing with an HDFS permission exception. This is because the Hive table directories are owned by the hive user and group: the hive and impala users belong to the hive group, while the hue user does not.

Note: hadoop.proxyuser.hive.groups restricts access by user group. If the hive group is allowed to access Hive through the Hive CLI, then every user belonging to the hive group can access Hive tables through the Hive CLI, bypassing Sentry permission control.
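
Since the restriction is evaluated against OS group membership, a quick way to check which group an account maps to is the id command, run on the node where the Hive CLI is launched (output is not reproduced here):

# per the test summary above, hive and impala belong to the hive group; hue and fayson do not
id hive
id impala
id hue
id fayson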
