[Hands-on] Enabling Kerberos in a CDH 6.2 Environment

This post addresses the following questions:

1. How do you install Kerberos?
2. How do you enable Kerberos on a CDH cluster?
3. How do you use Kerberos?
4. What are the common errors?



I. Kerberos Overview
Kerberos is a third-party protocol for secure authentication. It is not specific to Hadoop; it can be used with other systems as well. Developed and implemented at MIT, it uses the traditional shared-secret approach to secure communication between clients and servers over networks that cannot be assumed to be safe, and it fits the client/server model. Cloudera Manager makes it fairly easy to integrate Kerberos through its web UI.

The Kerberos protocol:
The Kerberos protocol is mainly used for authentication in computer networks. Its defining feature is single sign-on (SSO): a user enters credentials once and can then use the resulting ticket-granting ticket (TGT) to access multiple services. Because a shared key is established between each client and service, the protocol provides a solid level of security.
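In practice the SSO flow on a client looks like the following minimal sketch (hedged: the haley principal and the kerberized cluster are the ones configured later in this post):

[root@master ~]# kinit haley            # authenticate once; a ticket-granting ticket (TGT) is cached locally
[root@master ~]# klist                  # show the cached TGT
[root@master ~]# hdfs dfs -ls /         # each service access reuses the TGT to obtain a service ticket transparently
[root@master ~]# yarn application -list # a second service, no password prompt needed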

II. Installation Steps
Environment:
OS: CentOS 7.5
CDH 6.2

1. Install and configure the KDC service
Install the KDC service on the server where Cloudera Manager Server runs (the KDC can be installed on another server if needed).

(1) Install the KDC service on the CM server
[root@master ~]# yum -y install krb5-server krb5-libs krb5-auth-dialog krb5-workstation

(2) Edit /etc/krb5.conf
[root@master ~]# vi /etc/krb5.conf
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 pkinit_anchors = /etc/pki/tls/certs/ca-bundle.crt
 default_realm = HADOOP.COM
 #default_ccache_name = KEYRING:persistent:%{uid}
 
[realms]
 HADOOP.COM = {
  kdc = master
  admin_server = master
 }
 
[domain_realm]
.hadoop.com = HADOOP.COM
hadoop.com = HADOOP.COM

(3) Edit /var/kerberos/krb5kdc/kadm5.acl
*/admin@HADOOP.COM      *
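The kadm5.acl format is "principal-pattern  permissions": the single entry above grants every principal whose instance is admin (e.g. admin/admin@HADOOP.COM, cloudera-scm/admin@HADOOP.COM) full privileges, which is what the rest of this walkthrough relies on. Purely as an illustration, a more granular ACL could look like the sketch below (not used in this setup):

# illustrative only
*/admin@HADOOP.COM               *        # all privileges for any */admin principal
cloudera-scm/admin@HADOOP.COM    admcil   # add, delete, modify, change-password, inquire, list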

(4) Edit /var/kerberos/krb5kdc/kdc.conf
(base) [root@master ~]# vim /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 HADOOP.COM = {
  #master_key_type = aes256-cts
  max_renewable_life= 7d 0h 0m 0s
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }
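The max_renewable_life setting above is what allows tickets to be renewed for up to 7 days. Once the database and admin principal exist (steps 5 and 6 below), you can check the value a principal actually picked up with a query such as this sketch:

[root@master ~]# kadmin.local -q "getprinc krbtgt/HADOOP.COM@HADOOP.COM"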

(5) Create the Kerberos database
(base) [root@master ~]# kdb5_util create -r HADOOP.COM -s
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'HADOOP.COM',
master key name 'K/M@HADOOP.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key:
Re-enter KDC database master key to verify:
Enter the master password for the Kerberos database when prompted.


(6) Create the Kerberos administrator account
(base) [root@master ~]# kadmin.local
Authenticating as principal root/admin@HADOOP.COM with password.
kadmin.local:  addprinc admin/admin@HADOOP.COM
WARNING: no policy specified for admin/admin@HADOOP.COM; defaulting to no policy
Enter password for principal "admin/admin@HADOOP.COM":
Re-enter password for principal "admin/admin@HADOOP.COM":
Principal "admin/admin@HADOOP.COM" created.
kadmin.local:  exit

This is the Kerberos administrator account and password; keep them, they are needed later.

(7) Enable the krb5kdc and kadmin services at boot, then start them
[root@master ~]# systemctl enable krb5kdc
Created symlink from /etc/systemd/system/multi-user.target.wants/krb5kdc.service to /usr/lib/systemd/system/krb5kdc.service.
[root@master ~]# systemctl enable kadmin
Created symlink from /etc/systemd/system/multi-user.target.wants/kadmin.service to /usr/lib/systemd/system/kadmin.service.
[root@master ~]# systemctl start krb5kdc
[root@master ~]# systemctl start kadmin
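Optionally, confirm both daemons are running before moving on (this check is not part of the original steps):

[root@master ~]# systemctl status krb5kdc kadmin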

(8) Test the Kerberos administrator account
[root@master ~]# kinit admin/admin@HADOOP.COM
Password for admin/admin@HADOOP.COM:
kinit: Password incorrect while getting initial credentials
[root@master ~]# kinit admin/admin@HADOOP.COM
Password for admin/admin@HADOOP.COM:
[root@master ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: admin/admin@HADOOP.COM

Valid starting       Expires              Service principal
06/25/2019 19:11:28  06/26/2019 19:11:28  krbtgt/HADOOP.COM@HADOOP.COM
        renew until 07/02/2019 19:11:28

(9) Install the Kerberos client on all cluster nodes, including the Cloudera Manager server
Use a batch script to install the Kerberos client on every node:
[root@master ~]# pssh -h hostlist.txt -i yum -y install krb5-libs krb5-workstation

(10) Install an additional package on the Cloudera Manager Server host
[root@master ~]# yum -y install openldap-clients

(11) Copy krb5.conf from the KDC server to every Kerberos client
Use a batch script to copy the KDC server's krb5.conf to /etc on all cluster nodes:
[root@master ~]# pscp -h hostlist.txt /etc/krb5.conf /etc/
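A quick consistency check (optional, not in the original post) is to compare the file's checksum across nodes:

[root@master ~]# pssh -h hostlist.txt -i md5sum /etc/krb5.conf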

2. Enable Kerberos on the CDH cluster
(1) In the KDC, add an administrator account for Cloudera Manager
(base) [root@master ~]# kadmin.local
Authenticating as principal admin/admin@HADOOP.COM with password.
kadmin.local:  addprinc cloudera-scm/admin@HADOOP.COM
WARNING: no policy specified for cloudera-scm/admin@HADOOP.COM; defaulting to no policy
Enter password for principal "cloudera-scm/admin@HADOOP.COM":
Re-enter password for principal "cloudera-scm/admin@HADOOP.COM":
Principal "cloudera-scm/admin@HADOOP.COM" created.
kadmin.local:  exit

(2) In Cloudera Manager, go to "Administration" -> "Security"
(3) Click "Enable Kerberos" to start the wizard

(4) Confirm that every item in the checklist has been completed, then tick all of the checkboxes

(5) Click "Continue" and fill in the KDC details: KDC type, KDC server, KDC realm, encryption types, and the renewable lifetime for the service principals that will be created (hdfs, yarn, hbase, hive, etc.)
(6) It is not recommended to let Cloudera Manager manage krb5.conf; click "Continue"

(7) Enter the Kerberos administrator account for Cloudera Manager. It must match the account created earlier. Click "Continue"

(8) Click "Continue" to enable Kerberos

(9) Kerberos is now enabled; click "Continue"
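Once the wizard finishes and the cluster services have restarted, HDFS access requires a valid Kerberos ticket. A minimal sanity check (not part of the original post) looks like this:

[root@master ~]# kdestroy                        # discard any cached tickets
[root@master ~]# hdfs dfs -ls /                  # expected to fail with a Kerberos/GSS authentication error
[root@master ~]# kinit admin/admin@HADOOP.COM    # obtain a ticket
[root@master ~]# hdfs dfs -ls /                  # expected to succeed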

3. Using Kerberos
Create a test user, haley, to run Hive and MapReduce jobs. The haley user has to exist on every node in the cluster.

(1) Use kadmin to create a principal for haley
(base) [root@master ~]# kadmin.local
Authenticating as principal admin/admin@HADOOP.COM with password.
kadmin.local:  addprinc haley@HADOOP.COM
WARNING: no policy specified for haley@HADOOP.COM; defaulting to no policy
Enter password for principal "haley@HADOOP.COM":
Re-enter password for principal "haley@HADOOP.COM":
Principal "haley@HADOOP.COM" created.
kadmin.local:  exit
(2) Log in to Kerberos as haley
[root@master ~]# kdestroy
[root@master ~]# kinit haley
Password for haley@HADOOP.COM:
(base) [root@master ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: haley@HADOOP.COM
 
Valid starting       Expires              Service principal
06/26/2019 17:29:17  06/27/2019 17:29:17  krbtgt/HADOOP.COM@HADOOP.COM
        renew until 07/03/2019 17:29:17
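For scripted or scheduled jobs it is usually more convenient to authenticate with a keytab instead of typing a password. A hedged sketch (the keytab path is only an example):

[root@master ~]# kadmin.local -q "xst -norandkey -k /home/haley/haley.keytab haley@HADOOP.COM"
[root@master ~]# kinit -kt /home/haley/haley.keytab haley@HADOOP.COM
[root@master ~]# klist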

(3) Add the haley user on every cluster node
Add the haley user:
[root@master ~]# pssh -h hostlist.txt -i useradd haley
[1] 17:32:15 [SUCCESS] datanode2
[2] 17:32:15 [SUCCESS] master
[3] 17:32:15 [SUCCESS] datanode3
[4] 17:32:15 [SUCCESS] datanode1

Add haley to the hdfs and hadoop groups:
[root@master ~]# pssh -h hostlist.txt -i usermod -G hdfs,hadoop haley
[1] 17:51:11 [SUCCESS] datanode2
[2] 17:51:11 [SUCCESS] master
[3] 17:51:11 [SUCCESS] datanode3
[4] 17:51:12 [SUCCESS] datanode1
[root@master ~]# pssh -h hostlist.txt -i usermod -G hadoop haley
[1] 17:51:54 [SUCCESS] datanode2
[2] 17:51:54 [SUCCESS] datanode1
[3] 17:51:54 [SUCCESS] master
[4] 17:51:54 [SUCCESS] datanode3

(4) Run a MapReduce job
[root@master ~]# hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 1
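A job submitted without a valid ticket will be rejected by the kerberized cluster, so it can help to check the ticket cache first (an optional sketch, not in the original post):

[root@master ~]# klist -s || kinit haley    # obtain a ticket only if the cache is empty or expired
[root@master ~]# hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 1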

(5) Connect to Hive with beeline to test
[root@master ~]# beeline
WARNING: Use "yarn jar" to launch YARN applications.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/jars/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See [url]http://www.slf4j.org/codes.html#multiple_bindings[/url] for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Beeline version 2.1.1-cdh6.2.0 by Apache Hive
beeline> !connect jdbc:hive2://localhost:10000/;principal=hive/master@HADOOP.COM
Connecting to jdbc:hive2://localhost:10000/;principal=hive/master@HADOOP.COM
Connected to: Apache Hive (version 2.1.1-cdh6.2.0)
Driver: Hive JDBC (version 2.1.1-cdh6.2.0)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10000/> show databases;
INFO  : Compiling command(queryId=hive_20190626195802_7194decd-6597-4c72-9a6e-3c2e294031b8): show databases
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:database_name, type:string, comment:from deserializer)], properties:null)
INFO  : Completed compiling command(queryId=hive_20190626195802_7194decd-6597-4c72-9a6e-3c2e294031b8); Time taken: 0.164 seconds
INFO  : Executing command(queryId=hive_20190626195802_7194decd-6597-4c72-9a6e-3c2e294031b8): show databases
INFO  : Starting task [Stage-0:DDL] in serial mode
INFO  : Completed executing command(queryId=hive_20190626195802_7194decd-6597-4c72-9a6e-3c2e294031b8); Time taken: 0.007 seconds
INFO  : OK
+----------------+
| database_name  |
+----------------+
| default        |
| dw_ttt         |
| pdd            |
| taotoutiao     |
+----------------+
4 rows selected (0.281 seconds)
0: jdbc:hive2://localhost:10000/>
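The same connection can also be made in a single command with beeline's -u flag; the principal below assumes HiveServer2 runs on the master host:

[root@master ~]# beeline -u "jdbc:hive2://localhost:10000/;principal=hive/master@HADOOP.COM"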


III. Summary

Problem 1: The following error is reported while configuring Kerberos:
/opt/cloudera/cm/bin/gen_credentials.sh failed with exit code 1 and output of <<
+ export PATH=/usr/kerberos/bin:/usr/kerberos/sbin:/usr/lib/mit/sbin:/usr/sbin:/usr/lib/mit/bin:/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
+ PATH=/usr/kerberos/bin:/usr/kerberos/sbin:/usr/lib/mit/sbin:/usr/sbin:/usr/lib/mit/bin:/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
+ CMF_REALM=HADOOP.COM
+ KEYTAB_OUT=/var/run/cloudera-scm-server/cmf3121646876397998512.keytab
+ PRINC=kafka_mirror_maker/[email protected]
+ MAX_RENEW_LIFE=432000
+ KADMIN='kadmin -k -t /var/run/cloudera-scm-server/cmf487375145055296868.keytab -p cloudera-scm/[email protected] -r HADOOP.COM'
+ RENEW_ARG=
+ '[' 432000 -gt 0 ']'
+ RENEW_ARG='-maxrenewlife "432000 sec"'
+ '[' -z /etc/krb5.conf ']'
+ echo 'Using custom config path '\''/etc/krb5.conf'\'', contents below:'
+ cat /etc/krb5.conf
+ kadmin -k -t /var/run/cloudera-scm-server/cmf487375145055296868.keytab -p cloudera-scm/[email protected] -r HADOOP.COM -q 'addprinc -maxrenewlife "432000 sec" -randkey kafka_mirror_maker/[email protected]'
Couldn't open log file /var/log/kadmind.log: Permission denied
WARNING: no policy specified for kafka_mirror_maker/[email protected]; defaulting to no policy
add_principal: Operation requires ``add'' privilege while creating "kafka_mirror_maker/[email protected]".
+ '[' 432000 -gt 0 ']'
++ kadmin -k -t /var/run/cloudera-scm-server/cmf487375145055296868.keytab -p cloudera-scm/[email protected] -r HADOOP.COM -q 'getprinc -terse kafka_mirror_maker/[email protected]'
++ tail -1
++ cut -f 12
Couldn't open log file /var/log/kadmind.log: Permission denied
get_principal: Operation requires ``get'' privilege while retrieving "kafka_mirror_maker/[email protected]".
+ RENEW_LIFETIME='Authenticating as principal cloudera-scm/[email protected] with keytab /var/run/cloudera-scm-server/cmf487375145055296868.keytab.'
+ '[' Authenticating as principal cloudera-scm/[email protected] with keytab /var/run/cloudera-scm-server/cmf487375145055296868.keytab. -eq 0 ']'
/opt/cloudera/cm/bin/gen_credentials.sh: line 35: [: too many arguments
+ kadmin -k -t /var/run/cloudera-scm-server/cmf487375145055296868.keytab -p cloudera-scm/[email protected] -r HADOOP.COM -q 'xst -k /var/run/cloudera-scm-server/cmf3121646876397998512.keytab kafka_mirror_maker/[email protected]'
Couldn't open log file /var/log/kadmind.log: Permission denied
kadmin: Operation requires ``change-password'' privilege while changing kafka_mirror_maker/[email protected]'s key
+ chmod 600 /var/run/cloudera-scm-server/cmf3121646876397998512.keytab
chmod: cannot access ‘/var/run/cloudera-scm-server/cmf3121646876397998512.keytab’: No such file or directory
>>

Cause: the user permissions were not configured in /var/kerberos/krb5kdc/kadm5.acl:
*/admin@HADOOP.COM

After adding this entry, restart the Kerberos services:
service krb5kdc restart
service kadmin restart
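To confirm the Cloudera Manager principal now has the required privileges, a simple check (not in the original post) is to run a harmless admin query; it prompts for the cloudera-scm/admin password:

[root@master ~]# kadmin -p cloudera-scm/admin@HADOOP.COM -q "listprincs"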

Problem 2: A MapReduce job run as the Kerberos user fails
[root@master ~]# hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 1
WARNING: Use "yarn jar" to launch YARN applications.
Number of Maps  = 10
Samples per Map = 1
org.apache.hadoop.security.AccessControlException: Permission denied: user=haley, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:256)
       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:194)
       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1855)
       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1839)
       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1798)
       at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3101)
       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1123)
       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:696)
       at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
       at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
       at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
       at java.security.AccessController.doPrivileged(Native Method)
       at javax.security.auth.Subject.doAs(Subject.java:422)
       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
       at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
       at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
       at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
       at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
       at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2335)
       at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2309)
       at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1247)
       at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1244)
       at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
       at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1261)
       at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1236)
       at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2260)
       at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:283)
       at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:360)
       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
        at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:368)
       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
       at java.lang.reflect.Method.invoke(Method.java:498)
       at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
       at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
       at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
       at java.lang.reflect.Method.invoke(Method.java:498)
       at org.apache.hadoop.util.RunJar.run(RunJar.java:313)
       at org.apache.hadoop.util.RunJar.main(RunJar.java:227)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=haley, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:256)
       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:194)
       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1855)
       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1839)
       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1798)
       at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3101)
       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1123)
       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:696)
       at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
       at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
       at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
       at java.security.AccessController.doPrivileged(Native Method)
       at javax.security.auth.Subject.doAs(Subject.java:422)
       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
      at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1499)
       at org.apache.hadoop.ipc.Client.call(Client.java:1445)
       at org.apache.hadoop.ipc.Client.call(Client.java:1355)
       at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
       at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
       at com.sun.proxy.$Proxy9.mkdirs(Unknown Source)
       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:640)
       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
       at java.lang.reflect.Method.invoke(Method.java:498)
       at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
       at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
       at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
       at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
       at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
       at com.sun.proxy.$Proxy10.mkdirs(Unknown Source)
       at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2333)
       ... 24 more

Fix:
Create the supergroup group on every node and add the haley user to it (supergroup is the default HDFS superuser group, so its members bypass the /user permission check):
[root@master ~]# pssh -h hostlist.txt -i groupadd supergroup
[root@master ~]# pssh -h hostlist.txt -i usermod -G supergroup haley
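An alternative that avoids giving haley superuser rights (not in the original post) is to create an HDFS home directory for the user. This requires a principal with HDFS superuser privileges; the keytab path below is only an example:

[root@master ~]# kinit -kt hdfs.keytab hdfs/master@HADOOP.COM    # example: authenticate as the hdfs service principal
[root@master ~]# hdfs dfs -mkdir -p /user/haley
[root@master ~]# hdfs dfs -chown haley:haley /user/haley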





Source: CSDN

Author: 常飛夢

Original article: "CDH6.2環境中啟用Kerberos" (Enabling Kerberos in a CDH 6.2 Environment)

https://blog.csdn.net/lichangzai/article/details/93861348




