Problem 1
Problem: org.apache.hadoop.hdfs.server.common.Storage: java.io.IOException: Incompatible clusterIDs in /data/program/hadoop/hdfs/data: namenode clusterID = CID-715d917d-2477-41a8-97fe-6b22ae9bad6e; datanode clusterID = CID-11a94f7e-0ba2-4e00-8057-23de4244f219
2017-02-26 00:26:46,150 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to /192.168.1.131:9000. Exiting.
java.io.IOException: All specified directories are failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.
2017-02-26 00:26:46,152 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool (Datanode Uuid unassigned) service to /192.168.1.131:9000
2017-02-26 00:26:46,255 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool (Datanode Uuid unassigned)
2017-02-26 00:26:48,258 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2017-02-26 00:26:48,261 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
Solution: Every namenode format generates a new clusterID, but the datanode's data directory still holds the ID from the previous format. Formatting clears the namenode's data without clearing the datanode's, so the datanode fails to start. What you need to do is clear all directories under the data directory before each format.
Method 1: Stop the cluster and delete everything under the problem node's data directory, i.e. the dfs.data.dir directory configured in hdfs-site.xml. Then reformat the namenode.
Method 2: Stop the cluster, then edit the datanode's /dfs/data/current/VERSION file and change its clusterID to match the namenode's.
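A minimal shell sketch of Method 1, assuming the storage path from the log above (/data/program/hadoop/hdfs/data) and that the Hadoop sbin scripts are on the PATH:

stop-dfs.sh                               # stop the cluster before touching storage directories
rm -rf /data/program/hadoop/hdfs/data/*   # wipe the stale datanode storage (adjust to your dfs.data.dir)
hdfs namenode -format                     # reformat the namenode
start-dfs.sh

For Method 2, copy the clusterID line out of the namenode's current/VERSION file into each datanode's current/VERSION instead of deleting the data.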
Problem 2
Problem: Passwordless SSH login does not work, even though the configuration steps were carried out correctly.
Solution: First check the login log with cat /var/log/secure and analyze the cause. The log shows the following:
Authentication refused: bad ownership or modes for directory /root/.ssh
This is a permissions problem: for security, sshd checks the ownership and modes of the user's directories and key files, and if they are wrong, passwordless SSH login will not take effect. The .ssh directory should normally be 755 or 700; id_rsa.pub and authorized_keys should normally be 644, and id_rsa must be 600.
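A sketch of the fix, assuming the root account from the log message above:

chmod 700 /root/.ssh                      # directory: 700 (755 also works)
chmod 600 /root/.ssh/id_rsa               # private key must be 600
chmod 644 /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys

Retry the ssh login afterwards; if it still prompts for a password, check /var/log/secure again.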
Problem 3
Problem: 2017-02-26 00:37:12,419 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool BP-1117540795-127.0.0.1-1488040411210 (Datanode Uuid null) service to 192.168.1.131/192.168.1.131:9000 Datanode denied communication with namenode because hostname cannot be resolved (ip=192.168.1.133, hostname=192.168.1.133): DatanodeRegistration(0.0.0.0:50010, datanodeUuid=6bc06fed-eec5-482b-9e2b-e74483edb50f, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-24c1246c-c1eb-4152-a856-f114c169c884;nsid=1820489334;c=0)
at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.
at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.
at org.apache.hadoop.ipc.RPC$Server.call(RPC.
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.
at org.apache.hadoop.ipc.Server$Handler.run(Server.
Solution: This problem is caused by missing host entries, so the namenode cannot resolve the datanode's hostname. Setting up the host mappings properly fixes it.
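A sketch of the /etc/hosts entries, using the IPs from the logs above (the hostnames here are illustrative assumptions; use your cluster's real ones):

192.168.1.131  hadoop131    # namenode (hostname assumed)
192.168.1.133  hadoop133    # the datanode whose registration was denied

As an alternative, the dfs.namenode.datanode.registration.ip-hostname-check property in hdfs-site.xml can be set to false to skip this reverse-DNS check, but fixing the host mappings is the cleaner solution.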
Problem 4
Problem: FATAL org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting NodeManager
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Recieved SHUTDOWN signal from Resourcemanager ,Registration of NodeManager failed, Message from ResourceManager: NodeManager fromhadoop133 doesn't satisfy minimum allocations, Sending SHUTDOWN signal to the NodeManager.
at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.
at org.apache.hadoop.service.AbstractService.start(AbstractService.
at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.
at org.apache.hadoop.service.AbstractService.start(AbstractService.
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.
Caused by: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Recieved SHUTDOWN signal from Resourcemanager ,Registration of NodeManager failed, Message from ResourceManager: NodeManager fromhadoop133 doesn't satisfy minimum allocations, Sending SHUTDOWN signal to the NodeManager.
at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.
at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.
... 6 more
Solution: The memory parameter (yarn.nodemanager.resource.memory-mb) in yarn-site.xml is set too low. It apparently cannot be set below 1 GB (2 GB works best).
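For example, in yarn-site.xml:

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value>
</property>

Restart the NodeManager after the change so it re-registers with the ResourceManager.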
Problem 5
Problem: Hadoop throws "No FileSystem for scheme: hdfs".
Solution: This is most likely caused by a version mismatch between the Hadoop client and server, or by missing jars; make sure the imported dependencies are complete.
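If the dependencies look complete but the error persists (a common cause is a shaded/assembly jar losing the FileSystem service registrations), a widely used workaround, going beyond what is stated above, is to bind the hdfs scheme to its implementation class explicitly in core-site.xml:

<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>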
Problem 6
Problem: Hadoop Permission denied: user=GavinCee, access=WRITE, inode="/test":root:supergroup:drwxr-xr-x
Solution: On the server, edit the Hadoop configuration file conf/hdfs-site.xml, find the dfs.permissions property, and change its value to false:
<property>
  <name>dfs.permissions</name>
  <value>false</value>
  <description>If "true", enable permission checking in HDFS. If "false", permission checking is turned off, but all other behavior is unchanged. Switching from one parameter value to the other does not change the mode, owner or group of files or directories.</description>
</property>
Restart the Hadoop processes after the change for it to take effect.
PS: this is set this way only for personal development convenience; the more careful approach is to create a dedicated user and grant it the appropriate permissions.
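A sketch of that stricter alternative: leave permission checking on and give the client user its own writable directory instead (user name taken from the error message; run as the HDFS superuser):

hdfs dfs -mkdir -p /user/GavinCee
hdfs dfs -chown GavinCee:supergroup /user/GavinCee

Or grant write access on /test itself with hdfs dfs -chmod / hdfs dfs -chown.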
Problem 7
Problem: Copying a local file to HDFS, or creating a new file on HDFS, fails with the following error: Exception in thread "main" org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /test. Name node is in safe mode.
Solution: On startup, HDFS enters safe mode, during which the contents of the filesystem can be neither modified nor deleted until safe mode ends. Safe mode exists mainly so that, at startup, the system can check the validity of the data blocks on each DataNode and, according to policy, replicate or delete blocks as necessary. Safe mode can also be entered by command at runtime. In practice, modifying or deleting files while the system is starting up produces this "safe mode" error, and usually you only need to wait a little while.
You can wait for HDFS to exit safe mode on its own, or leave it manually with the following command:
hdfs dfsadmin -safemode leave
After running it, safe mode is reported as off.
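Two related subcommands of the standard dfsadmin CLI are useful here:

hdfs dfsadmin -safemode get     # report whether safe mode is currently on
hdfs dfsadmin -safemode wait    # block until safe mode ends (handy in scripts that write right after startup)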