ERROR: When I upload a file to Hadoop in Docker, all datanodes are excluded from the operation
0 votes
29 March 2019

I am learning to run a Hadoop cluster with Docker. I run 3 nodes (master, slave1, slave2) on one server (CentOS 7). All 3 containers run CentOS 6. When I use "hadoop fs -put" inside the cluster to upload a file, it succeeds.
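For reference, the in-container upload that succeeds looks roughly like this (a sketch; the local path is my own illustration, only the HDFS target /knime/temp.csv appears in the logs below):

```shell
# Run inside the master container; /tmp/temp.csv is an assumed source path.
hadoop fs -mkdir -p /knime
hadoop fs -put /tmp/temp.csv /knime/temp.csv
hadoop fs -ls /knime
```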

But when I try to use an application (via Hadoop port 9000) to upload the same file from my own computer into HDFS, it fails. I see error information in the master namenode log, but nothing at all in any datanode log while the upload is running.

This is the namenode log on master:

2019-03-29 06:53:06,509 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 50 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 62 
2019-03-29 06:53:36,280 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741825_1001, replicas=172.17.0.4:50010, 172.17.0.5:50010, 172.17.0.3:50010 for /knime/temp.csv
2019-03-29 06:54:36,424 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 7 Total time for transactions(ms): 51 Number of transactions batched in Syncs: 1 Number of syncs: 6 SyncTimes(ms): 86 
2019-03-29 06:54:36,488 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2019-03-29 06:54:36,488 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2019-03-29 06:54:36,489 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable:  unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2019-03-29 06:54:36,489 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741826_1002, replicas=172.17.0.5:50010, 172.17.0.3:50010 for /knime/temp.csv
2019-03-29 06:55:36,576 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 11 Total time for transactions(ms): 51 Number of transactions batched in Syncs: 3 Number of syncs: 8 SyncTimes(ms): 94 
2019-03-29 06:55:36,639 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 2 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2019-03-29 06:55:36,639 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 2 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK, DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2019-03-29 06:55:36,639 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 2 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable:  unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2019-03-29 06:55:36,640 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741827_1003, replicas=172.17.0.3:50010 for /knime/temp.csv
2019-03-29 06:56:36,728 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 15 Total time for transactions(ms): 51 Number of transactions batched in Syncs: 5 Number of syncs: 10 SyncTimes(ms): 109 
2019-03-29 06:56:36,770 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 3 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2019-03-29 06:56:36,770 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 3 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK, DISK, DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2019-03-29 06:56:36,770 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 3 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable:  unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2019-03-29 06:56:36,772 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9000, call Call#34 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 222.66.117.24:29468
java.io.IOException: File /knime/temp.csv could only be replicated to 0 nodes instead of minReplication (=1).  There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1726)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2567)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:829)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:510)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:850)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:793)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2489)
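Reading the log above: the namenode first allocated all three datanodes (172.17.0.4, 172.17.0.5, 172.17.0.3), then two, then one, excluding each in turn until none remained. One common cause in Docker setups (my assumption, not confirmed in the original post) is that the namenode hands the external client the datanodes' container-internal 172.17.0.x addresses, which are unreachable from outside the Docker host. A frequently suggested mitigation is to make the client address datanodes by hostname instead:

```shell
# Sketch: append to hdfs-site.xml on the CLIENT machine.
# dfs.client.use.datanode.hostname is a real Hadoop property; whether it
# resolves this particular setup is an assumption.
cat >> hdfs-site.xml <<'EOF'
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>
EOF
```

For this to work, the datanode hostnames must resolve from the client machine (for example via /etc/hosts entries pointing at the Docker host), and the datanode ports (50010/50020 in Hadoop 2.x, as seen in the log) must be published from the containers.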

When I run "telnet masterip 9000" on slave1, it returns:

[root@slave1 /]# telnet 172.17.0.3 9000
Trying 172.17.0.3...
Connected to 172.17.0.3.
Escape character is '^]'.
'^]'

|?? ")org.apache.hadoop.ipc.RPC$VersionMismatch*>Server IPC version 9 cannot communicate with client version 130:@Connection closed by foreign host.

When I run "jps" on all nodes, it returns:

[root@master /]# jps
624 SecondaryNameNode
434 DataNode
771 ResourceManager
869 NodeManager
1606 Jps
335 NameNode
[root@slave1 /]# jps
195 NodeManager
91 DataNode
459 Jps
[root@slave2 /]# jps
112 DataNode
216 NodeManager
473 Jps

I have tried many methods, but none of them has worked so far: for example, leaving safe mode, setting a smaller block size, and turning off the firewall.
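Before tuning anything further, it may help to confirm how the namenode sees the datanodes and which addresses it advertises for them, since an external client is handed exactly those addresses. These are standard HDFS admin commands (their output obviously depends on the cluster):

```shell
# List live datanodes and the IPs they registered with,
# and confirm safe mode is really off.
hdfs dfsadmin -report
hdfs dfsadmin -safemode get
```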

So how can I fix this? Any help is appreciated, thank you!

...