Unable to start the Namenode in Hadoop, but able to format the namenode - PullRequest
0 votes
/ 10 November 2019

I installed Hadoop in C:\hadoop-3.0.0 and did the following:

- Set the HADOOP_HOME and JAVA_HOME environment variables, and added the bin folder to PATH.
- Replaced the files in the bin folder with the ones from winutils-master.
- Completed all the configuration files.
- Added hadoop.dll to the System32 folder and restarted the system.

I am using Windows 10 64-bit. I can format the namenode correctly without any problem, but when starting the namenode I get the error below.

Namenode format

************************************************************/
2019-11-10 12:59:50,785 INFO namenode.NameNode: createNameNode [-format]
2019-11-10 12:59:51,037 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-8a8c109d-6508-4def-aa9f-fe65da3c80ae
2019-11-10 12:59:52,254 INFO namenode.FSEditLog: Edit logging is async:true
2019-11-10 12:59:52,276 INFO namenode.FSNamesystem: KeyProvider: null
2019-11-10 12:59:52,276 INFO namenode.FSNamesystem: fsLock is fair: true
2019-11-10 12:59:52,292 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2019-11-10 12:59:52,292 INFO namenode.FSNamesystem: fsOwner             = Raman (auth:SIMPLE)
2019-11-10 12:59:52,292 INFO namenode.FSNamesystem: supergroup          = supergroup
2019-11-10 12:59:52,292 INFO namenode.FSNamesystem: isPermissionEnabled = true
2019-11-10 12:59:52,292 INFO namenode.FSNamesystem: HA Enabled: false
2019-11-10 12:59:52,376 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-11-10 12:59:52,407 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2019-11-10 12:59:52,407 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2019-11-10 12:59:52,407 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2019-11-10 12:59:52,407 INFO blockmanagement.BlockManager: The block deletion will start around 2019 Nov 10 12:59:52
2019-11-10 12:59:52,407 INFO util.GSet: Computing capacity for map BlocksMap
2019-11-10 12:59:52,423 INFO util.GSet: VM type       = 32-bit
2019-11-10 12:59:52,423 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
2019-11-10 12:59:52,423 INFO util.GSet: capacity      = 2^22 = 4194304 entries
2019-11-10 12:59:52,492 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2019-11-10 12:59:52,508 INFO Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
2019-11-10 12:59:52,508 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2019-11-10 12:59:52,508 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2019-11-10 12:59:52,508 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2019-11-10 12:59:52,508 INFO blockmanagement.BlockManager: defaultReplication         = 1
2019-11-10 12:59:52,508 INFO blockmanagement.BlockManager: maxReplication             = 512
2019-11-10 12:59:52,508 INFO blockmanagement.BlockManager: minReplication             = 1
2019-11-10 12:59:52,508 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
2019-11-10 12:59:52,508 INFO blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms
2019-11-10 12:59:52,508 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
2019-11-10 12:59:52,508 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2019-11-10 12:59:52,639 INFO util.GSet: Computing capacity for map INodeMap
2019-11-10 12:59:52,639 INFO util.GSet: VM type       = 32-bit
2019-11-10 12:59:52,639 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
2019-11-10 12:59:52,639 INFO util.GSet: capacity      = 2^21 = 2097152 entries
2019-11-10 12:59:52,677 INFO namenode.FSDirectory: ACLs enabled? false
2019-11-10 12:59:52,677 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2019-11-10 12:59:52,677 INFO namenode.FSDirectory: XAttrs enabled? true
2019-11-10 12:59:52,677 INFO namenode.NameNode: Caching file names occurring more than 10 times
2019-11-10 12:59:52,693 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true
2019-11-10 12:59:52,708 INFO util.GSet: Computing capacity for map cachedBlocks
2019-11-10 12:59:52,708 INFO util.GSet: VM type       = 32-bit
2019-11-10 12:59:52,708 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
2019-11-10 12:59:52,708 INFO util.GSet: capacity      = 2^19 = 524288 entries
2019-11-10 12:59:52,724 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2019-11-10 12:59:52,724 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2019-11-10 12:59:52,724 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2019-11-10 12:59:52,724 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2019-11-10 12:59:52,724 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2019-11-10 12:59:52,739 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2019-11-10 12:59:52,739 INFO util.GSet: VM type       = 32-bit
2019-11-10 12:59:52,739 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
2019-11-10 12:59:52,739 INFO util.GSet: capacity      = 2^16 = 65536 entries
Re-format filesystem in Storage Directory C:\hadoop-3.0.0\data\namenode ? (Y or N) Y
2019-11-10 12:59:54,991 INFO namenode.FSImage: Allocated new BlockPoolId: BP-143962282-192.168.1.3-1573370994979
2019-11-10 12:59:54,991 INFO common.Storage: Will remove files: [C:\hadoop-3.0.0\data\namenode\current\fsimage_0000000000000000000, C:\hadoop-3.0.0\data\namenode\current\fsimage_0000000000000000000.md5, C:\hadoop-3.0.0\data\namenode\current\seen_txid, C:\hadoop-3.0.0\data\namenode\current\VERSION]
2019-11-10 12:59:55,114 INFO common.Storage: Storage directory C:\hadoop-3.0.0\data\namenode has been successfully formatted.
2019-11-10 12:59:55,130 INFO namenode.FSImageFormatProtobuf: Saving image file C:\hadoop-3.0.0\data\namenode\current\fsimage.ckpt_0000000000000000000 using no compression
2019-11-10 12:59:55,286 INFO namenode.FSImageFormatProtobuf: Image file C:\hadoop-3.0.0\data\namenode\current\fsimage.ckpt_0000000000000000000 of size 390 bytes saved in 0 seconds.
2019-11-10 12:59:55,417 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2019-11-10 12:59:55,417 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at DESKTOP-9VGJDM6/192.168

Namenode start

2019-11-10 12:31:51,553 WARN checker.StorageLocationChecker: Exception checking StorageLocation [DISK]file:/C:/hadoop-3.0.0/data/datanode
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
        at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:606)
        at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:952)
        at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:118)
        at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:99)
        at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:216)
        at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:52)
        at org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$1.call(ThrottledAsyncChecker.java:142)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
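
As a sanity check, the native binaries from the setup steps above can be verified with a small shell sketch (the /c/hadoop-3.0.0 path is a Git-Bash-style spelling of my C:\hadoop-3.0.0 install location, and `check_hadoop_native` is just an illustrative helper name):

```shell
#!/bin/sh
# Sketch: confirm the native Windows binaries Hadoop needs are in place.
# /c/hadoop-3.0.0 mirrors the C:\hadoop-3.0.0 install path above
# (Git-Bash-style path); the helper name is an illustrative assumption.
check_hadoop_native() {
    home="$1"
    status=0
    for f in "$home/bin/winutils.exe" "$home/bin/hadoop.dll"; do
        if [ -e "$f" ]; then
            echo "found:   $f"
        else
            echo "missing: $f"
            status=1
        fi
    done
    return "$status"
}

check_hadoop_native "${HADOOP_HOME:-/c/hadoop-3.0.0}" \
    || echo "native prerequisites incomplete"
```

If either file is reported missing, an UnsatisfiedLinkError like the one above is the typical symptom, since NativeIO$Windows.access0 is a native method implemented in hadoop.dll; a version or architecture mismatch between hadoop.dll and the installed Hadoop build can produce the same error even when the file is present.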

Can anyone help?
