getLocalPathForWrite() NullPointerException in Hadoop MapReduce WordCount example
0 votes
/ 21 February 2019

I am trying to run a WordCount.jar file on a single-node Hadoop cluster.

But when I try to run the command:

hadoop jar C:/hadoop-2.7.6/WordCount.jar /top/input/wordcount2.txt /top/output/output.txt

In the NodeManager logs I get a strange NPE, and I cannot figure out why it happens. Here is the log:

java.lang.NullPointerException
        at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:345)
        at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
        at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
        at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
        at org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getLocalPathForWrite(LocalDirsHandlerService.java:475)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:1105)
19/02/21 18:11:28 INFO container.ContainerImpl: Container container_1550760383937_0004_02_000001 transitioned from LOCALIZING to LOCALIZATION_FAILED
19/02/21 18:11:28 ERROR nodemanager.DeletionService: Exception during execution of task in DeletionService
java.lang.NullPointerException
        at org.apache.hadoop.fs.FileContext.fixRelativePart(FileContext.java:274)
        at org.apache.hadoop.fs.FileContext.delete(FileContext.java:761)
        at org.apache.hadoop.yarn.server.nodemanager.DeletionService$FileDeletionTask.run(DeletionService.java:272)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
19/02/21 18:11:28 WARN nodemanager.NMAuditLogger: USER=poyar    OPERATION=Container Finished - Failed   TARGET=ContainerImpl    RESULT=FAILURE  DESCRIPTION=Container failed with state: LOCALIZATION_FAILED    APPID=application_1550760383937_0004    CONTAINERID=container_1550760383937_0004_02_000001
19/02/21 18:11:28 INFO container.ContainerImpl: Container container_1550760383937_0004_02_000001 transitioned from LOCALIZATION_FAILED to DONE
19/02/21 18:11:28 INFO application.ApplicationImpl: Removing container_1550760383937_0004_02_000001 from application application_1550760383937_0004
19/02/21 18:11:28 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
19/02/21 18:11:28 INFO containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1550760383937_0004
19/02/21 18:11:28 INFO monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1550760383937_0004_01_000001
19/02/21 18:11:28 INFO monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1550760383937_0004_02_000001
19/02/21 18:11:29 INFO nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1550760383937_0004_02_000001]
19/02/21 18:11:29 INFO application.ApplicationImpl: Application application_1550760383937_0004 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
19/02/21 18:11:29 INFO containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1550760383937_0004
19/02/21 18:11:29 INFO application.ApplicationImpl: Application application_1550760383937_0004 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
19/02/21 18:11:29 INFO loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1550760383937_0004, with delay of 10800 seconds

Here are the configuration files:

hdfs-site:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>

    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/hadoop-2.7.6/data/namenode</value>
    </property>

    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/hadoop-2.7.6/data/datanode</value>
    </property>
</configuration>
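(Editor's note: the directory values above have no scheme or drive letter, so on Windows Hadoop resolves them relative to the drive the daemons were started from. A fully qualified form, assuming the C: drive, is just a sketch of what an explicit setting would look like:)

```xml
<!-- Hypothetical fully qualified form; the C: drive is an assumption. -->
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///C:/hadoop-2.7.6/data/namenode</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///C:/hadoop-2.7.6/data/datanode</value>
</property>
```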

yarn-site:

<configuration>
<property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
</property>

<property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

<property>
   <name>yarn.nodemanager.disk-health-checker.enable</name>
   <value>false</value>
</property>

<property>
   <name>yarn.nodemanager.disk-health-checker.min-healthy-disks</name>
   <value>0.01</value>
</property>

<property>
   <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
   <value>99</value>
</property>

</configuration>
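(Editor's note: `yarn.nodemanager.local-dirs` is not set here, so the NodeManager falls back to its default under `hadoop.tmp.dir`. The NPE inside `LocalDirAllocator.getLocalPathForWrite` is commonly reported when none of the configured local directories are usable, so an explicit, writable path may be relevant to check. A hypothetical sketch — the path below is an assumption, not taken from the original setup:)

```xml
<!-- Hypothetical addition to yarn-site.xml: point the NodeManager at an
     explicit, writable local directory. The path is an assumption. -->
<property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>C:/hadoop-2.7.6/tmp/nm-local-dir</value>
</property>
```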

mapred-site:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

core-site:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

The source code is copied as-is from the Apache word count example: example

Can anyone suggest anything?
