Hadoop MapReduce example sometimes works, sometimes fails: what happened?
0 votes
/ 06 November 2018

I ran the Hadoop MapReduce example with the command

hadoop jar hadoop-mapreduce-examples-2.7.1.jar wordcount input output

and sometimes it worked:

18/11/06 00:37:06 INFO client.RMProxy: Connecting to ResourceManager at node-0/10.10.1.1:8032
18/11/06 00:37:06 INFO input.FileInputFormat: Total input paths to process : 1
18/11/06 00:37:06 INFO mapreduce.JobSubmitter: number of splits:1
18/11/06 00:37:06 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1541484532513_0006
18/11/06 00:37:06 INFO impl.YarnClientImpl: Submitted application application_1541484532513_0006
18/11/06 00:37:06 INFO mapreduce.Job: The url to track the job: http://node-0:8088/proxy/application_1541484532513_0006/
18/11/06 00:37:06 INFO mapreduce.Job: Running job: job_1541484532513_0006
18/11/06 00:37:11 INFO mapreduce.Job: Job job_1541484532513_0006 running in uber mode : false
18/11/06 00:37:11 INFO mapreduce.Job:  map 0% reduce 0%
18/11/06 00:37:15 INFO mapreduce.Job:  map 100% reduce 0%
18/11/06 00:37:18 INFO mapreduce.Job:  map 100% reduce 100%
18/11/06 00:37:18 INFO mapreduce.Job: Job job_1541484532513_0006 completed successfully
18/11/06 00:37:18 INFO mapreduce.Job: Counters: 44
    File System Counters
        FILE: Number of bytes read=216
        FILE: Number of bytes written=231641
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
    Job Counters 
        Launched map tasks=1
        Launched reduce tasks=1
        Rack-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=1300
        Total time spent by all reduces in occupied slots (ms)=1265
        Total time spent by all map tasks (ms)=1300
        Total time spent by all reduce tasks (ms)=1265
        Total vcore-seconds taken by all map tasks=1300
        Total vcore-seconds taken by all reduce tasks=1265
        Total megabyte-seconds taken by all map tasks=1331200
        Total megabyte-seconds taken by all reduce tasks=1295360
    Map-Reduce Framework
        Map input records=1
        Map output records=2
        Map output bytes=20
        Map output materialized bytes=30
        Input split bytes=135
        Combine input records=2
        Combine output records=2
        Reduce input groups=2
        Reduce shuffle bytes=30
        Reduce input records=2
        Reduce output records=2
        Spilled Records=4
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=14
        CPU time spent (ms)=660
        Physical memory (bytes) snapshot=402006016
        Virtual memory (bytes) snapshot=4040646656
        Total committed heap usage (bytes)=402653184
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters 
        Bytes Read=32
    File Output Format Counters 
        Bytes Written=28

or the logs could look like this:

18/11/06 00:35:17 INFO mapreduce.Job: Task Id : attempt_1541484532513_0003_m_000000_1, Status : FAILED
File file:/tmp/hadoop-yarn/staging/suqiang/.staging/job_1541484532513_0003/job.jar does not exist
java.io.FileNotFoundException: File file:/tmp/hadoop-yarn/staging/suqiang/.staging/job_1541484532513_0003/job.jar does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:606)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:819)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:596)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
    at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
    at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)


18/11/06 00:35:21 INFO mapreduce.Job: Task Id : attempt_1541484532513_0003_m_000000_2, Status : FAILED
File file:/tmp/hadoop-yarn/staging/suqiang/.staging/job_1541484532513_0003/job.jar does not exist
java.io.FileNotFoundException: File file:/tmp/hadoop-yarn/staging/suqiang/.staging/job_1541484532513_0003/job.jar does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:606)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:819)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:596)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
    at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
    at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)


18/11/06 00:35:25 INFO mapreduce.Job:  map 100% reduce 0%
18/11/06 00:35:29 INFO mapreduce.Job:  map 100% reduce 100%
18/11/06 00:35:29 INFO mapreduce.Job: Job job_1541484532513_0003 completed successfully
18/11/06 00:35:29 INFO mapreduce.Job: Counters: 46
    File System Counters
        FILE: Number of bytes read=216
        FILE: Number of bytes written=231635
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
    Job Counters 
        Failed map tasks=3
        Launched map tasks=4
        Launched reduce tasks=1
        Other local map tasks=3
        Rack-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=6266
        Total time spent by all reduces in occupied slots (ms)=1290
        Total time spent by all map tasks (ms)=6266
        Total time spent by all reduce tasks (ms)=1290
        Total vcore-seconds taken by all map tasks=6266
        Total vcore-seconds taken by all reduce tasks=1290
        Total megabyte-seconds taken by all map tasks=6416384
        Total megabyte-seconds taken by all reduce tasks=1320960
    Map-Reduce Framework
        Map input records=1
        Map output records=2
        Map output bytes=20
        Map output materialized bytes=30
        Input split bytes=135
        Combine input records=2
        Combine output records=2
        Reduce input groups=2
        Reduce shuffle bytes=30
        Reduce input records=2
        Reduce output records=2
        Spilled Records=4
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=14
        CPU time spent (ms)=680
        Physical memory (bytes) snapshot=404619264
        Virtual memory (bytes) snapshot=4036009984
        Total committed heap usage (bytes)=402653184
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters 
        Bytes Read=32
    File Output Format Counters 
        Bytes Written=28

This is strange! The job still completed successfully with a log like that, even though it said job.jar does not exist.

But sometimes it failed with exactly the same steps:

18/11/06 00:36:41 INFO mapreduce.Job: Task Id : attempt_1541484532513_0005_r_000000_1, Status : FAILED
File file:/tmp/hadoop-yarn/staging/suqiang/.staging/job_15414845
18/11/06 00:36:46 INFO mapreduce.Job: Task Id : attempt_1541484532513_0005_r_000000_2, Status : FAILED
File file:/tmp/hadoop-yarn/staging/suqiang/.staging/job_1541484532513_0005/job.jar does not exist
java.io.FileNotFoundException: File file:/tmp/hadoop-yarn/staging/suqiang/.staging/job_1541484532513_0005/job.jar does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:606)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:819)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:596)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
    at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
    at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)


18/11/06 00:36:52 INFO mapreduce.Job:  map 100% reduce 100%
18/11/06 00:36:52 INFO mapreduce.Job: Job job_1541484532513_0005 failed with state FAILED due to: Task failed task_1541484532513_0005_r_000000
Job failed as tasks failed. failedMaps:0 failedReduces:1

18/11/06 00:36:52 INFO mapreduce.Job: Counters: 35
    File System Counters
        FILE: Number of bytes read=186
        FILE: Number of bytes written=115831
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
    Job Counters 
        Failed map tasks=1
        Failed reduce tasks=4
        Launched map tasks=2
        Launched reduce tasks=4
        Other local map tasks=1
        Rack-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=2217
        Total time spent by all reduces in occupied slots (ms)=8012
        Total time spent by all map tasks (ms)=2217
        Total time spent by all reduce tasks (ms)=8012
        Total vcore-seconds taken by all map tasks=2217
        Total vcore-seconds taken by all reduce tasks=8012
        Total megabyte-seconds taken by all map tasks=2270208
        Total megabyte-seconds taken by all reduce tasks=8204288
    Map-Reduce Framework
        Map input records=1
        Map output records=2
        Map output bytes=20
        Map output materialized bytes=30
        Input split bytes=135
        Combine input records=2
        Combine output records=2
        Spilled Records=2
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=7
        CPU time spent (ms)=250
        Physical memory (bytes) snapshot=252555264
        Virtual memory (bytes) snapshot=2014208000
        Total committed heap usage (bytes)=201326592
    File Input Format Counters 
        Bytes Read=32

What happened in my experiment? Is this my mistake, or a problem with the Hadoop example itself? Has anyone run into the same issue? Any advice or solutions would be appreciated.

1 Answer

0 votes
/ 07 November 2018

Since your job fails when it is in uber mode, the problem is that the application master cannot access HDFS, or those directories on HDFS.

Until the real cause of your problem is found, you can work around it by disabling uber mode for the job, like this:

hadoop jar hadoop-mapreduce-examples-2.7.1.jar wordcount -D mapreduce.job.ubertask.enable=false input output
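If you prefer to disable uber mode for the whole cluster rather than per job, a sketch of the equivalent `mapred-site.xml` fragment (this is the same standard property, `mapreduce.job.ubertask.enable`; placement in `mapred-site.xml` is the usual convention, adjust to your setup):

```xml
<!-- mapred-site.xml: disable uber mode cluster-wide -->
<property>
  <name>mapreduce.job.ubertask.enable</name>
  <value>false</value>
</property>
```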

To eliminate the problem completely, start by checking the ApplicationMaster (AM) configuration.
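As a starting point, a sketch of checks you could run on every node (the staging path below is copied from your error messages; `fs.defaultFS` is the standard Hadoop property for the default filesystem). The point is to confirm the staging directory resolves to HDFS, not to one node's local disk:

```shell
# Which filesystem does this node consider the default?
# If this prints file:/// instead of hdfs://..., job.jar gets staged on
# the local disk of the submitting node and other nodes cannot see it.
hdfs getconf -confKey fs.defaultFS

# The staging directory from the errors: it should be listable on HDFS,
# not just present under local /tmp on one machine.
hdfs dfs -ls /tmp/hadoop-yarn/staging/suqiang/.staging
```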

EDIT: Maybe your problem is in /etc/hosts. Could you print its contents on both machines? Perhaps you are missing a mapping from 10.10.1.2 to its hostname on the 10.10.1.2 machine.
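For reference, a sketch of what consistent /etc/hosts entries might look like on both machines (the hostname node-0 and the 10.10.1.x addresses are taken from your logs; node-1 is an assumed name for the second machine). The important part is that each host's own name resolves to its cluster IP, not to 127.0.0.1:

```
127.0.0.1   localhost
10.10.1.1   node-0
10.10.1.2   node-1
```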
