Container killed on request. Exit code is 143; java.io.IOException: Job failed! - PullRequest
Asked 12 June 2019

$HADOOP_HOME/bin/hadoop jar ProductSalePerCountry.jar /inputMapReduce /mapreduce_output_sales

19/06/11 20:47:01 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
19/06/11 20:47:02 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
19/06/11 20:47:02 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
19/06/11 20:47:02 INFO mapred.FileInputFormat: Total input files to process : 1
19/06/11 20:47:02 WARN hdfs.DataStreamer: Caught exception
java.lang.InterruptedException
    at java.lang.Object.wait(Native Method)
    at java.lang.Thread.join(Thread.java:1252)
    at java.lang.Thread.join(Thread.java:1326)
    at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:973)
    at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:624)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:801)
19/06/11 20:47:02 INFO mapreduce.JobSubmitter: number of splits:2
19/06/11 20:47:03 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1560285969214_0001
19/06/11 20:47:03 INFO impl.YarnClientImpl: Submitted application application_1560285969214_0001
19/06/11 20:47:03 INFO mapreduce.Job: The url to track the job: http://hadoop1:8088/proxy/application_1560285969214_0001/
19/06/11 20:47:03 INFO mapreduce.Job: Running job: job_1560285969214_0001
19/06/11 20:47:18 INFO mapreduce.Job: Job job_1560285969214_0001 running in uber mode : false
19/06/11 20:47:18 INFO mapreduce.Job:  map 0% reduce 0%
19/06/11 20:47:18 INFO mapreduce.Job: Job job_1560285969214_0001 failed with state FAILED due to: Application application_1560285969214_0001 failed 2 times due to AM Container for appattempt_1560285969214_0001_000002 exited with  exitCode: -103
Failing this attempt.Diagnostics: Container [pid=19902,containerID=container_1560285969214_0001_02_000001] is running beyond virtual memory limits. Current usage: 116.7 MB of 1 GB physical memory used; 2.6 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1560285969214_0001_02_000001 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 19910 19902 19902 19902 (java) 439 19 2793091072 29136 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Djava.io.tmpdir=/app/hadoop/tmp/nm-local-dir/usercache/hduser/appcache/application_1560285969214_0001/container_1560285969214_0001_02_000001/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/usr/local/hadoop/logs/userlogs/application_1560285969214_0001/container_1560285969214_0001_02_000001 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 
    |- 19902 19900 19902 19902 (bash) 0 0 11538432 745 /bin/bash -c /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Djava.io.tmpdir=/app/hadoop/tmp/nm-local-dir/usercache/hduser/appcache/application_1560285969214_0001/container_1560285969214_0001_02_000001/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/usr/local/hadoop/logs/userlogs/application_1560285969214_0001/container_1560285969214_0001_02_000001 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog  -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1>/usr/local/hadoop/logs/userlogs/application_1560285969214_0001/container_1560285969214_0001_02_000001/stdout 2>/usr/local/hadoop/logs/userlogs/application_1560285969214_0001/container_1560285969214_0001_02_000001/stderr  

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
For more detailed output, check the application tracking page: http://hadoop1:8088/cluster/app/application_1560285969214_0001 Then click on links to logs of each attempt.
. Failing the application.
19/06/11 20:47:18 INFO mapreduce.Job: Counters: 0
java.io.IOException: Job failed!
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:873)
    at SalesCountry.SalesCountryDriver.main(SalesCountryDriver.java:38)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:148)

yarn-site.xml

<?xml version="1.0"?>
<configuration>
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>

<property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>
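The diagnostics above show the AM container being killed for exceeding the virtual memory limit (2.6 GB used of 2.1 GB allowed, i.e. 1 GB of physical memory times the default vmem-pmem ratio of 2.1). A commonly used workaround is to relax or disable YARN's virtual-memory check; a sketch of additional yarn-site.xml properties (the values are illustrative, not tuned for this cluster):

```xml
<!-- Sketch: either raise the allowed virtual-to-physical memory ratio... -->
<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>
</property>
<!-- ...or disable the virtual-memory check entirely -->
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>
```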

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
    <name>dfs.replication</name>
    <value>4</value>
    <description>Duplicate Data on slave nodes.</description>
</property>
</configuration>

mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
    <name>mapred.jobtracker.address</name>
    <value>node-master:54311</value>
    <description>The host and port that the MapReduce job tracker runs at.</description>
</property>
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>512</value>
</property>
<property>
    <name>mapreduce.map.memory.mb</name>
    <value>256</value>
</property>
<property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>256</value>
</property>
</configuration>
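Note also, from the process dump in the log above, that the AM JVM is launched with -Xmx1024m while yarn.app.mapreduce.am.resource.mb is only 512, so the requested heap alone exceeds the container size. A sketch of JVM opts sized to match the container limits configured above (assumed values, roughly 80% of each container's memory):

```xml
<property>
    <name>yarn.app.mapreduce.am.command-opts</name>
    <value>-Xmx400m</value>
</property>
<property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx200m</value>
</property>
<property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx200m</value>
</property>
```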

core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>Parent dir for tmp dirs.</description>
</property>
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop1:54310</value>
    <description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>

slaves

hadoop1
hadoop2
hadoop3
hadoop4

/etc/hosts

127.0.0.1       localhost
192.52.33.99    hadoop4
192.52.33.82    hadoop3
192.52.34.114   hadoop2
192.52.34.131   hadoop1

~/.bashrc

export HADOOP_HOME=/usr/local/hadoop
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
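One apparent inconsistency in the ~/.bashrc above: the exports reference $HADOOP_INSTALL, which is never defined, so the PATH entries and the HADOOP_*_HOME variables expand to nothing. Assuming HADOOP_INSTALL was meant to be the same directory as HADOOP_HOME, a consistent version might look like:

```shell
# Define HADOOP_INSTALL (assumed to alias HADOOP_HOME) before using it below
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
```

Alternatively, every occurrence of $HADOOP_INSTALL could simply be replaced with $HADOOP_HOME.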

Example program: https://www.guru99.com/create-your-first-hadoop-program.html

Sorry, I do not know what is causing the error. There may be several mistakes.

What is wrong with the configuration? The goal is to use one master and three slaves (a cluster) instead of the single node used in the tutorial.
