Sqoop not working on my Ubuntu 18.04 with Hadoop 3.1.3
0 votes
/ 02 May 2020

I am getting the error below on my Ubuntu (18.04) machine when running Sqoop (1.4.7, Hadoop 3.1.3).

Command used: sqoop import --connect jdbc:mysql://localhost/myhadoop --username hiveuser --password xxxx --table employee --split-by --target-dir /employee2

Error:

2020-04-30 15:28:01,570 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
Thu Apr 30 15:28:01 IST 2020 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
2020-04-30 15:28:01,727 INFO db.DBInputFormat: Using read commited transaction isolation
2020-04-30 15:28:01,736 INFO mapred.MapTask: Processing split: 1=1 AND 1=1
2020-04-30 15:28:01,772 INFO mapred.LocalJobRunner: map task executor complete.
2020-04-30 15:28:01,801 WARN mapred.LocalJobRunner: job_local1054959073_0001
java.lang.Exception: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class employee not found
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:552)
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class employee not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2638)
at org.apache.sqoop.mapreduce.db.DBConfiguration.getInputClass(DBConfiguration.java:403)
at org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat.createDBRecordReader(DataDrivenDBInputFormat.java:270)
at org.apache.sqoop.mapreduce.db.DBInputFormat.createRecordReader(DBInputFormat.java:266)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.<init>(MapTask.java:527)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:770)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: Class employee not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2542)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2636)
... 12 more
2020-04-30 15:28:02,089 INFO mapreduce.Job: Job job_local1054959073_0001 running in uber mode : false
2020-04-30 15:28:02,094 INFO mapreduce.Job:  map 0% reduce 0%
2020-04-30 15:28:02,104 INFO mapreduce.Job: Job job_local1054959073_0001 failed with state FAILED due to: NA
2020-04-30 15:28:02,143 INFO mapreduce.Job: Counters: 0
2020-04-30 15:28:02,151 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
2020-04-30 15:28:02,155 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 3.8645 seconds (0 bytes/sec)
2020-04-30 15:28:02,160 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2020-04-30 15:28:02,160 INFO mapreduce.ImportJobBase: Retrieved 0 records.
2020-04-30 15:28:02,160 ERROR tool.ImportTool: Import failed: Import job failed!

Please advise.

1 Answer

0 votes
/ 07 May 2020

You need to pass --bindir to sqoop import; you can point it at any directory. The ClassNotFoundException: Class employee not found in your log means the job could not load the record class that Sqoop generates and compiles for the table, which is why setting an explicit output directory for those compiled files helps.

From the official documentation:

The import process compiles the source into .class and .jar files; these are ordinarily stored under /tmp. You can select an alternate target directory with --bindir. For example, --bindir /scratch.

sqoop import --connect jdbc:mysql://localhost/myhadoop --username hiveuser --password xxxx --table employee --split-by --target-dir /employee2 \
  --bindir /tmp
...
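
As a quick sanity check (this assumes Sqoop's default naming, where the generated class is named after the table, as the "Class employee not found" message suggests), the compiled artifacts should appear in the directory you passed to --bindir after the import runs:

# the codegen step should leave employee.class and employee.jar in --bindir
ls /tmp/employee.class /tmp/employee.jar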