Nifi service does not start after installation on an Ambari cluster
0 votes
/ 09 April 2020

I hope you are doing well!

After installing the Nifi service on an HDP Ambari cluster, Ambari warned me that the machine needed to be restarted once the installation finished.

The result was the following. Can I fix this error?

stderr:

error: rpmdb: BDB0113 Thread/process 22390/140716827359040 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 -  (-30973)
error: cannot open Packages database in /var/lib/rpm
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stack-hooks/before-START/scripts/hook.py", line 43, in <module>
    BeforeStartHook().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 351, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stack-hooks/before-START/scripts/hook.py", line 37, in hook
    setup_unlimited_key_jce_policy()
  File "/var/lib/ambari-agent/cache/stack-hooks/before-START/scripts/shared_initialization.py", line 197, in setup_unlimited_key_jce_policy
    __setup_unlimited_key_jce_policy(custom_java_home=params.java_home, custom_jdk_name=params.jdk_name, custom_jce_name = params.jce_policy_zip)
  File "/var/lib/ambari-agent/cache/stack-hooks/before-START/scripts/shared_initialization.py", line 235, in __setup_unlimited_key_jce_policy
    if jcePolicyInfo.is_unlimited_key_jce_policy():
  File "/usr/lib/ambari-agent/lib/resource_management/core/resources/jcepolicyinfo.py", line 39, in is_unlimited_key_jce_policy
    quiet = True)[0] == 0
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 115, in call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 308, in _call
    raise ExecuteTimeoutException(err_msg)
resource_management.core.exceptions.ExecuteTimeoutException: Execution of '/usr/jdk64/jdk1.8.0_112/bin/java -jar /var/lib/ambari-agent/tools/jcepolicyinfo.jar -tu' was killed due timeout after 5 seconds
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stack-hooks/before-START/scripts/hook.py', 'START', '/var/lib/ambari-agent/data/command-203.json', '/var/lib/ambari-agent/cache/stack-hooks/before-START', '/var/lib/ambari-agent/data/structured-out-203.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1_2', '']

stdout:

2020-04-08 18:52:12,383 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=3.0.1.0-187 -> 3.0.1.0-187
2020-04-08 18:52:12,427 - Using hadoop conf dir: /usr/hdp/3.0.1.0-187/hadoop/conf
2020-04-08 18:52:13,149 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=3.0.1.0-187 -> 3.0.1.0-187
2020-04-08 18:52:13,163 - Using hadoop conf dir: /usr/hdp/3.0.1.0-187/hadoop/conf
2020-04-08 18:52:13,244 - Group['livy'] {}
2020-04-08 18:52:13,282 - Group['spark'] {}
2020-04-08 18:52:13,283 - Group['hdfs'] {}
2020-04-08 18:52:13,283 - Group['hadoop'] {}
2020-04-08 18:52:13,283 - Group['nifi'] {}
2020-04-08 18:52:13,284 - Group['users'] {}
2020-04-08 18:52:13,284 - Group['knox'] {}
2020-04-08 18:52:13,285 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-04-08 18:52:13,373 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-04-08 18:52:13,375 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-04-08 18:52:13,376 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-04-08 18:52:13,378 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2020-04-08 18:52:13,379 - User['nifi'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['nifi'], 'uid': None}
2020-04-08 18:52:13,381 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None}
2020-04-08 18:52:13,382 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None}
2020-04-08 18:52:13,384 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2020-04-08 18:52:13,386 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-04-08 18:52:13,391 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
2020-04-08 18:52:13,393 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-04-08 18:52:13,395 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-04-08 18:52:13,397 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'knox'], 'uid': None}
2020-04-08 18:52:13,398 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2020-04-08 18:52:13,445 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2020-04-08 18:52:13,461 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2020-04-08 18:52:13,461 - Group['hdfs'] {}
2020-04-08 18:52:13,462 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}
2020-04-08 18:52:13,463 - FS Type: HDFS
2020-04-08 18:52:13,463 - Directory['/etc/hadoop'] {'mode': 0755}
2020-04-08 18:52:13,494 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2020-04-08 18:52:13,512 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2020-04-08 18:52:13,535 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2020-04-08 18:52:13,626 - Skipping Execute[('setenforce', '0')] due to not_if
2020-04-08 18:52:13,626 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2020-04-08 18:52:13,629 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2020-04-08 18:52:13,629 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'cd_access': 'a'}
2020-04-08 18:52:13,630 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2020-04-08 18:52:13,645 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2020-04-08 18:52:13,669 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2020-04-08 18:52:13,683 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2020-04-08 18:52:13,705 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2020-04-08 18:52:13,707 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2020-04-08 18:52:13,827 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2020-04-08 18:52:13,859 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2020-04-08 18:52:13,873 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2020-04-08 18:52:13,916 - Testing the JVM's JCE policy to see it if supports an unlimited key length.
2020-04-08 18:52:24,343 - Skipping stack-select on NIFI because it does not exist in the stack-select package structure.
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stack-hooks/before-START/scripts/hook.py', 'START', '/var/lib/ambari-agent/data/command-203.json', '/var/lib/ambari-agent/cache/stack-hooks/before-START', '/var/lib/ambari-agent/data/structured-out-203.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1_2', '']
2020-04-08 18:52:37,788 - Skipping stack-select on NIFI because it does not exist in the stack-select package structure.

Command failed after 1 tries
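From the stderr above, the first problem seems to be the corrupted RPM database (the BDB1507 / DB_RUNRECOVERY errors). Would the standard recovery be safe to run here? This is only a sketch of what I understand the usual procedure to be on a RHEL/CentOS host, not something I have tried on this cluster yet:

```shell
# The BDB1507 / DB_RUNRECOVERY errors point at stale Berkeley DB
# environment files under /var/lib/rpm. The usual recovery is to
# remove only the __db.* lock/cache files and rebuild the index.
rm -f /var/lib/rpm/__db.*
rpm --rebuilddb

# Sanity check: the Packages index should open again without errors.
rpm -qa | head -n 5
```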

I had never installed the Nifi service on an Ambari cluster before.

This is a test virtual machine: Ambari version 2.7.1.0, HDP version 3.0.1.0, installed MPack hdf-ambari-mpack-3.3.0.0-165.tar.gz; neither Kerberos nor Ranger is installed on the cluster (yet). The VM has 8 GB of RAM and is struggling.
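The second problem is the JCE policy check being killed after 5 seconds. Since this is a small VM, could the JVM be blocking on kernel entropy while initialising SecureRandom? I am guessing here, but a quick check (assuming a Linux guest) would be something like:

```shell
# Check available kernel entropy; very low values can make a JVM block
# during SecureRandom initialisation, which could explain the
# jcepolicyinfo.jar call exceeding the 5-second timeout.
cat /proc/sys/kernel/random/entropy_avail

# Time the exact command Ambari runs, to see how long it really takes:
time /usr/jdk64/jdk1.8.0_112/bin/java \
  -jar /var/lib/ambari-agent/tools/jcepolicyinfo.jar -tu
```

If the entropy value is very low, would installing an entropy daemon such as haveged or rng-tools be the right fix?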

...