HBase to use HDFS HA
1 vote
/ 02 May 2020
  1. I am trying to set up HBase HA on top of Hadoop HA.
  2. I have configured Hadoop HA and tested it.
  3. But when I start HBase after configuring it, I get the following error:
2020-05-02 16:11:09,336 INFO  [main] ipc.RpcServer: regionserver/cluster-hadoop-01/172.18.20.3:16020: started 10 reader(s) listening on port=16020
2020-05-02 16:11:09,473 INFO  [main] metrics.MetricRegistries: Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2020-05-02 16:11:09,840 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: Failed construction of Regionserver: class org.apache.hadoop.hbase.regionserver.HRegionServer
    at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2896)
    at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:64)
    at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:127)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2911)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2894)
    ... 5 more
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: hdfscluster
    at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:417)
    at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:132)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:351)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:285)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:160)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2812)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2849)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2831)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
    at org.apache.hadoop.hbase.util.CommonFSUtils.getRootDir(CommonFSUtils.java:309)
    at org.apache.hadoop.hbase.util.CommonFSUtils.isValidWALRootDir(CommonFSUtils.java:358)
    at org.apache.hadoop.hbase.util.CommonFSUtils.getWALRootDir(CommonFSUtils.java:334)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeFileSystem(HRegionServer.java:683)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:626)
    ... 10 more
Caused by: java.net.UnknownHostException: hdfscluster
    ... 26 more
  • I think my HBase installation does not recognize my nameservice hdfscluster (see the quick checks after this list).
  • I tried both Hadoop 2.X and Hadoop 3.X:
    • Hadoop 2.X: Hadoop 2.10.0 & HBase 1.6.0 & JDK 1.8.0_251 & ZooKeeper 3.6.0.
    • Hadoop 3.X: Hadoop 3.2.1 & HBase 2.2.4 & JDK 1.8.0_251 & ZooKeeper 3.6.0.
    • OS version: Ubuntu 16.04.6
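
A useful first check is whether the plain HDFS client on the same node can resolve the logical name at all. If the listing below fails with the same java.net.UnknownHostException: hdfscluster, the problem lies in the HDFS client configuration rather than in HBase (a minimal check session, using the names from this setup):

# print the configured nameservice(s); expected output: hdfscluster
hdfs getconf -confKey dfs.nameservices

# list the root of the logical URI; if this also throws
# java.net.UnknownHostException: hdfscluster, the HDFS client
# configuration itself is incomplete
hdfs dfs -ls hdfs://hdfscluster/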

My core-site.xml contains:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hdfscluster</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/data/hadoop/tmp</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>cluster-hadoop-01:2181,cluster-hadoop-02:2181,cluster-hadoop-03:2181</value>
    </property>
</configuration>

My hdfs-site.xml contains:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/data/hadoop/data/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/data/hadoop/data/hdfs/data</value>
    </property>

    <property>
        <name>dfs.nameservices</name>
        <value>hdfscluster</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.hdfscluster</name>
        <value>nn-01,nn-02</value>
    </property>

    <property>
        <name>dfs.namenode.rpc-address.hdfscluster.nn-01</name>
        <value>cluster-hadoop-01:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.hdfscluster.nn-02</name>
        <value>cluster-hadoop-02:8020</value>
    </property>

    <property>
        <name>dfs.namenode.http-address.hdfscluster.nn-01</name>
        <value>cluster-hadoop-01:9870</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.hdfscluster.nn-02</name>
        <value>cluster-hadoop-02:9870</value>
    </property>

    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://cluster-hadoop-01:8485;cluster-hadoop-02:8485;cluster-hadoop-03:8485/hdfscluster</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/data/hadoop/tmp/journalnode</value>
    </property>

    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence(hadoop:22)</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
</configuration>
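
Since the question states that Hadoop HA was tested, this is the kind of check that confirms it (a minimal sketch; the NameNode IDs nn-01/nn-02 come from dfs.ha.namenodes.hdfscluster above):

# one NameNode should report "active" and the other "standby"
hdfs haadmin -getServiceState nn-01
hdfs haadmin -getServiceState nn-02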

My hbase-site.xml contains:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://hdfscluster/hbase</value>
    </property>

    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>

    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>cluster-hadoop-01,cluster-hadoop-02,cluster-hadoop-03</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/data/zookeeper/data</value>
    </property>

    <property>
        <name>hbase.tmp.dir</name>
        <value>/data/hbase/tmp</value>
    </property>
</configuration>
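
Note that the authority in hbase.rootdir (hdfs://hdfscluster/hbase) must match dfs.nameservices character for character; any mismatch would produce the same UnknownHostException. A quick comparison (a minimal sketch using the paths shown in this question):

# both should show the same logical name: hdfscluster
hdfs getconf -confKey dfs.nameservices
grep -A1 'hbase.rootdir' /opt/hbase/conf/hbase-site.xml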

My hbase-env.sh contains:

export JAVA_HOME="/opt/jdk"
export HBASE_MANAGES_ZK=false
export HADOOP_HOME="/opt/hadoop"
export HBASE_CLASSPATH=".:${HADOOP_HOME}/etc/hadoop"
export HBASE_LOG_DIR="/data/hbase/log"
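
With HBASE_CLASSPATH pointing at the Hadoop configuration directory, the HDFS client settings should appear on HBase's classpath. One way to confirm (a minimal sketch; the grep pattern assumes the /opt/hadoop layout above):

# the Hadoop conf dir should appear in the output
hbase classpath | tr ':' '\n' | grep 'etc/hadoop'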

My HBase conf directory:

root@cluster-hadoop-01:~# ll /opt/hbase/conf/
total 56
drwxr-xr-x 2 root root 4096 May  2 16:31 ./
drwxr-xr-x 7 root root 4096 May  2 01:18 ../
-rw-r--r-- 1 root root   18 May  2 10:36 backup-masters
lrwxrwxrwx 1 root root   36 May  2 12:04 core-site.xml -> /opt/hadoop/etc/hadoop/core-site.xml
-rw-r--r-- 1 root root 1811 Jan  6 01:24 hadoop-metrics2-hbase.properties
-rw-r--r-- 1 root root 4616 Jan  6 01:24 hbase-env.cmd
-rw-r--r-- 1 root root 7898 May  2 10:36 hbase-env.sh
-rw-r--r-- 1 root root 2257 Jan  6 01:24 hbase-policy.xml
-rw-r--r-- 1 root root  841 May  2 16:10 hbase-site.xml
lrwxrwxrwx 1 root root   36 May  2 12:04 hdfs-site.xml -> /opt/hadoop/etc/hadoop/hdfs-site.xml
-rw-r--r-- 1 root root 1169 Jan  6 01:24 log4j-hbtop.properties
-rw-r--r-- 1 root root 4949 Jan  6 01:24 log4j.properties
-rw-r--r-- 1 root root   54 May  2 10:33 regionservers

1 Answer

1 vote
/ 04 May 2020
  • Through persistent attempts I found a solution, although at the time I did not know the cause. Change the hdfs-site.xml configuration file:
    <property>
        <name>dfs.client.failover.proxy.provider.hdfscluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
...
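
The likely explanation: the HDFS client looks up the failover proxy provider under the per-nameservice key dfs.client.failover.proxy.provider.<nameservice ID>, as described in the HDFS HA documentation. With only the un-suffixed key set, no provider is found for hdfscluster, so the logical name is treated as an ordinary hostname and DNS resolution fails with UnknownHostException. Once the suffixed property is in place, it can be verified with (a minimal check):

# should print org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
hdfs getconf -confKey dfs.client.failover.proxy.provider.hdfscluster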