
A Record of Pitfalls: Hadoop


Contents

Hadoop

Error extracting the ".tar" file: "Cannot create symbolic link"

Error: JAVA_HOME is incorrectly set

Could not locate executable null\bin\winutils.exe in the Hadoop binaries

java.lang.UnsatisfiedLinkError

localhost:50070 not accessible

HDFS - Failed to add storage directory

Windows configuration files

hdfs-site.xml

core-site.xml

HBase

java.lang.IllegalStateException: The procedure WAL relies on the ability to hsync for proper operation during component failures

hbase shell fails to start on Windows

java.lang.NoClassDefFoundError: Could not initialize class org.fusesource.jansi.internal.Kernel32

Web UI

Windows configuration files

hbase-site.xml


Hadoop 

Error extracting the ".tar" file: "Cannot create symbolic link"

Open CMD, change into the directory containing the archive, then run:

start winrar x -y hadoop-3.1.2.tar.gz
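
The "cannot create symbolic link" message usually means the extractor lacks the privilege to create symlinks, so open the prompt elevated ("Run as administrator"). A minimal sketch, assuming the archive sits in D:\downloads (a hypothetical path); Windows 10 and later also bundle a tar.exe that can unpack the archive directly:

:: From an elevated CMD prompt:
cd /d D:\downloads
tar -xf hadoop-3.1.2.tar.gz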

Error: JAVA_HOME is incorrectly set

JAVA_HOME is already set, yet the error above still appears.

This is usually because the path contains spaces.

Solution

Edit E:\Hadoop2.7.7\hadoop-2.7.7\etc\hadoop\hadoop-env.cmd

Replace the path with its DOS short form:

C:\PROGRA~1\Java\jdk1.8.0_91

PROGRA~1 is the DOS 8.3 short name for the C:\Program Files directory.

set JAVA_HOME=C:\PROGRA~1\Java\jdk1.8.0_91
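
If you are unsure of the short name on your machine, you can list the 8.3 aliases directly; a quick check (output varies per system):

:: Show the 8.3 short names for entries at the drive root:
dir /x C:\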

Could not locate executable null\bin\winutils.exe in the Hadoop binaries

Check that the hadoop.dll and winutils.exe you copied into Windows' System32 directory and into Hadoop's bin directory match your Hadoop version. The null in the path also indicates that HADOOP_HOME is not set.
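
A minimal sketch of the usual fix, assuming Hadoop is unpacked at D:\hadoop\hadoop-3.1.2 (a hypothetical path; adjust to your install). setx only affects newly opened shells:

:: Point HADOOP_HOME at the install and expose its bin directory:
setx HADOOP_HOME D:\hadoop\hadoop-3.1.2
setx PATH "%PATH%;D:\hadoop\hadoop-3.1.2\bin"

:: Copy the native helper matching your Hadoop version into System32:
copy /y D:\hadoop\hadoop-3.1.2\bin\hadoop.dll C:\Windows\System32\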

java.lang.UnsatisfiedLinkError

Same check as above: make sure the hadoop.dll and winutils.exe copied to System32 and to the bin directory match your Hadoop version.

localhost:50070 not accessible

hdfs-site.xml

Check whether it contains the following:

<property>
  <name>dfs.http.address</name>
  <value>0.0.0.0:50070</value>
</property>

Also note that the common Hadoop 3.x ports differ from 2.x, so on 3.x the NameNode web UI is at http://localhost:9870 rather than 50070:

Daemon             Property          Port
namenode           rpc-address       8020
namenode           http-address      9870
namenode           https-address     9871
datanode           address           9866
datanode           http-address      9864
datanode           https-address     9865
resourcemanager    http-address      8088
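
To confirm which port the NameNode is actually serving, you can inspect the listening sockets; a quick probe (findstr treats the space-separated strings as alternatives):

:: See whether anything is listening on the old or the new NameNode HTTP port:
netstat -ano | findstr ":9870 :50070"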

HDFS - Failed to add storage directory

Posts online attribute this to formatting the NameNode multiple times.

Solutions

1. Make the clusterID and namespaceID in the NameNode and DataNode metadata match, or
2. simply delete the current folder under the DataNode's data directory (see the sketch below).
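
For option 1, the IDs live in the VERSION files under each role's current directory. With the data directories configured as in the hdfs-site.xml below (an assumption), compare the two files and copy the NameNode's clusterID over the DataNode's:

:: Print both VERSION files and compare the clusterID / namespaceID lines:
type D:\hadoop\data\dfs\namenode\current\VERSION
type D:\hadoop\data\dfs\datanode\current\VERSION

:: Option 2: wiping the DataNode's current folder forces it to re-register cleanly:
rmdir /s /q D:\hadoop\data\dfs\datanode\current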

Windows configuration files

Many problems come down to the configuration files.

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <!-- Single-node setup: keep one block replica. -->
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <!-- Local directories for NameNode metadata and DataNode blocks. -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/D:/hadoop/data/dfs/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/D:/hadoop/data/dfs/datanode</value>
    </property>
    <!-- Disable permission checks. Note the property is dfs.permissions.enabled,
         not dfs.permission as often copied around. -->
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
</configuration>
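
If you point these directories somewhere new, format the NameNode once before the first start; note this wipes any existing HDFS metadata. A minimal sketch, run from a prompt where Hadoop's bin is on PATH:

hdfs namenode -format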

core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <!-- fs.default.name is deprecated; fs.defaultFS is the current name. -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <!-- Base directory for Hadoop's temporary files. -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/D:/hadoop/tmp</value>
    </property>
</configuration>
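
With both files in place, you can sanity-check which filesystem URI clients will resolve; a quick probe:

:: Should print hdfs://localhost:9000
hdfs getconf -confKey fs.defaultFS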

HBase

java.lang.IllegalStateException: The procedure WAL relies on the ability to hsync for proper operation during component failures

Add the following to hbase-site.xml:

<property>
  <name>hbase.unsafe.stream.capability.enforce</name>
  <value>false</value>
</property>

hbase shell fails to start on Windows

Error message:

This file has been superceded by packaging our ruby files into a jar and using jruby's bootstrapping to invoke them. If you need to source this file fo some reason it is now named 'jar-bootstrap.rb' and is located in the root of the file hbase-shell.jar and in the source tree at 'hbase-shell/src/main/ruby'.

I was running hadoop 3.3.0 with hbase 2.4.0; switching HBase to 2.2.6 resolved it.

java.lang.NoClassDefFoundError: Could not initialize class org.fusesource.jansi.internal.Kernel32

Download jansi-1.4.jar, put it under hbase-2.2.1\lib, and restart.
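
A sketch of the placement, assuming HBase lives at D:\hadoop\hbase-2.2.1 (a hypothetical path) and the jar has been downloaded to the current directory:

:: Drop the jansi jar next to HBase's bundled libraries, then restart HBase:
copy /y jansi-1.4.jar D:\hadoop\hbase-2.2.1\lib\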

Web UI

Depends on your configuration; mine is at http://localhost:16010
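
That port is governed by hbase.master.info.port (default 16010). If you prefer to pin it explicitly rather than rely on the default, something like this in hbase-site.xml should work:

<property>
  <name>hbase.master.info.port</name>
  <value>16010</value>
</property>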

Windows configuration files

Many problems come down to the configuration files.

hbase-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
-->
<configuration>
  <!--
    The following properties are set for running HBase as a single process on a
    developer workstation. With this configuration, HBase is running in
    "stand-alone" mode and without a distributed file system. In this mode, and
    without further configuration, HBase and ZooKeeper data are stored on the
    local filesystem, in a path under the value configured for `hbase.tmp.dir`.
    This value is overridden from its default value of `/tmp` because many
    systems clean `/tmp` on a regular basis. Instead, it points to a path within
    this HBase installation directory.

    Running against the `LocalFileSystem`, as opposed to a distributed
    filesystem, runs the risk of data integrity issues and data loss. Normally
    HBase will refuse to run in such an environment. Setting
    `hbase.unsafe.stream.capability.enforce` to `false` overrides this behavior,
    permitting operation. This configuration is for the developer workstation
    only and __should not be used in production!__

    See also https://hbase.apache.org/book.html#standalone_dist
  -->
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://localhost:9000/hbase</value>
    </property>
    <property>
        <name>hbase.tmp.dir</name>
        <value>D:/hadoop/hbase/tmp</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>127.0.0.1</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>D:/hadoop/hbase/zoo</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>false</value>
    </property>
    <property>
        <name>hbase.wal.provider</name>
        <value>filesystem</value>
    </property>
    <property> 
        <name>dfs.replication</name> 
        <value>1</value> 
    </property>
    <property>
        <name>hbase.unsafe.stream.capability.enforce</name>
        <value>false</value>
    </property>
</configuration>
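
After editing the file, restart HBase and verify from the shell; a sketch assuming HBase is installed at D:\hadoop\hbase-2.2.6 (a hypothetical path) and HDFS is already running:

:: Start HBase and open the shell:
cd /d D:\hadoop\hbase-2.2.6\bin
start-hbase.cmd
hbase shell

Inside the shell, the status command should report one active master.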