
hdfs dfs -appendToFile error: troubleshooting and fix


The Hadoop cluster is up and running, and I want to append the contents of a local file to the end of a specified file on HDFS from the command line, as shown below.
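The command takes one or more local source files followed by an HDFS destination. A minimal example, where localfile.txt is a hypothetical local file and /p1 is the existing HDFS file used throughout this post:

hdfs dfs -appendToFile localfile.txt /p1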

Error 1

# hdfs dfs -appendToFile hdfs-site.xml /p1
18/12/08 00:31:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
appendToFile: Failed to APPEND_FILE /p1 for DFSClient_NONMAPREDUCE_985284284_1 on 192.168.137.101 because lease recovery is in progress. Try again later.
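This error means a previous writer still holds the lease on /p1 and the NameNode is still recovering it. Often simply waiting a moment and retrying is enough. If the file stays stuck, newer Hadoop releases ship an hdfs debug tool that can force the lease to be released; a minimal sketch, using the path /p1 from the error above:

hdfs debug recoverLease -path /p1 -retries 3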

Error 2

Exception in thread "main" java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[10.10.22.17:50010, 10.10.22.18:50010], original=[10.10.22.17:50010, 10.10.22.18:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:960)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1026)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1175)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:531)
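This error typically shows up on small clusters: a DataNode in the write pipeline failed, and with only two or three DataNodes in total the client cannot find a spare node to replace it, so the DEFAULT replacement policy gives up. You can check how many DataNodes are live with:

hdfs dfsadmin -report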

Solution

Edit hdfs-site.xml under the Hadoop installation directory: etc/hadoop/hdfs-site.xml

  • vi hdfs-site.xml and add the corresponding configuration properties:
<!-- enable append for appendToFile -->
<property>
        <name>dfs.support.append</name>
        <value>true</value>
</property>

<property>
        <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
        <value>NEVER</value>
</property>
<property>
        <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
        <value>true</value>
</property>

Stop and restart the HDFS services, then retry the append; it now succeeds. The commands are sketched below.
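A typical restart sequence, assuming the standard scripts in $HADOOP_HOME/sbin are on your PATH:

stop-dfs.sh
start-dfs.sh
hdfs dfs -appendToFile hdfs-site.xml /p1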

If you are using the Java client instead, you can set the same properties in code:

import org.apache.hadoop.conf.Configuration;

// Mirror the hdfs-site.xml settings on the client side
Configuration conf = new Configuration();
conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
conf.set("dfs.client.block.write.replace-datanode-on-failure.enable", "true");
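For completeness, a minimal sketch of the whole client-side append using the FileSystem API. The class name AppendExample is made up, and the fs.defaultFS URL is an assumption you must adapt to your NameNode; the file names hdfs-site.xml and /p1 come from the example above:

import java.io.FileInputStream;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class AppendExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // fs.defaultFS must point at your NameNode; this URL is an assumption
        conf.set("fs.defaultFS", "hdfs://192.168.137.101:9000");
        // Same settings as in hdfs-site.xml above
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
        conf.set("dfs.client.block.write.replace-datanode-on-failure.enable", "true");

        try (FileSystem fs = FileSystem.get(conf);
             InputStream in = new FileInputStream("hdfs-site.xml");
             FSDataOutputStream out = fs.append(new Path("/p1"))) {
            // Copy the local file to the end of the HDFS file
            IOUtils.copyBytes(in, out, 4096, false);
        }
    }
}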

Tags: bigdata