CDH client deployment fails with "deploy client configuration fail"
Environment:
CDH-5.14.2-1.cdh5.14.2.p0.3
Problem description:
While adding the Hive service, deploying the client configurations for hdfs, yarn, hbase, kafka, spark2, and hive failed with the two errors worked through below.
Solution:
Start with the first error:
JAVA_HOME is not set and could not be found.
Checking from the command line, JAVA_HOME is indeed set:
# echo $JAVA_HOME
/usr/local/java/jdk1.8.0_231
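Note that a variable set in an interactive shell is not necessarily visible to the Cloudera agent: the agent is a daemon and does not inherit a login shell's environment. One way to inspect what the agent process actually sees (the pgrep pattern here is illustrative):
# cat /proc/$(pgrep -f cloudera-scm-agent | head -1)/environ | tr '\0' '\n' | grep JAVA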
The CDH deployment logs all live under run/cloudera-scm-agent/process inside the CM installation directory. This error was raised while deploying the spark2 client configuration, so find the most recent deploy log directory:
/opt/cm-5.14.2/run/cloudera-scm-agent/process/ccdeploy_spark2-conf_etcspark2conf.cloudera.spark2_on_yarn_-3826120527610311429/logs
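One way to locate the newest deploy directory and read its stderr.log; the ccdeploy_* glob is an assumption based on the directory name above:
# ls -dt /opt/cm-5.14.2/run/cloudera-scm-agent/process/ccdeploy_* | head -3
# tail -n 50 "$(ls -dt /opt/cm-5.14.2/run/cloudera-scm-agent/process/ccdeploy_* | head -1)"/logs/stderr.log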
stderr.log shows that the deploy script never looks under /usr/local/java for a JDK.
The JDK here was installed from a tar.gz archive, which creates no symlink under /usr/java, so no JDK can be found there. Run the following commands, then redeploy through CM, and the "JAVA_HOME is not set and could not be found." error no longer appears:
# cd /usr
# mkdir java
# ln -s /usr/local/java/jdk1.8.0_231 /usr/java/default
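CDH detects the JDK via bigtop-utils' bigtop-detect-javahome script, which probes a fixed list of candidate directories that (to my knowledge) includes /usr/java/default, hence the symlink target chosen above. Verify the link resolves:
# ls -l /usr/java/default
# /usr/java/default/bin/java -version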
Now for the second error:
/var/lib/alternatives/***-conf is empty.
Following this post, https://cloud.tencent.com/developer/article/1349500, fill in each node's empty *-conf files under /var/lib/alternatives (a quick check for finding them is sketched below). Each file lists the mode (auto) on the first line, the managed link path on the second, and then pairs of alternative path and priority. The contents from this cluster are given below for reference.
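A quick way to find the empty files on a node, assuming the standard /var/lib/alternatives location:
# find /var/lib/alternatives -maxdepth 1 -name '*-conf' -empty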
/var/lib/alternatives/hadoop-conf:
auto
/etc/hadoop/conf
/etc/hadoop/conf.cloudera.yarn
92
/etc/hadoop/conf.cloudera.hdfs
90
/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/etc/hadoop/conf.empty
10

/var/lib/alternatives/kafka-conf:
auto
/etc/kafka/conf
/etc/kafka/conf.cloudera.kafka
50
/opt/cloudera/parcels/KAFKA-3.1.0-1.3.1.0.p0.35/etc/kafka/conf.dist
10

/var/lib/alternatives/hbase-conf:
auto
/etc/hbase/conf
/etc/hbase/conf.cloudera.hbase
90
/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/etc/hbase/conf.dist
10

/var/lib/alternatives/spark2-conf:
auto
/etc/spark2/conf
/etc/spark2/conf.cloudera.spark2_on_yarn
51
/opt/cloudera/parcels/SPARK2-2.1.0.cloudera1-1.cdh5.7.0.p0.120904/etc/spark2/conf.dist
10
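Once the files are in place, each group can be sanity-checked with the alternatives tool (alternatives on RHEL/CentOS, update-alternatives on Debian-family systems); a healthy group prints its mode, current link target, and every alternative with its priority:
# alternatives --display hadoop-conf
# alternatives --display spark2-conf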
After filling in every empty conf file as above, restarting the Cloudera agent, and redeploying, some nodes still reported the empty error.
The comments on this Red Hat bug pointed to the fix: https://bugzilla.redhat.com/show_bug.cgi?id=1016725
Delete the conf files on the nodes that still report empty, restart the Cloudera agent, and redeploy. This time the deployment succeeds, and the corresponding conf files are regenerated automatically under /var/lib/alternatives; the steps are sketched below.
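A sketch for one affected node, assuming spark2-conf is the group still reported empty and the tarball CM layout under /opt/cm-5.14.2 used by this cluster (adjust both to your setup):
# rm -f /var/lib/alternatives/spark2-conf
# /opt/cm-5.14.2/etc/init.d/cloudera-scm-agent restart
Then rerun the client configuration deployment from the CM web UI.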
With that, the Hive service was added successfully.