
Installing jdk, mysql, hadoop, zookeeper, hbase, hive, spark... with shell scripts, all in one go


1. Preparation

The versions installed here are:

  • jdk: jdk-8u221
  • mysql : mysql-5.6.46
  • hadoop: hadoop-2.6.0-cdh5.14.2
  • zookeeper: zookeeper-3.4.6
  • hbase: hbase-1.2.0-cdh5.14.2
  • hive: hive-1.1.0-cdh5.14.2
  • spark: spark-2.2.0-bin-hadoop2.7

Link: installation packages and scripts
Extraction code: 25dh

After downloading the packages, create three directories on the virtual machine:

mkdir /opt/install
mkdir /opt/software
mkdir /opt/scripts
  • install is the root directory the software will be unpacked into
  • put the installation packages, plus the MySQL connector jar that Hive needs, into the software directory
  • put the installation scripts and their configuration files into the scripts directory

2. Installation scripts and configuration files

Note: the virtual machine used here has the hostname hadoop-001
and the IP address 192.168.206.140

These two values will differ for every reader, so they must be changed in the scripts and configuration files below; the remaining settings can be adjusted as needed.
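
If everything is kept under /opt/scripts as described above, the substitution can be done in a single pass. A minimal sketch, assuming hadoop-002 and 192.168.1.50 stand in for your actual hostname and IP (substitute your own values):

cd /opt/scripts
# replace the author's hostname and IP with your own in every script and config file
sed -i 's/hadoop-001/hadoop-002/g' *.sh *.xml *.cfg
sed -i 's/192\.168\.206\.140/192.168.1.50/g' *.sh *.xml *.cfg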

Note: if a script edited on Windows is copied into the virtual machine, running it fails with an error like:
-bash: ./abc.sh: /bin/bash^M: bad interpreter: No such file or directory

The cause is that files saved on Windows use DOS line endings (each line ends with \r\n), while files on Linux use Unix line endings (\n only).
Solutions:

  • Open the file in vim, run ":set ff=unix" to convert it to Unix format, then save and quit with ":wq".
  • Or convert the line endings in Notepad++ (Edit → EOL Conversion → Unix (LF)); a batch conversion from the Linux side is sketched below.
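
To fix all the scripts at once from the Linux side instead, a sed pass that strips the trailing carriage returns also works (a minimal sketch, assuming the scripts sit in /opt/scripts):

cd /opt/scripts
# remove the DOS carriage return at the end of every line
sed -i 's/\r$//' *.sh
# or, if the dos2unix package is installed:
# dos2unix *.sh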

2.1 Installing jdk

jdk_180_221.sh

#!/bin/bash
#jdk
tar -zxvf /opt/software/jdk-8u221-linux-x64.tar.gz -C /opt/install/
#configure environment variables
echo "export JAVA_HOME=/opt/install/jdk1.8.0_221" >> /etc/profile
echo 'export CLASSPATH=.:$JAVA_HOME/rt.jar:$JAVA_HOME/tools.jar:$JAVA_HOME/dt.jar' >> /etc/profile
echo 'export JRE_HOME=$JAVA_HOME/jre' >> /etc/profile
echo 'export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin' >> /etc/profile
source /etc/profile
java -version
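
Note that the source /etc/profile inside the script only takes effect in the script's own shell; an interactive shell will not see JAVA_HOME until the file is sourced again (or after a new login). A quick check once the script finishes:

source /etc/profile
echo $JAVA_HOME    # should print /opt/install/jdk1.8.0_221
java -version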

For detailed steps, see: Installing the JDK on Linux

2.2 Installing mysql

mysql_5_6_46.sh

#!/bin/bash
#mysql
#find the conflicting mariadb package and remove it
a=`rpm -qa | grep -i Mariadb`	
rpm -e ${a} --nodeps
#install dependencies
yum install -y net-tools
yum install -y perl
yum install -y autoconf
#install the client and server rpm packages
rpm -ivh /opt/software/MySQL-client-5.6.46-1.el7.x86_64.rpm
rpm -ivh /opt/software/MySQL-server-5.6.46-1.el7.x86_64.rpm
#adjust the configuration file
sed -i '3a [client]' /usr/my.cnf
sed -i '4a default-character-set = utf8' /usr/my.cnf
sed -i '6a skip-grant-tables' /usr/my.cnf
sed -i '7a character_set_server = utf8' /usr/my.cnf
sed -i '8a collation_server = utf8_general_ci' /usr/my.cnf
sed -i '9a lower_case_table_names' /usr/my.cnf
#restart the database
service mysql restart
#set the root user's password
mysql -e "update mysql.user set password=password('ok')"
#make sure the temporary password is not flagged as expired
mysql -e "update mysql.user set password_expired='N'"
#comment out skip-grant-tables on line 7 to turn privilege checks back on
sed -i  '7s/skip-grant-tables/#skip-grant-tables/g' /usr/my.cnf
service mysql restart
#set the root password once more, this time with authentication
mysql -uroot -pok -e "set password=password('ok')"
#create a new user bigdata with password ok
mysql -uroot -pok -e "create user 'bigdata'@'%' IDENTIFIED BY 'ok'"
#grant all privileges to bigdata and root
mysql -uroot -pok -e "grant all on *.* to 'bigdata'@'%'"
mysql -uroot -pok -e "grant all on *.* to 'root'@'%'"
#flush privileges
mysql -uroot -pok -e "flush privileges"
mysql --version
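
Once the script has run, the accounts it created can be verified directly (the bigdata user and the password ok are exactly the ones set above):

# confirm the bigdata user exists and has its grants
mysql -ubigdata -pok -e "show grants for 'bigdata'@'%';"
# confirm remote access for root was granted as well
mysql -uroot -pok -e "select user,host from mysql.user;"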

For detailed steps, see: Installing MySQL on Linux – a super-simple tutorial

2.3 Installing hadoop

hadoop_2_6_0.sh

#!/bin/bash
#hadoop
#passwordless SSH setup
#remove any existing key files
rm -rf /root/.ssh/id_rsa
#generate an RSA key pair without a passphrase
ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
#append the public key to authorized_keys
cat /root/.ssh/id_rsa.pub > /root/.ssh/authorized_keys
#restrict authorized_keys to owner read/write
chmod 600 /root/.ssh/authorized_keys
#unpack the tarball
tar -zxvf /opt/software/hadoop-2.6.0-cdh5.14.2.tar.gz -C /opt/install/
#rename the directory
mv /opt/install/hadoop-2.6.0-cdh5.14.2 /opt/install/hadoop-260
#add environment variables
echo "export HADOOP_HOME=/opt/install/hadoop-260" >> /etc/profile
echo 'export HADOOP_MAPRED_HOME=$HADOOP_HOME' >> /etc/profile
echo 'export HADOOP_COMMON_HOME=$HADOOP_HOME' >> /etc/profile
echo 'export HADOOP_HDFS_HOME=$HADOOP_HOME' >> /etc/profile
echo 'export YARN_HOME=$HADOOP_HOME' >> /etc/profile
echo 'export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native' >> /etc/profile
echo 'export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"' >> /etc/profile
echo 'export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin' >> /etc/profile
source /etc/profile
#point hadoop-env.sh at the installed JDK
sed -i  '24,26s/\${JAVA_HOME}/\/opt\/install\/jdk1.8.0_221/g' /opt/install/hadoop-260/etc/hadoop/hadoop-env.sh
#note 1: the address range above means sed only touches lines 24-26
#the general form is: sed -i '24,26s/str1/str2/g' /path  (replace str1 with str2)
#note 2: $ and / are special characters and must be escaped with a backslash
#drop in the prepared configuration files
cat $PWD/core-site.xml > /opt/install/hadoop-260/etc/hadoop/core-site.xml
cat $PWD/hdfs-site.xml > /opt/install/hadoop-260/etc/hadoop/hdfs-site.xml
cat $PWD/mapred-site.xml > /opt/install/hadoop-260/etc/hadoop/mapred-site.xml
cat $PWD/yarn-site.xml > /opt/install/hadoop-260/etc/hadoop/yarn-site.xml
tar -xvf /opt/software/hadoop-native-64-2.6.0.tar -C $HADOOP_HOME/lib/native
tar -xvf /opt/software/hadoop-native-64-2.6.0.tar -C $HADOOP_HOME/lib
#print version info
echo 'hadoop version info:'
hadoop version
#before formatting HDFS, first remove the tmp directory used for temporary files, to be safe
rm -rf /opt/install/hadoop-260/tmp/
#format the namenode
hadoop namenode -format
#start the HDFS and YARN daemons
start-all.sh
echo '******* jps status *******'
jps
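
If jps shows NameNode, DataNode, ResourceManager and NodeManager, a quick smoke test is to round-trip a file through HDFS (a minimal sketch; test.txt is just a throwaway file name):

echo "hello hadoop" > /tmp/test.txt
hdfs dfs -mkdir -p /test
hdfs dfs -put /tmp/test.txt /test/
hdfs dfs -cat /test/test.txt
# the NameNode web UI should also answer at http://192.168.206.140:50070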


core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

<!-- default filesystem (NameNode) address -->
<property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.206.140:9000</value>
</property>

<!-- directory for hadoop temporary files -->
<property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/install/hadoop-260/tmp</value>
</property>

<!-- allow proxying by the root user from any host -->
<property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
</property>

<!-- allow the root proxy user to impersonate members of any group -->
<property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
</property>

</configuration>


hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>
        <name>dfs.replication</name>
        <value>1</value>
</property>

<!-- disable HDFS permission checks -->
<property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
</property>

</configuration>


mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

<!-- run MapReduce on YARN -->
<property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
</property>

<!-- job history server address -->
<property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop-001:10020</value>
</property>

<!-- job history web UI address -->
<property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop-001:19888</value>
</property>

</configuration>


yarn-site.xml
<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

<!-- Site specific YARN configuration properties -->


<!-- shuffle service the reducers fetch data through -->
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>

<!-- hostname of the YARN ResourceManager -->
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop-001</value>
</property>

</configuration>

For detailed steps, see: Setting up a single-node (pseudo-distributed) Hadoop environment



2.4 Installing zookeeper

zookeeper_3_4_6.sh

#!/bin/bash
#zookeeper
tar -zxvf /opt/software/zookeeper-3.4.6.tar.gz -C /opt/install/
echo 'export ZK_HOME=/opt/install/zookeeper-3.4.6' >> /etc/profile
echo 'export PATH=$PATH:$ZK_HOME/bin' >> /etc/profile
cat $PWD/zoo.cfg > /opt/install/zookeeper-3.4.6/conf/zoo.cfg
mkdir /opt/install/zookeeper-3.4.6/zkData
mkdir /opt/install/zookeeper-3.4.6/zkLog
touch /opt/install/zookeeper-3.4.6/zkData/myid
echo '1' > /opt/install/zookeeper-3.4.6/zkData/myid
source /etc/profile
zkServer.sh start
zkServer.sh status
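
To confirm ZooKeeper is actually serving requests rather than merely running, the bundled CLI can be driven non-interactively (a minimal sketch):

# list the root znode through the client shipped in $ZK_HOME/bin
echo "ls /" | zkCli.sh -server hadoop-001:2181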


zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
#maximum client connections (0 = unlimited)
maxClientCnxns=0
# The number of ticks that the initial
# synchronization phase can take
#ticks allowed for the initial synchronization phase: 50
initLimit=50
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
#data directory
dataDir=/opt/install/zookeeper-3.4.6/zkData
#transaction log directory
dataLogDir=/opt/install/zookeeper-3.4.6/zkLog
# the port at which the clients will connect
clientPort=2181
#for a cluster, list an odd number (three or more) of available hosts by hostname or IP; for a standalone node the following entry is not required
server.1=hadoop-001:2888:3888
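
If this single node is later grown into a real ensemble, the tail of zoo.cfg lists an odd number of servers and each machine's myid must match its server.N line (a sketch; hadoop-002 and hadoop-003 are hypothetical extra hosts):

server.1=hadoop-001:2888:3888
server.2=hadoop-002:2888:3888
server.3=hadoop-003:2888:3888
# on hadoop-002: echo '2' > /opt/install/zookeeper-3.4.6/zkData/myid
# on hadoop-003: echo '3' > /opt/install/zookeeper-3.4.6/zkData/myid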

For detailed steps, see: Setting up a ZooKeeper environment

2.5 Installing hbase

hbase_1_2_0.sh

#!/bin/bash
#hbase
tar -zxvf /opt/software/hbase-1.2.0-cdh5.14.2.tar.gz -C /opt/install/
mv /opt/install/hbase-1.2.0-cdh5.14.2 /opt/install/hbase-120
echo 'export HBASE_HOME=/opt/install/hbase-120' >> /etc/profile
echo 'export PATH=$PATH:$HBASE_HOME/bin' >> /etc/profile
source /etc/profile
cat $PWD/hbase-site.xml > /opt/install/hbase-120/conf/hbase-site.xml
echo 'export JAVA_HOME=/opt/install/jdk1.8.0_221' >> /opt/install/hbase-120/conf/hbase-env.sh
echo 'export HADOOP_HOME=/opt/install/hadoop-260' >> /opt/install/hbase-120/conf/hbase-env.sh
echo 'export HBASE_HOME=/opt/install/hbase-120' >> /opt/install/hbase-120/conf/hbase-env.sh
#use the external zookeeper instead of the bundled one
echo 'export HBASE_MANAGES_ZK=false' >> /opt/install/hbase-120/conf/hbase-env.sh
source /etc/profile
#start hbase
start-hbase.sh
jps | grep -i hmaster
jps | grep -i hre
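
Once HMaster and an HRegionServer show up in jps, a quick functional check is to create a throwaway table through the HBase shell (a minimal sketch; t1 is just an example table name):

# create a table, write one cell, and read it back
hbase shell <<'EOF'
create 't1','cf'
put 't1','row1','cf:a','value1'
scan 't1'
EOF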


hbase-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/**
 *
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
-->
<configuration>

<!-- HBase root directory on HDFS -->
<property>
         <name>hbase.rootdir</name>
         <value>hdfs://192.168.206.140:9000/hbase</value>
</property>

<!-- set to true to run HBase in distributed mode -->
<property>
         <name>hbase.cluster.distributed</name>
         <value>true</value>
</property>

<property>
         <name>hbase.zookeeper.property.dataDir</name>
         <value>/opt/install/hbase-120/data</value>
</property>


</configuration>

For detailed steps, see: Setting up an HBase environment

2.6 Installing hive

hive_1_1_0.sh

#!/bin/bash
#hive
tar -zxvf /opt/software/hive-1.1.0-cdh5.14.2.tar.gz -C /opt/install/
mv /opt/install/hive-1.1.0-cdh5.14.2 /opt/install/hive-110
echo 'export HIVE_PATH=/opt/install/hive-110' >> /etc/profile
echo 'export PATH=$PATH:$HIVE_PATH/bin' >> /etc/profile
source /etc/profile
cat $PWD/hive-site.xml > /opt/install/hive-110/conf/hive-site.xml
cat $PWD/hive-env.sh > /opt/install/hive-110/conf/hive-env.sh
mv /opt/install/hive-110/conf/hive-log4j.properties.template /opt/install/hive-110/conf/hive-log4j.properties
sed -i  '20,22s/\${java.io.tmpdir}\/\${user.name}/\/opt\/install\/hive-110\/logs/g' /opt/install/hive-110/conf/hive-log4j.properties
mkdir /opt/install/hive-110/logs
source /etc/profile
echo 'hive version info:'
hive --version
#copy the MySQL connector jar into Hive's lib directory
cp /opt/software/mysql-connector-java-5.1.48-bin.jar /opt/install/hive-110/lib/
chmod  777 /opt/install/hive-110
#create the HDFS warehouse directory
hdfs dfs -mkdir -p /hive/warehouse
#open up permissions so that tables can be created from Hive
hdfs dfs -chmod 777 /hive
hdfs dfs -chmod 777 /hive/warehouse
#initialize the metastore schema
schematool -initSchema -dbType mysql
#start the Hive metastore service in the background
nohup hive --service metastore &
#test: check that a database can be created
hive -e "create database if not exists test"
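
Besides the test database above, it is worth confirming that the metastore really landed in MySQL (the hive database name and the root/ok credentials come straight from hive-site.xml below):

# the metastore tables created by schematool should show up in the hive database
mysql -uroot -pok -e "use hive; show tables;" | head
# and Hive itself should list both default and test
hive -e "show databases"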


hive-env.sh
export HADOOP_HOME=/opt/install/hadoop-260
export HIVE_HOME=/opt/install/hive-110
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HIVE_AUX_JARS_PATH=/opt/install/hive-110/lib
export JAVA_HOME=/opt/install/jdk1.8.0_221
export HIVE_CONF_DIR=/opt/install/hive-110/conf



hive-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
        <property>
                <name>hive.metastore.warehouse.dir</name>
                <value>hdfs://hadoop-001:9000/hive/warehouse</value>
               <description>Location where managed tables are stored; it can be a local directory or a path resolved relative to fs.default.name</description>
        </property>
        <property>
                <name>hive.metastore.local</name>
                <value>true</value>
        </property>
        <!-- MySQL JDBC URL for the Hive metastore -->
        <property>
                <name>javax.jdo.option.ConnectionURL</name>
                <value>jdbc:mysql://127.0.0.1:3306/hive?createDatabaseIfNotExist=true</value>
        </property>
        <!-- JDBC driver for the metastore database -->
        <property>
                <name>javax.jdo.option.ConnectionDriverName</name>
                <value>com.mysql.jdbc.Driver</value>
        </property>
        <!-- metastore database user -->
        <property>
                <name>javax.jdo.option.ConnectionUserName</name>
                <value>root</value>
        </property>
        <!-- metastore database password (the MySQL root password set earlier) -->
        <property>
                <name>javax.jdo.option.ConnectionPassword</name>
                <value>ok</value>
        </property>
		<!-- remote metastore URI -->
		<property>
				<name>hive.metastore.uris</name>
				<value>thrift://192.168.206.140:9083</value>
		</property>

</configuration>

For detailed steps, see: Setting up a Hive environment

2.7 Installing spark

spark_2_2_0.sh

#!/bin/bash
#spark
tar -zxvf /opt/software/spark-2.2.0-bin-hadoop2.7.tgz -C /opt/install/
mv /opt/install/spark-2.2.0-bin-hadoop2.7/ /opt/install/spark-220
echo "export SPARK_HOME=/opt/install/spark-220" >> /etc/profile
echo 'export PATH=$PATH:$SPARK_HOME/bin' >> /etc/profile
source /etc/profile
mv /opt/install/spark-220/conf/slaves.template /opt/install/spark-220/conf/slaves
cat $PWD/spark-env.sh > /opt/install/spark-220/conf/spark-env.sh
#integrate Spark with Hive (Spark on Hive)
cp /opt/software/mysql-connector-java-5.1.48-bin.jar /opt/install/spark-220/jars/
cp /opt/install/hive-110/conf/hive-site.xml /opt/install/spark-220/conf/
cp /opt/install/hadoop-260/etc/hadoop/core-site.xml /opt/install/spark-220/conf/
cp /opt/install/hadoop-260/etc/hadoop/hdfs-site.xml /opt/install/spark-220/conf/
/opt/install/spark-220/sbin/start-all.sh
spark-sql -e "show databases"
jps
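
If spark-sql returned cleanly, the test database created in section 2.6 should be visible from Spark as well, and the standalone Master web UI should answer on its default port (a minimal sketch):

spark-sql -e "show databases"
# standalone Master web UI (default port 8080)
# http://hadoop-001:8080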


spark-env.sh
#!/usr/bin/env bash

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# This file is sourced when running various Spark programs.
# Copy it as spark-env.sh and edit that to configure Spark for your site.

# Options read when launching programs locally with
# ./bin/run-example or ./bin/spark-submit
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public dns name of the driver program

# Options read by executors and drivers running inside the cluster
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public DNS name of the driver program
# - SPARK_LOCAL_DIRS, storage directories to use on this node for shuffle and RDD data
# - MESOS_NATIVE_JAVA_LIBRARY, to point to your libmesos.so if you use Mesos

# Options read in YARN client mode
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_EXECUTOR_CORES, Number of cores for the executors (Default: 1).
# - SPARK_EXECUTOR_MEMORY, Memory per Executor (e.g. 1000M, 2G) (Default: 1G)
# - SPARK_DRIVER_MEMORY, Memory for Driver (e.g. 1000M, 2G) (Default: 1G)

# Options for the daemons used in the standalone deploy mode
# - SPARK_MASTER_HOST, to bind the master to a different IP address or hostname
# - SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports for the master
# - SPARK_MASTER_OPTS, to set config properties only for the master (e.g. "-Dx=y")
# - SPARK_WORKER_CORES, to set the number of cores to use on this machine
# - SPARK_WORKER_MEMORY, to set how much total memory workers have to give executors (e.g. 1000m, 2g)
# - SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT, to use non-default ports for the worker
# - SPARK_WORKER_DIR, to set the working directory of worker processes
# - SPARK_WORKER_OPTS, to set config properties only for the worker (e.g. "-Dx=y")
# - SPARK_DAEMON_MEMORY, to allocate to the master, worker and history server themselves (default: 1g).
# - SPARK_HISTORY_OPTS, to set config properties only for the history server (e.g. "-Dx=y")
# - SPARK_SHUFFLE_OPTS, to set config properties only for the external shuffle service (e.g. "-Dx=y")
# - SPARK_DAEMON_JAVA_OPTS, to set config properties for all daemons (e.g. "-Dx=y")
# - SPARK_PUBLIC_DNS, to set the public dns name of the master or workers

# Generic options for the daemons used in the standalone deploy mode
# - SPARK_CONF_DIR      Alternate conf dir. (Default: ${SPARK_HOME}/conf)
# - SPARK_LOG_DIR       Where log files are stored.  (Default: ${SPARK_HOME}/logs)
# - SPARK_PID_DIR       Where the pid file is stored. (Default: /tmp)
# - SPARK_IDENT_STRING  A string representing this instance of spark. (Default: $USER)
# - SPARK_NICENESS      The scheduling priority for daemons. (Default: 0)
# - SPARK_NO_DAEMONIZE  Run the proposed command in the foreground. It will not output a PID file.

export HADOOP_HOME=/opt/install/hadoop-260
export HADOOP_CONF_DIR=/opt/install/hadoop-260/etc/hadoop/
export JAVA_HOME=/opt/install/jdk1.8.0_221
#master host
export SPARK_MASTER_HOST=hadoop-001
#master port
export SPARK_MASTER_PORT=7077

3. All-in-one installation of jdk, mysql, hadoop, zookeeper, hbase, hive, spark...

Copy the entire scripts folder from the download link into the /opt directory on the Linux machine.
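
A minimal way to run the combined script, assuming the packages are in /opt/software and the scripts (with their configuration files) are in /opt/scripts as laid out in section 1:

cd /opt/scripts
# make sure the line endings are Unix and the scripts are executable
sed -i 's/\r$//' *.sh
chmod +x *.sh
./hadoop-all.sh
# re-source the environment in your own shell once the script has finished
source /etc/profile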

hadoop-all.sh

#!/bin/bash
#jdk mysql hadoop zookeeper hive hbase spark

#jdk
tar -zxvf /opt/software/jdk-8u221-linux-x64.tar.gz -C /opt/install/
echo "export JAVA_HOME=/opt/install/jdk1.8.0_221" >> /etc/profile
echo 'export CLASSPATH=.:$JAVA_HOME/rt.jar:$JAVA_HOME/tools.jar:$JAVA_HOME/dt.jar' >> /etc/profile
echo 'export JRE_HOME=$JAVA_HOME/jre' >> /etc/profile
echo 'export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin' >> /etc/profile
source /etc/profile
java -version

#mysql
#find the conflicting mariadb package and remove it
a=`rpm -qa | grep -i Mariadb`	
rpm -e ${a} --nodeps
#install dependencies
yum install -y net-tools
yum install -y perl
yum install -y autoconf
#install the client and server rpm packages
rpm -ivh /opt/software/MySQL-client-5.6.46-1.el7.x86_64.rpm
rpm -ivh /opt/software/MySQL-server-5.6.46-1.el7.x86_64.rpm
#adjust the configuration file
sed -i '3a [client]' /usr/my.cnf
sed -i '4a default-character-set = utf8' /usr/my.cnf
sed -i '6a skip-grant-tables' /usr/my.cnf
sed -i '7a character_set_server = utf8' /usr/my.cnf
sed -i '8a collation_server = utf8_general_ci' /usr/my.cnf
sed -i '9a lower_case_table_names' /usr/my.cnf
#restart the database
service mysql restart
mysql -e "update mysql.user set password=password('ok')"
mysql -e "update mysql.user set password_expired='N'"
sed -i  '7s/skip-grant-tables/#skip-grant-tables/g' /usr/my.cnf
service mysql restart
mysql -uroot -pok -e "set password=password('ok')"
mysql -uroot -pok -e "create user 'bigdata'@'%' IDENTIFIED BY 'ok'"
mysql -uroot -pok -e "grant all on *.* to 'bigdata'@'%'"
mysql -uroot -pok -e "grant all on *.* to 'root'@'%'"
mysql -uroot -pok -e "flush privileges"
mysql --version

#hadoop
##passwordless SSH setup
#remove any existing key files
rm -rf /root/.ssh/id_rsa
#generate an RSA key pair without a passphrase
ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
#append the public key to authorized_keys
cat /root/.ssh/id_rsa.pub > /root/.ssh/authorized_keys
#restrict authorized_keys to owner read/write
chmod 600 /root/.ssh/authorized_keys
#unpack the tarball
tar -zxvf /opt/software/hadoop-2.6.0-cdh5.14.2.tar.gz -C /opt/install/
#rename the directory
mv /opt/install/hadoop-2.6.0-cdh5.14.2 /opt/install/hadoop-260
#add environment variables
echo "export HADOOP_HOME=/opt/install/hadoop-260" >> /etc/profile
echo 'export HADOOP_MAPRED_HOME=$HADOOP_HOME' >> /etc/profile
echo 'export HADOOP_COMMON_HOME=$HADOOP_HOME' >> /etc/profile
echo 'export HADOOP_HDFS_HOME=$HADOOP_HOME' >> /etc/profile
echo 'export YARN_HOME=$HADOOP_HOME' >> /etc/profile
echo 'export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native' >> /etc/profile
echo 'export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"' >> /etc/profile
echo 'export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin' >> /etc/profile
source /etc/profile
#point hadoop-env.sh at the installed JDK
sed -i  '24,26s/\${JAVA_HOME}/\/opt\/install\/jdk1.8.0_221/g' /opt/install/hadoop-260/etc/hadoop/hadoop-env.sh
#note 1: 24,26s/str1/str2/g only touches lines 24-26, replacing str1 with str2
#note 2: the backslashes escape the $ and / characters that appear in the replacement path
#drop in the prepared configuration files
cat $PWD/core-site.xml > /opt/install/hadoop-260/etc/hadoop/core-site.xml
cat $PWD/hdfs-site.xml > /opt/install/hadoop-260/etc/hadoop/hdfs-site.xml
cat $PWD/mapred-site.xml > /opt/install/hadoop-260/etc/hadoop/mapred-site.xml
cat $PWD/yarn-site.xml > /opt/install/hadoop-260/etc/hadoop/yarn-site.xml
tar -xvf /opt/software/hadoop-native-64-2.6.0.tar -C $HADOOP_HOME/lib/native
tar -xvf /opt/software/hadoop-native-64-2.6.0.tar -C $HADOOP_HOME/lib
#print version info
echo 'hadoop version info:'
hadoop version
#before formatting HDFS, first remove the tmp directory used for temporary files, to be safe
rm -rf /opt/install/hadoop-260/tmp/
#format the namenode
hadoop namenode -format
#start the HDFS and YARN daemons
start-all.sh
echo '******* jps status *******'
jps

#zookeeper
tar -zxvf /opt/software/zookeeper-3.4.6.tar.gz -C /opt/install/
echo 'export ZK_HOME=/opt/install/zookeeper-3.4.6' >> /etc/profile
echo 'export PATH=$PATH:$ZK_HOME/bin' >> /etc/profile
cat $PWD/zoo.cfg > /opt/install/zookeeper-3.4.6/conf/zoo.cfg
mkdir /opt/install/zookeeper-3.4.6/zkData
mkdir /opt/install/zookeeper-3.4.6/zkLog
touch /opt/install/zookeeper-3.4.6/zkData/myid
echo '1' > /opt/install/zookeeper-3.4.6/zkData/myid
source /etc/profile
zkServer.sh start
zkServer.sh status

#hbase
tar -zxvf /opt/software/hbase-1.2.0-cdh5.14.2.tar.gz -C /opt/install/
mv /opt/install/hbase-1.2.0-cdh5.14.2 /opt/install/hbase-120
echo 'export HBASE_HOME=/opt/install/hbase-120' >> /etc/profile
echo 'export PATH=$PATH:$HBASE_HOME/bin' >> /etc/profile
source /etc/profile
cat $PWD/hbase-site.xml > /opt/install/hbase-120/conf/hbase-site.xml
echo 'export JAVA_HOME=/opt/install/jdk1.8.0_221' >> /opt/install/hbase-120/conf/hbase-env.sh
echo 'export HADOOP_HOME=/opt/install/hadoop-260' >> /opt/install/hbase-120/conf/hbase-env.sh
echo 'export HBASE_HOME=/opt/install/hbase-120' >> /opt/install/hbase-120/conf/hbase-env.sh
#use the external zookeeper instead of the bundled one
echo 'export HBASE_MANAGES_ZK=false' >> /opt/install/hbase-120/conf/hbase-env.sh
source /etc/profile
#start hbase
start-hbase.sh
jps | grep -i hmaster
jps | grep -i hre

#hive
tar -zxvf /opt/software/hive-1.1.0-cdh5.14.2.tar.gz -C /opt/install/
mv /opt/install/hive-1.1.0-cdh5.14.2 /opt/install/hive-110
echo 'export HIVE_PATH=/opt/install/hive-110' >> /etc/profile
echo 'export PATH=$PATH:$HIVE_PATH/bin' >> /etc/profile
source /etc/profile
cat $PWD/hive-site.xml > /opt/install/hive-110/conf/hive-site.xml
cat $PWD/hive-env.sh > /opt/install/hive-110/conf/hive-env.sh
mv /opt/install/hive-110/conf/hive-log4j.properties.template /opt/install/hive-110/conf/hive-log4j.properties
sed -i  '20,22s/\${java.io.tmpdir}\/\${user.name}/\/opt\/install\/hive-110\/logs/g' /opt/install/hive-110/conf/hive-log4j.properties
mkdir /opt/install/hive-110/logs
source /etc/profile
echo 'hive version info:'
hive --version
#copy the MySQL connector jar into Hive's lib directory
cp /opt/software/mysql-connector-java-5.1.48-bin.jar /opt/install/hive-110/lib/
chmod  777 /opt/install/hive-110
#create the HDFS warehouse directory
hdfs dfs -mkdir -p /hive/warehouse
#open up permissions so that tables can be created from Hive
hdfs dfs -chmod 777 /hive
hdfs dfs -chmod 777 /hive/warehouse
#initialize the metastore schema
schematool -initSchema -dbType mysql
#start the Hive metastore service in the background
nohup hive --service metastore &
#test: check that a database can be created
hive -e "create database if not exists test"

#spark
tar -zxvf /opt/software/spark-2.2.0-bin-hadoop2.7.tgz -C /opt/install/
mv /opt/install/spark-2.2.0-bin-hadoop2.7/ /opt/install/spark-220
echo "export SPARK_HOME=/opt/install/spark-220" >> /etc/profile
echo 'export PATH=$PATH:$SPARK_HOME/bin' >> /etc/profile
source /etc/profile
mv /opt/install/spark-220/conf/slaves.template /opt/install/spark-220/conf/slaves
cat $PWD/spark-env.sh > /opt/install/spark-220/conf/spark-env.sh
#integrate Spark with Hive (Spark on Hive)
cp /opt/software/mysql-connector-java-5.1.48-bin.jar /opt/install/spark-220/jars/
cp /opt/install/hive-110/conf/hive-site.xml /opt/install/spark-220/conf/
cp /opt/install/hadoop-260/etc/hadoop/core-site.xml /opt/install/spark-220/conf/
cp /opt/install/hadoop-260/etc/hadoop/hdfs-site.xml /opt/install/spark-220/conf/
/opt/install/spark-220/sbin/start-all.sh
spark-sql -e "show databases"
jps