
Installing Spark on Linux

程序员文章站 2022-03-17 20:56:42


0. Preparation: Hadoop servers

The three nodes below already run ZooKeeper and Hadoop; the aliases in the last column mark each node's Spark role.

10.156.50.35 yanfabu2-35.base.app.dev.yf zk1  hadoop1 master1 master sparkmaster
10.156.50.36 yanfabu2-36.base.app.dev.yf zk2  hadoop2 master2        sparkwork1
10.156.50.37 yanfabu2-37.base.app.dev.yf zk3  hadoop3 slaver1        sparkwork2
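The Spark role aliases are assumed to resolve on every node. An /etc/hosts fragment matching the table above (same IPs and hostnames, trimmed to the Spark aliases) would look like:

```
10.156.50.35 yanfabu2-35.base.app.dev.yf sparkmaster
10.156.50.36 yanfabu2-36.base.app.dev.yf sparkwork1
10.156.50.37 yanfabu2-37.base.app.dev.yf sparkwork2
```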

 

1. Install Scala

 

# Download on the master, copy the tarball to both workers, then unpack.
wget https://downloads.lightbend.com/scala/2.11.7/scala-2.11.7.tgz
scp scala-2.11.7.tgz zkkafka@10.156.50.36:/home/zkkafka/
scp scala-2.11.7.tgz zkkafka@10.156.50.37:/home/zkkafka/

tar -zxvf scala-2.11.7.tgz
mv scala-2.11.7 scala

vim ~/.bash_profile

export SCALA_HOME=/home/zkkafka/scala
export PATH=$PATH:$SCALA_HOME/bin

source ~/.bash_profile
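After sourcing the profile, a quick sanity check confirms the variable and the PATH entry took effect. A minimal sketch, using the same paths as this guide:

```shell
# Re-create the two profile lines from this guide.
export SCALA_HOME=/home/zkkafka/scala
export PATH=$PATH:$SCALA_HOME/bin

# Verify the variable is set and its bin directory is on PATH.
case ":$PATH:" in
  *":$SCALA_HOME/bin:"*) on_path=yes ;;
  *)                     on_path=no  ;;
esac
echo "SCALA_HOME=$SCALA_HOME on_path=$on_path"
```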

 

2. Test Scala

scala -version
[zkkafka@yanfabu2-37 ~]$ scala -version
Scala code runner version 2.11.7 -- Copyright 2002-2013, LAMP/EPFL

 

 

3. Install Spark

 

# Download on the master and copy the tarball to both workers (a Worker runs
# on .36 and .37, per the jps output below) before removing it locally.
wget https://mirrors.tuna.tsinghua.edu.cn/apache/spark/spark-2.3.3/spark-2.3.3-bin-hadoop2.6.tgz
scp spark-2.3.3-bin-hadoop2.6.tgz zkkafka@10.156.50.36:/home/zkkafka/
scp spark-2.3.3-bin-hadoop2.6.tgz zkkafka@10.156.50.37:/home/zkkafka/
tar xf spark-2.3.3-bin-hadoop2.6.tgz
rm -rf spark-2.3.3-bin-hadoop2.6.tgz
mv spark-2.3.3-bin-hadoop2.6 spark
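Before unpacking, it is worth verifying the download; Apache mirrors publish a checksum file alongside each tgz. A minimal sketch of the idea using a stand-in file (for the real tarball you would compare against the published .sha512 value):

```shell
# Sketch: verify a download against its checksum before extracting.
# A stand-in file replaces the real tgz here.
f=$(mktemp)
printf 'example payload' > "$f"
sum=$(sha512sum "$f" | awk '{print $1}')
echo "sha512 is ${#sum} hex chars"
rm -f "$f"
```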


vim ~/.bash_profile

export SPARK_HOME=/home/zkkafka/spark
export PATH=$PATH:$SPARK_HOME/bin

source ~/.bash_profile

scp -r ~/.bash_profile  zkkafka@10.156.50.37:/home/zkkafka/

 

For reference, the full ~/.bash_profile after all additions:

PATH=$PATH:$HOME/.local/bin:$HOME/bin

export PATH
export LANG="zh_CN.utf8"

export JAVA_HOME=/home/zkkafka/jdk1.8.0_151
export ZOOKEEPER_HOME=/home/zkkafka/zookeeper-3.4.6
export CLASSPATH=$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:
export PATH=$JAVA_HOME/bin:$PATH
export PATH=$PATH:$ZOOKEEPER_HOME/bin:$ZOOKEEPER_HOME/conf

export KAFKA_HOME=/home/zkkafka/kafka_2.11-2.1.1
export PATH=$KAFKA_HOME/bin:$PATH

export HADOOP_HOME=/home/zkkafka/hadoop
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH


export HBASE_HOME=/home/zkkafka/hbase
export PATH=$HBASE_HOME/bin:$PATH

export HIVE_HOME=/home/zkkafka/hive
export PATH=$HIVE_HOME/bin:$PATH
export HIVE_CONF_DIR=$HIVE_HOME/conf


export SQOOP_HOME=/home/zkkafka/sqoop
export PATH=$PATH:$SQOOP_HOME/bin

#export SQOOP_HOME=/home/zkkafka/sqoop2
#export PATH=$PATH:$SQOOP_HOME/bin
#export SQOOP_SERVER_EXTRA_LIB=$SQOOP_HOME/extra
#export CATALINA_BASE=$SQOOP_HOME/server
#export LOGDIR=$SQOOP_HOME/logs/

export PIG_HOME=/home/zkkafka/pig
export PATH=$PATH:$PIG_HOME/bin

export SCALA_HOME=/home/zkkafka/scala
export PATH=$PATH:$SCALA_HOME/bin

export SPARK_HOME=/home/zkkafka/spark
export PATH=$PATH:$SPARK_HOME/bin

 

 

4. Edit the configuration files (run from inside /home/zkkafka/spark)

cp conf/spark-env.sh.template  conf/spark-env.sh
vim  conf/spark-env.sh


export JAVA_HOME=/home/zkkafka/jdk1.8.0_151
export SCALA_HOME=/home/zkkafka/scala
export HADOOP_HOME=/home/zkkafka/hadoop
export HADOOP_CONF_DIR=/home/zkkafka/hadoop/etc/hadoop
export SPARK_MASTER_IP=sparkmaster   # SPARK_MASTER_HOST is the preferred name in Spark 2.x
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_CORES=2
export SPARK_WORKER_INSTANCES=1
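Since spark-env.sh is sourced by every daemon on every node, a syntax error in it breaks startup silently. A small sketch that lints a fragment with `sh -n` before deploying (values mirror the ones used in this guide, written to a temp file):

```shell
# Sketch: lint a spark-env.sh fragment before copying it to the workers.
tmpenv=$(mktemp)
cat > "$tmpenv" <<'EOF'
export JAVA_HOME=/home/zkkafka/jdk1.8.0_151
export SCALA_HOME=/home/zkkafka/scala
export SPARK_MASTER_IP=sparkmaster
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_CORES=2
EOF
# 'sh -n' parses without executing; any syntax error is reported here.
if sh -n "$tmpenv"; then status="syntax OK"; else status="syntax error"; fi
echo "$status"
rm -f "$tmpenv"
```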



vim conf/slaves
sparkwork1
sparkwork2
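The two worker entries can also be generated from a list, which keeps conf/slaves in sync with the host table. A minimal sketch, writing to a temp file instead of conf/slaves:

```shell
# Sketch: generate a slaves file from the worker aliases used in this guide.
workers="sparkwork1 sparkwork2"
slaves=$(mktemp)
for w in $workers; do
  echo "$w"
done > "$slaves"
lines=$(wc -l < "$slaves")
echo "wrote $lines workers"
```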

 

5. Start the cluster

sbin/start-all.sh >> /dev/null 2>&1 &   # sbin/ path avoids clashing with Hadoop's start-all.sh
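Shell redirection order matters here: `cmd 2>&1 >> /dev/null` leaves stderr on the terminal and only discards stdout, while `cmd >> /dev/null 2>&1` silences both streams. A small self-contained demonstration:

```shell
# '2>&1 > /dev/null': fd2 is duplicated onto the *current* stdout first,
# so stderr survives while stdout is discarded.
captured=$( { echo out-msg; echo err-msg 1>&2; } 2>&1 > /dev/null )
echo "captured: $captured"

# '> /dev/null 2>&1': stdout goes to /dev/null first and stderr follows it,
# so nothing survives.
silenced=$( { echo out-msg; echo err-msg 1>&2; } > /dev/null 2>&1 )
echo "silenced: '$silenced'"
```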

 

[zkkafka@yanfabu2-35 spark]$ jps
6880 JournalNode
95745 Master
59330 QuorumPeerMain
95815 Jps
56377 Kafka
7818 JobHistoryServer
7515 ResourceManager
7084 NameNode
7405 DFSZKFailoverController

[zkkafka@yanfabu2-36 spark]$ jps
127409 NameNode
127123 JournalNode
54724 Jps
37365 QuorumPeerMain
54665 Worker
34571 Kafka
127293 DFSZKFailoverController

[zkkafka@yanfabu2-37 ~]$ jps
28384 Worker
28433 Jps
129444 DataNode
42955 QuorumPeerMain
129370 JournalNode
40189 Kafka
129580 NodeManager

 

6. Visit the web UI (the Spark master UI listens on port 8080 by default)

http://10.156.50.35:8080/

Author's homepage: http://knight-black-bob.iteye.com/

