Spark Environment Setup


Environment Setup

Spark on YARN

Hadoop Environment

  • Raise the CentOS process and open-file limits (takes effect after reboot)
[root@CentOS ~]# vi /etc/security/limits.conf

* soft nofile 204800
* hard nofile 204800
* soft nproc 204800
* hard nproc 204800

These entries raise the per-user limits on open files and processes, a common Linux tuning step for Hadoop workloads.
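
To confirm the limits took effect, a quick check in a new shell after the reboot (ulimit reports the limits for the current session):
[root@CentOS ~]# ulimit -Sn   # soft open-file limit, expect 204800
[root@CentOS ~]# ulimit -Hn   # hard open-file limit, expect 204800
[root@CentOS ~]# ulimit -Su   # soft max-process limit, expect 204800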

  • Configure the hostname (takes effect after reboot)
[root@CentOS ~]# vi /etc/hostname
CentOS
[root@CentOS ~]# reboot
  • Set up the IP mapping
[root@CentOS ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.40.128 CentOS
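
A quick way to confirm the mapping works (getent consults /etc/hosts):
[root@CentOS ~]# getent hosts CentOS   # expect: 192.168.40.128 CentOS
[root@CentOS ~]# ping -c 1 CentOS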

  • Firewall service
# Stop the service for the current session
[root@CentOS ~]# systemctl stop firewalld
[root@CentOS ~]# firewall-cmd --state
# Disable it on boot
[root@CentOS ~]# systemctl disable firewalld
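
To double-check that the firewall is stopped and will stay off across reboots:
[root@CentOS ~]# firewall-cmd --state           # expect: not running
[root@CentOS ~]# systemctl is-enabled firewalld # expect: disabled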

  • Install JDK 1.8+
[root@CentOS ~]# rpm -ivh jdk-8u171-linux-x64.rpm
[root@CentOS ~]# ls -l /usr/java/
total 4
lrwxrwxrwx. 1 root root   16 Mar 26 00:56 default -> /usr/java/latest
drwxr-xr-x. 9 root root 4096 Mar 26 00:56 jdk1.8.0_171-amd64
lrwxrwxrwx. 1 root root   28 Mar 26 00:56 latest -> /usr/java/jdk1.8.0_171-amd64
[root@CentOS ~]# vi .bashrc
JAVA_HOME=/usr/java/latest
PATH=$PATH:$JAVA_HOME/bin
CLASSPATH=.
export JAVA_HOME
export PATH
export CLASSPATH
[root@CentOS ~]# source ~/.bashrc
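
Verify the JDK installation and the environment variables:
[root@CentOS ~]# java -version   # expect: java version "1.8.0_171"
[root@CentOS ~]# echo $JAVA_HOME # expect: /usr/java/latest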

  • Configure passwordless SSH
[root@CentOS ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
4b:29:93:1c:7f:06:93:67:fc:c5:ed:27:9b:83:26:c0 root@CentOS
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|         o   . . |
|      . + +   o .|
|     . = * . . . |
|      = E o . . o|
|       + =   . +.|
|        . . o +  |
|           o   . |
|                 |
+-----------------+
[root@CentOS ~]# ssh-copy-id CentOS
The authenticity of host 'centos (192.168.40.128)' can't be established.
RSA key fingerprint is 3f:86:41:46:f2:05:33:31:5d:b6:11:45:9c:64:12:8e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'centos,192.168.40.128' (RSA) to the list of known hosts.
root@CentOS's password: 
Now try logging into the machine, with "ssh 'CentOS'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.
[root@CentOS ~]# ssh root@CentOS
Last login: Tue Mar 26 01:03:52 2019 from 192.168.40.1
[root@CentOS ~]# exit
logout
Connection to CentOS closed.
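
Passwordless login can also be verified non-interactively; BatchMode makes ssh fail instead of prompting, so a lingering password prompt shows up as an error:
[root@CentOS ~]# ssh -o BatchMode=yes CentOS 'hostname'   # should print: CentOS
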
  • Configure HDFS|YARN
    Extract hadoop-2.9.2.tar.gz into the system's /usr directory, then edit the [core|hdfs|yarn|mapred]-site.xml configuration files.

[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/core-site.xml

<!-- NameNode RPC entry point -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://CentOS:9000</value>
</property>
<!-- Base working directory for HDFS data -->
<property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop-2.9.2/hadoop-${user.name}</value>
</property>

[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/hdfs-site.xml

<!-- Block replication factor -->
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<!-- Host that runs the Secondary NameNode -->
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>CentOS:50090</value>
</property>
<!-- Maximum number of concurrent file transfers per DataNode -->
<property>
        <name>dfs.datanode.max.xcievers</name>
        <value>4096</value>
</property>
<!-- DataNode server thread count -->
<property>
        <name>dfs.datanode.handler.count</name>
        <value>6</value>
</property>

[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/yarn-site.xml

<!-- Auxiliary shuffle service required by MapReduce -->
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<!-- Host that runs the ResourceManager -->
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>CentOS</value>
</property>
<!-- Disable physical memory checks -->
<property>
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
</property>
<!-- Disable virtual memory checks -->
<property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
</property>

[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/mapred-site.xml

<!-- Run MapReduce jobs on YARN -->
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
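
With the files in place, hdfs getconf can confirm that the values are actually being picked up (it reads core-site.xml and hdfs-site.xml; the YARN settings get exercised once the daemons start):
[root@CentOS ~]# hdfs getconf -confKey fs.defaultFS    # expect: hdfs://CentOS:9000
[root@CentOS ~]# hdfs getconf -confKey dfs.replication # expect: 1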

  • Configure the Hadoop environment variables
[root@CentOS ~]# vi .bashrc
HADOOP_HOME=/usr/hadoop-2.9.2
JAVA_HOME=/usr/java/latest
CLASSPATH=.
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export JAVA_HOME
export CLASSPATH
export PATH
export HADOOP_HOME
[root@CentOS ~]# source .bashrc
  • Start the Hadoop services
[root@CentOS ~]# hdfs namenode -format # create the fsimage file required for first start
[root@CentOS ~]# start-dfs.sh
[root@CentOS ~]# start-yarn.sh

Visit http://CentOS:8088 (YARN ResourceManager UI) and http://CentOS:50070 (HDFS NameNode UI).
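
A few command-line checks that the daemons came up correctly:
[root@CentOS ~]# jps                   # expect NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager
[root@CentOS ~]# hdfs dfsadmin -report # should report one live datanode
[root@CentOS ~]# yarn node -list       # should list one RUNNING NodeManager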

Spark Environment
Download spark-2.4.5-bin-without-hadoop.tgz, extract it into /usr, rename the Spark directory to spark-2.4.5, and then edit spark-env.sh and spark-defaults.conf.

  • Extract and install Spark
[root@CentOS ~]# tar -zxf spark-2.4.5-bin-without-hadoop.tgz -C /usr/
[root@CentOS ~]# mv /usr/spark-2.4.5-bin-without-hadoop/ /usr/spark-2.4.5
[root@CentOS ~]# tree -L 1 /usr/spark-2.4.5/
/usr/spark-2.4.5/
├── bin  # Spark executable scripts
├── conf # Spark configuration directory
├── data
├── examples # Official Spark examples
├── jars
├── kubernetes
├── LICENSE
├── licenses
├── NOTICE
├── python
├── R
├── README.md
├── RELEASE
├── sbin # Spark daemon management scripts
└── yarn
  • Configure the Spark service
[root@CentOS ~]# cd /usr/spark-2.4.5/
[root@CentOS spark-2.4.5]# mv conf/spark-env.sh.template conf/spark-env.sh
[root@CentOS spark-2.4.5]# vi conf/spark-env.sh
# Options read in YARN client/cluster mode
# - SPARK_CONF_DIR, Alternate conf dir. (Default: ${SPARK_HOME}/conf)
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - YARN_CONF_DIR, to point Spark towards YARN configuration files when you use YARN
# - SPARK_EXECUTOR_CORES, Number of cores for the executors (Default: 1).
# - SPARK_EXECUTOR_MEMORY, Memory per Executor (e.g. 1000M, 2G) (Default: 1G)
# - SPARK_DRIVER_MEMORY, Memory for Driver (e.g. 1000M, 2G) (Default: 1G)

HADOOP_CONF_DIR=/usr/hadoop-2.9.2/etc/hadoop
YARN_CONF_DIR=/usr/hadoop-2.9.2/etc/hadoop
SPARK_EXECUTOR_CORES=2
SPARK_EXECUTOR_MEMORY=1G
SPARK_DRIVER_MEMORY=1G

LD_LIBRARY_PATH=/usr/hadoop-2.9.2/lib/native
export HADOOP_CONF_DIR
export YARN_CONF_DIR
export SPARK_EXECUTOR_CORES
export SPARK_DRIVER_MEMORY
export SPARK_EXECUTOR_MEMORY
export LD_LIBRARY_PATH
export SPARK_DIST_CLASSPATH=$(hadoop classpath):$SPARK_DIST_CLASSPATH
export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs:///spark-logs"
[root@CentOS spark-2.4.5]# mv conf/spark-defaults.conf.template conf/spark-defaults.conf
[root@CentOS spark-2.4.5]# vi conf/spark-defaults.conf
spark.eventLog.enabled=true
spark.eventLog.dir=hdfs:///spark-logs

The spark-logs directory must be created on HDFS first; the Spark history server uses it to store the event logs of completed jobs.
[root@CentOS ~]# hdfs dfs -mkdir /spark-logs
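
Confirm the directory exists; the history server fails to start if its log directory is missing:
[root@CentOS ~]# hdfs dfs -ls /   # expect an entry for /spark-logs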

  • Start the Spark history server
[root@CentOS spark-2.4.5]# ./sbin/start-history-server.sh
[root@CentOS spark-2.4.5]# jps
124528 HistoryServer
122690 NodeManager
122374 SecondaryNameNode
122201 DataNode
122539 ResourceManager
122058 NameNode
124574 Jps
  • Visit http://<host-ip>:18080 to open the Spark History Server
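
The history server also exposes Spark's monitoring REST API, so it can be probed from the shell as well:
[root@CentOS spark-2.4.5]# curl http://CentOS:18080/api/v1/applications   # JSON list of recorded applications (empty until a job has run)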

Testing the environment

[root@CentOS spark-2.4.5]# ./bin/spark-submit \
                            --master yarn \
                            --deploy-mode client \
                            --class org.apache.spark.examples.SparkPi \
                            --num-executors 2 \
                            --executor-cores 3 \
                            /usr/spark-2.4.5/examples/jars/spark-examples_2.11-2.4.5.jar

Expected output:

19/04/21 03:30:39 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 6609 ms on CentOS (executor 1) (1/2)
19/04/21 03:30:39 INFO scheduler.TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 6403 ms on CentOS (executor 1) (2/2)
19/04/21 03:30:39 INFO cluster.YarnScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool
19/04/21 03:30:39 INFO scheduler.DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:38) finished in 29.116 s
19/04/21 03:30:40 INFO scheduler.DAGScheduler: Job 0 finished: reduce at SparkPi.scala:38, took 30.317103 s
Pi is roughly 3.141915709578548
19/04/21 03:30:40 INFO server.AbstractConnector: Stopped Spark@41035930{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
19/04/21 03:30:40 INFO ui.SparkUI: Stopped Spark web UI at http://CentOS:4040
19/04/21 03:30:40 INFO cluster.YarnClientSchedulerBackend: Interrupting monitor thread
19/04/21 03:30:40 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors
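
Because the job ran on YARN, it should also show up in YARN's own application list:
[root@CentOS spark-2.4.5]# yarn application -list -appStates FINISHED   # the SparkPi run should appear here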

Spark Shell

[root@CentOS bin]# ./spark-shell --master yarn --deploy-mode client --executor-cores 4 --num-executors 3
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
19/04/17 01:46:04 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
Spark context Web UI available at http://CentOS:4040
Spark context available as 'sc' (master = yarn, app id = application_1555383933869_0004).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.5
      /_/

Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_171)
Type in expressions to have them evaluated.
Type :help for more information.

scala>
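
Besides interactive use, spark-shell executes whatever is piped to its standard input, which is convenient for quick smoke tests (a minimal sketch; the sum of 1..100 is arbitrary):
[root@CentOS bin]# echo 'println(sc.parallelize(1 to 100).sum())' | ./spark-shell --master yarn --deploy-mode client
# prints 5050.0 among the shell output
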
Spark Standalone

Hadoop Environment

  • Raise the CentOS process and open-file limits (optional)
[root@CentOS ~]# vi /etc/security/limits.conf

 * soft nofile 204800
 * hard nofile 204800
 * soft nproc 204800
 * hard nproc 204800

These entries raise the per-user limits on open files and processes; reboot CentOS for them to take effect.

  • Configure the hostname (takes effect after reboot)
[root@CentOS ~]# vi /etc/hostname
CentOS
[root@CentOS ~]# reboot
  • Set up the IP mapping
[root@CentOS ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.52.134 CentOS
  • Firewall service
# Stop the service for the current session
[root@CentOS ~]# systemctl stop firewalld
[root@CentOS ~]# firewall-cmd --state
not running
# Disable it on boot
[root@CentOS ~]# systemctl disable firewalld
  • Install JDK 1.8+
[root@CentOS ~]# rpm -ivh jdk-8u171-linux-x64.rpm
[root@CentOS ~]# ls -l /usr/java/
total 4
lrwxrwxrwx. 1 root root   16 Mar 26 00:56 default -> /usr/java/latest
drwxr-xr-x. 9 root root 4096 Mar 26 00:56 jdk1.8.0_171-amd64
lrwxrwxrwx. 1 root root   28 Mar 26 00:56 latest -> /usr/java/jdk1.8.0_171-amd64
[root@CentOS ~]# vi .bashrc
JAVA_HOME=/usr/java/latest
PATH=$PATH:$JAVA_HOME/bin
CLASSPATH=.
export JAVA_HOME
export PATH
export CLASSPATH
[root@CentOS ~]# source ~/.bashrc
  • Configure passwordless SSH
[root@CentOS ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
4b:29:93:1c:7f:06:93:67:fc:c5:ed:27:9b:83:26:c0 root@CentOS
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|         o   . . |
|      . + +   o .|
|     . = * . . . |
|      = E o . . o|
|       + =   . +.|
|        . . o +  |
|           o   . |
|                 |
+-----------------+
[root@CentOS ~]# ssh-copy-id CentOS
The authenticity of host 'centos (192.168.40.128)' can't be established.
RSA key fingerprint is 3f:86:41:46:f2:05:33:31:5d:b6:11:45:9c:64:12:8e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'centos,192.168.40.128' (RSA) to the list of known hosts.
root@CentOS's password: 
Now try logging into the machine, with "ssh 'CentOS'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.
[root@CentOS ~]# ssh root@CentOS
Last login: Tue Mar 26 01:03:52 2019 from 192.168.40.1
[root@CentOS ~]# exit
logout
Connection to CentOS closed.
  • Configure HDFS
    Extract hadoop-2.9.2.tar.gz into the system's /usr directory, then edit the [core|hdfs]-site.xml configuration files (standalone mode does not need the YARN or MapReduce configs).

[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/core-site.xml

<!-- NameNode RPC entry point -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://CentOS:9000</value>
</property>
<!-- Base working directory for HDFS data -->
<property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop-2.9.2/hadoop-${user.name}</value>
</property>

[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/hdfs-site.xml

<!-- Block replication factor -->
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<!-- Host that runs the Secondary NameNode -->
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>CentOS:50090</value>
</property>
<!-- Maximum number of concurrent file transfers per DataNode -->
<property>
        <name>dfs.datanode.max.xcievers</name>
        <value>4096</value>
</property>
<!-- DataNode server thread count -->
<property>
        <name>dfs.datanode.handler.count</name>
        <value>6</value>
</property>
  • Configure the Hadoop environment variables
[root@CentOS ~]# vi .bashrc
JAVA_HOME=/usr/java/latest
HADOOP_HOME=/usr/hadoop-2.9.2
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
CLASSPATH=.
export JAVA_HOME
export CLASSPATH
export PATH
export HADOOP_HOME
[root@CentOS ~]# source .bashrc
  • Start the Hadoop services
[root@CentOS ~]# hdfs namenode -format # create the fsimage file required for first start
[root@CentOS ~]# start-dfs.sh
[root@CentOS ~]# jps
122374 SecondaryNameNode
122201 DataNode
122058 NameNode
123036 Jps

Visit http://CentOS:50070/ (HDFS NameNode UI).

Spark Environment
Download spark-2.4.5-bin-without-hadoop.tgz, extract it into /usr, rename the Spark directory to spark-2.4.5, and then edit spark-env.sh and spark-defaults.conf.

  • Extract and install Spark
[root@CentOS ~]# tar -zxf spark-2.4.5-bin-without-hadoop.tgz -C /usr/
[root@CentOS ~]# mv /usr/spark-2.4.5-bin-without-hadoop/ /usr/spark-2.4.5
[root@CentOS ~]# tree -L 1 /usr/spark-2.4.5/
/usr/spark-2.4.5/
├── bin  # Spark executable scripts
├── conf # Spark configuration directory
├── data
├── examples # Official Spark examples
├── jars
├── kubernetes
├── LICENSE
├── licenses
├── NOTICE
├── python
├── R
├── README.md
├── RELEASE
├── sbin # Spark daemon management scripts
└── yarn
  • Configure the Spark service

[root@CentOS ~]# cd /usr/spark-2.4.5/
[root@CentOS spark-2.4.5]# mv conf/spark-env.sh.template conf/spark-env.sh
[root@CentOS spark-2.4.5]# vi conf/spark-env.sh

# Options for the daemons used in the standalone deploy mode
# - SPARK_MASTER_HOST, to bind the master to a different IP address or hostname
# - SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports for the master
# - SPARK_MASTER_OPTS, to set config properties only for the master (e.g. "-Dx=y")
# - SPARK_WORKER_CORES, to set the number of cores to use on this machine
# - SPARK_WORKER_MEMORY, to set how much total memory workers have to give executors (e.g. 1000m, 2g)
# - SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT, to use non-default ports for the worker
# - SPARK_WORKER_DIR, to set the working directory of worker processes
# - SPARK_WORKER_OPTS, to set config properties only for the worker (e.g. "-Dx=y")
# - SPARK_DAEMON_MEMORY, to allocate to the master, worker and history server themselves (default: 1g).
# - SPARK_HISTORY_OPTS, to set config properties only for the history server (e.g. "-Dx=y")
# - SPARK_SHUFFLE_OPTS, to set config properties only for the external shuffle service (e.g. "-Dx=y")
# - SPARK_DAEMON_JAVA_OPTS, to set config properties for all daemons (e.g. "-Dx=y")
# - SPARK_DAEMON_CLASSPATH, to set the classpath for all daemons
# - SPARK_PUBLIC_DNS, to set the public dns name of the master or workers

SPARK_MASTER_HOST=CentOS
SPARK_MASTER_PORT=7077
SPARK_WORKER_CORES=4
SPARK_WORKER_INSTANCES=2
SPARK_WORKER_MEMORY=2g

export SPARK_MASTER_HOST
export SPARK_MASTER_PORT
export SPARK_WORKER_CORES
export SPARK_WORKER_MEMORY
export SPARK_WORKER_INSTANCES

export LD_LIBRARY_PATH=/usr/hadoop-2.9.2/lib/native
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs:///spark-logs"

[root@CentOS spark-2.4.5]# mv conf/spark-defaults.conf.template conf/spark-defaults.conf
[root@CentOS spark-2.4.5]# vi conf/spark-defaults.conf

spark.eventLog.enabled=true
spark.eventLog.dir=hdfs:///spark-logs

The spark-logs directory must be created on HDFS first; the Spark history server uses it to store the event logs of completed jobs.

[root@CentOS ~]# hdfs dfs -mkdir /spark-logs

  • Start the Spark history server
[root@CentOS spark-2.4.5]# ./sbin/start-history-server.sh
[root@CentOS spark-2.4.5]# jps
124528 HistoryServer
122690 NodeManager
122374 SecondaryNameNode
122201 DataNode
122539 ResourceManager
122058 NameNode
124574 Jps
  • Visit http://<host-ip>:18080 to open the Spark History Server
  • Start Spark's own compute services (master and workers)
[root@CentOS spark-2.4.5]# ./sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/spark-2.4.5/logs/spark-root-org.apache.spark.deploy.master.Master-1-CentOS.out
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /usr/spark-2.4.5/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-CentOS.out
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /usr/spark-2.4.5/logs/spark-root-org.apache.spark.deploy.worker.Worker-2-CentOS.out
[root@CentOS spark-2.4.5]# jps
7908 Worker
7525 HistoryServer
8165 Jps
122374 SecondaryNameNode
7751 Master
122201 DataNode
122058 NameNode
7854 Worker

You can now visit http://CentOS:8080, the standalone master's web UI.
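
The master also serves the same cluster state as JSON, which is handy for scripting (a sketch; /json is the standalone master UI's JSON view):
[root@CentOS spark-2.4.5]# curl http://CentOS:8080/json   # workers, cores, memory, and running applications as JSON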

Testing the environment

[root@CentOS spark-2.4.5]# ./bin/spark-submit \
                            --master spark://CentOS:7077 \
                            --deploy-mode client \
                            --class org.apache.spark.examples.SparkPi \
                            --total-executor-cores 6 \
                            /usr/spark-2.4.5/examples/jars/spark-examples_2.11-2.4.5.jar

Expected output (the excerpt below was captured from the earlier YARN run, so it mentions YarnClientSchedulerBackend; a standalone run logs a standalone scheduler backend instead, but the final Pi line is the same):

19/04/21 03:30:39 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 6609 ms on CentOS (executor 1) (1/2)
19/04/21 03:30:39 INFO scheduler.TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 6403 ms on CentOS (executor 1) (2/2)
19/04/21 03:30:39 INFO cluster.YarnScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool
19/04/21 03:30:39 INFO scheduler.DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:38) finished in 29.116 s
19/04/21 03:30:40 INFO scheduler.DAGScheduler: Job 0 finished: reduce at SparkPi.scala:38, took 30.317103 s
Pi is roughly 3.141915709578548
19/04/21 03:30:40 INFO server.AbstractConnector: Stopped Spark@41035930{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
19/04/21 03:30:40 INFO ui.SparkUI: Stopped Spark web UI at http://CentOS:4040
19/04/21 03:30:40 INFO cluster.YarnClientSchedulerBackend: Interrupting monitor thread
19/04/21 03:30:40 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors

Spark Shell

[root@CentOS spark-2.4.5]# ./bin/spark-shell --master spark://CentOS:7077 --total-executor-cores 6
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://CentOS:4040
Spark context available as 'sc' (master = spark://CentOS:7077, app id = app-20200207140419-0003).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.5
      /_/

Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_231)
Type in expressions to have them evaluated.
Type :help for more information.

scala> sc.textFile("hdfs:///demo/words").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).sortBy(_._2,true).saveAsTextFile("hdfs:///demo/results")
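
The one-liner reads every file under /demo/words, splits lines into words, counts each word, sorts by count in ascending order, and writes the result back to HDFS. Assuming the input directory exists, the output can be inspected from another shell:
[root@CentOS spark-2.4.5]# hdfs dfs -cat /demo/results/part-*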