Spark high availability (HA) and YARN
1. Configure spark-env.sh
# The primary master; on the standby master node, SPARK_MASTER_HOST points to that node itself (see below)
SPARK_MASTER_HOST=hadoop102
# ZooKeeper settings for master recovery; keep this on a single line
SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=hadoop101:2181,hadoop102:2181,hadoop103:2181 -Dspark.deploy.zookeeper.dir=/spark"
# Tell Spark where Hadoop's configuration lives; this assumes Hadoop is installed on every Spark node
HADOOP_CONF_DIR=/data/hadoop/hadoop-3.2.1/etc/hadoop/
On the standby master (hadoop101), set SPARK_MASTER_HOST to the node itself:
SPARK_MASTER_HOST=hadoop101
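To confirm that the masters are writing their recovery state into ZooKeeper, you can inspect the /spark znode configured above with the ZooKeeper CLI; a quick check, assuming ZooKeeper's usual bin/ layout:
# Connect to any node of the ensemble
bin/zkCli.sh -server hadoop101:2181
# Inside the CLI, list the recovery directory configured in SPARK_DAEMON_JAVA_OPTS
ls /spark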
2. Configure slaves
# Worker nodes
hadoop103
hadoop104
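spark-env.sh and slaves should be identical on every node (apart from SPARK_MASTER_HOST on the standby). A minimal way to push them out, assuming the hypothetical install path /data/spark on each host:
# Copy the edited config files to the other nodes
for host in hadoop101 hadoop103 hadoop104; do
  scp conf/spark-env.sh conf/slaves $host:/data/spark/conf/
done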
3. Startup
- Start ZooKeeper (run on every ZooKeeper node)
bin/zkServer.sh start
- Start Hadoop
sbin/start-all.sh
- Start Spark
sbin/start-all.sh
Stop Spark
sbin/stop-all.sh
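After the start scripts have run (and before stopping anything), a quick sanity check is to run jps on each node; you would expect a Master process on hadoop102, Worker processes on hadoop103/hadoop104, a QuorumPeerMain process wherever ZooKeeper runs, plus the Hadoop daemons (NameNode, DataNode, ResourceManager, NodeManager):
# List the running Java daemons on the current node
jps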
Note: when you open Spark's master web UI with ZooKeeper already running, Spark's default web UI port (8080) conflicts with ZooKeeper's, so Spark falls back to 8081; the port actually chosen is reported in the files under logs/.
# Check what is listening on the port; it is ZooKeeper that occupies 8080
netstat -anp|grep 8080
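If you prefer a fixed web UI port instead of letting Spark fall back to 8081, you can pin one in spark-env.sh (the value 8989 below is only an example):
# Fixed port for the master web UI, avoiding the clash with ZooKeeper's 8080
SPARK_MASTER_WEBUI_PORT=8989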
Start the standby master on hadoop101:
sbin/start-master.sh
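With both masters running, the active one shows Status: ALIVE in its web UI and the other shows Status: STANDBY; the simplest check is to open both UIs in a browser. A rough shell check, grepping each UI page for its status word (adjust the port if you pinned a different one above):
curl -s http://hadoop102:8081 | grep -Eo 'ALIVE|STANDBY' | head -n1
curl -s http://hadoop101:8081 | grep -Eo 'ALIVE|STANDBY' | head -n1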
4. Run the examples
# --master: submit the job to the master
# --executor-memory: memory per executor
# --total-executor-cores: total number of CPU cores across all executors
bin/spark-submit \
--master spark://hadoop102:7077 \
--name myPi \
--executor-memory 500m \
--total-executor-cores 2 \
--class org.apache.spark.examples.SparkPi \
examples/jars/spark-examples_2.11-2.4.4.jar \
100000
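Since this is an HA setup, the application can also be told about both masters at once: the standalone master URL accepts a comma-separated host list, so submission keeps working if the active master fails over. The same Pi example pointed at both masters:
bin/spark-submit \
--master spark://hadoop102:7077,hadoop101:7077 \
--name myPi \
--executor-memory 500m \
--total-executor-cores 2 \
--class org.apache.spark.examples.SparkPi \
examples/jars/spark-examples_2.11-2.4.4.jar \
100000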
# --master: submit the job to the master
# --executor-memory: memory per executor (allocated on the workers)
# --total-executor-cores: total CPU cores across all executors (allocated on the workers)
# --deploy-mode: deployment mode (cluster runs the driver inside the cluster)
bin/spark-submit \
--master spark://hadoop102:7077 \
--name myPi \
--deploy-mode cluster \
--executor-memory 500m \
--total-executor-cores 2 \
--class org.apache.spark.examples.SparkPi \
examples/jars/spark-examples_2.11-2.4.4.jar \
10000
# --driver-cores: number of cores for the driver, used in cluster mode; 2 here
# --driver-memory: memory for the driver; 500m here
bin/spark-submit \
--master spark://hadoop102:7077 \
--name myPi \
--deploy-mode cluster \
--executor-memory 500m \
--total-executor-cores 2 \
--driver-cores 2 \
--driver-memory 500m \
--class org.apache.spark.examples.SparkPi \
examples/jars/spark-examples_2.11-2.4.4.jar \
100
# --master: submit the job to YARN (client mode by default)
# --executor-memory: memory per executor (allocated on the workers)
# --total-executor-cores: total CPU cores across all executors (allocated on the workers)
# --deploy-mode: deployment mode
bin/spark-submit \
--master yarn \
--name myPi \
--executor-memory 500m \
--total-executor-cores 2 \
--class org.apache.spark.examples.SparkPi \
examples/jars/spark-examples_2.11-2.4.4.jar \
10000
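Note that --total-executor-cores is honored by the standalone (and Mesos) master but not by YARN; on YARN the usual way to size a job is --num-executors together with --executor-cores. A variant of the same submission under that assumption:
bin/spark-submit \
--master yarn \
--name myPi \
--num-executors 2 \
--executor-cores 1 \
--executor-memory 500m \
--class org.apache.spark.examples.SparkPi \
examples/jars/spark-examples_2.11-2.4.4.jar \
10000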
Cluster-mode submission
# --master: submit the job to YARN
# --executor-memory: memory per executor (allocated on the workers)
# --total-executor-cores: total CPU cores across all executors (allocated on the workers)
# --deploy-mode: deployment mode (cluster runs the driver inside the YARN cluster)
bin/spark-submit \
--master yarn \
--name myPi \
--deploy-mode cluster \
--executor-memory 500m \
--total-executor-cores 2 \
--class org.apache.spark.examples.SparkPi \
examples/jars/spark-examples_2.11-2.4.4.jar \
10000
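In yarn cluster mode the driver runs inside the ApplicationMaster, so the Pi result does not appear in the submitting console; it ends up in the YARN container logs. You can inspect the application with the standard YARN CLI (the application ID is printed by spark-submit and shown in the ResourceManager UI):
# List applications and fetch the logs of a finished one
yarn application -list -appStates ALL
yarn logs -applicationId <applicationId>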