Remotely Submitting and Debugging Spark Programs with IDEA

This article uses a WordCount program to demonstrate remotely submitting and debugging Spark programs from IDEA.
Environment
- A three-node Spark cluster built on virtual machines:
spark1: 192.168.6.137
spark2: 192.168.6.138
spark3: 192.168.6.139
- idea-IU-2016.3.7

Prerequisite: the cluster and the machine used for debugging are on the same network segment.

I. Remotely submitting a Spark program from IDEA

The WordCount Scala program:

/**
  * Created by cuiyufei on 2018/2/13.
  */
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  // Master URL of the remote standalone cluster
  private val master = "spark://spark1:7077"
  // Input file on the cluster's HDFS
  private val remote_file = "hdfs://spark1:9000/user/spark/data/spark.txt"

  def main(args: Array[String]) {
    val conf = new SparkConf()
      .setAppName("WordCount")
      .setMaster(master)
      .set("spark.executor.memory", "512m")
      // Ship the locally built jar to the executors
      .setJars(List("D:\\JetBrains\\workspace\\WordCount\\out\\artifacts\\WordCount_jar\\WordCount.jar"))

    val sc = new SparkContext(conf)
    val textFile = sc.textFile(remote_file)
    val wordCount = textFile.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey((a, b) => a + b)
    // collect() brings the counts back to the driver so they appear in IDEA's console;
    // a bare wordCount.foreach(println) would print on the executors instead.
    wordCount.collect().foreach(println)
    sc.stop()
  }
}
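
Before submitting to the cluster, it can help to verify the same word-count logic locally. The sketch below is not from the original article; it is a minimal assumed helper that runs with a local master and an in-memory dataset, so no cluster or HDFS is needed:

import org.apache.spark.{SparkConf, SparkContext}

object WordCountLocal {
  def main(args: Array[String]): Unit = {
    // local[*] runs Spark inside the IDEA JVM using all local cores
    val conf = new SparkConf().setAppName("WordCountLocal").setMaster("local[*]")
    val sc = new SparkContext(conf)
    // In-memory input instead of HDFS, purely for a quick sanity check
    val lines = sc.parallelize(Seq("hello spark", "hello idea"))
    val counts = lines.flatMap(_.split(" ")).map(w => (w, 1)).reduceByKey(_ + _)
    counts.collect().foreach(println) // expect (hello,2), (spark,1), (idea,1)
    sc.stop()
  }
}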

The pom.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>WODAS</groupId>
    <artifactId>WordCount</artifactId>
    <version>1.0-SNAPSHOT</version>
    <properties>
        <spark.version>2.1.0</spark.version>
        <scala.version>2.11</scala.version>
    </properties>
    <repositories>
        <repository>
            <id>nexus-aliyun</id>
            <name>Nexus aliyun</name>
            <url>http://maven.aliyun.com/nexus/content/groups/public</url>
        </repository>
    </repositories>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_${scala.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_${scala.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_${scala.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-hive_${scala.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-mllib_${scala.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>

    </dependencies>

    <build>
        <plugins>

            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>
                <version>2.15.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>

            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.6.0</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>

            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.19</version>
                <configuration>
                    <skip>true</skip>
                </configuration>
            </plugin>

        </plugins>
    </build>

</project>

Two things to note for remote submission:
- setMaster(master): the master variable must point to the remote cluster's master URL (spark://spark1:7077 here), not a local master.
- setJars(List("D:\\JetBrains\\workspace\\WordCount\\out\\artifacts\\WordCount_jar\\WordCount.jar")): the path to the locally built jar. Build the artifact first; Spark ships this jar to the executors so they can load the application classes.

Once this is set up, click Run. A parameterized variant is sketched below.
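
As a variant (an illustrative assumption, not from the article), the master URL and jar path can be taken from the program arguments instead of being hard-coded, so the same code runs against a different cluster without edits:

import org.apache.spark.SparkConf

object ConfBuilder {
  // args(0): master URL, args(1): path to the built jar; both fall back to the
  // values used above when no arguments are given (the argument layout is illustrative).
  def buildConf(args: Array[String]): SparkConf = {
    val master = if (args.length > 0) args(0) else "spark://spark1:7077"
    val jar = if (args.length > 1) args(1)
              else "D:\\JetBrains\\workspace\\WordCount\\out\\artifacts\\WordCount_jar\\WordCount.jar"
    new SparkConf()
      .setAppName("WordCount")
      .setMaster(master)
      .setJars(List(jar))
  }
}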

II. Remotely debugging the program

1. First, add the following line to the cluster's spark-env.sh configuration file:

export SPARK_SUBMIT_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005"
The sub-options of -agentlib:jdwp (the JDWP debugging agent, the modern replacement for the older -Xdebug -Xrunjdwp flags) mean:
transport=dt_socket: the transport between the JPDA front end and back end; dt_socket means socket transport.
address=5005: the JVM listens for debugger connections on port 5005; any port that does not conflict with something else will do.
server=y: the launched JVM is the debuggee; with n it would act as the debugger instead.
suspend=y: the launched JVM pauses at startup and waits until a debugger attaches before continuing; with suspend=n it starts without waiting.
Note that SPARK_SUBMIT_OPTS applies to every spark-submit launched on that machine, so with suspend=y each submission blocks until a debugger connects.
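
If editing spark-env.sh cluster-wide is too intrusive, a possible per-job alternative (an assumption, not covered by the article) is to attach the JDWP agent only to this job's executors through Spark's spark.executor.extraJavaOptions setting:

import org.apache.spark.SparkConf

// suspend=n so the executors start normally and a debugger can attach later;
// assumes at most one executor per node, otherwise the fixed port 5005 clashes.
val conf = new SparkConf()
  .setAppName("WordCount")
  .set("spark.executor.extraJavaOptions",
    "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005")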

2. The Scala code is the same as for remote submission.
3. IDEA settings
Open the run configurations dialog. [screenshot]
Add a new Remote configuration. [screenshot]
Configure the remote connection according to the SPARK_SUBMIT_OPTS variable in the cluster's spark-env.sh: the port is 5005 (from the address sub-option above), and the host is the machine on which spark-submit runs. [screenshot]
Once configured, set breakpoints, then right-click the Scala program and choose Debug. [screenshots]