How to Create a Scala Project with Maven in IntelliJ IDEA
Environment: IntelliJ IDEA
Versions: spark-2.2.1, scala-2.11.0
The first time I created a Scala project with Maven I ran into many pitfalls, so this article walks through creating a Scala WordCount program step by step.
Step 1: Install the Scala plugin in IntelliJ IDEA
Once the Scala plugin is installed, this step is done.
Step 2: Create the Maven project
Create a Maven project as usual (if you are unsure how, see my other article on Maven configuration).
Step 3: Download and configure Scala
The Spark download page states: "Note: Starting version 2.0, Spark is built with Scala 2.11 by default." For Spark 2.2 it is therefore recommended to use Scala 2.11.
On the Scala website, click the Download button, then find and download version 2.11.0 under the "Other Releases" heading.
Download the build that matches your operating system.
Next, configure the Scala environment variables (the same way as for the JDK).
Run scala -version to check the configuration; it should print: Scala code runner version 2.11.0 -- Copyright 2002-2013, LAMP/EPFL
In IntelliJ IDEA, select the path where you installed Scala.
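Besides running scala -version on the command line, the active Scala version can also be checked from Scala code itself, which confirms which runtime the project actually compiles against (a small sketch; the exact string printed depends on your installed version):

```scala
// Prints the version of the Scala runtime on the classpath,
// e.g. "version 2.11.0".
object VersionCheck {
  def main(args: Array[String]): Unit = {
    println(scala.util.Properties.versionString)
  }
}
```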
Step 4: Write the Scala program
First delete the generated sample code, otherwise it will cause errors while editing.
Configure the pom.xml file.
Add the Spark dependency to it:
<properties>
  <scala.version>2.11.0</scala.version>
  <spark.version>2.2.1</spark.version>
</properties>

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>${spark.version}</version>
</dependency>
The full pom.xml:
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>cn.spark</groupId>
  <artifactId>spark</artifactId>
  <version>1.0-SNAPSHOT</version>
  <inceptionYear>2008</inceptionYear>
  <properties>
    <scala.version>2.11.0</scala.version>
    <spark.version>2.2.1</spark.version>
  </properties>
  <pluginRepositories>
    <pluginRepository>
      <id>scala-tools.org</id>
      <name>Scala-Tools Maven2 Repository</name>
      <url>http://scala-tools.org/repo-releases</url>
    </pluginRepository>
  </pluginRepositories>
  <dependencies>
    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
      <version>${scala.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.4</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.specs</groupId>
      <artifactId>specs</artifactId>
      <version>1.2.5</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
  <build>
    <sourceDirectory>src/main/scala</sourceDirectory>
    <testSourceDirectory>src/test/scala</testSourceDirectory>
    <plugins>
      <plugin>
        <groupId>org.scala-tools</groupId>
        <artifactId>maven-scala-plugin</artifactId>
        <executions>
          <execution>
            <goals>
              <goal>compile</goal>
              <goal>testCompile</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <scalaVersion>${scala.version}</scalaVersion>
          <args>
            <arg>-target:jvm-1.5</arg>
          </args>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-eclipse-plugin</artifactId>
        <configuration>
          <downloadSources>true</downloadSources>
          <buildcommands>
            <buildcommand>ch.epfl.lamp.sdt.core.scalabuilder</buildcommand>
          </buildcommands>
          <additionalProjectnatures>
            <projectnature>ch.epfl.lamp.sdt.core.scalanature</projectnature>
          </additionalProjectnatures>
          <classpathContainers>
            <classpathContainer>org.eclipse.jdt.launching.JRE_CONTAINER</classpathContainer>
            <classpathContainer>ch.epfl.lamp.sdt.launching.SCALA_CONTAINER</classpathContainer>
          </classpathContainers>
        </configuration>
      </plugin>
    </plugins>
  </build>
  <reporting>
    <plugins>
      <plugin>
        <groupId>org.scala-tools</groupId>
        <artifactId>maven-scala-plugin</artifactId>
        <configuration>
          <scalaVersion>${scala.version}</scalaVersion>
        </configuration>
      </plugin>
    </plugins>
  </reporting>
</project>
Write the WordCount file:
package cn.spark

import org.apache.spark.{SparkConf, SparkContext}

/**
 * Created by hubo on 2018/1/13
 */
object WordCount {
  def main(args: Array[String]) {
    var masterUrl = "local"
    var inputPath = "/Users/huwenbo/Desktop/a.txt"
    var outputPath = "/Users/huwenbo/Desktop/out"

    if (args.length == 1) {
      masterUrl = args(0)
    } else if (args.length == 3) {
      masterUrl = args(0)
      inputPath = args(1)
      outputPath = args(2)
    }

    println(s"masterUrl:$masterUrl, inputPath: $inputPath, outputPath: $outputPath")

    val sparkConf = new SparkConf().setMaster(masterUrl).setAppName("WordCount")
    val sc = new SparkContext(sparkConf)

    val rowRdd = sc.textFile(inputPath)
    val resultRdd = rowRdd.flatMap(line => line.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    resultRdd.saveAsTextFile(outputPath)
  }
}
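The Spark job above needs a master and an input file to run. To see what the flatMap → map → reduceByKey pipeline actually computes, here is a minimal sketch of the same word-count logic on an ordinary Scala collection (the sample lines are made up; groupBy plays the role of reduceByKey):

```scala
// Word count over an in-memory collection, mirroring the RDD pipeline:
// split lines into words, then count occurrences of each word.
object WordCountSketch {
  def countWords(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.split("\\s+"))   // like rowRdd.flatMap(...)
      .filter(_.nonEmpty)         // drop empty tokens
      .groupBy(identity)          // group equal words together
      .map { case (word, occurrences) => (word, occurrences.size) } // like reduceByKey(_ + _)

  def main(args: Array[String]): Unit = {
    println(countWords(Seq("hello spark", "hello scala")))
  }
}
```

Running this prints a map in which "hello" counts 2 and "spark" and "scala" count 1 each, which is exactly what the Spark version writes to its output directory, one pair per line.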
var masterUrl = "local"
"local" means the job runs on your own machine; to run it on a Hadoop cluster, pass the corresponding master address instead.
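For reference, these are the common master URL values Spark accepts (the snippet assumes spark-core is on the classpath, as configured in the pom.xml above):

```scala
import org.apache.spark.SparkConf

// Common setMaster values:
//   "local"             -- one worker thread on this machine
//   "local[4]"          -- four local worker threads
//   "local[*]"          -- one thread per logical core
//   "spark://HOST:PORT" -- a standalone Spark cluster
//   "yarn"              -- Hadoop YARN (requires HADOOP_CONF_DIR to be set)
val conf = new SparkConf().setMaster("local[*]").setAppName("WordCount")
```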
The errors I ran into during configuration will be covered in a separate article.
That's all for this article; I hope it helps with your studies.