PySpark environment setup
References:
1、https://jingyan.baidu.com/article/86fae346b696633c49121a30.html
Usage references:
1、https://www.gitbook.com/book/aiyanbo/spark-programming-guide-zh-cn/details
2、https://github.com/search?utf8=%E2%9C%93&q=pyspark&type=
3、http://spark.apache.org/docs/latest/quick-start.html
4、http://spark.apache.org/docs/latest/rdd-programming-guide.html
5、http://spark.apache.org/docs/latest/sql-programming-guide.html
6、http://spark.apache.org/docs/latest/ml-guide.html
7、http://spark.apache.org/docs/latest/api/python/index.html
This tutorial only covers the standalone (single-machine) installation; for the cluster (YARN) version, see:
1、http://blog.csdn.net/dream_an/article/details/52946840
2、http://www.itnose.net/detail/6478156.html
3、http://blog.csdn.net/wc781708249/article/details/78256309
Install the JDK
1. Download the JDK (Java 7+ is required)
This tutorial uses:
jdk-8u131-linux-x64.tar.gz
2. Extract it: tar -zxvf jdk-8u131-linux-x64.tar.gz
3. Configure the Java environment
vim /etc/profile
Append the following lines at the end:
export JAVA_HOME=/home/wu/down/jdk1.8.0_131
export JRE_HOME=/home/wu/down/jdk1.8.0_131/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
Note: this assumes jdk-8u131-linux-x64.tar.gz was extracted into the /home/wu/down directory.
After saving and closing, run: source /etc/profile
4. Check that Java works
In a terminal, run: java -version
If output like the following appears, the installation is complete:
openjdk version "1.8.0_131"
OpenJDK Runtime Environment (build 1.8.0_131-8u131-b11-2ubuntu1.16.04.3-b11)
OpenJDK 64-Bit Server VM (build 25.131-b11, mixed mode)
Install Spark
1. Download Spark from the official site
This tutorial uses:
spark-2.1.1-bin-hadoop2.7.tgz
2. Extract it
tar -zxvf spark-2.1.1-bin-hadoop2.7.tgz
3. Run it
cd /home/wu/down
mv spark-2.1.1-bin-hadoop2.7 spark
spark/bin/pyspark
Note: in this tutorial spark-2.1.1-bin-hadoop2.7 is stored in the /home/wu/down directory.
If the interactive shell starts up, everything is working.
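As a quick sanity check inside the shell (a minimal sketch; sc is the SparkContext that the pyspark shell creates for you automatically), you can print the Spark version, which should match the downloaded release:
>>> print(sc.version)   # should print 2.1.1 for this download
2.1.1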
Extra: change the log level (to suppress unnecessary output)
cd /home/wu/down
cd spark/conf/
cp log4j.properties.template log4j.properties
vim log4j.properties
Change log4j.rootCategory=INFO, console to log4j.rootCategory=WARN, console, then save.
Running spark/bin/pyspark again now produces much cleaner output.
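As an aside (not from the referenced guides), the same effect can also be had per session from inside the shell, since SparkContext exposes setLogLevel; this overrides log4j.properties only for the running context:
>>> sc.setLogLevel("WARN")   # valid levels include INFO, WARN and ERROR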
Run an example
1. Run in interactive mode
spark/bin/pyspark # start the interactive shell
>>> x = sc.parallelize([1,2,3]) # sc = spark context, parallelize creates an RDD from the passed object
>>> y = x.map(lambda x: (x,x**2))
>>> print(x.collect()) # collect copies RDD elements to a list on the driver
[1, 2, 3]
>>> print(y.collect())
[(1, 1), (2, 4), (3, 9)]
>>>
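Building on the same pattern, here is a small word-count sketch (the input strings are made up for illustration) that chains flatMap, map and reduceByKey before collecting the result on the driver:
>>> words = sc.parallelize(["a b", "b c", "a a"]).flatMap(lambda line: line.split(" "))
>>> counts = words.map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b)  # one (word, 1) pair per word, then sum per key
>>> print(sorted(counts.collect()))
[('a', 3), ('b', 2), ('c', 1)]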
2. Run a .py script with spark-submit
cd /home/wu/down
vim test.py
Enter the following:
from pyspark.context import SparkContext
from pyspark.conf import SparkConf
sc = SparkContext(conf=SparkConf().setAppName("The first example"))
x = sc.parallelize([1,2,3])
y = x.filter(lambda x: x%2 == 1) # filters out even elements
print(x.collect())
print(y.collect())
Run: spark/bin/spark-submit test.py. The output looks roughly like this:
17/10/13 16:19:56 WARN Utils: Your hostname, wu resolves to a loopback address: 127.0.1.1; using 192.168.101.211 instead (on interface enp3s0)
17/10/13 16:19:56 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
17/10/13 16:19:57 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[1, 2, 3]
[1, 3]
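A slightly tidier variant of test.py (a sketch, not the original script) sets the master explicitly for local runs and stops the context when finished; with spark-submit the master can equally be passed on the command line via --master instead:
from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("The first example").setMaster("local[*]")  # use all local cores
sc = SparkContext(conf=conf)
x = sc.parallelize([1, 2, 3])
print(x.filter(lambda n: n % 2 == 1).collect())  # keeps the odd elements: [1, 3]
sc.stop()  # shut the context down cleanly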
3. Run the .py script directly with python
Reference: http://blog.csdn.net/houmou/article/details/50925573
The pyspark and py4j modules ship inside the Spark installation directory, so they need to be added to PYTHONPATH; editing either ~/.bashrc or /etc/profile works.
vim ~/.bashrc        # or: sudo vim /etc/profile
# add the following lines
export SPARK_HOME=/home/wu/down/spark
export PYTHONPATH=$SPARK_HOME/python/:$SPARK_HOME/python/lib/py4j-0.10.4-src.zip:$PYTHONPATH
# apply the changes
source ~/.bashrc     # or: source /etc/profile
Run: python test.py
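To confirm that PYTHONPATH is being picked up (a quick check, assuming the paths above), import pyspark in a plain Python interpreter and look at where the module was loaded from; it should resolve to the Spark installation rather than fail with ImportError:
>>> import pyspark
>>> print(pyspark.__file__)   # expected to point somewhere under /home/wu/down/spark/python/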
4. Run the .py script with python3
Reference: https://stackoverflow.com/questions/30279783/apache-spark-how-to-use-pyspark-with-python-3
By default the script runs under Python 2 (and imports the packages installed for Python 2); Python 3 can be used instead as follows:
vim ~/.bashrc        # or: sudo vim /etc/profile
# add the following line
export PYSPARK_PYTHON=python3
# apply the changes
source ~/.bashrc     # or: source /etc/profile
Run: python3 test.py
Note: the interactive shell and spark-submit will now also use the interpreter and packages from Python 3.
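To verify which interpreter actually runs on the driver and inside the executors after this change, a minimal script like the following can be used (the app name is illustrative; run it with python3 or spark-submit):
import sys
from pyspark import SparkConf, SparkContext

sc = SparkContext(conf=SparkConf().setAppName("python version check"))
print("driver : " + sys.version.split()[0])   # interpreter running this driver script
print("workers: " + str(sc.parallelize([1]).map(lambda _: sys.version.split()[0]).collect()))  # interpreter used by the executors
sc.stop()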