
Spark MLlib Linear Regression: Implementation and Results

程序员文章站 2022-07-10 14:02:58

Implementation:
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.DataFrame
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.regression.LinearRegression
/**
  * Created by zhen on 2018/3/10.
  */
object LinearRegression {

  def main(args: Array[String]) {
    // Set up the Spark environment
    val spark = SparkSession.builder().appName("LinearRegression").master("local[2]").getOrCreate()
    val sc = spark.sparkContext
    val sqlContext = spark.sqlContext
    // Prepare the training set
    val raw_data = sc.textFile("src/sparkMLlib/man.txt")
    val map_data = raw_data.map{ x =>
      // Pad each comma with a space so empty fields survive the split,
      // then drop the final character (the line's trailing comma)
      val mid = x.replaceAll(","," ,")
      val split_list = mid.substring(0, mid.length-1).split(",")
      // Treat empty fields as 0.0 and trim the rest
      for(i <- 0 until split_list.length){
        if(split_list(i).trim.equals("")) split_list(i) = "0.0" else split_list(i) = split_list(i).trim
      }
      ( split_list(1).toDouble, split_list(2).toDouble, split_list(3).toDouble, split_list(4).toDouble,
        split_list(5).toDouble, split_list(6).toDouble, split_list(7).toDouble, split_list(8).toDouble,
        split_list(9).toDouble, split_list(10).toDouble, split_list(11).toDouble)
    }
    // Sample 60% of the data without replacement to train the model
    val mid = map_data.sample(false, 0.6, 0)
    val df = sqlContext.createDataFrame(mid)
    val colArray = Array("c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8", "c9", "c10", "c11")
    val data = df.toDF("c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8", "c9", "c10", "c11")
    val assembler = new VectorAssembler().setInputCols(colArray).setOutputCol("features")
    val vecDF = assembler.transform(data)
    // Prepare the prediction set (here the full dataset, including the training sample)
    val map_data_for_predict = map_data
    val df_for_predict = sqlContext.createDataFrame(map_data_for_predict)
    val data_for_predict = df_for_predict.toDF("c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8", "c9", "c10", "c11")
    val colArray_for_predict = Array("c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8", "c9", "c10", "c11")
    val assembler_for_predict = new VectorAssembler().setInputCols(colArray_for_predict).setOutputCol("features")
    val vecDF_for_predict: DataFrame = assembler_for_predict.transform(data_for_predict)
    // Build the model and run predictions
    // Configure the linear regression parameters
    val lr1 = new LinearRegression()
    // Note: the label column c5 also appears in the feature vector above
    val lr2 = lr1.setFeaturesCol("features").setLabelCol("c5").setFitIntercept(true)
    // RegParam: regularization strength; ElasticNetParam: L1/L2 mixing ratio
    val lr3 = lr2.setMaxIter(10).setRegParam(0.3).setElasticNetParam(0.8)
    // Fit the model on the training set
    val lrModel = lr3.fit(vecDF)
    // Print all model parameters
    println(lrModel.extractParamMap())
    // coefficients: feature weights; intercept: bias term
    println(s"Coefficients: ${lrModel.coefficients} Intercept: ${lrModel.intercept}")
    // Evaluate the model
    val trainingSummary = lrModel.summary
    trainingSummary.residuals.show()
    println(s"RMSE: ${trainingSummary.rootMeanSquaredError}") // root mean squared error
    println(s"r2: ${trainingSummary.r2}") // coefficient of determination; the closer to 1, the better the fit
    val predictions = lrModel.transform(vecDF_for_predict)
    val predict_result = predictions.selectExpr("features","c5", "round(prediction,1) as prediction")
    predict_result.rdd.saveAsTextFile("src/sparkMLlib/manResult")
    sc.stop()
  }
}
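The `replaceAll(",", " ,")` trick in `map_data` pads each comma with a space so that empty fields, including trailing ones, survive the split and can then be defaulted to 0.0. A more idiomatic alternative is to pass a negative limit to `split`, which preserves trailing empty strings. A minimal sketch in plain Scala (no Spark required, with a made-up sample line):

```scala
object SplitDemo {
  def main(args: Array[String]): Unit = {
    val line = "id,1.5,,2.7,"

    // Default split drops trailing empty fields
    val lossy = line.split(",")
    println(lossy.length) // 4: the trailing empty field is gone

    // A negative limit keeps every field, including trailing empties
    val full = line.split(",", -1)
    println(full.length) // 5

    // Default empty fields to 0.0, mirroring the cleanup in map_data
    val cleaned = full.tail.map(f => if (f.trim.isEmpty) 0.0 else f.trim.toDouble)
    println(cleaned.mkString(",")) // 1.5,0.0,2.7,0.0
  }
}
```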

Performance evaluation:

RMSE: 0.2968176690349843
r2: 0.9715059814474793
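For reference, the `rootMeanSquaredError` and `r2` values reported by the training summary follow the standard definitions: RMSE is the square root of the mean squared residual, and R² is one minus the ratio of the residual sum of squares to the total sum of squares. A minimal sketch in plain Scala, using hypothetical (label, prediction) pairs rather than the actual data from this run:

```scala
object MetricsDemo {
  def main(args: Array[String]): Unit = {
    // Hypothetical (label, prediction) pairs, not taken from the article's data
    val pairs = Seq((1.51, 1.7), (3.64, 3.4), (4.24, 3.9), (3.81, 3.6))

    val n = pairs.length
    val residuals = pairs.map { case (y, yHat) => y - yHat }

    // RMSE: square root of the mean squared residual
    val rmse = math.sqrt(residuals.map(r => r * r).sum / n)

    // R^2: 1 - SS_res / SS_tot, where SS_tot is taken about the mean label
    val meanY = pairs.map(_._1).sum / n
    val ssRes = residuals.map(r => r * r).sum
    val ssTot = pairs.map { case (y, _) => (y - meanY) * (y - meanY) }.sum
    val r2 = 1.0 - ssRes / ssTot

    println(f"RMSE: $rmse%.4f")
    println(f"R2:   $r2%.4f")
  }
}
```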

Results (features, c5, rounded prediction):

[[4.61,1.51,5.91,4.18,3.91,0.0,7.83,0.0,4.81,4.71,3.44,0.0,3.61,3.76],1.51,1.7]
[[3.1,3.64,1.6,2.57,3.16,0.0,5.6,0.0,1.84,2.77,0.0,2.4,0.0,2.53],3.64,3.4]
[[3.15,4.24,2.89,1.94,3.81,0.0,6.12,0.0,0.0,0.0,2.23,0.0,2.51,3.98],4.24,3.9]
[[2.13,3.81,3.5,3.29,3.47,0.0,0.0,0.0,2.16,2.06,1.65,0.0,3.37,3.93],3.81,3.6]
[[3.6,4.36,2.89,3.46,3.66,0.0,7.17,0.0,2.86,2.58,0.0,2.73,2.73,3.94],4.36,4.0]
[[2.65,3.58,3.9,3.63,2.71,0.0,5.91,0.0,3.63,3.08,2.33,0.0,1.79,2.54],3.58,3.4]
[(14,[0,1,2,3,4,6,8,9],[2.13,2.7,2.26,1.78,2.82,7.15,2.69,2.46]),2.7,2.6]
[[2.31,2.42,4.0,3.27,3.69,0.0,5.87,0.0,0.0,0.0,1.32,0.0,1.32,2.09],2.42,2.4]
[(14,[0,1,2,3,6,10,12,13],[3.4,4.12,3.04,2.76,9.55,1.44,3.61,3.95]),4.12,3.8]
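Note the two row formats above: most rows print a dense feature vector as a plain bracketed list, while rows such as `(14,[0,1,2,3,4,6,8,9],[...])` are Spark ML sparse vectors, printed as (size, [indices of the non-zero entries], [their values]). A minimal sketch in plain Scala (no Spark dependency, with a small hypothetical vector) that expands the sparse form back into its dense equivalent:

```scala
object SparseDemo {
  def main(args: Array[String]): Unit = {
    // Sparse form: total size, indices of the non-zero entries, and their values
    val size = 6
    val indices = Array(0, 2, 5)
    val values = Array(2.13, 7.15, 2.46)

    // Expand into the equivalent dense vector (zeros everywhere else)
    val dense = Array.fill(size)(0.0)
    for (i <- indices.indices) dense(indices(i)) = values(i)

    println(dense.mkString("[", ",", "]")) // [2.13,0.0,7.15,0.0,0.0,2.46]
  }
}
```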