Kaggle: House Price Prediction
This was my first Kaggle competition. I collected a lot of examples online, formed my own understanding of the problem, and kept improving the code and resubmitting. In the end I still only scored 0.14513, but compared with my first submission's 0.45 that is real progress, haha.
Without further ado, here's the code:
import pandas as pd
import numpy as np
from sklearn.ensemble import RandomForestRegressor
train_data = pd.read_csv("./data/train.csv")
test_data = pd.read_csv("./data/test.csv")
# Pull out the label column
label_y = train_data.SalePrice
# Dropping the label column from the training data leaves just the features
train_data = train_data.drop(["SalePrice"], axis=1)
# Drop the string (object) features and predict with the numeric features only
X_train = train_data.select_dtypes(exclude=["object"])
X_test = test_data.select_dtypes(exclude=["object"])
# Drop every training feature that contains missing values
reduce_col = [col for col in X_train.columns if X_train[col].isnull().any()]
deal_X_train = X_train.drop(reduce_col, axis=1)
# Drop the same columns from the test data so the two stay aligned
deal_X_test = X_test.drop(reduce_col, axis=1)
# Drop the test features that still contain missing values
reduce_col = [col for col in deal_X_test.columns if deal_X_test[col].isnull().any()]
deal_X_train1 = deal_X_train.drop(reduce_col, axis=1)
deal_X_test1 = deal_X_test.drop(reduce_col, axis=1)
# Build and train a random forest model
model = RandomForestRegressor(n_estimators=10)
model.fit(deal_X_train1, label_y)
# Predict the label for the test data
pre_test_y = model.predict(deal_X_test1)
my_csv = pd.DataFrame({"Id": X_test.Id, "SalePrice": pre_test_y})
my_csv.to_csv("./my_csv/submission4.csv", index=False)
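A note on those scores: the competition scores submissions by the RMSE between the log of the predicted price and the log of the true price, so lower is better. A minimal sketch of estimating that number locally with a held-out split, reusing deal_X_train1 and label_y from the code above (the 20% split and the random_state here are arbitrary choices of mine, not part of the original code):
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
# hold back part of the training data as a local validation set
tr_X, val_X, tr_y, val_y = train_test_split(deal_X_train1, label_y, test_size=0.2, random_state=0)
local_model = RandomForestRegressor(n_estimators=10)
local_model.fit(tr_X, tr_y)
val_pred = local_model.predict(val_X)
# RMSE of the log prices, the same idea as the leaderboard score
print(np.sqrt(mean_squared_error(np.log(val_y), np.log(val_pred))))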
When I first typed up the code I didn't pay attention to whether the random forest I was using was the regressor or the classifier, and blindly picked the latter; because of that my first score was only 0.4-something, and after switching it over the score jumped to around 0.2. Besides dropping the features with missing values, this code also drops every feature that contains strings, so I wondered: could I convert the strings to numbers and keep them for prediction instead? Not having to delete features should improve accuracy, right? Without further ado, more code:
import pandas as pd
import numpy as np
from sklearn.ensemble import RandomForestRegressor
read_train = pd.read_csv("./data/train.csv", delimiter=",")
read_test = pd.read_csv("./data/test.csv", delimiter=",")
# Convert string (non-integer) features to integer codes
def data_to_num(dataframe):
    index = dataframe.columns
    for i in index:
        # any column whose first value is not an int64 gets re-encoded
        if type(dataframe[i][0]) != np.int64:
            features_node = dataframe[i].unique()
            n = 0
            # replace each distinct value with an increasing integer code
            for j in features_node:
                dataframe.loc[dataframe[i] == j, i] = n
                n += 1
    return dataframe
data_train = data_to_num(read_train)
data_test = data_to_num(read_test)
# print(data_train)
label_y = data_train.SalePrice
X_train = data_train.drop(["SalePrice"], axis=1)
X_test = data_test
reduce_col = [col for col in X_train.columns if X_train[col].isnull().any()]
deal_X_train = X_train.drop(reduce_col, axis=1)
deal_X_test = X_test.drop(reduce_col, axis=1)
reduce_col = [col for col in deal_X_test.columns if deal_X_test[col].isnull().any()]
deal_X_train1 = deal_X_train.drop(reduce_col, axis=1)
deal_X_test1 = deal_X_test.drop(reduce_col, axis=1)
model = RandomForestRegressor(n_estimators=10)
model.fit(deal_X_train1, label_y)
predict_test = model.predict(deal_X_test1)
print(predict_test)
my_csv = pd.DataFrame({"Id": X_test.Id, "SalePrice": predict_test})
my_csv.to_csv("./my_csv/submission5.csv", index=False)
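By the way, data_to_num above is a hand-rolled label encoding; pandas has pd.factorize built in for the same idea. A minimal sketch of that alternative, reusing the pandas import above (encode_objects is a name I made up, and unlike data_to_num it only touches string columns and maps NaN to -1, so the resulting codes differ):
def encode_objects(df):
    # copy so the original DataFrame is left untouched
    out = df.copy()
    for col in out.columns:
        if out[col].dtype == object:
            # replace each string column with integer codes; NaN becomes -1
            out[col], _ = pd.factorize(out[col])
    return out

encoded_train = encode_objects(pd.read_csv("./data/train.csv"))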
The final score came to about 0.15, which is not bad, so let's think about how to improve further. This is a regression problem... boosted-tree algorithms are supposedly well suited to regression, so let's swap the algorithm for gradient boosting trees. Also, I no longer want to drop the missing values; let's fill them with the mean instead. As for the string features, the integers they get mapped to aren't really continuous values, so could that hurt accuracy? Better to drop them after all. And some features have very large values, could that matter too? Let's standardize them. Change all of this at once and try again!
import pandas as pd
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.preprocessing import scale
from sklearn.impute import SimpleImputer
read_train = pd.read_csv("./data/train.csv", delimiter=",")
read_test = pd.read_csv("./data/test.csv", delimiter=",")
# Keep the numeric features only; the string features are dropped again
train_data = read_train.select_dtypes(exclude=["object"])
test_data = read_test.select_dtypes(exclude=["object"])
label_y = train_data.SalePrice
X_train = train_data.drop(["SalePrice"], axis=1)
X_test = test_data
# Fill missing values with the column mean instead of dropping those columns
imputer = SimpleImputer(missing_values=np.nan, strategy="mean")
deal_X_train = imputer.fit_transform(X_train)
deal_X_test = imputer.fit_transform(X_test)
# Standardize the features (zero mean, unit variance)
deal_X_train1 = scale(deal_X_train)
deal_X_test1 = scale(deal_X_test)
# Switch the model to gradient boosting trees
model = GradientBoostingRegressor()
model.fit(deal_X_train1, label_y)
predict = model.predict(deal_X_test1)
csv_file = pd.DataFrame({"Id": X_test.Id, "SalePrice": predict})
csv_file.to_csv("./my_csv/submission11.csv", index=False)
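One caveat about the block above: fit_transform is also called on the test set, so the imputer and scale recompute their means and variances from the test data rather than reusing the training-set statistics. A minimal sketch of keeping everything fitted on the training data only, by wrapping the same three steps in a scikit-learn Pipeline and reusing X_train, X_test and label_y from above (the names pipe and pipe_pred are mine):
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
pipe = Pipeline([
    ("impute", SimpleImputer(missing_values=np.nan, strategy="mean")),  # fill NaN with training-set means
    ("scale", StandardScaler()),  # standardize with training-set mean/std
    ("gbr", GradientBoostingRegressor()),
])
# every statistic comes from the training data
pipe.fit(X_train, label_y)
# the test data is only transformed, never fitted
pipe_pred = pipe.predict(X_test)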