
ensemble learning 1 —— bagging and Random Forest


When making an important decision, most people weigh the opinions of several experts rather than just one. Ensemble learning follows the same idea: it combines multiple learners to obtain a single learner that performs better than any of them alone.
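A quick back-of-the-envelope calculation shows why this can work. If the ensemble members make independent errors, a majority vote is right whenever more than half of them are right, and that probability grows with ensemble size. A minimal sketch (the 70% accuracy and the independence assumption are illustrative, not from any real model):

from scipy.stats import binom

# Probability that a majority vote of n independent classifiers is correct,
# when each individual classifier is correct with probability p
def majority_vote_accuracy(n, p):
    k = n // 2 + 1                # votes needed for a strict majority
    return binom.sf(k - 1, n, p)  # P(X >= k) for X ~ Binomial(n, p)

for n in (1, 5, 25):
    print(n, round(majority_vote_accuracy(n, 0.7), 3))
# 1 -> 0.7, 5 -> 0.837, 25 -> 0.985: the vote beats every individual member

In practice the members are never fully independent, which is exactly why bagging and random forests work hard to decorrelate them.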

Ensemble learning algorithms:
1. No strong dependency between individual learners: bagging
2. Random Forest (RF)
3. Strong dependency between individual learners: boosting
4. Stacking

Here I explain bagging and RF.
1. Bagging, also called bootstrap aggregating, is a technique that produces S new datasets by drawing from the original dataset S times. Each draw is sampling with replacement.
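Sampling with replacement means each new dataset has the same size as the original but may contain duplicate rows while missing others; on average only about 63% of the original samples appear in any given bootstrap set. A quick illustration with numpy (the toy array is made up for the demo):

import numpy as np

rng = np.random.default_rng(0)
data = np.arange(10)                # a toy "dataset" of 10 samples

# One bootstrap sample: draw 10 row indices with replacement
idx = rng.integers(0, len(data), size=len(data))
print(data[idx])                    # some rows repeat, others are absent
print(np.unique(idx).size)          # typically 6-7 distinct rows out of 10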
Code:

# The idea of bagging: train several classifiers in parallel and combine them
# Import the packages and the dataset
from sklearn import neighbors
from sklearn import datasets
from sklearn.ensemble import BaggingClassifier
from sklearn import tree
from sklearn.model_selection import train_test_split
import numpy as np
import matplotlib.pyplot as plt

iris = datasets.load_iris()
x_data = iris.data[:, :2]  # keep only the first two features so the decision boundary can be drawn in 2-D
y_data = iris.target

x_train,x_test,y_train,y_test = train_test_split(x_data, y_data)

knn = neighbors.KNeighborsClassifier()
knn.fit(x_train, y_train)

def plot(model):
    # Get the range of the data
    x_min, x_max = x_data[:, 0].min() - 1, x_data[:, 0].max() + 1
    y_min, y_max = x_data[:, 1].min() - 1, x_data[:, 1].max() + 1

    # Build a grid of points covering that range
    xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02),
                         np.arange(y_min, y_max, 0.02))

    # ravel flattens to 1-D like flatten, but flatten always returns a copy
    # while ravel returns a view of the original array when possible
    z = model.predict(np.c_[xx.ravel(), yy.ravel()])
    z = z.reshape(xx.shape)
    # Filled contour plot of the predicted classes
    cs = plt.contourf(xx, yy, z)

# Plot the decision boundary
plot(knn)
# Scatter plot of the samples
plt.scatter(x_data[:, 0], x_data[:, 1], c=y_data)
plt.show()
# Accuracy on the test set
print(knn.score(x_test, y_test))

dtree = tree.DecisionTreeClassifier()
dtree.fit(x_train, y_train)

# Plot the decision boundary
plot(dtree)
# Scatter plot of the samples
plt.scatter(x_data[:, 0], x_data[:, 1], c=y_data)
plt.show()
# Accuracy on the test set
print(dtree.score(x_test, y_test))

# Bagging ensemble of 100 KNN classifiers
bagging_knn = BaggingClassifier(knn, n_estimators=100)
# Fit the model on the training data
bagging_knn.fit(x_train, y_train)
plot(bagging_knn)
# Scatter plot of the samples
plt.scatter(x_data[:, 0], x_data[:, 1], c=y_data)
plt.show()
print(bagging_knn.score(x_test, y_test))

# Bagging ensemble of 100 decision trees
bagging_tree = BaggingClassifier(dtree, n_estimators=100)
# Fit the model on the training data
bagging_tree.fit(x_train, y_train)
plot(bagging_tree)
# Scatter plot of the samples
plt.scatter(x_data[:, 0], x_data[:, 1], c=y_data)
plt.show()
print(bagging_tree.score(x_test, y_test))
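Because each bootstrap sample leaves out roughly a third of the training rows, bagging can also estimate its generalization accuracy on those left-out rows, with no separate validation set. A minimal sketch using sklearn's oob_score option (variable names reuse the script above):

# Out-of-bag (OOB) evaluation: each base tree is scored on the
# training rows it never saw during fitting
bagging_oob = BaggingClassifier(tree.DecisionTreeClassifier(),
                                n_estimators=100, oob_score=True)
bagging_oob.fit(x_train, y_train)
print(bagging_oob.oob_score_)             # OOB accuracy estimate
print(bagging_oob.score(x_test, y_test))  # compare with held-out accuracy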


2. Random Forest (RF)
(1) Random samples: use bagging to draw n samples from the training set (sampling with replacement).
(2) Random features: from all d attributes, randomly select k of them (k < d), then choose the best splitting attribute among those k at each node when growing a CART decision tree.
(3) Repeat the two steps above m times to build m CART decision trees.
(4) The m CART trees form the random forest; the final class of a sample is decided by a majority vote of the trees.
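The four steps map almost directly onto code. Below is a bare-bones sketch of the procedure (the function names, m, k, and the per-tree feature subset are my simplifications; standard RF re-draws the k features at every split node, and in practice you would just use sklearn's RandomForestClassifier as in the script that follows):

import numpy as np
from sklearn import tree

def fit_forest(X, y, m=25, k=None, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    k = k or max(1, int(np.sqrt(d)))                 # common choice: k = sqrt(d)
    forest = []
    for _ in range(m):
        rows = rng.integers(0, n, size=n)            # (1) bootstrap the samples
        cols = rng.choice(d, size=k, replace=False)  # (2) random feature subset
        t = tree.DecisionTreeClassifier().fit(X[rows][:, cols], y[rows])
        forest.append((t, cols))                     # (3) repeat m times
    return forest

def predict_forest(forest, X):
    votes = np.array([t.predict(X[:, cols]) for t, cols in forest])
    # (4) majority vote of the m trees decides the class of each sample
    return np.apply_along_axis(
        lambda v: np.bincount(v.astype(int)).argmax(), 0, votes)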
Code:

from sklearn import tree
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
import numpy as np
import matplotlib.pyplot as plt

# Load the data
data = np.genfromtxt("LR-testSet2.txt", delimiter=",")
x_data = data[:, :-1]
y_data = data[:, -1]

plt.scatter(x_data[:, 0], x_data[:, 1], c=y_data)
plt.show()


x_train,x_test,y_train,y_test = train_test_split(x_data, y_data, test_size = 0.5)

def plot(model):
    # Get the range of the data
    x_min, x_max = x_data[:, 0].min() - 1, x_data[:, 0].max() + 1
    y_min, y_max = x_data[:, 1].min() - 1, x_data[:, 1].max() + 1

    # Build a grid of points covering that range
    xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02),
                         np.arange(y_min, y_max, 0.02))

    # ravel flattens to 1-D like flatten, but flatten always returns a copy
    # while ravel returns a view of the original array when possible
    z = model.predict(np.c_[xx.ravel(), yy.ravel()])
    z = z.reshape(xx.shape)
    # Filled contour plot of the predicted classes
    cs = plt.contourf(xx, yy, z)
    # Scatter plot of the test samples
    plt.scatter(x_test[:, 0], x_test[:, 1], c=y_test)
    plt.show()
# Decision tree classifier as a single-model baseline
dtree = tree.DecisionTreeClassifier()
dtree.fit(x_train, y_train)
plot(dtree)
print(dtree.score(x_test, y_test))
# Build an RF model with 50 trees, train it, and evaluate it
RF = RandomForestClassifier(n_estimators=50)
RF.fit(x_train, y_train)
plot(RF)
print(RF.score(x_test, y_test))
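Beyond accuracy, the fitted forest exposes which inputs drove the splits, which is often the more useful output. Both attributes below are part of sklearn's RandomForestClassifier API:

# Impurity-based importance of each input feature, averaged over the 50 trees
print(RF.feature_importances_)
# The individual fitted trees are also accessible for inspection
print(len(RF.estimators_))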