A Complete Guide to Decision Trees in Machine Learning
I. Constructing a Decision Tree
- Pros: low computational cost; results are easy to interpret; insensitive to missing intermediate values; can handle irrelevant features.
- Cons: prone to overfitting.
- Works with: numeric and nominal values.
General approach to decision trees:
- Collect data: any method.
- Prepare data: the tree-construction algorithm works only on nominal values, so numeric values must be quantized.
- Analyze data: any method; once the tree is built, visually inspect the plot to make sure it matches expectations.
- Train the algorithm: construct the tree's data structure.
- Test the algorithm: calculate the error rate with the learned tree.
- Use the algorithm: this step applies to any supervised learning task, and a decision tree additionally makes the inner meaning of the data easier to understand.
We will split the data set with the ID3 algorithm.
1. Information gain
The change in information before and after splitting a data set is called the information gain. Once we know how to compute it, we can evaluate the information gain of splitting on each feature; the feature with the highest information gain is the best one to split on.
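In symbols (using the entropy $H$ defined just below), the information gain of splitting a data set $D$ on a feature $A$ can be written as

$$\text{Gain}(D, A) = H(D) - \sum_{v} \frac{|D_v|}{|D|}\, H(D_v)$$

where $D_v$ is the subset of $D$ whose value of feature $A$ equals $v$. This is exactly the quantity the chooseBestFeatureToSplit function computes later in this section.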
Entropy is defined as the expected value of the information. If the items to classify can fall into multiple classes, the information of the symbol $x_i$ is defined as

$$l(x_i) = -\log_2 p(x_i)$$

where $p(x_i)$ is the probability of choosing that class. The entropy is then the expected information over all possible values of the class:

$$H = -\sum_{i=1}^{n} p(x_i)\,\log_2 p(x_i)$$

where $n$ is the number of classes. Now create a file named trees.py and implement the entropy calculation.
from math import log

def calcShannonEnt(dataSet):
    numEntries = len(dataSet)
    labelCounts = {}
    for featVec in dataSet:
        currentLabel = featVec[-1]
        if currentLabel not in labelCounts.keys():
            labelCounts[currentLabel] = 0  # add the label to the dictionary
        labelCounts[currentLabel] += 1     # count occurrences of each label
    shannonEnt = 0.0
    for key in labelCounts:
        prob = float(labelCounts[key]) / numEntries  # probability of class key
        shannonEnt -= prob * log(prob, 2)            # Shannon entropy formula
    return shannonEnt
Let's test it:
def createDataSet():
    dataSet = [[1, 1, 'yes'],
               [1, 1, 'yes'],
               [1, 0, 'no'],
               [0, 1, 'no'],
               [0, 1, 'no']]
    labels = ['no surfacing', 'flippers']
    return dataSet, labels
myData, labels = createDataSet()
print(calcShannonEnt(myData))
'''
0.9709505944546686
'''
The higher the entropy, the more mixed the data. If we add a new class label to the data set, the entropy rises:
myData[0][-1] = 'maybe'
print(calcShannonEnt(myData))
'''
1.3709505944546687
'''
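As a sanity check, both values can be derived by hand from the entropy formula. The original set has labels {2 × 'yes', 3 × 'no'}; after the change it has {1 × 'yes', 1 × 'maybe', 3 × 'no'}:

$$H = -\tfrac{2}{5}\log_2\tfrac{2}{5} - \tfrac{3}{5}\log_2\tfrac{3}{5} \approx 0.9710$$

$$H' = -\tfrac{1}{5}\log_2\tfrac{1}{5} - \tfrac{1}{5}\log_2\tfrac{1}{5} - \tfrac{3}{5}\log_2\tfrac{3}{5} \approx 1.3710$$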
2. Splitting the data set
In the previous section we measured the entropy of a split to judge how well the data set is divided: the larger the entropy, the more disordered the data. The function below splits dataSet on the feature at index axis, returning the entries whose value for that feature equals value:
# Return the subset of dataSet whose value in column axis equals value
def splitDataSet(dataSet, axis, value):
    retDataSet = []
    for featVec in dataSet:
        if featVec[axis] == value:           # keep rows where feature axis equals value
            reducedFeatVec = featVec[:axis]  # cut out the feature at index axis
            reducedFeatVec.extend(featVec[axis + 1:])
            retDataSet.append(reducedFeatVec)
    return retDataSet
print(splitDataSet(myData, 0, 1))
print(splitDataSet(myData, 0, 0))
'''
# the feature at index axis has been removed from each row
# (the first row still reads 'maybe' from the entropy demo above)
[[1, 'maybe'], [1, 'yes'], [0, 'no']]
[[1, 'no'], [1, 'no']]
'''
Python's append() and extend() methods behave differently:
a = [1,2,3,4,5]
b = [1,2]
b.append(a[2:])
# b = [1, 2, [3, 4, 5]]
c = [3,5]
c.extend(a[:3])
# c = [3, 5, 1, 2, 3]
Next, iterate over the whole data set to find the best feature to split on.
# Choose the best feature to split the whole data set on
def chooseBestFeatureToSplit(dataSet):
    numFeatures = len(dataSet[0]) - 1  # the last column is the class label
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain = 0.0
    bestFeature = -1
    for i in range(numFeatures):
        # use a list comprehension to collect every value of feature i
        featList = [example[i] for example in dataSet]
        uniqueVals = set(featList)  # a set keeps only the unique values
        newEntropy = 0.0
        for value in uniqueVals:
            # split on feature i
            subDataSet = splitDataSet(dataSet, i, value)
            # proportion of samples taking this value of feature i
            prob = len(subDataSet) / float(len(dataSet))
            # weighted sum of the subset entropies after splitting on feature i
            newEntropy += prob * calcShannonEnt(subDataSet)
        # information gain of splitting on feature i
        infoGain = baseEntropy - newEntropy
        if infoGain > bestInfoGain:
            bestInfoGain = infoGain
            bestFeature = i
    return bestFeature
myData[0][-1] = 'yes'  # restore the original label
print(myData)
print(chooseBestFeatureToSplit(myData))
'''
[[1, 1, 'yes'], [1, 1, 'yes'], [1, 0, 'no'], [0, 1, 'no'], [0, 1, 'no']]
0
'''
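The result can be verified against the gain formula from section 1. Splitting on feature 0 yields the subsets {yes, yes, no} (weight 3/5, entropy ≈ 0.9183) and {no, no} (weight 2/5, entropy 0); splitting on feature 1 yields {yes, yes, no, no} (weight 4/5, entropy 1.0) and {no} (weight 1/5, entropy 0):

$$\text{Gain}(0) = 0.9710 - \left(\tfrac{3}{5}\cdot 0.9183 + \tfrac{2}{5}\cdot 0\right) \approx 0.4200$$

$$\text{Gain}(1) = 0.9710 - \left(\tfrac{4}{5}\cdot 1.0 + \tfrac{1}{5}\cdot 0\right) \approx 0.1710$$

Feature 0 has the larger gain, so chooseBestFeatureToSplit returns 0.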
3. Recursively building the tree
First, define a function that returns the class that occurs most often in a list:
import operator

def majorityCnt(classList):
    classCount = {}
    for vote in classList:
        if vote not in classCount.keys():
            classCount[vote] = 0
        classCount[vote] += 1  # count occurrences of each label
    sortedClassCount = sorted(classCount.items(),
                              key=operator.itemgetter(1),
                              reverse=True)
    return sortedClassCount[0][0]  # the most frequent class label
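For reference, the same majority vote can be written more compactly with the standard library's collections.Counter. This majorityCnt2 is an equivalent alternative sketch, not the version used in the rest of the code:

from collections import Counter

def majorityCnt2(classList):
    # most_common(1) returns [(label, count)] for the most frequent label
    return Counter(classList).most_common(1)[0][0]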
Next, the recursive function that builds the tree:
def createTree(dataSet, labels):
    # collect every class label into classList
    classList = [example[-1] for example in dataSet]
    # if all labels are identical, stop splitting and return that class
    if classList.count(classList[0]) == len(classList):
        return classList[0]
    # if no features are left, return the majority class
    if len(dataSet[0]) == 1:
        return majorityCnt(classList)
    bestFeat = chooseBestFeatureToSplit(dataSet)
    bestFeatLabel = labels[bestFeat]
    myTree = {bestFeatLabel: {}}
    del(labels[bestFeat])
    featValues = [example[bestFeat] for example in dataSet]
    uniqueVals = set(featValues)  # the unique values of the chosen feature
    for value in uniqueVals:
        subLabels = labels[:]  # copy the labels so recursion doesn't mutate the caller's list
        myTree[bestFeatLabel][value] = \
            createTree(splitDataSet(dataSet, bestFeat, value), subLabels)
    return myTree
myTree = createTree(myData, labels)  # note: this deletes entries from labels
print(myTree)
'''
a nested dictionary:
{'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}
'''
II. Plotting Trees in Python with Matplotlib Annotations
(The original post showed a figure of a sample decision tree here.)
1. Matplotlib annotations
Matplotlib provides a very useful annotation tool, annotations, which can add text callouts to a data plot. Open a text editor and create a new file named treeplotter.py:
import matplotlib.pyplot as plt

# Define the box and arrow styles:
# boxstyle = "sawtooth" gives the annotation box a wavy edge,
# fc = '0.8' sets the gray fill level
decisionNode = dict(boxstyle="sawtooth", fc='0.8')
leafNode = dict(boxstyle='round4', fc='0.8')  # leaf boxes are rounded
arrow_args = dict(arrowstyle='<-')  # the arrow points at the text box, not the data point

def plotNode(nodeTxt, centerPt, parentPt, nodeType):
    createPlot.ax1.annotate(nodeTxt, xy=parentPt,
                            xycoords='axes fraction', xytext=centerPt,
                            textcoords='axes fraction',
                            va='center', ha='center', bbox=nodeType,
                            arrowprops=arrow_args)

def createPlot():
    # create a new figure
    fig = plt.figure(1, facecolor='white')
    # clear the drawing area
    fig.clf()
    createPlot.ax1 = plt.subplot(111, frameon=False)
    plotNode('decision node', (0.5, 0.1), (0.1, 0.5), decisionNode)
    plotNode('leaf node', (0.8, 0.1), (0.3, 0.8), leafNode)
    plt.show()
createPlot()
2. Constructing the annotated tree
We define functions that (1) count the number of leaf nodes, so the x axis can be sized correctly, and (2) measure how many levels the tree has, so the y axis can be sized correctly.
# Count the leaf nodes of a tree
def getNumLeafs(myTree):
    numLeafs = 0
    # In Python 3, dict.keys() returns a dict_keys view, which is iterable
    # but not indexable, so convert it to a list explicitly
    firstStr = list(myTree.keys())[0]  # the key: the feature tested at this node
    secondDict = myTree[firstStr]      # the value: the node's subtrees
    for key in secondDict.keys():
        # if the value is a dict, it is an internal node: recurse;
        # otherwise it is a leaf
        if type(secondDict[key]).__name__ == 'dict':
            numLeafs += getNumLeafs(secondDict[key])
        else:
            numLeafs += 1
    return numLeafs

# Measure the number of levels in a tree
def getTreeDepth(myTree):
    maxDepth = 0
    firstStr = list(myTree.keys())[0]
    secondDict = myTree[firstStr]
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':
            thisDepth = 1 + getTreeDepth(secondDict[key])
        else:
            thisDepth = 1
        if thisDepth > maxDepth:
            maxDepth = thisDepth
    return maxDepth

# Convenience function for testing: returns a predefined tree
def retrieveTree(i):
    listOfTrees = [{'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}},
                   {'no surfacing': {0: 'no', 1: {'flippers': {0: {'head': {0: 'no', 1: 'yes'}}, 1: 'no'}}}}
                   ]
    return listOfTrees[i]
print(getNumLeafs(retrieveTree(0)))
print(getTreeDepth(retrieveTree(0)))
'''
3
2
'''
Now put all the pieces together and draw a complete tree.
# Fill in text (the edge label) between a parent and a child node
def plotMidText(cntrPt, parentPt, txtString):
    xMid = (parentPt[0] - cntrPt[0]) / 2.0 + cntrPt[0]
    yMid = (parentPt[1] - cntrPt[1]) / 2.0 + cntrPt[1]
    createPlot.ax1.text(xMid, yMid, txtString, va="center", ha="center", rotation=30)

def plotTree(myTree, parentPt, nodeTxt):
    numLeafs = getNumLeafs(myTree)  # the leaf count determines the x width of this subtree
    depth = getTreeDepth(myTree)
    firstStr = list(myTree.keys())[0]  # the text label for this node
    cntrPt = (plotTree.xOff + (1.0 + float(numLeafs)) / 2.0 / plotTree.totalW, plotTree.yOff)
    plotMidText(cntrPt, parentPt, nodeTxt)
    plotNode(firstStr, cntrPt, parentPt, decisionNode)
    secondDict = myTree[firstStr]
    plotTree.yOff = plotTree.yOff - 1.0 / plotTree.totalD  # descend one level
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':
            plotTree(secondDict[key], cntrPt, str(key))  # internal node: recurse
        else:
            # leaf node: plot it and label the edge to it
            plotTree.xOff = plotTree.xOff + 1.0 / plotTree.totalW
            plotNode(secondDict[key], (plotTree.xOff, plotTree.yOff), cntrPt, leafNode)
            plotMidText((plotTree.xOff, plotTree.yOff), cntrPt, str(key))
    plotTree.yOff = plotTree.yOff + 1.0 / plotTree.totalD  # climb back up
def createPlot(inTree):
    fig = plt.figure(1, facecolor='white')
    fig.clf()
    axprops = dict(xticks=[], yticks=[])
    createPlot.ax1 = plt.subplot(111, frameon=False, **axprops)  # no ticks
    # createPlot.ax1 = plt.subplot(111, frameon=False)  # ticks, for demo purposes
    plotTree.totalW = float(getNumLeafs(inTree))
    plotTree.totalD = float(getTreeDepth(inTree))
    plotTree.xOff = -0.5 / plotTree.totalW
    plotTree.yOff = 1.0
    plotTree(inTree, (0.5, 1.0), '')
    plt.show()
myTree = retrieveTree(0)
createPlot(myTree)
III. Testing and Storing the Classifier
In this section we use the decision tree to build a classifier and show how to store it. The next section applies the algorithm to real data to check whether it can correctly predict which type of contact lenses a patient should use.
1. Testing the algorithm: classifying with the decision tree
The following recursive function classifies an unseen sample:
# Build a classifier on top of the decision tree.
# Arguments: the nested dict, the list of feature labels,
# and the test vector of feature values (0 or 1)
def classify(inputTree, featLabels, testVec):
    firstStr = list(inputTree.keys())[0]
    secondDict = inputTree[firstStr]
    # featIndex is the index of label firstStr in featLabels,
    # i.e. which column of the data set this node tests
    featIndex = featLabels.index(firstStr)
    classLabel = None  # default if no branch matches
    for key in secondDict.keys():
        if testVec[featIndex] == key:
            if type(secondDict[key]).__name__ == 'dict':
                classLabel = classify(secondDict[key], featLabels, testVec)
            else:
                classLabel = secondDict[key]
    return classLabel
myData, labels = createDataSet()  # re-create labels: createTree deleted entries from the old list
import treeplotter
m = treeplotter.retrieveTree(0)
print(classify(m, labels, [1, 0]))
print(classify(m, labels, [1, 1]))
print(classify(m, labels, [0, 1]))
'''
no
yes
no
'''
2. Using the algorithm: storing the decision tree
Python's pickle module serializes objects, so the tree can be saved to disk and read back whenever it is needed.
def storeTree(inputTree, filename):
    import pickle
    fw = open(filename, 'wb')  # pickle writes binary data
    pickle.dump(inputTree, fw)
    fw.close()

def grabTree(filename):
    import pickle
    fr = open(filename, 'rb')
    return pickle.load(fr)
#print(myTree)
storeTree(myTree, 'classifierstorage.txt')
grabTree('classifierstorage.txt')
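Since grabTree returns the nested dictionary itself, the round trip is easy to verify (a quick sketch; note that the file holds binary pickle data despite the .txt extension):

print(grabTree('classifierstorage.txt') == myTree)
'''
True
'''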
IV. Example: Predicting Contact Lens Type with a Decision Tree
Predicting contact lens type with a decision tree:
- Collect data: the provided text file.
- Prepare data: parse the tab-delimited lines.
- Analyze data: quickly inspect the parsed data to make sure it is correct; draw the final tree with the createPlot() function.
- Train the algorithm: use the createTree() function from Part I.
- Test the algorithm: write a test routine to verify that the tree correctly classifies given instances.
- Use the algorithm: store the tree's data structure so it doesn't have to be rebuilt next time.
fr = open('lenses.txt')  # the contact lens data set
lenses = [inst.strip().split('\t') for inst in fr.readlines()]
lensesLabels = ['age', 'prescript', 'astigmatic', 'tearRate']
lensesTree = createTree(lenses, lensesLabels)
treeplotter.createPlot(lensesTree)
Print the parsed data:
for i in range(len(lenses)):
    print(lenses[i])
'''
['young', 'myope', 'no', 'reduced', 'no lenses']
['young', 'myope', 'no', 'normal', 'soft']
['young', 'myope', 'yes', 'reduced', 'no lenses']
['young', 'myope', 'yes', 'normal', 'hard']
['young', 'hyper', 'no', 'reduced', 'no lenses']
... and so on
'''
print(lensesTree)
'''
{'tearRate': {'normal': {'astigmatic': {'no': {'age': {'presbyopic':
{'prescript': {'hyper': 'soft', 'myope': 'no lenses'}}, 'young': 'soft',
'pre': 'soft'}}, 'yes': {'prescript': {'hyper': {'age': {'presbyopic':
'no lenses', 'young': 'hard', 'pre': 'no lenses'}}, 'myope': 'hard'}}}},
'reduced': 'no lenses'}}
'''
Display the plot:
treeplotter.createPlot(lensesTree)
The tree above matches the training examples very closely, with branches for options that occur only rarely; this is called overfitting. The tree could be pruned, cutting away leaf nodes that carry little information. The method used throughout this article is the ID3 algorithm.
V. Summary
- Start by measuring the inconsistency of the data set: its entropy.
- Find the best split of the data set, the one that most reduces disorder, and repeat until every subset contains data of a single class.
- Use recursion to turn these splits into a decision tree, stored as nested dictionaries.
- Use the plotting functions to turn the dictionary into an easy-to-read diagram.
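Putting the pieces together, a minimal end-to-end run might look like this (a sketch that assumes the functions from the listings below are saved in trees.py with the module-level demo statements removed; note that createTree mutates the labels list, so we pass it a copy):

from trees import createDataSet, createTree, classify, storeTree, grabTree

myData, labels = createDataSet()
tree = createTree(myData, labels[:])    # pass a copy: createTree deletes used labels
print(classify(tree, labels, [1, 0]))   # -> 'no'
storeTree(tree, 'classifierstorage.txt')
print(grabTree('classifierstorage.txt') == tree)  # -> True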
Complete code
trees.py
from math import log

# Compute the Shannon entropy of a data set
def calcShannonEnt(dataSet):
    numEntries = len(dataSet)
    labelCounts = {}
    for featVec in dataSet:
        currentLabel = featVec[-1]
        if currentLabel not in labelCounts.keys():
            labelCounts[currentLabel] = 0  # add the label to the dictionary
        labelCounts[currentLabel] += 1     # count occurrences of each label
    shannonEnt = 0.0
    for key in labelCounts:
        prob = float(labelCounts[key]) / numEntries  # probability of class key
        shannonEnt -= prob * log(prob, 2)            # Shannon entropy formula
    return shannonEnt

def createDataSet():
    dataSet = [[1, 1, 'yes'],
               [1, 1, 'yes'],
               [1, 0, 'no'],
               [0, 1, 'no'],
               [0, 1, 'no']]
    labels = ['no surfacing', 'flippers']
    return dataSet, labels

myData, labels = createDataSet()
#print(calcShannonEnt(myData))
# Return the subset of dataSet whose value in column axis equals value
def splitDataSet(dataSet, axis, value):
    retDataSet = []
    for featVec in dataSet:
        if featVec[axis] == value:           # keep rows where feature axis equals value
            reducedFeatVec = featVec[:axis]  # cut out the feature at index axis
            reducedFeatVec.extend(featVec[axis + 1:])
            retDataSet.append(reducedFeatVec)
    return retDataSet
'''
print(myData)
print(splitDataSet(myData, 0, 1))
print(splitDataSet(myData, 0, 0))
'''
# Choose the best feature to split the whole data set on
def chooseBestFeatureToSplit(dataSet):
    numFeatures = len(dataSet[0]) - 1  # the last column is the class label
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain = 0.0
    bestFeature = -1
    for i in range(numFeatures):
        # use a list comprehension to collect every value of feature i
        featList = [example[i] for example in dataSet]
        uniqueVals = set(featList)  # a set keeps only the unique values
        newEntropy = 0.0
        for value in uniqueVals:
            # split on feature i
            subDataSet = splitDataSet(dataSet, i, value)
            # proportion of samples taking this value of feature i
            prob = len(subDataSet) / float(len(dataSet))
            # weighted sum of the subset entropies after splitting on feature i
            newEntropy += prob * calcShannonEnt(subDataSet)
        # information gain of splitting on feature i
        infoGain = baseEntropy - newEntropy
        if infoGain > bestInfoGain:
            bestInfoGain = infoGain
            bestFeature = i
    return bestFeature
#print(myData)
#print(chooseBestFeatureToSplit(myData))  # returns 0, i.e. the first feature
import operator

# Return the class that occurs most often in classList
def majorityCnt(classList):
    classCount = {}
    for vote in classList:
        if vote not in classCount.keys():
            classCount[vote] = 0
        classCount[vote] += 1  # count occurrences of each label
    sortedClassCount = sorted(classCount.items(),
                              key=operator.itemgetter(1),
                              reverse=True)
    return sortedClassCount[0][0]  # the most frequent class label
# Build the tree as nested dictionaries
def createTree(dataSet, labels):
    # collect every class label into classList
    classList = [example[-1] for example in dataSet]
    # if all labels are identical, stop splitting and return that class
    if classList.count(classList[0]) == len(classList):
        return classList[0]
    # if no features are left, return the majority class
    if len(dataSet[0]) == 1:
        return majorityCnt(classList)
    bestFeat = chooseBestFeatureToSplit(dataSet)
    bestFeatLabel = labels[bestFeat]
    myTree = {bestFeatLabel: {}}
    del(labels[bestFeat])
    featValues = [example[bestFeat] for example in dataSet]
    uniqueVals = set(featValues)  # the unique values of the chosen feature
    for value in uniqueVals:
        subLabels = labels[:]  # copy the labels so recursion doesn't mutate the caller's list
        myTree[bestFeatLabel][value] = \
            createTree(splitDataSet(dataSet, bestFeat, value), subLabels)
    return myTree
myTree = createTree(myData, labels)
# print(myTree)
'''
a nested dictionary:
{'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}
'''
# Build a classifier on top of the decision tree.
# Arguments: the nested dict, the list of feature labels,
# and the test vector of feature values (0 or 1)
def classify(inputTree, featLabels, testVec):
    firstStr = list(inputTree.keys())[0]
    secondDict = inputTree[firstStr]
    # featIndex is the index of label firstStr in featLabels,
    # i.e. which column of the data set this node tests
    featIndex = featLabels.index(firstStr)
    classLabel = None  # default if no branch matches
    for key in secondDict.keys():
        if testVec[featIndex] == key:
            if type(secondDict[key]).__name__ == 'dict':
                classLabel = classify(secondDict[key], featLabels, testVec)
            else:
                classLabel = secondDict[key]
    return classLabel
myData, labels = createDataSet()  # re-create labels: createTree deleted entries from the old list
import treeplotter
m = treeplotter.retrieveTree(0)
'''
print(classify(m, labels, [1, 0]))
print(classify(m, labels, [1, 1]))
print(classify(m, labels, [0, 1]))
'''
def storeTree(inputTree, filename):
    import pickle
    fw = open(filename, 'wb')  # pickle writes binary data
    pickle.dump(inputTree, fw)
    fw.close()

def grabTree(filename):
    import pickle
    fr = open(filename, 'rb')
    return pickle.load(fr)
#print(myTree)
#storeTree(myTree, 'classifierstorage.txt')
#grabTree('classifierstorage.txt')
fr = open('lenses.txt')  # the contact lens data set
lenses = [inst.strip().split('\t') for inst in fr.readlines()]
lensesLabels = ['age', 'prescript', 'astigmatic', 'tearRate']
lensesTree = createTree(lenses, lensesLabels)
treeplotter.createPlot(lensesTree)
for i in range(len(lenses)):
    print(lenses[i])
print(lensesTree)
treeplotter.py
import matplotlib.pyplot as plt

# Define the box and arrow styles:
# boxstyle = "sawtooth" gives the annotation box a wavy edge,
# fc = '0.8' sets the gray fill level
decisionNode = dict(boxstyle="sawtooth", fc='0.8')
leafNode = dict(boxstyle='round4', fc='0.8')  # leaf boxes are rounded
arrow_args = dict(arrowstyle='<-')  # the arrow points at the text box, not the data point

def plotNode(nodeTxt, centerPt, parentPt, nodeType):
    createPlot.ax1.annotate(nodeTxt, xy=parentPt,
                            xycoords='axes fraction', xytext=centerPt,
                            textcoords='axes fraction',
                            va='center', ha='center', bbox=nodeType,
                            arrowprops=arrow_args)

'''
def createPlot():
    # create a new figure
    fig = plt.figure(1, facecolor='white')
    # clear the drawing area
    fig.clf()
    createPlot.ax1 = plt.subplot(111, frameon=False)
    plotNode('decision node', (0.5, 0.1), (0.1, 0.5), decisionNode)
    plotNode('leaf node', (0.8, 0.1), (0.3, 0.8), leafNode)
    plt.show()
'''
# createPlot()
# Count the leaf nodes of a tree
def getNumLeafs(myTree):
    numLeafs = 0
    # In Python 3, dict.keys() returns a dict_keys view, which is iterable
    # but not indexable, so convert it to a list explicitly
    firstStr = list(myTree.keys())[0]  # the key: the feature tested at this node
    secondDict = myTree[firstStr]      # the value: the node's subtrees
    for key in secondDict.keys():
        # if the value is a dict, it is an internal node: recurse;
        # otherwise it is a leaf
        if type(secondDict[key]).__name__ == 'dict':
            numLeafs += getNumLeafs(secondDict[key])
        else:
            numLeafs += 1
    return numLeafs

# Measure the number of levels in a tree
def getTreeDepth(myTree):
    maxDepth = 0
    firstStr = list(myTree.keys())[0]
    secondDict = myTree[firstStr]
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':
            thisDepth = 1 + getTreeDepth(secondDict[key])
        else:
            thisDepth = 1
        if thisDepth > maxDepth:
            maxDepth = thisDepth
    return maxDepth

# Convenience function for testing: returns a predefined tree
def retrieveTree(i):
    listOfTrees = [{'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}},
                   {'no surfacing': {0: 'no', 1: {'flippers': {0: {'head': {0: 'no', 1: 'yes'}}, 1: 'no'}}}}
                   ]
    return listOfTrees[i]
# print(getNumLeafs(retrieveTree(0)))
# print(getTreeDepth(retrieveTree(0)))
# Fill in text (the edge label) between a parent and a child node
def plotMidText(cntrPt, parentPt, txtString):
    xMid = (parentPt[0] - cntrPt[0]) / 2.0 + cntrPt[0]
    yMid = (parentPt[1] - cntrPt[1]) / 2.0 + cntrPt[1]
    createPlot.ax1.text(xMid, yMid, txtString, va="center", ha="center", rotation=30)

def plotTree(myTree, parentPt, nodeTxt):
    numLeafs = getNumLeafs(myTree)  # the leaf count determines the x width of this subtree
    depth = getTreeDepth(myTree)
    firstStr = list(myTree.keys())[0]  # the text label for this node
    cntrPt = (plotTree.xOff + (1.0 + float(numLeafs)) / 2.0 / plotTree.totalW, plotTree.yOff)
    plotMidText(cntrPt, parentPt, nodeTxt)
    plotNode(firstStr, cntrPt, parentPt, decisionNode)
    secondDict = myTree[firstStr]
    plotTree.yOff = plotTree.yOff - 1.0 / plotTree.totalD  # descend one level
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':
            plotTree(secondDict[key], cntrPt, str(key))  # internal node: recurse
        else:
            # leaf node: plot it and label the edge to it
            plotTree.xOff = plotTree.xOff + 1.0 / plotTree.totalW
            plotNode(secondDict[key], (plotTree.xOff, plotTree.yOff), cntrPt, leafNode)
            plotMidText((plotTree.xOff, plotTree.yOff), cntrPt, str(key))
    plotTree.yOff = plotTree.yOff + 1.0 / plotTree.totalD  # climb back up
def createPlot(inTree):
    fig = plt.figure(1, facecolor='white')
    fig.clf()
    axprops = dict(xticks=[], yticks=[])
    createPlot.ax1 = plt.subplot(111, frameon=False, **axprops)  # no ticks
    # createPlot.ax1 = plt.subplot(111, frameon=False)  # ticks, for demo purposes
    plotTree.totalW = float(getNumLeafs(inTree))
    plotTree.totalD = float(getTreeDepth(inTree))
    plotTree.xOff = -0.5 / plotTree.totalW
    plotTree.yOff = 1.0
    plotTree(inTree, (0.5, 1.0), '')
    plt.show()
myTree = retrieveTree(1)
# createPlot(myTree)