Last week I implemented decision-tree construction for discrete variables (the ID3 algorithm). It works by repeatedly picking the current best feature and splitting the data on every possible value of that feature: if a feature has 4 possible values, the data is cut into 4 parts. Once the data has been split on a feature, that split is fixed and the feature plays no further role in the rest of the algorithm, so this way of splitting is too aggressive. In addition, ID3 cannot handle continuous features directly.
First, a few more illustrated examples of decision trees generated with ID3.
The example is Table 5.1 from Chapter 5 of Li Hang's 《統(tǒng)計(jì)學(xué)習(xí)方法》 (*Statistical Learning Methods*): given age, employment, home ownership, and credit record, decide whether to grant a loan.
The tree ID3 generates is shown below (the plotting code is at the end, adapted from Peter Harrington's 《機(jī)器學(xué)習(xí)實(shí)戰(zhàn)》, *Machine Learning in Action*):
The effect is striking: from 15 unordered records we extract this concise decision tree, and with it the loan decision becomes trivial: if the applicant owns a house, lend; if not, but they have a job, still lend; if neither, don't lend. Far more compact and usable than Table 5.1.
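The rules read off this tree fit in a few lines; a minimal sketch (the function name and boolean encoding are my own, not from the book):

```python
def approve_loan(has_house: bool, has_job: bool) -> bool:
    """Loan decision distilled from the ID3 tree for Table 5.1:
    owns a house -> lend; otherwise has a job -> lend; else refuse."""
    if has_house:
        return True
    return has_job
```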
Here is another example, the good-melon data from Zhou Zhihua's 《機(jī)器學(xué)習(xí)》 (*Machine Learning*):
A watermelon is judged by six features (color, root, knock sound, texture, navel, touch), each taking 2-3 values. The tree ID3 generates is:
Here a node may have more than two branches, one per possible value of its feature. Again, a jumble of data turns into a clear decision tree.
**Summary:**
ID3 makes discrete problems clear and simple, but it has two limitations:
- it splits too aggressively
- it cannot handle continuous features directly
When a feature varies continuously, or can take many values, the tree ID3 produces is unsatisfactory and not of much use. And in most cases the whole point of building a decision tree is classification.
This week the tree-building algorithm is CART. Unlike ID3 it uses binary splits: if a feature's value is greater than the given threshold, go down the left subtree, otherwise the right. This removes ID3's two limitations, but when used for classification the resulting tree tends to be too greedy, fitting most of the training data and overfitting. To improve generalization the tree must be pruned, collapsing some nodes into a single class.
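The binary test itself can be sketched in plain Python (an illustration only; the book's numpy version, binSplitDataSet, appears in regTrees.py at the end):

```python
def bin_split(rows, feat, value):
    """Binary CART split: rows whose feature exceeds value go left,
    the rest go right."""
    left = [r for r in rows if r[feat] > value]
    right = [r for r in rows if r[feat] <= value]
    return left, right
```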
This post builds CART in two ways (code at the end).
The first is the construction for continuous data from Peter Harrington's 《機(jī)器學(xué)習(xí)實(shí)戰(zhàn)》 (*Machine Learning in Action*); the pseudocode of the core step (choosing the best feature) is:
for each feature:
    for each value of that feature:
        split the data into two parts
        compute the error of the split
        if the error is less than the current minimum error:
            update the minimum error
            update the current best feature and best split point
return the best split feature and best split point
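The loop above, with total squared error as the split error, can be sketched as follows (a simplified pure-Python illustration with a single feature and no pre-pruning thresholds; the full version is chooseBestSplit in regTrees.py at the end):

```python
def sq_err(ys):
    """Total squared error of ys around their mean."""
    if not ys:
        return 0.0
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def best_split(rows):
    """rows: list of (x, y) pairs. Return the (split value, error)
    minimising the summed squared error of the two halves."""
    best = (None, float('inf'))
    for x, _ in rows:
        left = [y for xi, y in rows if xi > x]
        right = [y for xi, y in rows if xi <= x]
        err = sq_err(left) + sq_err(right)
        if err < best[1]:
            best = (x, err)
    return best
```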
The second uses the Gini index, following Li Hang's 《統(tǒng)計(jì)學(xué)習(xí)方法》; the program is my own and currently handles only discrete data. The pseudocode of the core step:
for each feature:
    for each value of that feature:
        split the data into two parts
        compute the Gini index of the split
        if the Gini index is less than the current minimum Gini index:
            update the minimum Gini index
            update the current best feature and best split point
return the best split feature and best split point
Only the error measure changes to the Gini index; the rest is essentially the same.
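For reference, the Gini index of a set of labels is 1 - Σ p_k²; a minimal standalone version (equivalent in spirit to calcGini in CARTTree.py at the end):

```python
from collections import Counter

def gini(labels):
    """Gini index: 1 minus the sum of squared class proportions."""
    n = len(labels)
    counts = Counter(labels)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())
```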
The CART trees for the two earlier examples are:
Figure 5 is the same as Figure 2, because the splitting feature there has only two values; Figure 6, however, differs from Figure 4.
Now build a tree on continuous data, from ex0.txt in Chapter 9 of Peter Harrington's 《機(jī)器學(xué)習(xí)實(shí)戰(zhàn)》.
By eye the data falls into 5 segments; CART produces:
{'spInd': 0, 'spVal': 0.39434999999999998, 'left': {'spInd': 0, 'spVal': 0.58200200000000002, 'left': {'spInd': 0, 'spVal': 0.79758300000000004, 'left': 3.9871631999999999, 'right': 2.9836209534883724}, 'right': 1.980035071428571}, 'right': {'spInd': 0, 'spVal': 0.19783400000000001, 'left': 1.0289583666666666, 'right': -0.023838155555555553}}
(I did not feel like drawing another figure, so the tree is shown as a dict: spInd is the splitting feature, spVal the split value, left the left child, right the right child.)
The dict clearly splits the data into 5 segments, but only because ops=(1,4) was well chosen and pre-pruned the tree.
What happens with ops=(0.1,0.4)?
{'spInd': 0, 'spVal': 0.39434999999999998, 'left': {'spInd': 0, 'spVal': 0.58200200000000002, 'left': {'spInd': 0, 'spVal': 0.79758300000000004, 'left': {'spInd': 0, 'spVal': 0.81900600000000001, 'left': {'spInd': 0, 'spVal': 0.83269300000000002, 'left': 3.9814298333333347, 'right': {'spInd': 0, 'spVal': 0.81913599999999998, 'left': 4.5692899999999996, 'right': 4.048082}}, 'right': 3.7688410000000001}, 'right': {'spInd': 0, 'spVal': 0.62039299999999997, 'left': {'spInd': 0, 'spVal': 0.62261599999999995, 'left': 2.9787170277777779, 'right': 2.6702779999999997}, 'right': {'spInd': 0, 'spVal': 0.61605100000000002, 'left': 3.5225040000000001, 'right': 3.0497069999999997}}}, 'right': {'spInd': 0, 'spVal': 0.48669800000000002, 'left': {'spInd': 0, 'spVal': 0.53324099999999997, 'left': {'spInd': 0, 'spVal': 0.55900899999999998, 'left': 2.0720909999999999, 'right': 1.8145387500000001}, 'right': 2.0843065555555551}, 'right': 1.8810897500000001}}, 'right': {'spInd': 0, 'spVal': 0.19783400000000001, 'left': {'spInd': 0, 'spVal': 0.21054200000000001, 'left': {'spInd': 0, 'spVal': 0.37526999999999999, 'left': 1.2040690000000001, 'right': {'spInd': 0, 'spVal': 0.316465, 'left': 0.86561450000000006, 'right': {'spInd': 0, 'spVal': 0.23417499999999999, 'left': 1.1113766363636364, 'right': 0.90613224999999997}}}, 'right': 1.3753635000000002}, 'right': {'spInd': 0, 'spVal': 0.14865400000000001, 'left': 0.071894545454545447, 'right': {'spInd': 0, 'spVal': 0.14314299999999999, 'left': -0.27792149999999999, 'right': -0.040866062499999994}}}}
Clearly overfitting: many unnecessary nodes were generated. In practice we cannot control the scale of the data values, so a good ops is hard to pick, and the choice of ops strongly affects the result. Pre-pruning alone is far from enough.
Hence post-pruning. In short: choose ops so that the tree grows large enough, then walk from the top down to the leaf nodes and use a test set to check whether merging leaves would lower the test error; if so, merge. Pseudocode:
split the test data according to the existing tree:
    if either subset is itself a tree, recurse the pruning process on it
    compute the error after merging the two current leaf nodes
    compute the error before merging
    if the merged error is less than the unmerged error:
        merge the two leaf nodes
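The merge test boils down to comparing squared error on the held-out targets before and after collapsing two sibling leaves into their average (a sketch; the function name is mine, the real logic lives in prune() in regTrees.py at the end):

```python
def should_merge(left_val, right_val, l_ys, r_ys):
    """Return True if replacing the two leaf predictions by their
    average reduces squared error on the test targets l_ys / r_ys."""
    err_split = sum((y - left_val) ** 2 for y in l_ys) + \
                sum((y - right_val) ** 2 for y in r_ys)
    merged = (left_val + right_val) / 2.0
    err_merged = sum((y - merged) ** 2 for y in l_ys + r_ys)
    return err_merged < err_split
```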
Pruning the tree above: since there is no separate test set, I treat the first 150 records as training data and the last 50 as test data. The plot:
With ops=(0.1,0.4) as before, the pruned tree is:
{'spInd': 0, 'spVal': 0.39434999999999998, 'left': {'spInd': 0, 'spVal': 0.58028299999999999, 'left': {'spInd': 0, 'spVal': 0.79758300000000004, 'left': 3.9739993000000005, 'right': 3.0065657575757574}, 'right': 1.9667640539772728}, 'right': {'spInd': 0, 'spVal': 0.19783400000000001, 'left': 1.0753531944444445, 'right': -0.028014558823529413}}
A tree that complex is pruned down to just five leaf values. Not bad.
The implementation code follows:
treePlotter.py
'''
Created on 2017-07-30
@author: fujianfei
'''
import matplotlib.pyplot as plt
plt.rcParams['font.sans-serif']=['SimHei']#use SimHei so Chinese labels in figures are not garbled
decisionNode = dict(boxstyle="sawtooth", fc="0.8")
leafNode = dict(boxstyle="round4", fc="0.8")
arrow_args = dict(arrowstyle="<-")
def getNumLeafs(myTree):
numLeafs = 0
firstSides = list(myTree.keys())
firstStr = firstSides[0]#the first key, i.e. the feature split on at this node
secondDict = myTree[firstStr]
for key in secondDict.keys():
if type(secondDict[key]).__name__=='dict':#test to see if the nodes are dictionaries, if not they are leaf nodes
numLeafs += getNumLeafs(secondDict[key])
else: numLeafs +=1
return numLeafs
def getTreeDepth(myTree):
maxDepth = 0
firstSides = list(myTree.keys())
firstStr = firstSides[0]#the first key, i.e. the feature split on at this node
secondDict = myTree[firstStr]
for key in secondDict.keys():
if type(secondDict[key]).__name__=='dict':#test to see if the nodes are dictionaries, if not they are leaf nodes
thisDepth = 1 + getTreeDepth(secondDict[key])
else: thisDepth = 1
if thisDepth > maxDepth: maxDepth = thisDepth
return maxDepth
def plotNode(nodeTxt, centerPt, parentPt, nodeType):
createPlot.ax1.annotate(nodeTxt, xy=parentPt, xycoords='axes fraction',
xytext=centerPt, textcoords='axes fraction',
va="center", ha="center", bbox=nodeType, arrowprops=arrow_args )
def plotMidText(cntrPt, parentPt, txtString):
xMid = (parentPt[0]-cntrPt[0])/2.0 + cntrPt[0]
yMid = (parentPt[1]-cntrPt[1])/2.0 + cntrPt[1]
createPlot.ax1.text(xMid, yMid, txtString, va="center", ha="center", rotation=30)
def plotTree(myTree, parentPt, nodeTxt):#if the first key tells you what feat was split on
numLeafs = getNumLeafs(myTree) #this determines the x width of this tree
#depth = getTreeDepth(myTree)
firstSides = list(myTree.keys())
firstStr = firstSides[0]#the first key, i.e. the feature split on at this node
cntrPt = (plotTree.xOff + (1.0 + float(numLeafs))/2.0/plotTree.totalW, plotTree.yOff)
plotMidText(cntrPt, parentPt, nodeTxt)
plotNode(firstStr, cntrPt, parentPt, decisionNode)
secondDict = myTree[firstStr]
plotTree.yOff = plotTree.yOff - 1.0/plotTree.totalD
for key in secondDict.keys():
if type(secondDict[key]).__name__=='dict':#test to see if the nodes are dictionaries, if not they are leaf nodes
plotTree(secondDict[key],cntrPt,str(key)) #recursion
else: #it's a leaf node print the leaf node
plotTree.xOff = plotTree.xOff + 1.0/plotTree.totalW
plotNode(secondDict[key], (plotTree.xOff, plotTree.yOff), cntrPt, leafNode)
plotMidText((plotTree.xOff, plotTree.yOff), cntrPt, str(key))
plotTree.yOff = plotTree.yOff + 1.0/plotTree.totalD
#if you do get a dictonary you know it's a tree, and the first element will be another dict
def createPlot(inTree):
fig = plt.figure(1, facecolor='white')
fig.clf()
axprops = dict(xticks=[], yticks=[])
createPlot.ax1 = plt.subplot(111, frameon=False, **axprops) #no ticks
#createPlot.ax1 = plt.subplot(111, frameon=False) #ticks for demo puropses
plotTree.totalW = float(getNumLeafs(inTree))
plotTree.totalD = float(getTreeDepth(inTree))
plotTree.xOff = -0.5/plotTree.totalW; plotTree.yOff = 1.0;
plotTree(inTree, (0.5,1.0), '')
plt.show()
#def createPlot():
# fig = plt.figure(1, facecolor='white')
# fig.clf()
# createPlot.ax1 = plt.subplot(111, frameon=False) #ticks for demo puropses
# plotNode('a decision node', (0.5, 0.1), (0.1, 0.5), decisionNode)
# plotNode('a leaf node', (0.8, 0.1), (0.3, 0.8), leafNode)
# plt.show()
# def retrieveTree(i):
# listOfTrees =[{'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}},
# {'no surfacing': {0: 'no', 1: {'flippers': {0: {'head': {0: 'no', 1: 'yes'}}, 1: 'no'}}}}
# ]
# return listOfTrees[i]
#createPlot(thisTree)
CARTTree.py
'''
Created on 2017-08-02
@author: Administrator
'''
import operator
class TreeNode(object):
'''
.Definition of a tree node:
'''
def __init__(self, feat=None, val=None, left=None, right=None):
'''
feat: the feature this node splits on, e.g. '年齡' (age)
val: the value the split classifies by, e.g. '青年' (young)
left: the left branch
right: the right branch
'''
self.feat = feat
self.val = val
self.left = left
self.right = right
def calcGini(dataSet):
'''
.Compute the prediction error of the training data, here the Gini index
'''
num = len(dataSet)#number of samples (rows) in the data set
labelCounts = {}#count how many samples each class label has
for featVec in dataSet:#iterate over the data set
label = featVec[-1]#the class label is the last column
if label not in labelCounts.keys():#first time this label appears
labelCounts[label] = 0#create an entry with count 0
labelCounts[label] += 1#increment the count for this label
gini = 0.0 #accumulator for the sum of squared class probabilities
for key in labelCounts.keys():
prop = float(labelCounts[key])/num #the probability of each class
gini += prop ** 2 #add up the squared class probabilities
return 1-gini#1 minus the sum of squares is the Gini index
def splitDataSet(dataSet, featAndVal):
'''
.Split dataSet by feature feat (e.g. age) and one of its values val (e.g. young)
.into two parts: sub_dateSet1 (rows matching the value) and sub_dateSet2 (the rest), and return both
.The two subsets can then be used to compute the conditional Gini index Gini(D,A)
'''
sub_dateSet1 = []
sub_dateSet2 = []
for featVec in dataSet:
if featVec[featAndVal[0]] == featAndVal[1]:
reduceDataSet = featVec[:featAndVal[0]]
reduceDataSet.extend(featVec[featAndVal[0]+1:])
sub_dateSet1.append(reduceDataSet)
else:
reduceDataSet = featVec[:featAndVal[0]]
reduceDataSet.extend(featVec[featAndVal[0]+1:])
sub_dateSet2.append(reduceDataSet)
return sub_dateSet1,sub_dateSet2
def chooseBestFeatAndCuttingpoint(dataSet):
'''
.Scan the data set for the minimum Gini index and choose the best feature and best split point
'''
bestFeatAndCuttingpoint = [-1,-1]#the best feature and best split point
min_gini = float("inf")#the current minimum Gini index
numFeat = len(dataSet[0]) - 1#number of features
numData = len(dataSet)#number of samples
for i in range(numFeat):#iterate over all features
featList = [example[i] for example in dataSet]
uniqueFeat = set(featList)
for value in uniqueFeat:#iterate over all candidate split points
#split the sample set D into D1 and D2 by whether feature A takes the value a
subdata1,subdata2 = splitDataSet(dataSet, [i,value])
#compute the Gini index of D conditioned on feature A and split point a
tmp_gini = (float(len(subdata1))/numData) * calcGini(subdata1) + (float(len(subdata2))/numData) * calcGini(subdata2)
if tmp_gini < min_gini:
min_gini = tmp_gini
bestFeatAndCuttingpoint = [i,value]
return bestFeatAndCuttingpoint
def majorityCnt(classList):
'''
.Majority vote: sometimes every feature has already been used up
.but the class labels are still mixed; this decides the class of such a leaf
'''
classCount = {}
for vote in classList:
if vote not in classCount.keys():classCount[vote] = 0
classCount[vote] += 1
sortedClassCount = sorted(classCount.items(), key=operator.itemgetter(1), reverse=True)
return sortedClassCount[0][0]
def creatCART(dataSet, labels):
'''
.Store the tree as a nested dict
.The later CART pruning uses the same structure
'''
classList = [example[-1] for example in dataSet]#the last column of dataSet, the class labels
#stopping conditions for the recursion:
#1. all samples belong to the same class
if classList.count(classList[0]) == len(classList):
return classList[0]
#2. no features left; decide the class by majority vote
if len(dataSet[0]) == 1:
return majorityCnt(classList)
#choose the best feature and best split point
bestFeatAndCuttingpoint = chooseBestFeatAndCuttingpoint(dataSet)
bestFeatLabel = labels[bestFeatAndCuttingpoint[0]]#the label of the chosen feature
mytree = {bestFeatLabel:{}}#a nested dict is enough to hold the whole tree
del(labels[bestFeatAndCuttingpoint[0]])#remove the used label
sub_dataSet1,sub_dataSet2 = splitDataSet(dataSet, bestFeatAndCuttingpoint)#split into D1 and D2
subLabels = labels[:]#copy the remaining labels
mytree[bestFeatLabel]['是'] = creatCART(sub_dataSet1, subLabels)#the branch matching val, i.e. D1
mytree[bestFeatLabel]['否'] = creatCART(sub_dataSet2, subLabels)#the branch not matching val, i.e. D2
return mytree
class CART(object):
'''
.Store the tree in a custom structure, the TreeNode class defined above
.This version is incomplete and has not been debugged
'''
def __init__(self,data=None):
def creatNode(dataSet=None, bestFeatAndCuttingpoint=None):
gini = calcGini(dataSet)
#stopping conditions: too few samples, Gini index below a threshold, or no features left
if len(dataSet) <=0 or gini <=0.0001 or len(dataSet[0]) <=0:
return None
#split the data into D1 and D2 at the given best feature and split point
sub_dataSet1,sub_dataSet2 = splitDataSet(dataSet, bestFeatAndCuttingpoint)
return TreeNode(bestFeatAndCuttingpoint[0], bestFeatAndCuttingpoint[1], creatNode(sub_dataSet1,chooseBestFeatAndCuttingpoint(sub_dataSet1)), creatNode(sub_dataSet2,chooseBestFeatAndCuttingpoint(sub_dataSet2)))
self.root = creatNode(data, chooseBestFeatAndCuttingpoint(data))
def preOrder(root):
'''
.Pre-order traversal of the tree
'''
print(root.feat)
if root.left:
preOrder(root.left)
if root.right:
preOrder(root.right)
regTrees.py
'''
Created on 2017-08-05
@author: fujianfei
'''
import numpy as np
import os
import matplotlib.pyplot as plt
def loadDataSet(fileName):
'''
.Load the data set
'''
data_path = os.getcwd()+'\\data\\'
dataMat = np.loadtxt(data_path+fileName,delimiter='\t')
return dataMat
def binSplitDataSet(dataSet, feature, value):
'''
.Split the data on a feature and a value into two parts: mat0 with the feature value greater than value, mat1 with it less than or equal
'''
mat0 = dataSet[np.nonzero((dataSet[:,feature] > value)),:][0]
mat1 = dataSet[np.nonzero((dataSet[:,feature] <= value)),:][0]
return mat0,mat1
def regLeaf(dataSet):
#leaf model: the mean of the target (last) column
return np.mean(dataSet[:,-1])
def regErr(dataSet):
#split error: total squared error = variance * number of samples
return np.var(dataSet[:,-1]) * len(dataSet)
def chooseBestSplit(dataSet, leafType=regLeaf, errType=regErr, ops=(1,4)):
tolS = ops[0];tolN = ops[1]
if len(set(dataSet[:,-1].tolist())) == 1:
return None, leafType(dataSet)
n = len(dataSet[0])
S = errType(dataSet)
bestS = float('inf'); bestIndex = 0; bestValue = 0
for featIndex in range(n-1):#iterate over all features
for splitVal in set(dataSet[:,featIndex]):#iterate over all values taken by this feature
mat0,mat1 = binSplitDataSet(dataSet, featIndex, splitVal)#split the data into two parts
if(np.shape(mat0)[0] < tolN) or (np.shape(mat1)[0] < tolN):continue
newS = errType(mat0) + errType(mat1)#summed squared error of the two parts
if newS < bestS:
bestIndex = featIndex
bestValue = splitVal
bestS = newS
if(S-bestS) < tolS:
return None, leafType(dataSet)
mat0,mat1 = binSplitDataSet(dataSet, bestIndex, bestValue)
if (np.shape(mat0)[0] < tolN) or (np.shape(mat1)[0] < tolN):
return None, leafType(dataSet)
return bestIndex, bestValue
def createTree(dataSet, leafType=regLeaf, errType=regErr, ops=(1,4)):
feat, val = chooseBestSplit(dataSet, leafType, errType, ops)
if feat == None: return val
retTree = {}
retTree['spInd'] = feat
retTree['spVal'] = val
lSet, rSet = binSplitDataSet(dataSet, feat, val)
retTree['left'] = createTree(lSet, leafType, errType, ops)
retTree['right'] = createTree(rSet, leafType, errType, ops)
return retTree
def istree(obj):
return type(obj).__name__ == 'dict'
def getMean(tree):
'''
.Collapse a subtree to a single value: the mean of all its leaves
'''
if istree(tree['left']) : tree['left'] = getMean(tree['left'])
if istree(tree['right']) : tree['right'] = getMean(tree['right'])
return (tree['left']+tree['right'])/2.0
def prune(tree, testData):
'''
.Post-pruning with a test set
'''
if len(testData) == 0 : return getMean(tree)
if istree(tree['left']) or istree(tree['right']):
lSet, rSet = binSplitDataSet(testData, tree['spInd'], tree['spVal'])
if istree(tree['left']) : tree['left'] = prune(tree['left'], lSet)
if istree(tree['right']) : tree['right'] = prune(tree['right'], rSet)
if (not istree(tree['left'])) and (not istree(tree['right'])):
lSet, rSet = binSplitDataSet(testData, tree['spInd'], tree['spVal'])
#error before merging
erroNoMerge = np.sum(np.power(lSet[:,-1]-tree['left'],2)) + np.sum(np.power(rSet[:,-1]-tree['right'],2))
#error after merging
treeMean = (tree['left'] + tree['right'])/2.0
erroMerge = np.sum(np.power(testData[:,-1]-treeMean,2))
#if the merged error is smaller than the unmerged error, merge
if erroMerge < erroNoMerge:
print('merging')
return treeMean
else : return tree
else : return tree
dataSet = loadDataSet('ex0.txt')
dataSet1 = loadDataSet('ex0test.txt')
plt.subplot(121)
plt.scatter(dataSet[:,0], dataSet[:,1])
plt.subplot(122)
plt.scatter(dataSet1[:,0], dataSet1[:,1])
plt.show()
tree_ = createTree(dataSet,ops=(0.1,0.4))
tree_ = prune(tree_,dataSet1)
print(tree_)
init.py
from DecisionTree import trees,CARTTree,treePlotter,regTrees
import os
# def createDataSet():
# dataSet = [[1,1,'yes'],
# [1,1,'yes'],
# [1,0,'no'],
# [0,1,'no'],
# [0,1,'no']]
# labels = ['no surfacing','flippers']
# return dataSet,labels
# def createDataSet():
# dataSet = [[1,2,2,3,'no'],
# [1,2,2,2,'no'],
# [1,1,2,2,'yes'],
# [1,1,1,3,'yes'],
# [1,2,2,3,'no'],
# [2,2,2,3,'no'],
# [2,2,2,2,'no'],
# [2,1,1,2,'yes'],
# [2,2,1,1,'yes'],
# [2,2,1,1,'yes'],
# [3,2,1,1,'yes'],
# [3,2,1,2,'yes'],
# [3,1,2,2,'yes'],
# [3,1,2,1,'yes'],
# [3,2,2,3,'no']]
# labels = ['年齡','有工作','有自己房子','信貸情況']
# return dataSet,labels
def loadData(fileName):
dataSet = []
data_path = os.getcwd()+'\\data\\'
fr = open(data_path+fileName)
for line in fr.readlines():
curLine = line.strip().split(',')
dataSet.append(curLine)
return dataSet
dataSet = loadData('watermelon1.txt')
labels = ['色澤', '根蒂', '敲聲', '紋理', '臍部', '觸感']
# dataSet,labels = createDataSet()
mytree = trees.createTree(dataSet, labels)
# mytree = regTrees.createTree(dataSet)
# mytree = CARTTree.creatCART(dataSet, labels)
print(mytree)
# treePlotter.createPlot(mytree)