"""
随机森林
bagging装袋法 构建多个相互独立的评估器
boosting提升法  上个模型预测错误的样本在下个模型预测时有更高的权重
"""
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

wine = load_wine()
x_train, x_test, y_train, y_test = train_test_split(wine.data, wine.target, test_size=0.3)

rfc = RandomForestClassifier(criterion='entropy'
                           , random_state=50
                           , n_estimators=20     # number of trees in the forest: more trees mean higher accuracy but more computation
                           , max_depth=4
                           )
rfc = rfc.fit(x_train,y_train)
result = rfc.score(x_test,y_test)
print(result)

# Cross-validation: compare the cross-validated accuracy of a single decision tree vs. a random forest
import matplotlib.pyplot as plt

rfc_l = []
clf_l = []
for i in range(10):
    # leave random_state unset so the 10 runs actually differ;
    # with a fixed seed every iteration would produce identical scores
    rfc = RandomForestClassifier()
    rfc_c = cross_val_score(rfc, wine.data, wine.target, cv=10).mean()
    rfc_l.append(rfc_c)
    clf = DecisionTreeClassifier()
    clf_c = cross_val_score(clf, wine.data, wine.target, cv=10).mean()
    clf_l.append(clf_c)
plt.plot(range(1, 11), rfc_l, color="red", label="RandomForest")
plt.plot(range(1, 11), clf_l, color="blue", label="DecisionTree")
plt.legend()
plt.show()
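# Why the forest beats the single tree: if each tree errs independently with
# probability eps, the majority vote errs only when more than half of the trees
# do. Illustrative numbers, assuming independence between trees (which real
# forests only approximate):

```python
from scipy.special import comb

eps = 0.2   # assumed error rate of a single tree
n = 25      # assumed number of trees
# the forest is wrong only if 13 or more of the 25 trees are wrong at once
forest_err = sum(comb(n, i) * eps**i * (1 - eps)**(n - i) for i in range(13, n + 1))
print(forest_err)   # orders of magnitude below the single-tree error
```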

# Learning curve over n_estimators
test=[]
for i in range(40):
    rfc = RandomForestClassifier(criterion='entropy'
                                 , random_state=50
                                 , n_estimators=i+1
                                 , max_depth=5
                                 )
    rfc = rfc.fit(x_train, y_train)
    score3 = rfc.score(x_test, y_test)
    test.append(score3)
print(test)
plt.plot(range(1, 41), test, color="red", label="max_depth")
plt.legend()
plt.show()
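# The curve above is scored on a single train/test split, so it is noisy; a
# cross-validated variant (illustrative step of 5 to limit runtime) gives a
# steadier picture of how accuracy grows with the number of trees:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_wine

data = load_wine()
cv_scores = []
for n in range(1, 41, 5):
    model = RandomForestClassifier(n_estimators=n, random_state=50)
    cv_scores.append(cross_val_score(model, data.data, data.target, cv=10).mean())
best_n = 1 + 5 * cv_scores.index(max(cv_scores))   # n_estimators of the best mean score
print(best_n, max(cv_scores))
```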

# print(rfc.estimators_)           # the fitted trees inside the forest, with their parameters
# print(rfc.feature_importances_)  # importance of each feature
# Out-of-bag accuracy lives in rfc.oob_score_ (note the trailing underscore),
# and it only exists when the model was built with RandomForestClassifier(oob_score=True).
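# Sketch of the out-of-bag workflow: the samples a tree never drew in its
# bootstrap act as a free validation set, so no train/test split is needed.
# Values here (25 trees, seed 0) are illustrative:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_wine

data = load_wine()
rfc_oob = RandomForestClassifier(n_estimators=25, oob_score=True, random_state=0)
rfc_oob.fit(data.data, data.target)       # fit on the full dataset
print(rfc_oob.oob_score_)                 # accuracy measured on out-of-bag samples
print(rfc_oob.feature_importances_)       # impurity-based importance per feature
```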

