
# Titanic Survivor Prediction

Posted: 2019-09-23 16:18:05



The Titanic training data is available on Baidu Netdisk:

Link: /s/1yHvYb2usyW24LqacHk9-Dw

Extraction code: p1do

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_score
import matplotlib.pyplot as plt

data = pd.read_csv(r"./taitan_data.csv")
data.head()
data.info()

# Drop columns with too many missing values, plus columns that, on inspection,
# have no relation to the target y
data.drop(["Cabin", "Name", "Ticket"], inplace=True, axis=1)

# Handle missing values: fill columns with many missing values; features that
# are missing only one or two values can simply have those rows dropped
data["Age"] = data["Age"].fillna(data["Age"].mean())
data = data.dropna()

# Convert categorical variables to numeric.
# Binary variables: astype casts a pandas object to a given type. Unlike
# apply(int), astype can convert text to numbers, so it is a convenient way
# to map a binary feature to 0/1.
data["Sex"] = (data["Sex"] == "male").astype("int")

# Three-category variable: map each category to its index
labels = data["Embarked"].unique().tolist()
data["Embarked"] = data["Embarked"].apply(lambda x: labels.index(x))

# Inspect the processed dataset
data.head()

X = data.iloc[:, data.columns != "Survived"]
y = data.iloc[:, data.columns == "Survived"]

Xtrain, Xtest, Ytrain, Ytest = train_test_split(X, y, test_size=0.3)

# Reset the indices of the train and test sets
for i in [Xtrain, Xtest, Ytrain, Ytest]:
    i.index = range(i.shape[0])

# Inspect the split training set
Xtrain.head()

# Baseline tree: single train/test split, then 10-fold cross-validation
clf = DecisionTreeClassifier(random_state=25)
clf = clf.fit(Xtrain, Ytrain)
score_ = clf.score(Xtest, Ytest)
score_

score = cross_val_score(clf, X, y, cv=10).mean()
score

# Learning curve over max_depth = 1..10
tr = []
te = []
for i in range(10):
    clf = DecisionTreeClassifier(random_state=25, max_depth=i + 1, criterion="entropy")
    clf = clf.fit(Xtrain, Ytrain)
    score_tr = clf.score(Xtrain, Ytrain)
    score_te = cross_val_score(clf, X, y, cv=10).mean()
    tr.append(score_tr)
    te.append(score_te)
print(max(te))

plt.plot(range(1, 11), tr, color="red", label="train")
plt.plot(range(1, 11), te, color="blue", label="test")
plt.xticks(range(1, 11))
plt.legend()
plt.show()

# Grid search over several hyperparameters at once
import numpy as np

gini_thresholds = np.linspace(0, 0.5, 20)
parameters = {
    "splitter": ("best", "random"),
    "criterion": ("gini", "entropy"),
    "max_depth": [*range(1, 10)],
    "min_samples_leaf": [*range(1, 50, 5)],
    "min_impurity_decrease": [*np.linspace(0, 0.5, 20)],
}
clf = DecisionTreeClassifier(random_state=25)
GS = GridSearchCV(clf, parameters, cv=10)
GS.fit(Xtrain, Ytrain)
GS.best_params_
GS.best_score_
GS.best_estimator_
```
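Since the Netdisk dataset may not be to hand, here is a minimal self-contained sketch of the same encode-then-grid-search workflow on synthetic data. The toy `DataFrame` (column names reuse the article's `Sex`/`Age`/`Embarked`/`Survived`, but the values are randomly generated) and the smaller parameter grid are assumptions made purely to keep the example fast and runnable:

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Hypothetical toy data standing in for taitan_data.csv
rng = np.random.default_rng(25)
n = 200
data = pd.DataFrame({
    "Sex": rng.choice(["male", "female"], n),
    "Age": rng.uniform(1, 80, n),
    "Embarked": rng.choice(["S", "C", "Q"], n),
})
# Target loosely tied to Sex, so the tree has a signal to learn
data["Survived"] = ((data["Sex"] == "female") ^ (rng.random(n) < 0.2)).astype(int)

# Same encoding steps as in the article
data["Sex"] = (data["Sex"] == "male").astype("int")
labels = data["Embarked"].unique().tolist()
data["Embarked"] = data["Embarked"].apply(lambda x: labels.index(x))

X = data.drop(columns=["Survived"])
y = data["Survived"]
Xtrain, Xtest, Ytrain, Ytest = train_test_split(X, y, test_size=0.3, random_state=25)

# A much smaller grid than the article's, to keep the search quick
parameters = {"criterion": ("gini", "entropy"), "max_depth": [*range(1, 6)]}
GS = GridSearchCV(DecisionTreeClassifier(random_state=25), parameters, cv=5)
GS.fit(Xtrain, Ytrain)

print(GS.best_params_)
print(GS.best_estimator_.score(Xtest, Ytest))
```

One point worth noting: `GS.best_estimator_` is the tree refit on all of `Xtrain` with the best parameters, so scoring it on the held-out `Xtest` gives an estimate untouched by the search itself.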
