{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "15d2f5b5",
   "metadata": {},
   "source": [
    "# Report 2 Titanic"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7538d712",
   "metadata": {},
   "source": [
    "1. Task Objective"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9833743b",
   "metadata": {},
   "source": [
    "The task provides three datasets: train.csv, test.csv, and gender_submission.csv, which are the training set, the test set, and the actual survival outcomes for the test set, respectively. The goal is to train a model on the passenger information and survival status given in the training set, use it to predict the survival of the remaining passengers, compare the predictions against the actual outcomes provided in gender_submission.csv to obtain the model's accuracy, and then improve the model appropriately to raise that accuracy."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0469920b",
   "metadata": {},
   "source": [
    "2. Background Knowledge"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3c3287cd",
   "metadata": {},
   "source": [
    "1. RandomForestClassifier, the random forest classification model. Random forest is a representative bagging ensemble algorithm: all of its base estimators are decision trees, and a forest made up of classification trees is called a random forest. Using bootstrap resampling, random forest repeatedly draws n samples with replacement from the original training set of N samples to form a new training set and fits a decision tree to it; repeating this process grows m trees that together form the forest. It has several advantages: (1) each tree sees only a subset of the samples and features, which helps avoid overfitting; (2) the random selection of both samples and features gives the model good noise resistance and stable performance; (3) it can handle very high-dimensional data without requiring feature selection; (4) it is well suited to parallel computation.\n",
    "2. Evaluating the model with a confusion matrix. After building the model on the training set and evaluating it on the test set, a confusion matrix is used to compute the relevant metrics. Each column of the confusion matrix represents the instances of a predicted class, while each row represents the instances of an actual class."
   ]
  },
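  {
   "cell_type": "markdown",
   "id": "2c4e6a8f",
   "metadata": {},
   "source": [
    "The row/column convention described above can be checked on a tiny hand-made example (the labels below are made up purely for illustration):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3f1a2b4c",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.metrics import confusion_matrix\n",
    "\n",
    "y_true = [0, 0, 1, 1, 1]  # actual classes (rows of the matrix)\n",
    "y_pred = [0, 1, 1, 1, 0]  # predicted classes (columns of the matrix)\n",
    "cm = confusion_matrix(y_true, y_pred)\n",
    "print(cm)  # [[1 1], [1 2]]: entry (i, j) counts samples of actual class i predicted as class j"
   ]
  },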
  {
   "cell_type": "markdown",
   "id": "1ff8e1b6",
   "metadata": {},
   "source": [
    "3. Background"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f662be9f",
   "metadata": {},
   "source": [
    "The sinking of the Titanic is one of the most infamous shipwrecks in history; one cause of the tragedy was that there were not enough lifeboats for the passengers and crew. Surviving the disaster involved some luck, but was also related to other factors, such as being a woman, a child, or a member of the upper class. The task is to train a model that, once trained, can predict from a person's information whether they survived."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "65fc1903",
   "metadata": {},
   "source": [
    "4. Method"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "272e890f",
   "metadata": {},
   "source": [
    "1. Analyze the given dataset. The first column is the passenger ID and the second indicates survival; the third is the passenger class (first, second, or third class), the fourth is the passenger's name, the fifth is sex, the sixth is age, the seventh is the number of siblings/spouses aboard, and the eighth is the number of parents/children aboard. The ninth column is the ticket number, the tenth is the fare, the eleventh is the cabin number, and the twelfth is the port of embarkation. The passenger ID, name, and ticket number can be judged to be useless features. The dataset also contains occasional missing values; a missing entry can be filled with the mean of that feature over all samples. Because only a handful of samples have a cabin number, that feature is not considered either.\n",
    "2. Preprocess the data: remove the useless features and fill missing values with the means of the training-set features, producing a new dataset with seven features.\n",
    "3. Build a random forest model with sklearn's default parameters, train it on the training set, classify the test set, and compare the predictions with the labels in gender_submission.csv to obtain the accuracy.\n",
    "4. Tune the random forest parameters, and also use a support vector machine (SVM) for classification."
   ]
  },
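  {
   "cell_type": "markdown",
   "id": "7b9d1f3a",
   "metadata": {},
   "source": [
    "The mean-filling rule in step 2 can be sketched on a toy feature column (the values below are made up; missing entries are empty strings, as they appear in the CSV files):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5e6d7c8b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# a feature column read from CSV, with missing entries as empty strings\n",
    "col = ['22', '', '38', '26', '']\n",
    "known = [float(v) for v in col if v != '']  # values that are present\n",
    "mean = sum(known) / len(known)  # mean of the known values\n",
    "filled = [float(v) if v != '' else mean for v in col]  # impute missing entries with the mean\n",
    "print(filled)"
   ]
  },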
  {
   "cell_type": "markdown",
   "id": "2b95a753",
   "metadata": {},
   "source": [
    "5. Experiments"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c0d4da04",
   "metadata": {},
   "source": [
    "(1) Data preprocessing. Remove the useless features from the datasets and fill missing values with the means of the training-set features."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "296ef72f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 1. Remove useless features\n",
    "import csv\n",
    "import numpy as np\n",
    "from sklearn.metrics import confusion_matrix, classification_report\n",
    "import matplotlib.pyplot as plt\n",
    "from sklearn.ensemble import RandomForestClassifier  # random forest classifier\n",
    "from sklearn import svm\n",
    "filename = r'C:\\Users\\Junjie Wang\\Desktop\\机器学习\\train01.csv'  # location of the training set\n",
    "with open(filename) as f:\n",
    "    reader = csv.reader(f)\n",
    "    train_data = [row[2:] for row in reader]  # column 1 is the ID and column 2 the label, so the features start at column 3\n",
    "    train_data.pop(0)  # drop the header row\n",
    "with open(filename) as f:\n",
    "    reader = csv.reader(f)\n",
    "    train_label = [row[1] for row in reader]  # training-set labels\n",
    "    train_label.pop(0)\n",
    "filename = r'C:\\Users\\Junjie Wang\\Desktop\\机器学习\\test01.csv'  # location of the test set\n",
    "with open(filename) as f:\n",
    "    reader = csv.reader(f)\n",
    "    test_data = [row[1:] for row in reader]  # column 1 of the test set is the ID; the features follow\n",
    "    test_data.pop(0)\n",
    "filename = r'C:\\Users\\Junjie Wang\\Desktop\\机器学习\\gender_submission.csv'\n",
    "with open(filename) as f:\n",
    "    reader = csv.reader(f)\n",
    "    test_label = [row[1] for row in reader]  # test-set labels\n",
    "    test_label.pop(0)\n",
    "train_data = np.array(train_data)  # convert the lists to arrays\n",
    "test_data = np.array(test_data)\n",
    "train_1 = np.delete(train_data, [1, 6, 8], axis=1)  # drop the name, ticket-number, and cabin-number columns\n",
    "test_1 = np.delete(test_data, [1, 6, 8], axis=1)\n",
    "# 2. Fill missing values\n",
    "age = []\n",
    "for i in range(train_1.shape[0]):  # mean age, for imputing missing ages\n",
    "    if train_1[i][2] != '':\n",
    "        age.append(np.float64(train_1[i][2]))\n",
    "ave_age = int(sum(age) / len(age))\n",
    "\n",
    "fare = []\n",
    "for i in range(train_1.shape[0]):  # mean fare, for imputing missing fares\n",
    "    if train_1[i][-2] != '':\n",
    "        fare.append(np.float64(train_1[i][-2]))\n",
    "ave_fare = float(sum(fare) / len(fare))\n",
    "for i in range(test_1.shape[0]):\n",
    "    if test_1[i][1] == 'male':  # encode male as 0, female as 1\n",
    "        test_1[i][1] = 0\n",
    "    if test_1[i][1] == 'female':\n",
    "        test_1[i][1] = 1\n",
    "    if test_1[i][-1] == 'S':  # encode the three ports as 0, 1, 2\n",
    "        test_1[i][-1] = 0\n",
    "    if test_1[i][-1] == 'C':\n",
    "        test_1[i][-1] = 1\n",
    "    if test_1[i][-1] == 'Q':\n",
    "        test_1[i][-1] = 2\n",
    "    if test_1[i][-1] == '':  # missing port\n",
    "        test_1[i][-1] = 0\n",
    "    if test_1[i][2] == '':  # missing age: use the mean\n",
    "        test_1[i][2] = ave_age\n",
    "    if test_1[i][-2] == '':  # missing fare: use the mean\n",
    "        test_1[i][-2] = ave_fare\n",
    "for i in range(train_1.shape[0]):  # apply the same encoding to the training set\n",
    "    if train_1[i][1] == 'male':  # male 0, female 1\n",
    "        train_1[i][1] = 0\n",
    "    if train_1[i][1] == 'female':\n",
    "        train_1[i][1] = 1\n",
    "    if train_1[i][-1] == 'S':  # the three ports encoded as 0, 1, 2\n",
    "        train_1[i][-1] = 0\n",
    "    if train_1[i][-1] == 'C':\n",
    "        train_1[i][-1] = 1\n",
    "    if train_1[i][-1] == 'Q':\n",
    "        train_1[i][-1] = 2\n",
    "    if train_1[i][-1] == '':  # missing port\n",
    "        train_1[i][-1] = 0\n",
    "    if train_1[i][2] == '':  # missing age: use the mean\n",
    "        train_1[i][2] = ave_age\n",
    "print(train_1.shape)  # the printed shape confirms the training set now has seven features"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "81f0ec14",
   "metadata": {},
   "source": [
    "(2) Build a random forest model, train it, and classify the test set."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "87b5374b",
   "metadata": {},
   "outputs": [],
   "source": [
    "clf = RandomForestClassifier()\n",
    "clf.fit(train_1, train_label)  # train on the training set\n",
    "\n",
    "print(clf.feature_importances_)  # print the importance of each feature\n",
    "pre_test = clf.predict(test_1)  # predictions for the test set\n",
    "print(confusion_matrix(test_label, pre_test))  # confusion matrix of the predictions\n",
    "print(classification_report(test_label, pre_test))  # print the classification report"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7ef19fd6",
   "metadata": {},
   "source": [
    "(3) Use a support vector machine (SVM) to improve the accuracy."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "730fc630",
   "metadata": {},
   "outputs": [],
   "source": [
    "clf = svm.SVC(decision_function_shape='ovo', C=20, gamma=0.001)  # after repeated tuning, these values of C and gamma gave good results\n",
    "clf.fit(train_1, train_label)\n",
    "svm_pre_test=clf.predict(test_1)\n",
    "print(confusion_matrix(test_label,svm_pre_test))\n",
    "print(classification_report(test_label,svm_pre_test))\n"
   ]
  },
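  {
   "cell_type": "markdown",
   "id": "4d5e6f7a",
   "metadata": {},
   "source": [
    "The values C=20 and gamma=0.001 above were found by manual trial. A more systematic way to search them is sklearn's GridSearchCV; the sketch below reuses the train_1 and train_label arrays prepared in the preprocessing cell, and the candidate grid is an illustrative assumption, not the values actually tried:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9a0b1c2d",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn import svm\n",
    "from sklearn.model_selection import GridSearchCV\n",
    "\n",
    "param_grid = {'C': [1, 10, 20, 50], 'gamma': [0.01, 0.001, 0.0001]}  # illustrative candidates\n",
    "search = GridSearchCV(svm.SVC(decision_function_shape='ovo'), param_grid, cv=5)\n",
    "search.fit(train_1, train_label)  # assumes the arrays from the preprocessing cell\n",
    "print(search.best_params_, search.best_score_)  # best combination and its cross-validated accuracy"
   ]
  },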
  {
   "cell_type": "markdown",
   "id": "9de5eeb8",
   "metadata": {},
   "source": [
    "6. Summary"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f1078f94",
   "metadata": {},
   "source": [
    "1. For classification problems, the first step is usually to process the raw data. One must decide which columns carry discriminative features and which are useless; useless columns can be dropped, which makes the data simpler and the remaining features easier to work with. When values are missing, they can be filled with a statistic of that feature, such as its mean or median.\n",
    "2. The random forest model has many advantages for classification, such as good noise resistance, stable performance, and the ability to handle high-dimensional data without feature selection, so it handles this kind of problem well. Its drawback when optimizing the model is that its parameters are relatively complex and not easy to tune.\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
