{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 实验4: Logistic回归与KNN算法（2 学时）"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1.体验KNN算法。给定以下的疾病诊断数据集。这个数据集包含1个病人代号的字段、5个条件字段(喉咙痛、发烧、淋巴腺肿胀、充血、头痛)以及一个目标字段(诊断结果)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "定义Yes=1，No=0"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "import operator# 迭代器"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "def knn(trainData, testData, labels, k):\n",
    "    # 计算训练样本的行数\n",
    "    rowSize = trainData.shape[0]\n",
    "    # 计算训练样本和测试样本的差值\n",
    "    diff = np.tile(testData, (rowSize, 1)) - trainData\n",
    "    # 计算差值的平方和\n",
    "    sqrDiff = diff ** 2\n",
    "    sqrDiffSum = sqrDiff.sum(axis=1)\n",
    "    # 计算距离\n",
    "    distances = sqrDiffSum ** 0.5\n",
    "    # 对所得的距离从低到高进行排序\n",
    "    sortDistance = distances.argsort()\n",
    "    \n",
    "    count = {}\n",
    "    \n",
    "    for i in range(k):\n",
    "        vote = labels[sortDistance[i]]\n",
    "        count[vote] = count.get(vote, 0) + 1\n",
    "    # 对类别出现的频数从高到低进行排序\n",
    "    sortCount = sorted(count.items(), key=operator.itemgetter(1), reverse=True)\n",
    "    \n",
    "    # 返回出现频数最高的类别\n",
    "    return sortCount[0][0] \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['过敏', '感冒']\n"
     ]
    }
   ],
   "source": [
    "trainData = np.array([[1,1,1,1,1],\n",
    "                     [0,0,0,1,1],\n",
    "                     [1,1,0,1,0],\n",
    "                     [1,0,1,0,0],\n",
    "                     [0,1,0,1,0],\n",
    "                     [0,0,0,1,0],\n",
    "                     [0,0,1,0,0],\n",
    "                     [1,0,0,1,1],\n",
    "                     [0,1,0,1,1],\n",
    "                     [1,1,0,1,1]])\n",
    "labels = ['链球菌喉炎', '过敏','感冒', '链球菌喉炎',\n",
    "          '感冒','过敏','链球菌喉炎','过敏','感冒','感冒']\n",
    "testData = np.array([[0,0,1,1,1],\n",
    "                   [1,1,0,0,1]])\n",
    "y_label=[]\n",
    "for test in testData:\n",
    "    X = knn(trainData, test, labels, 3)\n",
    "    y_label.append(X)\n",
    "print(y_label)"
   ]
  },
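  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check (not part of the original exercise), the same predictions can be reproduced with sklearn's `KNeighborsClassifier`. This is a sketch that assumes scikit-learn is installed; note that with k = 3 and three classes a three-way tie is possible, and ties may be broken differently from the hand-written `knn()`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.neighbors import KNeighborsClassifier\n",
    "\n",
    "# Euclidean distance with majority voting, mirroring the hand-written knn()\n",
    "clf = KNeighborsClassifier(n_neighbors=3)\n",
    "clf.fit(trainData, labels)\n",
    "print(list(clf.predict(testData)))"
   ]
  },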
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2.下载数据集cancer_X.csv(162,217)，cancer_y.csv(18)，amazon_X.csv(1500,10000)，amazon_y.csv(50)，实现如下功能：\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1)\t利用Python语言实现多分类的Logistic回归模型的程序设计，并给出上述两组数据的Logistic回归模型的准确率（测试集与训练集相同）。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "我这里选择的是sklearn内置的多分类Logistic回归模型"
   ]
  },
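  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, a from-scratch multinomial (softmax) logistic regression might look like the sketch below. The names `softmax_regression` and `softmax_predict` and the hyperparameters `lr` and `n_iter` are illustrative assumptions, not part of the assignment code."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def softmax_regression(X, y, lr=0.1, n_iter=500):\n",
    "    # Minimal multinomial logistic regression via batch gradient descent\n",
    "    classes = np.unique(y)\n",
    "    Y = (y[:, None] == classes[None, :]).astype(float)  # one-hot targets\n",
    "    W = np.zeros((X.shape[1], len(classes)))\n",
    "    b = np.zeros(len(classes))\n",
    "    for _ in range(n_iter):\n",
    "        z = X @ W + b\n",
    "        z -= z.max(axis=1, keepdims=True)  # shift logits for numerical stability\n",
    "        P = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)  # softmax probabilities\n",
    "        G = (P - Y) / len(X)  # gradient of mean cross-entropy w.r.t. the logits\n",
    "        W -= lr * X.T @ G\n",
    "        b -= lr * G.sum(axis=0)\n",
    "    return classes, W, b\n",
    "\n",
    "def softmax_predict(classes, W, b, X):\n",
    "    # predict the class with the highest score\n",
    "    return classes[np.argmax(X @ W + b, axis=1)]"
   ]
  },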
  {
   "cell_type": "code",
   "execution_count": 60,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "from tqdm import tqdm#进度条\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.metrics import accuracy_score"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "metadata": {},
   "outputs": [],
   "source": [
    "cancer_X=np.array(pd.read_csv('cancer_X.csv',header=None))\n",
    "cancer_y=np.array(pd.read_csv('cancer_y.csv',header=None)).ravel('C')# 导入labels,并按行拉直\n",
    "amazon_X=np.array(pd.read_csv('amazon_X.csv',header=None))\n",
    "amazon_y=np.array(pd.read_csv('amazon_y.csv',header=None)).ravel('C')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### cancer"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 67,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[12 11 12 10  3 18 16 10 15 17 10 13 17 17 14 15  3 13 16  3 16 10 15 16\n",
      "  7 14  5  9 16  7 13 11  7 15  9 15 17 15  5 13 10 12 10  9 15  8 16  9\n",
      " 10]\n",
      "0.8571428571428571\n"
     ]
    }
   ],
   "source": [
    "x_train, x_test, y_train, y_test = \\\n",
    "            train_test_split(cancer_X, cancer_y, test_size=0.3)\n",
    "log_model = LogisticRegression(multi_class=\"multinomial\", solver=\"newton-cg\", max_iter=1000)\n",
    "log_model.fit(x_train,y_train)\n",
    "pred_test = log_model.predict(x_test)\n",
    "acu = accuracy_score(y_test, pred_test)  # 准确率\n",
    "print(pred_test)\n",
    "print(acu)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### amazon"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 68,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[19 30 20 40  2 47 42 48 42 22  3 32 10  2 24 46 49 36 28 34 15 43 21 35\n",
      " 19 37 30 37 17 16  1 23 12 48 50 28 13 20 27 27 36 19 14 40  9 16 50  6\n",
      " 46  9  8 13 49 46 29 23 50 15 20 33 41 22 39  1 15 50  1 12 10 38 29  7\n",
      " 45 24 33 11 42 20 46 31 36 35 46  1 25  1 43 38 48 40  3 26 17 45 27 44\n",
      "  1 27 24 16 27 34 48  9 10 42 27 38  6 33 49 31 45 48  7 12 23 47  7 44\n",
      " 46 31 28  1  9 39  8 15  4 33  5 27 10 32 45 18 37 10 26 16  3 33 43  1\n",
      " 25 32 44  1 43 43 40 26 11 25 42 39 34 25  4 35 22 12  3 40 21 19 10 49\n",
      " 27  8 47  4 35  7 21 41 24 32 31  3 48 16 22 11 28 31  7 36 42  7 30 14\n",
      " 12 12 35 49 12 13 12 20  5  3 12 27 17 32  6 15  5 43  2 35 20 17 25 27\n",
      " 40 49  7 36 33 49 34  6 20 35 22 47 40 31 44  8 15  4  2 43 36 47 45 48\n",
      " 41  4  5 41  9 37 12 24 22  6  5 10  9  8 18 27 41 34 16 29 19 24  9 30\n",
      " 15 28 38 35 36  8 18 39  4 13 50  2 22 11 12 14 26 44 18 49 24 41 37 49\n",
      " 50 26 17 49 48 36 27 18 36 26  4 42 10 28  8 42 12 33 10 32 39  2  8 16\n",
      " 49 43 38 25  6 22 43  5 23 25 14 29 38 33 22 13 28 38  1 16 25 50 46 42\n",
      " 26  9 16 21 12 18 37 41  6 40  7 12 36 44 36 21 14 30 39 17 47 48 45 28\n",
      " 29 18 46  4 20 40 49 33 20 44  8 45  3  7 21 41 23 31 30 34 16 21 50 47\n",
      " 30 36 16 33  4 18  8 42 20 16 45 24 25 44 31 15 42 10  4 15 26 14 17 45\n",
      " 32 32 28 32 16 44  6 11 41 30 18 46 50 42 33 38  8 25 18 27 12  6 22 49\n",
      " 28 31 33 47 41 38 36  2  4 19  2 39  6 46 20 40 12 47]\n",
      "0.6888888888888889\n"
     ]
    }
   ],
   "source": [
    "x_train, x_test, y_train, y_test = \\\n",
    "            train_test_split(amazon_X, amazon_y, test_size=0.3)\n",
    "log_model = LogisticRegression(multi_class=\"multinomial\", solver=\"newton-cg\", max_iter=1000)\n",
    "log_model.fit(x_train,y_train)\n",
    "pred_test = log_model.predict(x_test)\n",
    "acu = accuracy_score(y_test, pred_test)  # 准确率\n",
    "# print(pred_test)\n",
    "print(acu)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2)\t利用Python语言实现KNN算法，并给出不同的邻近点个数k对应的预测准确率（测试集与训练集相同）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "import operator# 迭代器"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "def knn(trainData, testData, labels, k):\n",
    "    # 计算训练样本的行数\n",
    "    rowSize = trainData.shape[0]\n",
    "    # 计算训练样本和测试样本的差值\n",
    "    diff = np.tile(testData, (rowSize, 1)) - trainData\n",
    "    # 计算差值的平方和\n",
    "    sqrDiff = diff ** 2\n",
    "    sqrDiffSum = sqrDiff.sum(axis=1)\n",
    "    # 计算距离\n",
    "    distances = sqrDiffSum ** 0.5\n",
    "    # 对所得的距离从低到高进行排序\n",
    "    sortDistance = distances.argsort()\n",
    "    \n",
    "    count = {}\n",
    "    \n",
    "    for i in range(k):\n",
    "        vote = labels[sortDistance[i]]\n",
    "        count[vote] = count.get(vote, 0) + 1\n",
    "    # 对类别出现的频数从高到低进行排序\n",
    "    sortCount = sorted(count.items(), key=operator.itemgetter(1), reverse=True)\n",
    "    \n",
    "    # 返回出现频数最高的类别\n",
    "    return sortCount[0][0] \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "def acc(y, y_label):\n",
    "    return sum(yi == yi_label for yi, yi_label in zip(y, y_label)) / len(y)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(162, 217)\n",
      "(162,)\n",
      "(1500, 10000)\n",
      "(1500,)\n"
     ]
    }
   ],
   "source": [
    "cancer_X=np.array(pd.read_csv('cancer_X.csv',header=None))\n",
    "cancer_y=np.array(pd.read_csv('cancer_y.csv',header=None)).ravel('C')# 导入labels,并按行拉直\n",
    "print(cancer_X.shape)\n",
    "print(cancer_y.shape)\n",
    "amazon_X=np.array(pd.read_csv('amazon_X.csv',header=None))\n",
    "amazon_y=np.array(pd.read_csv('amazon_y.csv',header=None)).ravel('C')\n",
    "print(amazon_X.shape)\n",
    "print(amazon_y.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### cancer"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.9197530864197531\n"
     ]
    }
   ],
   "source": [
    "y_label=[]\n",
    "for test in cancer_X:\n",
    "    X = knn(cancer_X, test, cancer_y, 3)\n",
    "    y_label.append(X)\n",
    "print(acc(y_label,cancer_y))"
   ]
  },
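  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The question asks for the accuracy at different numbers of neighbours. One way to sweep over k with the `knn()` and `acc()` defined above (evaluating on the training set, as the question specifies) is sketched below; the list of k values is illustrative."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Report training-set accuracy of knn() on the cancer data for several k\n",
    "for k in [1, 3, 5, 7, 9]:\n",
    "    y_label = [knn(cancer_X, test, cancer_y, k) for test in cancer_X]\n",
    "    print(k, acc(y_label, cancer_y))"
   ]
  },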
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### amazon"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████████████████████████████████████████████████████████████████████████| 1500/1500 [04:39<00:00,  5.36it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.9413333333333334\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    }
   ],
   "source": [
    "from tqdm import tqdm# 进度条库\n",
    "y_label=[]\n",
    "for test in tqdm(amazon_X):\n",
    "    X = knn(amazon_X, test, amazon_y, 3)\n",
    "    y_label.append(X)\n",
    "print(acc(y_label,amazon_y))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "tensorflow",
   "language": "python",
   "name": "tensorflow"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
