{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "[Original (English) book](https://www.oreilly.com/library/view/feature-engineering-for/9781491953235/)\n",
     "\n",
     "**Translation**: [apachecn](https://github.com/apachecn), [translated edition](https://github.com/apachecn/feature-engineering-for-ml-zh)\n",
     "\n",
     "**Code revision and curation**: [Huang Haiguang](https://github.com/fengdu78). The original text was converted into Jupyter notebook format, some code was added or revised, all of it has been tested, and the datasets can be downloaded from [Baidu Cloud](data/README.md)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# 4. The Effects of Feature Scaling: From Bag-of-Words to Tf-Idf\n",
     "\n",
     "> Translator: [@gin](https://github.com/tirtile)\n",
     "> \n",
     "> Proofreader: [@HeYun](https://github.com/KyrieHee)\n",
    "\n",
     "Bag-of-words is easy to generate, but far from perfect. If we count all words equally, some unwanted words end up being emphasized. Recall the example from Chapter 3, Emma and the raven. We would like a document representation that emphasizes the two main characters. In the example, \"Emma\" and \"raven\" both appear three times, but \"the\" appears a whopping eight times, \"and\" appears five times, and \"it\" and \"was\" both appear four times. With simple frequency counts alone, the two main characters do not stand out. This is problematic.\n",
    "\n",
     "Words such as \"magnificently,\" \"gleamed,\" \"intimidated,\" \"tentatively,\" and \"reigned,\" which help set the tone of the passage, would also be good picks. They indicate sentiment, which can be very valuable information to a data scientist. So, ideally, we would prefer a representation that highlights meaningful words.\n",
    "\n",
    "\n",
     "## Tf-Idf: A Simple Twist on Bag-of-Words\n",
     "\n",
     "Tf-idf is a simple twist on bag-of-words. It stands for term frequency-inverse document frequency. Instead of looking at the raw counts of each word in each document, tf-idf looks at a normalized count, where each word count is divided by the number of documents this word appears in.\n",
    "$$\n",
    "{bow}(w, d)=\\# \\text {times} \\text { word } w \\text { appears in document } d\n",
    "$$\n",
    "$$\n",
    "t f-i d f(w, d)=\\frac{{bow}(w, d) \\times N}{(\\# \\text {documents in which word w appears)}}\n",
    "$$\n",
    "\n",
    "\n",
     "$N$ is the total number of documents in the dataset. The fraction $\\frac{N}{(\\# \\text {documents in which word } w \\text { appears})}$ is what is known as the inverse document frequency. If a word appears in many documents, then its inverse document frequency is close to 1. If a word appears in just a few documents, then its inverse document frequency is much higher.\n",
    "\n",
     "Alternatively, we can take a log transform of the raw inverse document frequency. This turns 1 into 0 and makes large numbers (those much greater than 1) smaller. (More on this later.)\n",
    "\n",
     "If instead we define tf-idf as:\n",
     "\n",
     "$$\n",
     "t f-i d f(w, d)={bow}(w, d) \\times \\log \\frac{N}{(\\# \\text {documents in which word } w \\text { appears})}\n",
     "$$\n",
     "\n",
     "then every word that appears in every single document will be effectively zeroed out, and words that appear in only a few documents will have even larger counts than before.\n",
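     "\n",
     "To make the log-scaled definition concrete, here is a minimal sketch (not from the book) that computes tf-idf by hand for the four-sentence example used below:\n",
     "\n",
     "```python\n",
     "import math\n",
     "\n",
     "docs = ['it is a puppy', 'it is a cat', 'it is a kitten',\n",
     "        'that is a dog and this is a pen']\n",
     "tokenized = [d.split() for d in docs]\n",
     "N = len(docs)\n",
     "\n",
     "def tf_idf(word, doc_tokens):\n",
     "    bow = doc_tokens.count(word)                  # raw count in this document\n",
     "    df = sum(word in toks for toks in tokenized)  # number of documents containing the word\n",
     "    return bow * math.log(N / df)                 # log-scaled inverse document frequency\n",
     "\n",
     "print(tf_idf('is', tokenized[0]))     # word in every document -> zeroed out\n",
     "print(tf_idf('puppy', tokenized[0]))  # rare word -> 1 * log(4) = 1.386...\n",
     "```\n",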
    "\n",
     "Let's look at some pictures to understand what this is all about. Figure 4-1 shows a simple example containing four sentences: \"it is a puppy,\" \"it is a cat,\" \"it is a kitten,\" and \"that is a dog and this is a pen.\" We plot these sentences in the feature space of the three words \"puppy,\" \"cat,\" and \"is.\"\n",
     "\n",
     "![Figure 4-1: Four sentences about puppies and cats](images/chapter4/4-1.png)\n",
     "\n",
     "<center><h5>Figure 4-1: Four sentences about puppies and cats</h5></center>\n",
     "\n",
     "Now let's look at the tf-idf representation of the same four sentences, using a log transform of the inverse document frequency. Figure 4-2 shows the documents in the corresponding feature space. Notice that the word \"is\" is effectively wiped out, because it appears in every sentence in this dataset. Also, because the words \"puppy\" and \"cat\" each appear in only one of the four sentences, these words now count higher than before ($\\log(4)=1.38\\ldots>1$). So tf-idf makes rare words more prominent and effectively ignores common words. It is closely related to the frequency-based filtering methods of Chapter 3, but mathematically more elegant than placing hard cutoff thresholds.\n",
     "![Figure 4-2: Tf-idf representation of the four sentences in Figure 4-1](images/chapter4/4-2.png)\n",
     "\n",
     "<center><h5>Figure 4-2: Tf-idf representation of the four sentences in Figure 4-1</h5></center>\n",
    "\n",
    "\n",
     "## Intuition Behind Tf-Idf\n",
     "\n",
     "Tf-idf makes rare words more prominent and effectively ignores common words.\n",
    "\n",
     "## Putting It to the Test\n",
     "\n",
     "Tf-idf transforms word count features through multiplication with a constant. Hence, it is an example of feature scaling, a concept introduced in Chapter 2. How well does feature scaling work in practice? Let's compare the performance of scaled and unscaled features in a simple text classification task. Time to code!\n",
     "\n",
     "For this exercise, we once again use the Yelp reviews dataset. Round 6 of the Yelp dataset challenge contains close to 1.6 million business reviews from six US cities.\n",
     "### Example 4-1: Loading and cleaning the Yelp reviews dataset in Python"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "import pandas as pd\n",
     "## Load Yelp Business data\n",
     "with open('data/yelp_academic_dataset_business.json') as biz_f:\n",
     "    biz_df = pd.DataFrame([json.loads(line) for line in biz_f])\n",
     "## Load Yelp Reviews data\n",
     "with open('data/yelp_academic_dataset_review.json') as review_file:\n",
     "    review_df = pd.DataFrame([json.loads(line) for line in review_file])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(11537, 13)"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "biz_df.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(229907, 8)"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "review_df.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Pull out only Nightlife and Restaurants businesses\n",
    "two_biz = biz_df[biz_df.apply(\n",
     "    lambda x: 'Nightlife' in x['categories'] or 'Restaurants' in x['categories'],\n",
    "    axis=1)]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(4816, 13)"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "two_biz.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(11537, 13)"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "biz_df.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Join with the reviews to get all reviews on the two types of business\n",
    "twobiz_reviews = two_biz.merge(review_df, on='business_id', how='inner')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(166038, 20)"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "twobiz_reviews.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Trim away the features we won't use\n",
     "twobiz_reviews = twobiz_reviews[['business_id',\n",
     "                                 'name',\n",
     "                                 'stars_y',\n",
     "                                 'text',\n",
     "                                 'categories']]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create the target column--True for Nightlife businesses, and False otherwise\n",
    "twobiz_reviews['target'] = twobiz_reviews.apply(\n",
    "    lambda x: 'Nightlife' in x['categories'], axis=1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Building a Classification Dataset\n",
     "\n",
     "Let's see whether we can use the reviews to tell restaurants apart from nightlife venues. To save on training time, we use only a subset of the reviews. The two categories differ greatly in their number of reviews; this is what's called a class-imbalanced dataset. Imbalanced datasets are problematic for modeling: the model will spend most of its effort on the larger class. Since we have plenty of data in both classes, a good way to address the problem is to downsample the larger class (restaurants) to be roughly the same size as the smaller class (nightlife). Here is an example workflow.\n",
     "\n",
     "1. Take a random sample of 10% of the nightlife reviews and 2.1% of the restaurant reviews (the percentages are chosen so the two classes end up roughly the same size).\n",
     "\n",
     "2. Split the dataset into training and test sets at a 70/30 ratio. In this example, the training set contains 29,264 reviews and the test set 12,542.\n",
     "\n",
     "3. The training data contains 46,924 unique words; this is the number of features in the bag-of-words representation.\n",
     "\n",
     "### Example 4-2: Creating a classification dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create a class-balanced subsample to play with\n",
    "nightlife = twobiz_reviews[\n",
    "    twobiz_reviews.apply(lambda x: 'Nightlife' in x['categories'], axis=1)]\n",
    "restaurants = twobiz_reviews[\n",
    "    twobiz_reviews.apply(lambda x: 'Restaurants' in x['categories'], axis=1)]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(30136, 6)"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "nightlife.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(158430, 6)"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "restaurants.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "nightlife_subset = nightlife.sample(frac=0.1, random_state=123)\n",
    "restaurant_subset = restaurants.sample(frac=0.021, random_state=123)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(3014, 6)"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "nightlife_subset.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(3327, 6)"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "restaurant_subset.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "combined = pd.concat([nightlife_subset, restaurant_subset])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [],
   "source": [
    "combined['target'] = combined.apply(\n",
    "    lambda x: 'Nightlife' in x['categories'], axis=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>business_id</th>\n",
       "      <th>name</th>\n",
       "      <th>stars_y</th>\n",
       "      <th>text</th>\n",
       "      <th>categories</th>\n",
       "      <th>target</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>103709</th>\n",
       "      <td>2ceeU8e3nZjaPfGmLwh4kg</td>\n",
       "      <td>Casey Moore's Oyster House</td>\n",
       "      <td>2</td>\n",
       "      <td>Been here a couple times over the last few yea...</td>\n",
       "      <td>[Bars, Seafood, Irish, Pubs, Nightlife, Restau...</td>\n",
       "      <td>True</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>52043</th>\n",
       "      <td>JokKtdXU7zXHcr20Lrk29A</td>\n",
       "      <td>Four Peaks Brewing Co</td>\n",
       "      <td>5</td>\n",
       "      <td>Over the top service. One of my favorite meals...</td>\n",
       "      <td>[Bars, Food, Breweries, Pubs, Nightlife, Ameri...</td>\n",
       "      <td>True</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>134438</th>\n",
       "      <td>-yxfBYGB6SEqszmxJxd97A</td>\n",
       "      <td>Quiessence Restaurant</td>\n",
       "      <td>4</td>\n",
       "      <td>On a trip where I ate at some very nice and up...</td>\n",
       "      <td>[Wine Bars, Bars, American (New), Nightlife, R...</td>\n",
       "      <td>True</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>142453</th>\n",
       "      <td>fjAQGf-iJlVjD2vizzuORQ</td>\n",
       "      <td>Giligin's Bar</td>\n",
       "      <td>4</td>\n",
       "      <td>Stumbled on Giligins a few nights ago.  I knew...</td>\n",
       "      <td>[Bars, Nightlife]</td>\n",
       "      <td>True</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>67583</th>\n",
       "      <td>DjdA1xbHki_lopCSxf-Egg</td>\n",
       "      <td>Greasewood Flat</td>\n",
       "      <td>5</td>\n",
       "      <td>Thinking about one of these burgers is making ...</td>\n",
       "      <td>[Burgers, Bars, Hot Dogs, Nightlife, Restaurants]</td>\n",
       "      <td>True</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                   business_id                        name  stars_y  \\\n",
       "103709  2ceeU8e3nZjaPfGmLwh4kg  Casey Moore's Oyster House        2   \n",
       "52043   JokKtdXU7zXHcr20Lrk29A       Four Peaks Brewing Co        5   \n",
       "134438  -yxfBYGB6SEqszmxJxd97A       Quiessence Restaurant        4   \n",
       "142453  fjAQGf-iJlVjD2vizzuORQ               Giligin's Bar        4   \n",
       "67583   DjdA1xbHki_lopCSxf-Egg             Greasewood Flat        5   \n",
       "\n",
       "                                                     text  \\\n",
       "103709  Been here a couple times over the last few yea...   \n",
       "52043   Over the top service. One of my favorite meals...   \n",
       "134438  On a trip where I ate at some very nice and up...   \n",
       "142453  Stumbled on Giligins a few nights ago.  I knew...   \n",
       "67583   Thinking about one of these burgers is making ...   \n",
       "\n",
       "                                               categories  target  \n",
       "103709  [Bars, Seafood, Irish, Pubs, Nightlife, Restau...    True  \n",
       "52043   [Bars, Food, Breweries, Pubs, Nightlife, Ameri...    True  \n",
       "134438  [Wine Bars, Bars, American (New), Nightlife, R...    True  \n",
       "142453                                  [Bars, Nightlife]    True  \n",
       "67583   [Burgers, Bars, Hot Dogs, Nightlife, Restaurants]    True  "
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "combined.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_split.py:2179: FutureWarning: From version 0.21, test_size will always complement train_size unless both are specified.\n",
      "  FutureWarning)\n"
     ]
    }
   ],
   "source": [
    "# Split into training and test data sets\n",
    "import sklearn.model_selection as modsel\n",
    "training_data, test_data = modsel.train_test_split(\n",
    "    combined, train_size=0.7, random_state=123)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(4438, 6)"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "training_data.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(1903, 6)"
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "test_data.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Scaling Bag-of-Words with Tf-Idf Transformation\n",
     "\n",
     "\n",
     "The goal of this experiment is to compare the effects of bag-of-words, tf-idf, and L2 normalization on linear classification. Note that doing tf-idf and then L2 normalization is the same as doing L2 normalization alone. So we need to test only three sets of features: bag-of-words, tf-idf, and word-wise L2-normalized bag-of-words.\n",
     "\n",
     "In this example, we use Scikit-learn's CountVectorizer to convert the review text into bag-of-words. All text featurization methods rely on a tokenizer, which converts a text string into a list of tokens (words). In this example, Scikit-learn's default tokenization pattern looks for sequences of two or more alphanumeric characters. Punctuation marks are treated as token separators.\n",
    "\n",
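     "The default tokenization just described can be inspected directly. A minimal sketch (not from the book):\n",
     "\n",
     "```python\n",
     "from sklearn.feature_extraction.text import CountVectorizer\n",
     "\n",
     "# The default token pattern keeps sequences of 2+ alphanumeric characters,\n",
     "# lowercases them, and treats punctuation as a separator; single-character\n",
     "# tokens such as 'a' are dropped.\n",
     "analyzer = CountVectorizer().build_analyzer()\n",
     "print(analyzer('It is a puppy!'))  # ['it', 'is', 'puppy']\n",
     "```\n",
     "\n",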
     "## Feature Scaling on the Test Set\n",
     "\n",
     "A subtle point about feature scaling is that it requires knowing feature statistics that we most likely do not know in practice, such as the mean, variance, document frequencies, and L2 norms. In order to compute the tf-idf representation, we have to compute the inverse document frequencies based on the training data and use these statistics to scale both the training and the test data. In Scikit-learn, fitting a feature transformer on the training set amounts to collecting the relevant statistics. The fitted transformer can then be applied to the test data.\n",
     "\n",
     "### Example 4-3: Feature transformations"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.feature_extraction import text\n",
    "# Represent the review text as a bag-of-words\n",
    "bow_transform = text.CountVectorizer()\n",
    "X_tr_bow = bow_transform.fit_transform(training_data['text'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_te_bow = bow_transform.transform(test_data['text'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "18565"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(bow_transform.vocabulary_)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_tr = training_data['target']\n",
    "y_te = test_data['target']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create the tf-idf representation using the bag-of-words matrix\n",
    "tfidf_trfm = text.TfidfTransformer(norm=None)\n",
    "X_tr_tfidf = tfidf_trfm.fit_transform(X_tr_bow)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_te_tfidf = tfidf_trfm.transform(X_te_bow)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [],
   "source": [
    "import sklearn.preprocessing as preproc\n",
    "# Just for kicks, l2-normalize the bag-of-words representation\n",
    "X_tr_l2 = preproc.normalize(X_tr_bow, axis=0)\n",
    "X_te_l2 = preproc.normalize(X_te_bow, axis=0)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "When we use training statistics to scale the test data, the result looks a little fuzzy. Min-max scaling on the test set no longer neatly maps to zero and one. L2 norms, means, and variance statistics will all look slightly off. A worse problem is missing data: the test set may contain words that are not present in the training data, and there are no document frequencies for the new words. The usual solution is to simply drop the new words in the test set. This may seem irresponsible, but the model, trained on the training set, would not know what to do with them in any case. A slightly less hacky option is to explicitly learn a \"garbage\" word and map all rare-frequency words to it, even within the training set, as discussed in \"Rare Words\" in Chapter 3.\n",
     "\n",
     "## Classification with Logistic Regression\n",
     "\n",
     "Logistic regression is a simple linear classifier. It feeds a weighted combination of the input features into a sigmoid function, which smoothly maps any real number to a number between 0 and 1. Figure 4-3 plots the sigmoid function. Since logistic regression is simple to use, it is often the first classifier one reaches for.\n",
     "\n",
     "![Figure 4-3: Illustration of a sigmoid function](images/chapter4/4-3.png)\n",
     "\n",
     "<center><h5>Figure 4-3: Illustration of a sigmoid function</h5></center>\n",
     "\n",
     "Figure 4-3 illustrates the sigmoid function. The function maps an input real number x to a number between 0 and 1. It has one set of parameters, w, which represents the slope of the increase around the midpoint of 0.5, and an intercept term b, which denotes the input value at which the function output crosses the midpoint. If the sigmoid output is greater than 0.5, the logistic classifier predicts the positive class; otherwise, the negative class. By varying w and b, one can control where the decision flips and how fast the decision responds to changing input values around that point.\n",
    "\n",
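     "The mapping from input to prediction can be sketched in a few lines (a minimal illustration, not the book's code; the values of w and b here are arbitrary):\n",
     "\n",
     "```python\n",
     "import math\n",
     "\n",
     "def sigmoid(x, w=1.0, b=0.0):\n",
     "    # Maps any real number smoothly into (0, 1); w controls the slope,\n",
     "    # b shifts where the output crosses the 0.5 midpoint\n",
     "    return 1.0 / (1.0 + math.exp(-(w * x + b)))\n",
     "\n",
     "print(sigmoid(0.0))   # exactly 0.5 at the midpoint\n",
     "print(sigmoid(6.0))   # large positive input -> close to 1\n",
     "print(sigmoid(-6.0))  # large negative input -> close to 0\n",
     "```\n",
     "\n",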
     "### Example 4-4: Training logistic regression classifiers with default parameters"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.linear_model import LogisticRegression"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [],
   "source": [
    "def simple_logistic_classify(X_tr, y_tr, X_test, y_test, description, _C=1.0):\n",
    "    ## Helper function to train a logistic classifier and score on test data\n",
    "    m = LogisticRegression(C=_C).fit(X_tr, y_tr)\n",
    "    s = m.score(X_test, y_test)\n",
    "    print('Test score with', description, 'features:', s)\n",
    "    return m"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\linear_model\\logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.\n",
      "  FutureWarning)\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Test score with bow features: 0.7677351550183921\n",
      "Test score with l2-normalized features: 0.7856016815554387\n",
      "Test score with tf-idf features: 0.7540725170782975\n"
     ]
    }
   ],
   "source": [
    "m1 = simple_logistic_classify(X_tr_bow, y_tr, X_te_bow, y_te, 'bow')\n",
    "m2 = simple_logistic_classify(X_tr_l2, y_tr, X_te_l2, y_te, 'l2-normalized')\n",
    "m3 = simple_logistic_classify(X_tr_tfidf, y_tr, X_te_tfidf, y_te, 'tf-idf')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Paradoxically, the results show that the most accurate classifier is the one using BOW features. This was unexpected. As it turns out, the reason is that the classifiers were not well \"tuned,\" which is a common pitfall when comparing classifiers.\n",
     "\n",
     "## Tuning Logistic Regression with Regularization\n",
     "\n",
     "Logistic regression on its own is fairly plain. When the number of features is greater than the number of data points, the problem of finding the best model is said to be underdetermined. One way to fix this is to place additional constraints on the training process. This is known as regularization, whose technical details are discussed in the next section.\n",
     "\n",
     "Most implementations of logistic regression allow for regularization. In order to use this functionality, one must specify a regularization parameter. Regularization parameters are hyperparameters that are not learned automatically during model training. Rather, they must be tuned by hand and supplied to the training algorithm. This process is known as hyperparameter tuning. (For details on how to evaluate machine learning models, see Evaluating Machine Learning Models.) One basic method for tuning hyperparameters is called grid search: specify a grid of hyperparameter values, and let the tuner programmatically search the grid for the best hyperparameter setting. After finding the best setting, train a model on the entire training set using that setting, and compare the performance of these best-of-breed models on the test set.\n",
    "\n",
     "## Key Point: Tune Hyperparameters When Comparing Models\n",
     "\n",
     "It is essential to tune hyperparameters when comparing models or features. The default settings of a software package will always return a model. But unless the software performs automatic tuning under the hood, it is likely to return a suboptimal model based on suboptimal hyperparameter settings. How sensitive classifier performance is to hyperparameter settings depends on the model and the distribution of the training data. Logistic regression is relatively robust (i.e., insensitive) to hyperparameter settings. Even so, it is still necessary to find and use the right range of hyperparameters. Otherwise, the advantage of one model over another may be due purely to how the parameters were tuned, and will not reflect the actual behavior of the models or features.\n",
     "\n",
     "Even the best automatic tuning packages still require specifying upper and lower bounds for the search, and finding those bounds can take a few manual tries.\n",
     "\n",
     "In this example, we manually set the search grid for the logistic regularization parameter to {1e-5, 0.001, 0.1, 1, 10, 100}. The upper and lower bounds took a couple of tries to narrow down. Table 4-1 gives the optimal hyperparameter setting for each feature set.\n",
    "\n",
     "<center><h5>Table 4-1. Best parameter settings for logistic regression on Yelp reviews of nightlife venues and restaurants</h5></center>\n",
     "\n",
     " Method | L2 Regularization\n",
     "--------|-----------\n",
     "BOW | 0.1\n",
     "L2-normalized | 1.0\n",
     "TF-IDF | 0.001\n",
    "\n",
     "We would also like to test whether the difference in accuracy between tf-idf and BOW is due to noise. To that end, we use k-fold cross-validation to simulate having multiple statistically independent datasets: it divides the dataset into k folds. The cross-validation process iterates through the folds, training on all but one of them and validating the results on the fold that was held out. The GridSearchCV function in Scikit-learn runs a grid search with cross-validation. Figure 4-4 shows a box-and-whisker plot of the distribution of accuracy measurements for models trained on each feature set. The middle line in each box marks the median accuracy, the box itself marks the region between the first and third quartiles, and the whiskers extend to the rest of the distribution.\n",
    "\n",
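     "A plot in the spirit of Figure 4-4 can be recreated from the fitted search objects. A hedged sketch (this is not the book's exact code; it assumes bow_search, l2_search, and tfidf_search have been fit with cv=5 as in Example 4-5):\n",
     "\n",
     "```python\n",
     "import matplotlib.pyplot as plt\n",
     "import pandas as pd\n",
     "\n",
     "def best_fold_scores(search, n_folds=5):\n",
     "    # Per-fold test accuracies of the best hyperparameter setting\n",
     "    best = search.best_index_\n",
     "    return [search.cv_results_['split%d_test_score' % i][best]\n",
     "            for i in range(n_folds)]\n",
     "\n",
     "search_results = pd.DataFrame({'bow': best_fold_scores(bow_search),\n",
     "                               'l2': best_fold_scores(l2_search),\n",
     "                               'tfidf': best_fold_scores(tfidf_search)})\n",
     "search_results.boxplot()\n",
     "plt.ylabel('Accuracy')\n",
     "plt.show()\n",
     "```\n",
     "\n",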
    "\n",
     "### Estimating Variance via Resampling\n",
     "\n",
     "Modern statistical methods assume that the underlying data comes from a random distribution. Performance measurements of models derived from data are therefore also subject to random noise. In this situation, it is always better to take the measurement not just once but multiple times, based on datasets of comparable data. This gives us a confidence interval for the measurement. K-fold cross-validation is one such strategy. Resampling is another technique that generates multiple small samples from the same underlying dataset. See Evaluating Machine Learning Models for more details on resampling.\n",
     "\n",
     "### Example 4-5: Tuning logistic regression hyperparameters with grid search"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [],
   "source": [
    "param_grid_ = {'C': [1e-5, 1e-3, 1e-1, 1e0, 1e1, 1e2]}\n",
    "bow_search = modsel.GridSearchCV(\n",
    "    LogisticRegression(), cv=5, param_grid=param_grid_)\n",
    "l2_search = modsel.GridSearchCV(\n",
    "    LogisticRegression(), cv=5, param_grid=param_grid_)\n",
    "tfidf_search = modsel.GridSearchCV(\n",
    "    LogisticRegression(), cv=5, param_grid=param_grid_)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\linear_model\\logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.\n",
      "  FutureWarning)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "GridSearchCV(cv=5, error_score='raise-deprecating',\n",
       "       estimator=LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,\n",
       "          intercept_scaling=1, max_iter=100, multi_class='warn',\n",
       "          n_jobs=None, penalty='l2', random_state=None, solver='warn',\n",
       "          tol=0.0001, verbose=0, warm_start=False),\n",
       "       fit_params=None, iid='warn', n_jobs=None,\n",
       "       param_grid={'C': [1e-05, 0.001, 0.1, 1.0, 10.0, 100.0]},\n",
       "       pre_dispatch='2*n_jobs', refit=True, return_train_score='warn',\n",
       "       scoring=None, verbose=0)"
      ]
     },
     "execution_count": 35,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "bow_search.fit(X_tr_bow, y_tr)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.765209553853087"
      ]
     },
     "execution_count": 36,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "bow_search.best_score_"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\linear_model\\logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.\n",
      "  FutureWarning)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "GridSearchCV(cv=5, error_score='raise-deprecating',\n",
       "       estimator=LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,\n",
       "          intercept_scaling=1, max_iter=100, multi_class='warn',\n",
       "          n_jobs=None, penalty='l2', random_state=None, solver='warn',\n",
       "          tol=0.0001, verbose=0, warm_start=False),\n",
       "       fit_params=None, iid='warn', n_jobs=None,\n",
       "       param_grid={'C': [1e-05, 0.001, 0.1, 1.0, 10.0, 100.0]},\n",
       "       pre_dispatch='2*n_jobs', refit=True, return_train_score='warn',\n",
       "       scoring=None, verbose=0)"
      ]
     },
     "execution_count": 37,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "l2_search.fit(X_tr_l2, y_tr)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.7708427219468229"
      ]
     },
     "execution_count": 38,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "l2_search.best_score_"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\linear_model\\logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.\n",
      "  FutureWarning)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "GridSearchCV(cv=5, error_score='raise-deprecating',\n",
       "       estimator=LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,\n",
       "          intercept_scaling=1, max_iter=100, multi_class='warn',\n",
       "          n_jobs=None, penalty='l2', random_state=None, solver='warn',\n",
       "          tol=0.0001, verbose=0, warm_start=False),\n",
       "       fit_params=None, iid='warn', n_jobs=None,\n",
       "       param_grid={'C': [1e-05, 0.001, 0.1, 1.0, 10.0, 100.0]},\n",
       "       pre_dispatch='2*n_jobs', refit=True, return_train_score='warn',\n",
       "       scoring=None, verbose=0)"
      ]
     },
     "execution_count": 39,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tfidf_search.fit(X_tr_tfidf, y_tr)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.7893195132942767"
      ]
     },
     "execution_count": 40,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tfidf_search.best_score_"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'C': 0.1}"
      ]
     },
     "execution_count": 41,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "bow_search.best_params_"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'C': 1.0}"
      ]
     },
     "execution_count": 42,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "l2_search.best_params_"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'C': 0.001}"
      ]
     },
     "execution_count": 43,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tfidf_search.best_params_"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\utils\\deprecation.py:125: FutureWarning: You are accessing a training score ('mean_train_score'), which will not be available by default any more in 0.21. If you need training scores, please set return_train_score=True\n",
      "  warnings.warn(*warn_args, **warn_kwargs)\n",
      "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\utils\\deprecation.py:125: FutureWarning: You are accessing a training score ('split0_train_score'), which will not be available by default any more in 0.21. If you need training scores, please set return_train_score=True\n",
      "  warnings.warn(*warn_args, **warn_kwargs)\n",
      "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\utils\\deprecation.py:125: FutureWarning: You are accessing a training score ('split1_train_score'), which will not be available by default any more in 0.21. If you need training scores, please set return_train_score=True\n",
      "  warnings.warn(*warn_args, **warn_kwargs)\n",
      "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\utils\\deprecation.py:125: FutureWarning: You are accessing a training score ('split2_train_score'), which will not be available by default any more in 0.21. If you need training scores, please set return_train_score=True\n",
      "  warnings.warn(*warn_args, **warn_kwargs)\n",
      "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\utils\\deprecation.py:125: FutureWarning: You are accessing a training score ('split3_train_score'), which will not be available by default any more in 0.21. If you need training scores, please set return_train_score=True\n",
      "  warnings.warn(*warn_args, **warn_kwargs)\n",
      "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\utils\\deprecation.py:125: FutureWarning: You are accessing a training score ('split4_train_score'), which will not be available by default any more in 0.21. If you need training scores, please set return_train_score=True\n",
      "  warnings.warn(*warn_args, **warn_kwargs)\n",
      "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\utils\\deprecation.py:125: FutureWarning: You are accessing a training score ('std_train_score'), which will not be available by default any more in 0.21. If you need training scores, please set return_train_score=True\n",
      "  warnings.warn(*warn_args, **warn_kwargs)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'mean_fit_time': array([0.02672362, 0.04170284, 0.14934993, 0.23928313, 0.37921972,\n",
       "        0.42493615]),\n",
       " 'mean_score_time': array([0.00079846, 0.00099654, 0.00039368, 0.00079846, 0.00059032,\n",
       "        0.00039887]),\n",
       " 'mean_test_score': array([0.54912123, 0.72239748, 0.76520955, 0.75259126, 0.74132492,\n",
       "        0.72577738]),\n",
       " 'mean_train_score': array([0.54912118, 0.77022266, 0.94338642, 0.99273309, 0.99909867,\n",
       "        0.99988734]),\n",
       " 'param_C': masked_array(data=[1e-05, 0.001, 0.1, 1.0, 10.0, 100.0],\n",
       "              mask=[False, False, False, False, False, False],\n",
       "        fill_value='?',\n",
       "             dtype=object),\n",
       " 'params': [{'C': 1e-05},\n",
       "  {'C': 0.001},\n",
       "  {'C': 0.1},\n",
       "  {'C': 1.0},\n",
       "  {'C': 10.0},\n",
       "  {'C': 100.0}],\n",
       " 'rank_test_score': array([6, 5, 1, 2, 3, 4]),\n",
       " 'split0_test_score': array([0.54842342, 0.76013514, 0.78490991, 0.76576577, 0.75112613,\n",
       "        0.73423423]),\n",
       " 'split0_train_score': array([0.54901408, 0.76676056, 0.94309859, 0.99239437, 0.99887324,\n",
       "        0.99971831]),\n",
       " 'split1_test_score': array([0.55067568, 0.72072072, 0.77702703, 0.76801802, 0.7545045 ,\n",
       "        0.75112613]),\n",
       " 'split1_train_score': array([0.54873239, 0.76591549, 0.93830986, 0.99183099, 0.99859155,\n",
       "        1.        ]),\n",
       " 'split2_test_score': array([0.5518018 , 0.71734234, 0.75900901, 0.72972973, 0.72635135,\n",
       "        0.70720721]),\n",
       " 'split2_train_score': array([0.54873239, 0.77070423, 0.94450704, 0.99183099, 0.99943662,\n",
       "        1.        ]),\n",
       " 'split3_test_score': array([0.54791432, 0.70800451, 0.77113867, 0.75422773, 0.7361894 ,\n",
       "        0.71476888]),\n",
       " 'split3_train_score': array([0.54970431, 0.77499296, 0.94564911, 0.99436778, 0.99915517,\n",
       "        0.99971839]),\n",
       " 'split4_test_score': array([0.54678692, 0.70574972, 0.73393461, 0.74520857, 0.73844419,\n",
       "        0.72153326]),\n",
       " 'split4_train_score': array([0.5494227 , 0.77274007, 0.9453675 , 0.99324134, 0.99943678,\n",
       "        1.        ]),\n",
       " 'std_fit_time': array([0.00416257, 0.00363773, 0.0199039 , 0.03759725, 0.02942488,\n",
       "        0.03680945]),\n",
       " 'std_score_time': array([3.99235538e-04, 7.44843452e-07, 4.82230192e-04, 3.99236620e-04,\n",
       "        4.82177678e-04, 4.88519238e-04]),\n",
       " 'std_test_score': array([0.00184359, 0.01968314, 0.01777104, 0.0140838 , 0.0102844 ,\n",
       "        0.01548232]),\n",
       " 'std_train_score': array([0.00038594, 0.00346014, 0.00268904, 0.00096673, 0.00032855,\n",
       "        0.00013798])}"
      ]
     },
     "execution_count": 44,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "bow_search.cv_results_"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>bow</th>\n",
       "      <th>l2</th>\n",
       "      <th>tfidf</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>0.549121</td>\n",
       "      <td>0.548220</td>\n",
       "      <td>0.589004</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>0.722397</td>\n",
       "      <td>0.548220</td>\n",
       "      <td>0.789320</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>0.765210</td>\n",
       "      <td>0.593961</td>\n",
       "      <td>0.759351</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>0.752591</td>\n",
       "      <td>0.770843</td>\n",
       "      <td>0.747634</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>0.741325</td>\n",
       "      <td>0.769491</td>\n",
       "      <td>0.730509</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>5</th>\n",
       "      <td>0.725777</td>\n",
       "      <td>0.751465</td>\n",
       "      <td>0.702118</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "        bow        l2     tfidf\n",
       "0  0.549121  0.548220  0.589004\n",
       "1  0.722397  0.548220  0.789320\n",
       "2  0.765210  0.593961  0.759351\n",
       "3  0.752591  0.770843  0.747634\n",
       "4  0.741325  0.769491  0.730509\n",
       "5  0.725777  0.751465  0.702118"
      ]
     },
     "execution_count": 45,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "########\n",
    "# Collect the mean cross-validation scores for each feature set\n",
    "# into a DataFrame so we can compare classifier performance\n",
    "########\n",
    "search_results = pd.DataFrame.from_dict({\n",
    "    'bow':\n",
    "    bow_search.cv_results_['mean_test_score'],\n",
    "    'tfidf':\n",
    "    tfidf_search.cv_results_['mean_test_score'],\n",
    "    'l2':\n",
    "    l2_search.cv_results_['mean_test_score']\n",
    "})\n",
    "search_results"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<center>Table 4-2. Mean cross-validation classifier accuracy for each hyperparameter setting</center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Plotting the Cross-Validation Results"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Our usual matplotlib incantations. Seaborn is used here to make\n",
    "# the plot pretty\n",
    "%matplotlib inline\n",
    "import matplotlib.pyplot as plt\n",
    "import seaborn as sns\n",
    "sns.set_style(\"whitegrid\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAZQAAAEBCAYAAABfblNQAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvNQv5yAAAHitJREFUeJzt3X+cVXW97/HXDMLYiBS/ckz8keL5OFSSoqaGtzShgaPXo+eQXD23sVPo8SRkqJ2OYSlJt/AcUjA0y2rsltLJk6GHQTT8RfkDthCW40fwyg+RUNiK4WaQH3P/+K6tm80MrIG1157NvJ+PxzzY67u+67s/y9nuz3zX97u+q6qtrQ0REZF9VV3uAEREZP+ghCIiIolQQhERkUQooYiISCKUUEREJBFKKCIikogD0nwzM6sBZgBjgC3ANHef2kHdM4CbgeOAZcDX3H1ewf4zgVuAwcAzwJfcfXlpz0BERDqSdg/lJuB04GzgMmCSmY0trmRmHwTuB/4TOB74FXCfmR0Z7T8cmA38X+Ak4C/Ab81MPS4RkTJJ7QvYzA4CxgFXunvG3X8LTAWuaKf6JwHc/bvu/pK7fwfYDJwa7R8H/NHdp7r788A/AYcDZ5X6PEREpH1p/kU/FKgBFhSULQBONrPiS28bgPeb2RgzqzKzvwMOBpZG+08FHs9Xdvcc8CxwWqmCFxGR3UtzDOVQIOvurQVl64BewEBgbUH5E8CtwCxgB9CDMEbSUtDWq0XtrwMG7S6AJUuWtNXU1Oz1CYiIdEe5XG79sGHDBu6pXpo9lFrCQHyh/Hbxt/xBwIeBG4GTgX8FbjGz/CWvjtpSthARSd7KOJXS7KG0susXfn47V1R+DVDj7t+Mtheb2UeAScA5u2lrw+4CqKmpob6+vrNxi4h0a5lMJla9NHsoa4C+ZtaroKyO0LPIFtU9GfhTUVkGOLqgrbqi/XXsfNlMRERSlGZCWQK8Q5g2nDccyLj7tqK6rxKmCxeqB16KXj8VHQuAmdUCJ0TlIiJSBqld8nL3nJk1ATPN7BJCj+Jq4FIAM6sDNrr7ZuAO4A9m9jXg18CZwBeA0VFzPwGuMbNvAL8BrgNWAb9L63xERGRnad8IOBFYCMwHbgcmu/usaN9a4EIAd38G+J/R9lLgSuBid58f7V8BXAD8b2ARcAhwnrvvSO1MRERkJ1Xd6YmNLS0tbRqUFxHpnEwmkxk2bNhJe6qnpUpERCQRqS4OKSJSCebOncucOXNi1c1mwyTVfv36xW5/9OjRNDQ07FVsXZkSiojIPtiwIdz+1pmEsr9SQhERKdLQ0BC7BzFhwgQApk+fXsqQKoLGUEREJBFKKCIikgglFBERSYQSioiIJEIJRUREEqGEIiIiiVBCERGRRCihiIhIIpRQREQkEUooIiKSCCUUERFJhBKKiIgkQotDipRIKZdA31+XP5fKpoQi0gVoCXTZHyihiJSIlkCX7kZjKCIikgglFBERSYQueXVhGtTteqZPn87y5csTb3fZsmXAe5e+kjR48OCStCtSTAllP6FB3XQsX76cF//0LEf03p5ou33aqgBoXbEw0XZXbeqRaHsiu6OE0oVpULdrOqL3diadtKncYcRy46Le5Q5BuhGNoYiISCKUUEREJBFKKCIikgglFBERSYQG5UWkW6jEKd9QWdO+U00oZlYDzADGAFuAae4+tZ16jwKfaqeJR9z9LDOrBt4GDiza39fd30w2ahHZHyxfvpzFf14MH0i44eg6z+I1ixNuGKiwb7O0eyg3AacDZwODgJ+b2Sp3v6eo3gVAr4LtjwBzge9H20cDNcBRhMSUt7EEMYvI/uIDsOPTO8odRWzVj1bWqERqCcXMDgLGAee6ewbImNlU4Apgp4Ti7tmC46qA2UCTu98fFQ8BVrn7ylSCFxGRPUoz/Q0l9CoWFJQtAE42s90ltrHAccA3CsqGAJ54hCIistfS
vOR1KJB199aCsnWES1sDgbUdHHctcJu7rysoGwIcbGaPA8cCi4GvuruSjIhImaSZUGrZebyDgu2a9g4ws+GE3smool31QG/gy8Am4OvAI2ZW7+4djqNs2bKFlpaWvQg9ObNmzeKVV15JvN3Vq1cD8KUvfSnxtgEGDRrEhRdeWJK2K0kul6u4ufa5XK7sn/uuIJfLlTuEvVJJv780E0oruyaO/HZHv+kLgfnuXvwNfAbQw93fBjCzi4DVwHnAXR0FUFNTQ319fWfjTlQ2m8VfWsGO2mQXcaxqC3MYWta+lWi7ANW5LLW1tWX/b9cV1NbW0rrnal2KfndBbW0tvFHuKDqvK/z+MplMrHppJpQ1QF8z6+Xu70RldYReSraDY0YRZobtpOiyGe7eamYvA4clGG/J7KjtR+uQc8odRmwHPv9AuUMQkQqQZu99CfAOYdpw3nAg4+7biiub2QDgGOCxovIDzGyNmY0tKOtNGEt5oRSBi4jInqXWQ3H3nJk1ATPN7BJC7+Rq4FIAM6sDNrr75uiQjwJbgReL2tlmZg8CU8zsVULvZgphUP9+RESkLNIeX5wILATmA7cDk919VrRvLWHMJO8QQoJp7y6k8cAcYBbwdFTW0F5PR0RE0pHqnfLungMao5/ifVVF27MICaO9dt4mJJXxJQhTRET2QqXNgBQRkS5Kqw2LSLeQzWbhzQpbH+tNyL6vo0mwXU8F/ZcVEZGuTD0UEekW+vXrx8rNKytuteF+/ZK9CbqU1EMREZFEKKGIiEgidMkrZdlslurchopazqQ6t4FstteeK4pIt6YeioiIJEI9lJT169ePl994p+IWh6ykgUERKQ/1UEREJBFKKCIikgglFBERSYQSioiIJEIJRUREEqFZXmVQncsmfh9K1dbwXLK2nu9LtF0I8YbnoYmIdEwJJWWDBw+OXTebzbJhw4ZYdTe/ExLK+3rEe8ZY//79OzEVuK5TcYtI96SEkrIJEybErjt37lzmzJkTq242G5a4jpskRo8eTUNDQ+xYRET2RAmlC2toaNCXvohUDA3Ki4hIIpRQREQkEUooIiKSCCUUERFJhBKKiIgkQglFREQSoYQiIiKJUEIREZFE6MZGEek+3oTqRxP+O7o1+vfAZJsF4E3gsBK0WyJKKCLSLZRqPbply5YBcOxhxybf+GGli7sUlFBEpFvozDp6e9Pu9OnTS9J+JUk1oZhZDTADGANsAaa5+9R26j0KfKqdJh5x97OiOp8DvgN8CHgIGOfur5UodBER2YO0B+VvAk4HzgYuAyaZ2dh26l0AHFrwczawDfg+gJmdDDQBNwKnAn2Au0odvIiIdCy1HoqZHQSMA8519wyQMbOpwBXAPYV13T1bcFwVMBtocvf7o+LxwL3u/rOozueBVWY22N2Xl/xkRERkF7ESipl93N2X7ON7DQVqgAUFZQuA68zsAHfv6MlQY4HjgHMLyk4F/j2/4e6rzWwlcBqghCIi+6QzzyLKD8p3Zoxmf30eUdweypNmtgK4G7jH3V/ci/c6FMi6e2tB2TqgFzAQWNvBcdcCt7n7uqK2Xi2qtw4YtBdxiYjstf79+5c7hC4jbkL5IHA+8DngG2b2HCG5zHL3V2K2UUsYiC+U365p7wAzG07onYyK2Va77bxbYcsWWlpaYgUr0p5cLldxdwPncjl97jvpyCOP5PLLLy/pe+yPv5NYCcXd/0oY9L7LzD4A/B3wt8ANZpYBfgnc7e5v7aaZVnb9ws9v5zo45kJgfjtJq6O2OmonVKipob6+fndVRHartraW1j1X61Jqa2v1uZd9kslkYtXbmz+2jgXqgY8CO4DVhJ7Lyg5mbOWtAfqaWa+CsjpCzyLb/iGMAv6rg7bqisrq6PiymYiIlFjcQfmTCEnjHwj3fTQD3wJm58dEzOxfgR9QNGOrwBLgHcK04UejsuFApr0BeTMbABwDPNZOW09Fx/44qns4cERULiIiZRB3DOUpwhf7FODX7r6xgzr3
dtSAu+fMrAmYaWaXEHoUVwOXAphZHbDR3TdHh3wU2Aq0NwHgNuAxM/t99L63AM3uvizm+YiISMLiXvI63N0/A/wqn0zMbEhhBXd/zN0v3UM7E4GFwHzgdmCyu8+K9q0ljJnkHUJIMDuKG3H3Jwn3tEwCngQ2Ao0xz0VEREogbg+lj5nNB+4HvhaV/c7MXgfOc/eX4zTi7jnCF/8uX/7uXlW0PQuYVVyvYH8T4W55ERHpAuL2UGYCzxCWOskbDPwx2iciIt1c3IRyCnBD4bRgd38buAH4ZCkCExGRyhI3obwGnNRO+fGER8CIiEg3F3cM5WbgDjP7CPBsVHYCMIGwhLyIiHRzce+Un2FmOcKS81cR7idZBlzh7r8sYXwiIlIhYi9f7+53AneWMBYREalgce+UryY89OojQI+ouIqwftYJ7j6iNOGJiEiliNtDuRX4ArCYMOPrD4RlUeoIy62IiEg3F3eW1xjgYnc/nfAAq8sJa2fdTVhKXkREurm4CaUPYckUgOeAT7j7duD/sOuzSkREpBuKm1BeAk6MXv+ZcNkrf3yfpIMSEZHKE3cM5SbgbjP7J8L6Ws+aWRvhGe4LdnukiIh0C7F6KO7+U2AE8KK7txCe2DiAsHT8F0oXnoiIVIq404bnAV+Jkgnu/iDwYCkDExGRyhJ3DOXjhIddiYiItCvuGMrtwK/N7IfASqC1cKe7z086MJGuKJvN8vpfe3Djot7lDiWWlX/twcBsttxhSDcRN6FMiv5t7ybGNt67e15ERLqpuItDxr00JrJf69evH7VvvcSkkzaVO5RYblzUmwP79St3GNJNxB2UP3p3+939/yUTjoiIVKq4l7yWEy5tFT73vS362QH0SjguERGpMHETyofbOe4Y4HpgSpIBiYhIZYo7hrKyneKXzGwDYYHI/040KhERqTj7OtheDRyWRCAiIlLZ4g7KT26nuA9wMTAv0YhERKQixR1DOaNou43wXPnbgWmJRiQiIhUp7hjKmQBmVuXubdHrD7j7m6UMTkREKkesMRQzO8TMHgS+XVD8gpk9YGYDShOaiIhUkriD8j+M/r2zoOwMoCcwI9GIRESkIsVNKGcC49395XyBuy8DrgQaShGYiIhUlriD8n8l3Nz4YlH5YYTB+VjMrIbQoxkDbAGmufvUDuoeB8wETgVeAf7N3e+N9lUDbwMHFh3WV+M6IiLlETeh3AncaWbXAc9GZScANwA/68T73QScDpwNDAJ+bmar3P2ewkpm1ht4GJgPDAVGER5B/HF3fx44GqgBjiIkpryNnYhFREQSFDeh3EC4PPZdYGBU9jpwC/C9OA2Y2UHAOOBcd88AGTObClwB3FNU/fOEB3p90d23AsvMbCThGfbPA0OAVR3cwS8iImUQd9rwDuA6M/sW0I9wmau6k5eXhhJ6FQsKyhZE7R7g7tsKys8CZkfJJB/DOQX7hwDeifcWEZESi3unfB3QBCx090lR2V/MbBFwibuvj9HMoUDW3Quf9riOsFLxQGBtQfkxwGIzmwmcH+37prs/EO0fAhxsZo8DxwKLga+6u5KMiEiZdOYRwAA/KSg7A7iVMMj+v2K0UcvO4x0UbNcUlR8MXEMYlB8NjATuM7NPRJfL6oHewJeBTcDXgUfMrN7dOxxH2bJlCy0tLTFCFWlfLpfb5wXw0pbL5fS5l1TETShnAicXPkjL3ZeZ2ZXAH2K20cquiSO/nSsq3wY85+7XRtuLzewM4FLgMkIy6+HubwOY2UXAauA84K6OAqipqaG+vj5muCK7qq2tpXXP1bqU2tpafe5ln2QymVj14v6xlZ82XOwwwuB5HGuAvmZW+DCuOkIvJVtU91XghaIyB44AcPfWfDLJbwMvo5WPRUTKZl+nDU8GfhqzjSWEwfzTgUejsuFApmhAHuBJdr1hcgiwwswOAFYCV+WnG0fTjI9l1yQkIiIp2Zdpw68Rpg3/Nk4D7p4zsyZgppldQuidXE24jJUf+N/o7psJS71M
MLPvAXcA5xLuXTnF3bdF64pNMbNXCb2bKYSB+/tjno+IiCQs1iUvd9/h7te5+yHABwk3FN4IXAA814n3mwgsJNyweDsw2d1nRfvWAhdG77cKGEEYu/kzIen8vbsvjuqOB+YAs4Cno7KGdno6IiKSkrg9FMysB2HGVSNwDmFhyCcJNyHG4u656PjGdvZVFW0/BZzSQTtvE5LK+LjvLSIipbXHhGJmQ4FLgIuAAYSexAHAOe7eXNLoRESkYnSYUMzsq4SexMeAZYQ1u/4LeIYwuK5lT0RE5F2766H8ByGR/CMwK1p+BQAzK3VcIiJSYXaXUC4GxhKmBd9mZs3AbwBd5hIRkV10OMvL3e929/MI03uvAQ4BfklYZbga+EzRTYoiItKN7XHasLu/6e4/cvezgMOBa4EM4R6UtWY2vcQxiohIBYg9bRjA3dcC04BpZjaYMPNrbCkCExGRytKphFLI3ZcTll6ZnFw4IiJSqSptJW4REemilFBERCQRSigiIpIIJRQREUmEEoqIiCRCCUVERBKhhCIiIolQQhERkUQooYiISCKUUEREJBFKKCIikgglFBERSYQSioiIJEIJRUREEqGEIiIiiVBCERGRRCihiIhIIpRQREQkEUooIiKSCCUUERFJhBKKiIgk4oA038zMaoAZwBhgCzDN3ad2UPc4YCZwKvAK8G/ufm/B/s8B3wE+BDwEjHP310p7BiIi0pG0eyg3AacDZwOXAZPMbGxxJTPrDTxMSCRDgVuBu81sSLT/ZKAJuJGQcPoAd6VxAiIi0r7UEoqZHQSMA65094y7/xaYClzRTvXPA1uBL7r7MnefDswDTov2jwfudfefufvSqP5nzWxwyU9ERETalWYPZShQAywoKFsAnGxmxZfezgJmu/vWfIG7n+Pud0abpwKPF+xbDazkvYQjIiIpS3MM5VAg6+6tBWXrgF7AQGBtQfkxwGIzmwmcH+37prs/UNDWq0XtrwMGlSJwERHZszQTSi1hIL5QfrumqPxg4BrCoPxoYCRwn5l9wt0zu2mruJ2dK2zZQktLy16ELhLkcrmKmxqZy+X0uZdUpJlQWtn1Cz+/nSsq3wY85+7XRtuLzewM4FLCYH5HbRW3s3OFmhrq6+s7G7fIu2pra2ndc7Uupba2Vp972SeZTCZWvTT/2FoD9DWzXgVldYSeRbao7qvAC0VlDhxR0FZd0f46dr5sJiIiKUozoSwB3iFMG84bDmTcfVtR3SeBE4vKhgArotdPRccCYGaHE5LNUwnGKyIinZDaJS93z5lZEzDTzC4h9CiuJlzGwszqgI3uvhn4ITDBzL4H3AGcS7h35ZSouduAx8zs94QkcgvQ7O7L0jofERHZWdrjixOBhcB84HZgsrvPivatBS4EcPdVwAjgTODPhKTz9+6+ONr/JOGelkmE3sxGoDG90xARCdavX8/48ePZsGFDuUMpu1SXXnH3HOGLf5cvf3evKtp+ivd6JO211US4W15EpGyamppYunQpTU1NTJw4sdzhlFWlzYAUEeky1q9fT3NzM21tbTQ3N3f7XooSiojIXmpqaqKtrQ2AHTt20NTUvS+aKKGIiOylhx56iK1bwwpRW7duZd68eWWOqLyUUERE9tKIESPo2bMnAD179mTkyJFljqi8lFBERPZSY2MjVVVhPlF1dTWNjd17sqkSiojIXhowYACjRo2iqqqKUaNG0b9//3KHVFapThsWEdnfNDY2smLFim7fOwElFBGRfTJgwABmzJhR7jC6BF3yEhGRRCihiIhIIpRQREQkEUooIiKSCCUUERFJhBKKiIgkQglFREQSoYQiIiKJUEIREZFEKKGIiEgilFBERCQRSigiIpIIJRQREUmEEoqIiCRCCUVERBKhhCIiIolQQhERkUQooYiISCL0CGCRTlq1qQc3LuqdaJsb36kC4P292hJtd9WmHvxNoi2KdEwJRaQTBg8eXJJ2Vy9bBsAhRx2baLt/Q+liFimmhCLSCRMm
TChpu9OnTy9J+yJpSDWhmFkNMAMYA2wBprn71A7qzgNGFBWf7+73mVk18DZwYNH+vu7+ZsJhi4hIDGn3UG4CTgfOBgYBPzezVe5+Tzt1hwBjgccKyt6I/j0aqAGOIiSmvI1JBywiIvGkllDM7CBgHHCuu2eAjJlNBa4A7imq2wc4DHja3f/STnNDgFXuvrLEYVeM9evXc8MNN3D99dfTv3//cocjIt1QmtOGhxJ6FQsKyhYAJ5tZcWIbArQCqzpoawjgiUdYwZqamli6dClNTU3lDkVEuqk0E8qhQNbdWwvK1gG9gIFFdYcAbwL3mNlaM3vGzEYX7T/YzB6P9s8xMytp9F3Y+vXraW5upq2tjebmZjZs2FDukESkG0pzDKWWncc7KNiuKSqvB3oDs4EpwPnA/WZ2urs/XbD/y8Am4OvAI2ZW7+4djqNs2bKFlpaWfT6RruYXv/gF27dvB2Dbtm3cfPPNXHTRRWWOSjojl8sB7JefT+k+0kworeyaOPLbuaLyrwNTCmZs/dHMhgGXAU8DZwA93P1tADO7CFgNnAfc1VEANTU11NfX79NJdEWLFi16N6Fs376dhQsX8u1vf7vMUcncuXOZM2dOrLpr1qwB4LbbbotVf/To0TQ0NOx1bCKdkclkYtVL85LXGqCvmfUqKKsj9FKyhRXdfXs7039bCAP1uHtrPpnkt4GX8/u7mxEjRtCzZ08AevbsyciRI8sckXRW//79NZlCKl6aPZQlwDuEacOPRmXDgYy7byusaGa/Bl5z938pKD4BeD4awF8JXJWfbmxmvYFjgRdKegZdVGNjI83NzQBUV1fT2NhY5ogEoKGhQb0I6VZSSyjunjOzJmCmmV1C6J1cDVwKYGZ1wEZ330wYO7nDzBYAC4F/JCSff3b3bWb2IDDFzF4l9G6mAGuB+9M6n65kwIABjBo1itmzZzNq1Cj9pSsiZZH2asMTCQliPnA7MNndZ0X71gIXArj7XYRkMxl4DhgNfNbdX4rqjgfmALMIYyoADcU9ne6ksbGR448/Xr0TESmbqra2ZFc37cpaWlra9sdBeRGRUspkMplhw4adtKd6eh6KiIgkQglFREQSoYQiIiKJUEIREZFEKKGIiEgiutUTG3O53PpMJqMl70VEOufIOJW61bRhEREpHV3yEhGRRCihiIhIIpRQREQkEUooIiKSCCUUERFJRLeaNlxpzOwowoPDjnX35WUORxJU+LsF3gK+D4wE2oAHgIntPGROUmZmHwcOdvcnzOyzwE+BPoSnys4Aera3yrmZ3QgMd/dPR9uNwDTCd+4Ru3tUeSVTD0Wk/H4JDAJGEB7V8DHgzrJGJHm/ASx6/T1gLvBR4A7g0E48MuMW4AfA0P01mYB6KCLlVgd8BjjO3R3AzL4CPGFmte6eK2t0UlXw+v3Ak+6+Itr+SyfaeT/wRMGx+yUllMpwgZmNJ3wof0p4/PE2MzsNuInweOTXgZvc/Qdmdj7wY2Cgu++Iuu2LgQvc/TcAZvYc8F13/0U5TkjetRH4W2BZQVkb4epBDaCEUiZm9ijhDvEfmdmPouI7zOxi4HrgEaJLXmY2hNBrORH4PbA8auMowqVNgHlm1uTul6R1DmnTJa/KMA4YC5wLnA9cb2b1hCdfPk5IKN8CpprZGOB3hOu8x0fHf4rwJfVJePdxy/XAvBTPQdq32d3nuPuOgrKvAM+5+xvlCkoAuAB4BbgKOKrg9QWFlcysBvhvQuI4kXCZbFy0ezVwaPT6c4Tf7X5LCaUyTHT337v7Y8B1wD8TPrBL3f1ad3/R3ZsIg4Rfc/e3gKeAT0fHfwpoJkoowNnAs+7+eponIXtmZl8FxhAely1l5O5ZYDvwlruvLHidLap6NjAQuNzdX3D3mcB9URvb3T1/aeyN/Xn8BJRQKsXCgtfPAv0JPYyni+r9ATguev0g8GkzqwLOIFwaO9HM3kcY/G0uacTSaWZ2FfAfwAR3f7jc8UhsQ4CX3H1TQdmicgVT
TkoolaHwckj+d9baTr0evDcu9iDwPwgzhnLu/ihhnOUUwl9Uc0sSqewVM7sB+HfgK+7+g3LHI51WVbS9tSxRlJkSSmX4WMHrU4C1QAvwiaJ6pwEevc4QuuhXAE9EZU8QLpcdCDxTqmClc6JZXdcBl7n7jHLHIzuJsxz7n4DBZta3oOyEEsXTpWmWV2WYbmZfBHoDkwl/yc4CrjSz7wA/A04Fvkw06BfN7noYaAQmRO08TpgL/yt3357qGUhHPkS4v+E2YHY0YSLvdf2eym4TcJyZ9dtNnYeBlcBPzOwbhP8X/4EwjtmtqIdSGWYQBvn+E/g58H13f4Uw3fSzwHOEv3CvcvcfFxz3INCL93oojxO65rrc1XUcT5ge/C+Enmfhz4fLGJcEtwKXAT/qqIK7byXckNqHcGXgUmBmKtF1MXrAloiIJEI9FBERSYQSioiIJEIJRUREEqGEIiIiiVBCERGRRCihiIhIIpRQREQkEUooIiKSCCUUERFJxP8HeJrjBihqSv8AAAAASUVORK5CYII=\n",
      "text/plain": [
       "<matplotlib.figure.Figure at 0x25b2a909eb8>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "ax = sns.boxplot(data=search_results, width=0.4)\n",
    "ax.set_ylabel('Accuracy', size=14)\n",
    "ax.tick_params(labelsize=14)\n",
    "# plt.savefig('tfidf_gridcv_results.png')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<center>Figure 4-4: Distribution of classifier accuracy under each feature set and regularization setting. Accuracy is measured as the average accuracy over 5-fold cross-validation</center>\n",
    "\n",
    "\n",
    "In Figure 4-4, the L2-normalized features look terrible, but don't be fooled: the low accuracy is due to a poorly tuned regularization parameter. Suboptimal hyperparameters can lead to quite wrong conclusions. If we train a model with the best hyperparameter setting for each feature set, the test accuracies of the different feature sets are very close.\n",
    "\n",
    "### Example 4-6: Final training and testing to compare the different feature sets"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\linear_model\\logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.\n",
      "  FutureWarning)\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Test score with bow features: 0.7682606410930111\n",
      "Test score with l2-normalized features: 0.7856016815554387\n",
      "Test score with tf-idf features: 0.792433000525486\n"
     ]
    }
   ],
   "source": [
    "m1 = simple_logistic_classify(\n",
    "    X_tr_bow, y_tr, X_te_bow, y_te, 'bow', _C=bow_search.best_params_['C'])\n",
    "m2 = simple_logistic_classify(\n",
    "    X_tr_l2,\n",
    "    y_tr,\n",
    "    X_te_l2,\n",
    "    y_te,\n",
    "    'l2-normalized',\n",
    "    _C=l2_search.best_params_['C'])\n",
    "m3 = simple_logistic_classify(\n",
    "    X_tr_tfidf,\n",
    "    y_tr,\n",
    "    X_te_tfidf,\n",
    "    y_te,\n",
    "    'tf-idf',\n",
    "    _C=tfidf_search.best_params_['C'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([0.54912123, 0.72239748, 0.76520955, 0.75259126, 0.74132492,\n",
       "       0.72577738])"
      ]
     },
     "execution_count": 49,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "bow_search.cv_results_['mean_test_score']"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<center>Table 4-3. Final classification accuracy of BOW, tf-idf, and L2-normalized features</center>\n",
    "\n",
    "Feature Set |Test Accuracy\n",
    "--------|-----------\n",
    "Bag-of-Words | 0.7682606410930111\n",
    "L2-normalized | 0.7856016815554387\n",
    "Tf-Idf | 0.792433000525486\n",
    "\n",
    "Proper tuning improved the accuracy of every feature set, and after regularization all three feature sets yield very similar accuracy with logistic regression. The tf-idf model is slightly more accurate, but the difference may not be statistically significant. These results are thoroughly mystifying: if feature scaling works no better than vanilla bag-of-words, why bother with it? If tf-idf accomplishes nothing, why all the fuss about it? We will explore the answer in the rest of this chapter.\n",
    "\n",
    "## Deep Dive: What Is Happening?\n",
    "To understand what lies behind these results, we have to think about how the model makes use of the features. For a linear model like logistic regression, this happens through an intermediary object called the data matrix.\n",
    "The data matrix contains data points represented as fixed-length flat vectors. With bag-of-words vectors, the data matrix is also known as the document-term matrix. Figure 3-1 shows a bag-of-words vector in vector form, and Figure 4-1 illustrates four bag-of-words vectors in feature space. To form the document-term matrix, simply take the document vectors, lay them out flat, and stack them on top of one another. The columns represent all possible words in the vocabulary. Since most documents contain only a small fraction of all possible words, most of the entries in this matrix are zero; it is a sparse matrix.\n",
    "\n",
    "![Figure 4-5: A document-term matrix of 5 documents and 7 words](images/chapter4/4-5.png)\n",
    "\n",
    "<center>Figure 4-5: A document-term matrix of 5 documents and 7 words</center>\n",
    "\n",
    "Feature scaling methods are essentially column operations on the data matrix. In particular, tf-idf and L2 normalization both multiply an entire column (an n-gram feature, for example) by a constant.\n",
    "\n",
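    "Viewed as matrix algebra (this formulation is an addition for clarity, not part of the original text), scaling every column by a constant amounts to right-multiplying the data matrix by a diagonal matrix:\n",
    "\n",
    "$$\n",
    "\\tilde{X}=X \\operatorname{diag}\\left(s_{1}, s_{2}, \\ldots, s_{n}\\right),\n",
    "$$\n",
    "\n",
    "where, under the log-transformed definition of idf, $s_{w}=\\log \\frac{N}{\\# \\text { documents containing } w}$ for tf-idf, and $s_{w}=1$ for plain bag-of-words.\n",
    "\n",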
    "## Tf-Idf = Column Scaling\n",
    "Tf-idf and L2 normalization are both column operations on the data matrix. As discussed in Appendix A, training a linear classifier boils down to finding the best linear combination of the features, which are the column vectors of the data matrix. The solution space is characterized by the column space and the null space of the data matrix, and the quality of the trained linear classifier directly depends upon them. A large column space means that there is little linear dependence between the features, which is generally good. The null space contains \"novel\" data points that cannot be represented as linear combinations of existing data; a large null space can be problematic. (A review of concepts such as linear decision surfaces, eigendecomposition, and the fundamental subspaces of a matrix is highly recommended for readers who wish to follow this discussion; see Appendix A.)\n",
    "\n",
    "How do column scaling operations affect the column space and null space of the data matrix? The answer: not very much. But there is one small difference between tf-idf and L2 normalization.\n",
    "\n",
    "The null space of the data matrix can be large for a couple of reasons. First, many datasets contain data points that are very similar to one another, which makes the effective row space small compared to the number of data points. Second, the number of features can be much larger than the number of data points, and bag-of-words is particularly good at creating giant feature spaces. In our Yelp example, there are 29K reviews in the training set but 47K features. Moreover, the number of distinct words usually grows with the number of documents, so adding more documents does not necessarily decrease the feature-to-data ratio or shrink the null space.\n",
    "\n",
    "With bag-of-words, the column space is relatively small compared to the number of features. Words that tend to appear in the same documents in roughly the same counts yield column vectors that are nearly linearly dependent, which keeps the column space from being as full rank as it could be. This is called rank deficiency. (Much like animals can be deficient in vitamins and minerals, matrices can be rank deficient, and then their output space is not as fluffy as it should be.)\n",
    "\n",
    "Rank-deficient row and column spaces lead to the model being overly provisioned. A linear model outfits a weight parameter for every feature in the dataset. If the row and column spaces were full rank$^1$, the model would let us generate any target vector in the output space. When they are rank deficient, the model has more degrees of freedom than it needs, which makes it trickier to pin down a solution.\n",
    "\n",
    "Can the rank deficiency of the data matrix be fixed by feature scaling? Let's take a look.\n",
    "\n",
    "The column space is defined as the set of all linear combinations of the column vectors: $a_{1} v_{1}+a_{2} v_{2}+\\ldots+a_{n} v_{n}$. Suppose feature scaling replaces one column vector by a constant multiple, $\\tilde{v}_{1}=c v_{1}$. We can still generate any of the original linear combinations by replacing $a_1$ with $\\widetilde{a_{1}}=\\frac{a_{1}}{c}$. So feature scaling does not change the rank of the column space. Similarly, it does not affect the rank of the null space, because a scaled feature column can be counteracted by inversely scaling the corresponding entry of the weight vector.\n",
    "\n",
    "There is one catch, however. If the scalar is 0, there is no way to recover the original linear combination; $v_1$ is gone. If that vector happened to be linearly independent of all the other columns, then we have effectively shrunk the column space and enlarged the null space.\n",
    "\n",
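    "The scaling argument can be stated compactly (this short derivation is an addition for clarity, using the same notation as the text):\n",
    "\n",
    "$$\n",
    "c \\neq 0: \\quad \\operatorname{span}\\{c v_{1}, v_{2}, \\ldots, v_{n}\\}=\\operatorname{span}\\{v_{1}, v_{2}, \\ldots, v_{n}\\}, \\quad \\text {since } a_{1}(c v_{1})=(a_{1} c) v_{1}\n",
    "$$\n",
    "$$\n",
    "c=0: \\quad \\operatorname{span}\\{0, v_{2}, \\ldots, v_{n}\\}=\\operatorname{span}\\{v_{2}, \\ldots, v_{n}\\} \\subseteq \\operatorname{span}\\{v_{1}, v_{2}, \\ldots, v_{n}\\}\n",
    "$$\n",
    "\n",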
    "If that vector happens to be uncorrelated with the target output, then this effectively prunes away a noise signal, which is a good thing. It turns out that this is the key difference between tf-idf and L2 normalization. L2 normalization never produces a norm of zero unless the vector contains all zeros; if the vector is close to zero, its norm is also close to zero, and dividing by a small norm accentuates the vector and makes it longer.\n",
    "\n",
    "Tf-idf, on the other hand, can generate scaling factors that are close to zero, as shown in Figure 4-2. This happens when a word appears in a large number of documents in the training set. Such a word is unlikely to be strongly correlated with the target vector, and pruning it away lets the model focus on the other directions in the column space and find better solutions. The improvement in accuracy is probably not large, though, because few noisy directions can be pruned this way.\n",
    "\n",
    "Where feature scaling, both L2 normalization and tf-idf, does make a difference is in the convergence speed of the model. This is a sign that the data matrix now has a much smaller condition number; in fact, L2 normalization makes the condition number nearly 1. But a better condition number does not automatically mean a better solution. During this experiment, L2 normalization converged much faster than either BOW or tf-idf, but it is also more sensitive to overfitting: it requires more regularization and is more sensitive to the number of iterations during optimization.\n",
    "\n",
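    "As a reminder (this definition is an addition, not part of the original text), the condition number of a matrix $A$ is the ratio of its largest to smallest singular value:\n",
    "\n",
    "$$\n",
    "\\kappa(A)=\\frac{\\sigma_{\\max }(A)}{\\sigma_{\\min }(A)}\n",
    "$$\n",
    "\n",
    "A large $\\kappa$ means some directions in feature space are stretched far more than others, which slows down iterative optimizers; scaling that evens out the column norms pushes the singular values closer together and hence $\\kappa$ toward 1.\n",
    "\n",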
    "## Summary\n",
    "\n",
    "In this chapter, we used tf-idf as an entry point into a detailed analysis of how feature transformations can affect (or not affect) the model. Tf-idf is an example of feature scaling, so we contrasted its performance with that of another feature scaling method, L2 normalization.\n",
    "\n",
    "The results were not as one might expect. Tf-idf and L2 normalization do not improve the final classifier's accuracy above plain bag-of-words. After acquiring some statistical modeling and linear algebra chops, we realize why: neither of them changes the column space of the data matrix.\n",
    "\n",
    "One small difference between the two is that tf-idf can \"stretch\" word counts as well as \"compress\" them. In other words, it makes some counts bigger and others close to zero. Therefore, tf-idf can completely eliminate uninformative words.\n",
    "\n",
    "Along the way, we also discovered another effect of feature scaling: it improves the condition number of the data matrix, making linear models much faster to train. Both L2 normalization and tf-idf have this effect.\n",
    "\n",
    "To summarize: the right feature scaling can be helpful for classification. The right scaling accentuates the informative words and downweights the common words; it can also improve the condition number of the data matrix. The right scaling is not necessarily uniform column scaling.\n",
    "\n",
    "This story is a nice illustration of the difficulty of analyzing the effects of feature engineering in the general case. Changing the features affects the training process and the models that ensue. Linear models are the simplest models to understand, yet it still takes very careful experimentation and deep mathematical knowledge to tease apart the theoretical and practical impacts. This would be mostly impossible with more complicated models or feature transformations.\n",
    "\n",
    "## Bibliography\n",
    "\n",
    "Strang, Gilbert. 2006. *Linear Algebra and Its Applications*. Fourth edition. Brooks Cole Cengage.\n",
    "\n",
    "$^1$ Strictly speaking, the row space and column space of a rectangular matrix cannot both be full rank. The maximum rank for both subspaces is the smaller of m (the number of rows) and n (the number of columns). This is what we mean by full rank here."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
