{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Data Cleaning 数据清洗"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Introduction  介绍"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "This notebook <a title=\"梳理了一遍，捋了一遍，过了一遍\">goes through</a> a necessary step of any data science project - data cleaning. Data cleaning is a time-consuming and unenjoyable task, yet it's a very important one. Keep in mind, \"**garbage in, garbage out**\". Feeding dirty data into a model will give us meaningless results.\n",
     "\n",
     "本笔记梳理了所有数据科学项目都必须有的一个步骤-数据清洗。数据清洗是一件不太令人愉快且耗时的事情，但它又极其重要。记住，\"**输入垃圾，输出垃圾**\"：给模型喂入脏数据，反馈给我们的结果将是无意义的。\n",
    "\n",
    "Specifically, we'll be walking through:\n",
    "\n",
     "具体来说，我们将依次完成：\n",
    "\n",
    "1. **Getting the data** - in this case, we'll be scraping data from a website\n",
    "  \n",
    "  **获取数据** - 在这个小节中，我们将从网站上抓取数据（笔记：这就是各种爬虫技术研究的领域）。\n",
    "2. **Cleaning the data** - we will walk through popular text pre-processing techniques\n",
    "\n",
    "  **数据清洗** - 我们将梳理一下常见的文本预处理技术\n",
    "3. **Organizing the data** - we will organize the cleaned data into a way that is easy to input into other algorithms\n",
    "\n",
     "  **组织数据** - 我们将把清洗后的数据组织成便于输入到其它算法中的形式。\n",
    "\n",
    "The output of this notebook will be clean, organized data in two standard text formats:\n",
    "\n",
     "本笔记的输出是两种标准文本格式的、干净且组织好的数据：\n",
    "\n",
    "1. **Corpus** - a collection of text\n",
    "\n",
    "   **语料库** - 一种文本的集合\n",
    "2. **Document-Term Matrix** - word counts in matrix format\n",
    "\n",
    "  **词频矩阵** - 矩阵形式的词频统计结果\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Problem Statement 问题陈述"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "As a reminder, our goal is to look at transcripts of various comedians（不同喜剧演员的演出文字记录）and note their similarities and differences. Specifically, I'd like to know if Ali Wong's comedy style is different from other comedians', since she's the comedian that got me interested in stand up comedy（单口喜剧）.\n",
     "\n",
     "提醒一下，我们的目标是查看各位喜剧演员的演出文字记录，并记录他们的相似与不同之处。具体来说，我想知道 Ali Wong 的喜剧风格是否与其他喜剧演员不同，因为她是吸引我喜欢上单口喜剧的人。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Getting The Data 获取数据"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Luckily, there are wonderful people online that keep track of stand up routine transcripts（单口喜剧演出的文字记录）. [Scraps From The Loft](http://scrapsfromtheloft.com) makes them available for non-profit and educational purposes.\n",
     "\n",
     "很幸运，网上有些很棒的人一直在整理单口喜剧演出的文字记录。[Scraps From The Loft](http://scrapsfromtheloft.com) 网站将它们开放用于非营利和教育用途。\n",
    "\n",
    "To decide which comedians to **look** into, I went on IMDB and looked specifically at comedy specials that were released in the past 5 years. To **narrow it down** further, I looked only at those with greater than a 7.5/10 rating and more than 2000 votes. If a comedian had multiple specials that fit those requirements, I would pick the most highly rated one. I ended up with a dozen comedy specials.（1、数据源，2、设定筛选条件）"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Web scraping, pickle imports\n",
    "import requests\n",
     "from bs4 import BeautifulSoup\n",
    "import pickle\n",
    "\n",
    "# Scrapes transcript data from scrapsfromtheloft.com\n",
    "def url_to_transcript(url):\n",
    "    '''Returns transcript data specifically from scrapsfromtheloft.com.'''\n",
    "    page = requests.get(url).text\n",
    "    soup = BeautifulSoup(page, \"lxml\")\n",
    "    text = [p.text for p in soup.find(class_=\"post-content\").find_all('p')]\n",
    "    print(url)\n",
    "    return text\n",
    "\n",
    "# URLs of transcripts in scope\n",
    "urls = ['http://scrapsfromtheloft.com/2017/05/06/louis-ck-oh-my-god-full-transcript/',\n",
    "        'http://scrapsfromtheloft.com/2017/04/11/dave-chappelle-age-spin-2017-full-transcript/',\n",
    "        'http://scrapsfromtheloft.com/2018/03/15/ricky-gervais-humanity-transcript/',\n",
    "        'http://scrapsfromtheloft.com/2017/08/07/bo-burnham-2013-full-transcript/',\n",
    "        'http://scrapsfromtheloft.com/2017/05/24/bill-burr-im-sorry-feel-way-2014-full-transcript/',\n",
    "        'http://scrapsfromtheloft.com/2017/04/21/jim-jefferies-bare-2014-full-transcript/',\n",
    "        'http://scrapsfromtheloft.com/2017/08/02/john-mulaney-comeback-kid-2015-full-transcript/',\n",
    "        'http://scrapsfromtheloft.com/2017/10/21/hasan-minhaj-homecoming-king-2017-full-transcript/',\n",
    "        'http://scrapsfromtheloft.com/2017/09/19/ali-wong-baby-cobra-2016-full-transcript/',\n",
    "        'http://scrapsfromtheloft.com/2017/08/03/anthony-jeselnik-thoughts-prayers-2015-full-transcript/',\n",
    "        'http://scrapsfromtheloft.com/2018/03/03/mike-birbiglia-my-girlfriends-boyfriend-2013-full-transcript/',\n",
    "        'http://scrapsfromtheloft.com/2017/08/19/joe-rogan-triggered-2016-full-transcript/']\n",
    "\n",
    "# Comedian names\n",
    "comedians = ['louis', 'dave', 'ricky', 'bo', 'bill', 'jim', 'john', 'hasan', 'ali', 'anthony', 'mike', 'joe']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# # Actually request transcripts (takes a few minutes to run)\n",
    "# transcripts = [url_to_transcript(u) for u in urls]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# # Pickle files for later use #pickle:备用\n",
    "\n",
    "# # Make a new directory to hold the text files\n",
    "# !mkdir transcripts\n",
    "\n",
    "# for i, c in enumerate(comedians):\n",
    "#     with open(\"transcripts/\" + c + \".txt\", \"wb\") as file:\n",
    "#         pickle.dump(transcripts[i], file) # 备份到文件中"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load pickled files\n",
    "data = {} # {} 是创建空字典(字典相当于java中的map)。\n",
    "for i, c in enumerate(comedians):   # i 为数组index， c 为数组元素。\n",
    "    with open(\"transcripts/\" + c + \".txt\", \"rb\") as file:\n",
    "        data[c] = pickle.load(file) # 从备份文件中读取到字典变量中"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "# data # exercise: test print the data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "# data.items() # exercise: test print the data items."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Double check to make sure data has been loaded properly\n",
     "data.keys()\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# More checks\n",
    "data['louis'][:2]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Cleaning The Data 数据清洗"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When dealing with numerical data, data cleaning often involves removing null values and duplicate data, dealing with outliers, etc. With text data, there are some common data cleaning techniques, which are also known as text pre-processing techniques.\n",
    "\n",
     "当处理数值类型数据时，数据清洗常包括移除空值和重复数据、处理<a title=\"离群值（Outliers) ：是指严重偏离平均水平的观测数据\">离群值</a>等。对文本型数据，有一些常用的数据清洗技术，也称之为文本预处理技术。\n",
    "\n",
     "With text data, this cleaning process can go on forever. There's always an exception to every cleaning step. So, we're going to follow the <B>MVP (minimum viable product) approach - start simple and iterate</B>. Here are a bunch of things you can do to clean your data. We're going to execute just the common cleaning steps here and the rest can be done at a later point to improve our results.\n",
     "\n",
     "对于文本数据，清理过程可以无休止地进行下去，而且每个清理步骤总有例外。所以，我们将按照 <B>MVP（最小可行产品）方式：从简单开始，不断迭代</B>。下面列出了一系列可以用来清理数据的操作，这里我们只执行其中常见的清理步骤，其余的可以在后续改进结果时再做。\n",
    "\n",
    "**Common data cleaning steps on all text:常见的文本清理步骤**\n",
    "* Make text all lower case 将文本改成小写\n",
     "* Remove punctuation 删除标点符号（注：标点有时对文意有影响，尤其问号、叹号等）\n",
    "* Remove numerical values 删除数值\n",
    "* Remove common non-sensical text (/n) 删除常见无意义文字\n",
    "* Tokenize text [符号化（笔记：即分词、断句等）](https://blog.csdn.net/weixin_42167712/article/details/110727139)\n",
    "* Remove stop words  [停用词](https://zhuanlan.zhihu.com/p/335347401)\n",
    "\n",
    "**More data cleaning steps after tokenization:**\n",
    "* <a title=\"词干提取/词干化\">Stemming</a> / <a title=\"Lemmatization is the process of identifying the base, non-inflected form of a word(词元化/词形还原 是确定一个单词的固定的基本部分的过程).\">lemmatization</a> [词干提取/词形还原](https://blog.csdn.net/m0_37744293/article/details/79065002)\n",
    "* Parts of speech tagging [词性标注](https://blog.csdn.net/kunpen8944/article/details/83241051)\n",
    "* Create bi-grams or tri-grams 创建[二元文法模型或三元文法模型](https://zhuanlan.zhihu.com/p/32829048) <a href=\"http://blog.sciencenet.cn/blog-713101-797384.html\">N-gram表</a>\n",
    "* Deal with <a title=\"拼写错误\">typos</a>  处理拼写错误\n",
    "* And more..."
   ]
  },
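   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# 一个最小示例（非原笔记内容，仅作演示）：用标准库演示上面列出的三个步骤：\n",
     "# 分词、去停用词、二元文法。A minimal illustrative sketch (not part of the\n",
     "# original workflow) of three of the steps above: tokenization, stop word\n",
     "# removal and bi-grams. The sentence and tiny stop word list are made up.\n",
     "import re\n",
     "\n",
     "sample = 'The cat sat on the mat and the cat laughed'\n",
     "tokens = re.findall(r'[a-z]+', sample.lower())       # tokenize: split into words\n",
     "stop_words = {'the', 'and', 'on', 'a'}               # toy stop word list\n",
     "tokens = [t for t in tokens if t not in stop_words]  # remove stop words\n",
     "bigrams = list(zip(tokens, tokens[1:]))              # bi-grams: adjacent word pairs\n",
     "print(tokens)   # ['cat', 'sat', 'mat', 'cat', 'laughed']\n",
     "print(bigrams)  # [('cat', 'sat'), ('sat', 'mat'), ('mat', 'cat'), ('cat', 'laughed')]"
    ]
   },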
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Let's take a look at our data again\n",
    "next(iter(data.keys()))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Notice that our dictionary is currently in key: comedian, value: list of text format\n",
    "next(iter(data.values()))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# We are going to change this to key: comedian, value: string format\n",
    "def combine_text(list_of_text):\n",
    "    '''Takes a list of text and combines them into one large chunk of text.'''\n",
    "    combined_text = ' '.join(list_of_text)\n",
    "    return combined_text"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# def combinetmp(list_of_text):\n",
    "#     '''Takes a list of text and combines them into one large chunk of text.'''\n",
    "#     combined_text = ' '.join(list_of_text)\n",
    "#     return combined_text\n",
    "# vv = [\"cac 1.1\", 'dbd 1.2', 'dad', 'dc']\n",
     "# tt = {\"key\": [combinetmp(value)] for value in vv}\n",
    "# tt "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Combine it!\n",
    "data_combined = {key: [combine_text(value)] for (key, value) in data.items()} # 字典的 items() 方法返回可遍历的(键, 值) 元组数组"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# We can either keep it in dictionary format or put it into a pandas dataframe\n",
    "import pandas as pd\n",
     "pd.set_option('display.max_colwidth', 150)\n",
    "\n",
    "data_df = pd.DataFrame.from_dict(data_combined).transpose() # 应该去学习一下pandas的使用知识。\n",
    "data_df.columns = ['transcript'] # transcript:副本/抄本\n",
    "data_df = data_df.sort_index()\n",
    "data_df"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Let's take a look at the transcript for Ali Wong\n",
    "data_df.transcript.loc['ali']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Apply a first round of text cleaning techniques\n",
    "import re\n",
    "import string\n",
    "\n",
    "def clean_text_round1(text):\n",
    "    '''Make text lowercase, remove text in square brackets, remove punctuation and remove words containing numbers.'''\n",
    "    text = text.lower()  # 转为小写\n",
    "    text = re.sub('\\[.*?\\]', '', text) # 删除[] 内的内容\n",
    "    text = re.sub('[%s]' % re.escape(string.punctuation), '', text) # 删除标点符号\n",
    "    text = re.sub('\\w*\\d\\w*', '', text) # 删除包含数字的词\n",
    "    return text\n",
    "\n",
    "round1 = lambda x: clean_text_round1(x)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Let's take a look at the updated text\n",
    "data_clean = pd.DataFrame(data_df.transcript.apply(round1))\n",
    "data_clean"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Apply a second round of cleaning\n",
    "def clean_text_round2(text):\n",
    "    '''Get rid of some additional punctuation and non-sensical text that was missed the first time around.'''\n",
     "    text = re.sub('[‘’“”…]', '', text) # 删除弯引号和省略号\n",
    "    text = re.sub('\\n', '', text) # 删除换行符\n",
    "    return text\n",
    "\n",
    "round2 = lambda x: clean_text_round2(x)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Let's take a look at the updated text\n",
    "data_clean = pd.DataFrame(data_clean.transcript.apply(round2))\n",
    "data_clean"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**NOTE:** This data cleaning <a title=\"又名，又叫，又称\">aka</a> text pre-processing step could go on for a while, but we are going to stop for now. After going through some analysis techniques, if you see that the results don't make sense or could be improved, you can come back and make more edits such as:\n",
    "\n",
     "数据清洗（又称文本预处理）这一步可以一直做下去，但我们先到此为止。在尝试一些分析技术之后，如果你发现结果不合理或仍可改进，可以回来继续处理，例如：\n",
    "\n",
    "* Mark 'cheering' and 'cheer' as the same word (stemming / lemmatization)   <p><b>将 'cheering' 和  'cheer'标记为同一个词 (词干提取/词形还原)</b></p>\n",
    "* Combine 'thank you' into one term (bi-grams)   <p><b>将'thank you'结合为一个专用词 (bi-grams)</b></p>\n",
    "* And a lot more...   <p><b>以及其它许多...</b></p>"
   ]
  },
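   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# 一个玩具版的词干提取示例（非原笔记内容，仅作演示；实际工作中会用 nltk 的\n",
     "# PorterStemmer 或 WordNetLemmatizer）。A toy, hand-rolled stemmer sketch showing\n",
     "# how 'cheering' and 'cheer' can be mapped to the same token; real stemming is\n",
     "# much more subtle than stripping suffixes like this.\n",
     "def naive_stem(word):\n",
     "    for suffix in ('ing', 'ed', 's'):\n",
     "        if word.endswith(suffix) and len(word) > len(suffix) + 2:\n",
     "            return word[:-len(suffix)]\n",
     "    return word\n",
     "\n",
     "print(naive_stem('cheering'), naive_stem('cheer'))  # cheer cheer"
    ]
   },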
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Organizing The Data         组织数据"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "I mentioned earlier that the output of this notebook will be clean, organized data in two standard text formats:\n",
     "1. **Corpus** - a collection of text\n",
     "2. **Document-Term Matrix** - word counts in matrix format\n",
    "<p><b>我前面提到过，本笔记的输出结果将是以两种标准文本格式的干净有组织的数据:\n",
    "<p>1.语料库- 一个文本的集合</p>\n",
    "<p>2.词频矩阵-矩阵形式的词频数据</p>\n",
    "</b></p>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Corpus"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We already created a corpus in an earlier step. The definition of a corpus is a collection of texts, and they are all put together neatly in a pandas dataframe here.\n",
    "<p><b>前面的步骤中，我们已经创建了一个语料库。语料库的定义是一系列文本的集合，这里，他们被整齐地放在pandas dataframe中。</b></p>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Let's take a look at our dataframe\n",
    "data_df"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Let's add the comedians' full names as well\n",
    "full_names = ['Ali Wong', 'Anthony Jeselnik', 'Bill Burr', 'Bo Burnham', 'Dave Chappelle', 'Hasan Minhaj',\n",
    "              'Jim Jefferies', 'Joe Rogan', 'John Mulaney', 'Louis C.K.', 'Mike Birbiglia', 'Ricky Gervais']\n",
    "\n",
    "data_df['full_name'] = full_names\n",
    "data_df"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Let's pickle it for later use\n",
    "data_df.to_pickle(\"corpus.pkl\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Document-Term Matrix  词频矩阵"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For many of the techniques we'll be using in future notebooks, the text must be tokenized, meaning broken down into smaller pieces. The most common tokenization technique is to break down text into words. We can do this using scikit-learn's CountVectorizer, where every row will represent a different document and every column will represent a different word.\n",
    "\n",
     "对于我们在后续笔记中会用到的许多技术，文本必须先被分词，即打散成更小的片段。最常见的分词方式是把文本拆成单词。我们可以用<b> scikit-learn 的 [CountVectorizer](https://blog.csdn.net/weixin_38278334/article/details/82320307)</b>来实现，其中每行代表一个不同的文档，每列代表一个不同的词。\n",
    "\n",
    "In addition, with CountVectorizer, we can remove stop words. Stop words are common words that add no additional meaning to text such as 'a', 'the', etc.\n",
     "<p>此外，用 CountVectorizer 时我们还可以删除停用词。停用词是那些常见但不会给文本增添意义的词，例如 'a'、'the' 等。"
   ]
  },
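   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# 用标准库演示词频矩阵长什么样（非原笔记内容，玩具数据，仅作演示）。\n",
     "# A standard-library sketch of what a document-term matrix looks like, on made-up\n",
     "# toy documents; the next cell builds the real one with CountVectorizer.\n",
     "from collections import Counter\n",
     "\n",
     "docs = {'doc1': 'the joke was funny funny', 'doc2': 'the joke fell flat'}\n",
     "counts = {name: Counter(text.split()) for name, text in docs.items()}\n",
     "vocab = sorted(set(w for c in counts.values() for w in c))  # one column per word\n",
     "dtm = [[counts[name][w] for w in vocab] for name in docs]   # one row per document\n",
     "print(vocab)  # ['fell', 'flat', 'funny', 'joke', 'the', 'was']\n",
     "print(dtm)    # [[0, 0, 2, 1, 1, 1], [1, 1, 0, 1, 1, 0]]"
    ]
   },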
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# We are going to create a document-term matrix using CountVectorizer, and exclude common English stop words\n",
    "from sklearn.feature_extraction.text import CountVectorizer\n",
    "\n",
    "cv = CountVectorizer(stop_words='english')\n",
    "data_cv = cv.fit_transform(data_clean.transcript)\n",
     "data_dtm = pd.DataFrame(data_cv.toarray(), columns=cv.get_feature_names()) # 注：scikit-learn 1.0+ 请改用 cv.get_feature_names_out()\n",
    "data_dtm.index = data_clean.index\n",
    "data_dtm"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Let's pickle it for later use\n",
    "data_dtm.to_pickle(\"dtm.pkl\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Let's also pickle the cleaned data (before we put it in document-term matrix format) and the CountVectorizer object\n",
    "data_clean.to_pickle('data_clean.pkl')\n",
    "pickle.dump(cv, open(\"cv.pkl\", \"wb\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "## Additional Exercises"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1. Can you add an additional regular expression to the clean_text_round2 function to further clean the text?\n",
    "2. <a title=\"尝试\">Play around</a> with [CountVectorizer](https://www.jianshu.com/p/f9a2accf6554)'s parameters. What is [ngram_range](https://zhuanlan.zhihu.com/p/393510735)? What is [min_df and max_df](https://blog.csdn.net/weixin_46265255/article/details/120250624)?"
   ]
  }
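   ,
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# 练习提示示例（非原笔记内容，玩具数据，仅作演示）：ngram_range=(1, 2) 同时保留\n",
     "# 单词和两词短语；min_df 丢弃出现在过少文档中的词。A hedged sketch for the\n",
     "# exercises above, on toy documents: ngram_range=(1, 2) keeps phrases like\n",
     "# 'thank you'; min_df=2 keeps only terms appearing in at least 2 documents.\n",
     "from sklearn.feature_extraction.text import CountVectorizer\n",
     "\n",
     "docs = ['thank you very much', 'thank you all']\n",
     "cv_ngram = CountVectorizer(ngram_range=(1, 2)).fit(docs)\n",
     "print(sorted(cv_ngram.vocabulary_))  # includes the bi-gram 'thank you'\n",
     "\n",
     "cv_df = CountVectorizer(min_df=2).fit(docs)\n",
     "print(sorted(cv_df.vocabulary_))     # ['thank', 'you']"
    ]
   }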
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.7"
  },
  "toc": {
   "nav_menu": {},
   "number_sections": true,
   "sideBar": true,
   "skip_h1_title": false,
   "toc_cell": false,
   "toc_position": {},
   "toc_section_display": "block",
   "toc_window_display": false
  },
  "varInspector": {
   "cols": {
    "lenName": 16,
    "lenType": 16,
    "lenVar": 40
   },
   "kernels_config": {
    "python": {
     "delete_cmd_postfix": "",
     "delete_cmd_prefix": "del ",
     "library": "var_list.py",
     "varRefreshCmd": "print(var_dic_list())"
    },
    "r": {
     "delete_cmd_postfix": ") ",
     "delete_cmd_prefix": "rm(",
     "library": "var_list.r",
     "varRefreshCmd": "cat(var_dic_list()) "
    }
   },
   "types_to_exclude": [
    "module",
    "function",
    "builtin_function_or_method",
    "instance",
    "_Feature"
   ],
   "window_display": false
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
