{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 词袋模型(BOW)\n",
    "\n",
    "> https://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html#bags-of-words\n",
    "\n",
    "- 训练集中的所有文档，使用分词器(tokenizer)拆分成词条\n",
    "- 对每一个的出现的词条赋一个常量id值\n",
    "- 对第i个文档、计算每个词条w的出现次数，将次数存储在 X[i,j] 上，其中j是w的索引"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array(['an', 'apple', 'banana', 'is', 'second', 'the', 'this'],\n",
       "      dtype=object)"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from sklearn.feature_extraction.text import CountVectorizer\n",
    "\n",
    "corpus = [\n",
    "    'This is an apple',\n",
    "    'The second is banana',\n",
    "    'banana is banana',\n",
    "]\n",
    "\n",
    "v = CountVectorizer()\n",
    "x = v.fit_transform(corpus)\n",
    "v.get_feature_names_out()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(3, 7)"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[1, 1, 0, 1, 0, 0, 1],\n",
       "       [0, 0, 1, 1, 1, 1, 0],\n",
       "       [0, 0, 2, 1, 0, 0, 0]], dtype=int64)"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x.toarray()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "特点:\n",
    "- 文本的顺序被忽略"
   ]
  },
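  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal standard-library sketch of the point above (the sentences are illustrative): two texts containing the same words in a different order reduce to identical bags of words."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from collections import Counter\n",
    "\n",
    "# Two sentences with the same words in a different order\n",
    "a = 'this is an apple'.split()\n",
    "b = 'an apple is this'.split()\n",
    "\n",
    "# A bag of words keeps only the counts, so the two become indistinguishable\n",
    "Counter(a) == Counter(b)"
   ]
  },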
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# TF-IDF 算法\n",
    "\n",
    "* TF (Term Frequency): 某个词在文章中出现的次数 / 文章的总词数\n",
    "* IDF (Inverse Document Frequency): 逆文档频率 = log(文档总数 / (包含该词的文档数 + 1))\n",
    "   一个词出现得越多，则IDF越小越接近0。分母加1用于防止没有文档出现这个词\n",
    "* TF-IDF = TF * IDF\n",
    "\n",
    "特点：\n",
    "- 与词频成正比\n",
    "- 与IDF成反比"
   ]
  },
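  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A small standard-library sketch of the plain formulas above (corpus reused from the earlier cells). Note that sklearn's `TfidfVectorizer`, used below, computes a smoothed variant, idf = ln((1 + N) / (1 + df)) + 1, followed by L2 normalization, so its numbers differ from this plain version."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "docs = [\n",
    "    'this is an apple'.split(),\n",
    "    'this is also an apple'.split(),\n",
    "    'banana is banana'.split(),\n",
    "]\n",
    "\n",
    "def tf(term, doc):\n",
    "    # occurrences of the term / total number of terms in the document\n",
    "    return doc.count(term) / len(doc)\n",
    "\n",
    "def idf(term, docs):\n",
    "    # log(total documents / (documents containing the term + 1))\n",
    "    df = sum(1 for d in docs if term in d)\n",
    "    return math.log(len(docs) / (df + 1))\n",
    "\n",
    "def tfidf(term, doc, docs):\n",
    "    return tf(term, doc) * idf(term, docs)\n",
    "\n",
    "# 'banana' is frequent in the third document but rare in the corpus,\n",
    "# so it scores higher there than the ubiquitous 'is'\n",
    "tfidf('banana', docs[2], docs), tfidf('is', docs[2], docs)"
   ]
  },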
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['also' 'an' 'apple' 'banana' 'is' 'this']\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "array([[0.        , 0.52682017, 0.52682017, 0.        , 0.40912286,\n",
       "        0.52682017],\n",
       "       [0.56943086, 0.43306685, 0.43306685, 0.        , 0.33631504,\n",
       "        0.43306685],\n",
       "       [0.        , 0.        , 0.        , 0.95905588, 0.28321692,\n",
       "        0.        ]])"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 使用sklearn计算\n",
    "\n",
    "corpus = [\n",
    "    'This is an apple',\n",
    "    'This is also an apple',\n",
    "    'banana is banana',\n",
    "]\n",
    "\n",
    "from sklearn.feature_extraction.text import TfidfVectorizer\n",
    "\n",
    "tran = TfidfVectorizer()\n",
    "tfidf = tran.fit_transform(corpus)\n",
    "print(tran.get_feature_names_out())\n",
    "vectors = tfidf.toarray()\n",
    "vectors"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[1.        , 0.82203923, 0.11587052],\n",
       "       [0.82203923, 1.        , 0.09525011],\n",
       "       [0.11587052, 0.09525011, 1.        ]])"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 计算一下相似度\n",
    "from sklearn.metrics.pairwise import cosine_similarity\n",
    "cosine_similarity(vectors)"
   ]
  },
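  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What `cosine_similarity` computes for each pair of rows can be sketched in plain Python (toy vectors for illustration):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "def cosine(u, v):\n",
    "    # cos(theta) = (u . v) / (|u| * |v|)\n",
    "    dot = sum(a * b for a, b in zip(u, v))\n",
    "    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))\n",
    "\n",
    "# Same direction -> 1.0; orthogonal -> 0.0\n",
    "cosine([1.0, 2.0], [2.0, 4.0]), cosine([1.0, 0.0], [0.0, 1.0])"
   ]
  },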
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Word2vec\n",
    "\n",
    "- 将 **单词** 转换成向量表示(Word embeddings)，挖掘词和词之间的关系\n",
    "\n",
    "> https://tensorflow.google.cn/tutorials/text/word2vec?hl=zh-cn\n",
    "\n",
    "> [Efficient estimation of word representations in vector space](https://arxiv.org/pdf/1301.3781.pdf)\n",
    "\n",
    "> [Distributed representations of words and phrases and their compositionality](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf)\n",
    "\n",
    "上述论文提出了两种学习单词表示的方法：\n",
    "\n",
    "    CBOW（连续词袋模型）：根据周围的上下文单词预测中间单词。上下文由当前（中间）单词前后的几个单词组成。这种架构被称为词袋模型，因为上下文中的单词顺序并不重要。\n",
    "    SkipGram（连续跳字模型）：预测同一句子中当前单词前后一定范围内的单词。下面给出了一个工作示例。\n",
    "\n",
    "## CBOW\n",
    "\n",
    "- 一种神经网络，通过前后词来预览该位置词\n",
    "- 无监督学习，它能从未标注的数据中学习\n"
   ]
  }
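  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an illustrative sketch (not the actual neural network), the two objectives differ only in how (input, target) training pairs are cut from a sentence; the example sentence follows the TensorFlow tutorial linked above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def cbow_pairs(tokens, window=2):\n",
    "    # CBOW: the surrounding context predicts the middle word\n",
    "    pairs = []\n",
    "    for i, target in enumerate(tokens):\n",
    "        context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]\n",
    "        pairs.append((context, target))\n",
    "    return pairs\n",
    "\n",
    "def skipgram_pairs(tokens, window=2):\n",
    "    # Skip-gram: the middle word predicts each surrounding word\n",
    "    pairs = []\n",
    "    for i, center in enumerate(tokens):\n",
    "        for ctx in tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]:\n",
    "            pairs.append((center, ctx))\n",
    "    return pairs\n",
    "\n",
    "sentence = 'the wide road shimmered in the hot sun'.split()\n",
    "print(cbow_pairs(sentence, window=1)[:2])\n",
    "print(skipgram_pairs(sentence, window=1)[:3])"
   ]
  }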
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
