{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Neural-Network-Based Natural Language Processing\n",
    "\n",
    "### The classic bag-of-words model\n",
    "\n",
    "The bag-of-words (BoW) model is a text-representation method for natural language processing (NLP) tasks. Its core idea is to convert text into numeric feature vectors so that machine-learning algorithms can process it. Although richer representations such as TF-IDF weighting and word embeddings like Word2Vec have largely superseded it in modern deep learning and NLP, it remains one of the foundational concepts for understanding text feature extraction.\n",
    "\n",
    "1. **Build the vocabulary**: collect every word that appears in the document collection into a vocabulary (also called a dictionary); each word is assigned a unique index.\n",
    "\n",
    "2. **Vectorize the documents**: convert each document into a fixed-length vector whose length equals the vocabulary size. For every word in the vocabulary, count how many times it occurs in the document and store that count at the word's index.\n",
    "\n",
    "### Characteristics\n",
    "\n",
    "- **Order-insensitive**: BoW discards word order and only records word frequencies.\n",
    "- **Sparse**: the vocabulary can be very large while each document contains only a small subset of it, so the resulting vectors are mostly zeros.\n",
    "- **High-dimensional**: document vectors are typically high-dimensional, which can trigger the \"curse of dimensionality\" and make computation expensive.\n",
    "\n",
    "### Advantages\n",
    "\n",
    "- Simple to use: easy to implement and quickly converts text into computable feature vectors.\n",
    "- Widely applicable: works for many text-classification tasks such as sentiment analysis and topic classification.\n",
    "\n",
    "Despite its simplicity, BoW is still effective in many practical applications, especially for initial exploration and rapid prototyping. Tasks that require deeper semantic information, however, usually call for more sophisticated models and techniques."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Vocabulary: ['and' 'document' 'first' 'is' 'one' 'second' 'the' 'third' 'this']\n",
      "Document vectors:\n",
      " [[0 2 1 2 0 1 2 0 2]\n",
      " [0 2 0 1 0 1 1 0 1]\n",
      " [1 0 0 1 1 0 1 1 1]\n",
      " [0 1 1 1 0 0 1 0 1]]\n"
     ]
    }
   ],
   "source": [
    "from sklearn.feature_extraction.text import CountVectorizer\n",
    "\n",
    "# The document collection\n",
    "corpus = [\n",
    "    'This is the first document. This is the second document.',\n",
    "    'This document is the second document.',\n",
    "    'And this is the third one.',\n",
    "    'Is this the first document?',\n",
    "]\n",
    "\n",
    "# Create a CountVectorizer\n",
    "vectorizer = CountVectorizer()\n",
    "\n",
    "# Learn the vocabulary and transform the corpus\n",
    "X = vectorizer.fit_transform(corpus)\n",
    "\n",
    "# Retrieve the vocabulary\n",
    "vocab = vectorizer.get_feature_names_out()\n",
    "print(\"Vocabulary:\", vocab)\n",
    "\n",
    "# Inspect the document vectors\n",
    "print(\"Document vectors:\\n\", X.toarray())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Word embeddings\n",
    "\n",
    "### Starting from one-hot encoding\n",
    "\n",
    "In machine-learning tasks, categorical features are usually converted to one-hot encodings before being fed to a model, for two main reasons:\n",
    "\n",
    "1. Avoiding an implied order. Encoding categories directly as numbers (say 0, 1, 2) suggests an ordinal relationship among them, when in fact they are merely distinct types with no inherent ordering.\n",
    "\n",
    "2. Separating the classes. One-hot encoding expands a category into independent 0/1 feature dimensions (and, in multiclass tasks, matches the model's output dimension to the number of classes), which makes the classes easier to separate linearly.\n",
    "\n",
    "Concretely, a feature with N categories is converted into an N-dimensional 0/1 vector with a 1 at the category's index and 0 everywhere else.\n",
    "\n",
    "For example, one-hot encoding a color feature [\"red\", \"green\", \"blue\"] gives:\n",
    "\n",
    "red -> [1, 0, 0]\n",
    "\n",
    "green -> [0, 1, 0]\n",
    "\n",
    "blue -> [0, 0, 1]\n",
    "\n",
    "The model can then distinguish the colors without assuming any ordering among them, which is why one-hot encoding is routinely used for categorical features in machine-learning models."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>color</th>\n",
       "      <th>make</th>\n",
       "      <th>year</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>green</td>\n",
       "      <td>Chevrolet</td>\n",
       "      <td>2017</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>blue</td>\n",
       "      <td>BMW</td>\n",
       "      <td>2015</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>yellow</td>\n",
       "      <td>Lexus</td>\n",
       "      <td>2018</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "    color       make  year\n",
       "0   green  Chevrolet  2017\n",
       "1    blue        BMW  2015\n",
       "2  yellow      Lexus  2018"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import pandas as pd\n",
    "\n",
    "df = pd.DataFrame([\n",
    "    ['green', 'Chevrolet', 2017],\n",
    "    ['blue', 'BMW', 2015],\n",
    "    ['yellow', 'Lexus', 2018],\n",
    "])\n",
    "df.columns = ['color', 'make', 'year']\n",
    "df"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>year</th>\n",
       "      <th>color_blue</th>\n",
       "      <th>color_green</th>\n",
       "      <th>color_yellow</th>\n",
       "      <th>make_BMW</th>\n",
       "      <th>make_Chevrolet</th>\n",
       "      <th>make_Lexus</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>2017</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>2015</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>2018</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "   year  color_blue  color_green  color_yellow  make_BMW  make_Chevrolet  \\\n",
       "0  2017           0            1             0         0               1   \n",
       "1  2015           1            0             0         1               0   \n",
       "2  2018           0            0             1         0               0   \n",
       "\n",
       "   make_Lexus  \n",
       "0           0  \n",
       "1           0  \n",
       "2           1  "
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df_processed = pd.get_dummies(df, columns=['color', 'make'], dtype=int)\n",
    "df_processed"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### How word embeddings work\n",
    "\n",
    "Word embedding is a key technique in natural language processing: it maps each word to a point in a continuous, low-dimensional vector space so that semantically similar words end up close to one another.\n",
    "\n",
    "https://pytorch.org/tutorials/beginner/nlp/word_embeddings_tutorial.html\n",
    "\n",
    "https://blog.csdn.net/raelum/article/details/125462028"
   ]
  },
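  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sketch of the \"similar words are close\" idea, the cell below compares two embedding vectors with cosine similarity. A freshly initialized nn.Embedding is random, so the similarity value is meaningless until the layer has been trained; the example only illustrates the mechanics, and the word indices are arbitrary placeholders."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "\n",
    "torch.manual_seed(0)\n",
    "embeds = nn.Embedding(5, 8)  # 5 words, 8-dimensional vectors\n",
    "\n",
    "v0 = embeds(torch.tensor(0))\n",
    "v1 = embeds(torch.tensor(1))\n",
    "# Cosine similarity lies in [-1, 1]; trained embeddings place\n",
    "# semantically similar words near 1\n",
    "sim = F.cosine_similarity(v0, v1, dim=0)\n",
    "print(sim.item())"
   ]
  },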
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<img src=\"30.jpg\" width=\"800\"/>"
      ],
      "text/plain": [
       "<IPython.core.display.Image object>"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from IPython.display import Image\n",
    "Image(url=\"30.jpg\", width=800)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "## Word embeddings in PyTorch: nn.Embedding\n",
    "\n",
    "nn.Embedding and nn.Linear are both commonly used PyTorch layers; their main differences are:\n",
    "\n",
    "1. nn.Embedding handles discrete features, while nn.Linear handles continuous features.\n",
    "\n",
    "2. nn.Embedding maps integer indices to fixed-size dense vectors, while nn.Linear maps input data to an output dimension.\n",
    "\n",
    "3. nn.Embedding's output dimension is defined by its embedding matrix, while nn.Linear's is defined by the layer's parameters.\n",
    "\n",
    "4. nn.Embedding's input is typically word indices, while nn.Linear accepts tensors of arbitrary shape.\n",
    "\n",
    "The examples below illustrate this difference: the former turns discrete features into word vectors, the latter applies a linear map to continuous features. The two are often used together in NLP tasks.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<img src=\"25.png\" width=\"400\"/>"
      ],
      "text/plain": [
       "<IPython.core.display.Image object>"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "Image(url=\"25.png\", width=400)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<img src=\"24.webp\" width=\"500\"/>"
      ],
      "text/plain": [
       "<IPython.core.display.Image object>"
      ]
     },
     "execution_count": 44,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "Image(url=\"24.webp\", width=500)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "import torch.optim as optim\n",
    "\n",
    "import math\n",
    "from torch.autograd import Variable\n",
    "import matplotlib.pyplot as plt\n",
    "import numpy as np\n",
    "import copy"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "* A simple implementation. (PyTorch never materializes one-hot vectors: nn.Embedding uses the index itself to select a row of its weight matrix, which achieves the same effect as multiplying a one-hot vector by that matrix.)"
   ]
  },
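  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of that equivalence (indices and sizes here are arbitrary placeholders): multiplying a one-hot row vector by the embedding weight matrix selects exactly the row that nn.Embedding returns for the corresponding index."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "\n",
    "torch.manual_seed(0)\n",
    "emb = nn.Embedding(3, 5)\n",
    "idx = torch.tensor([1])\n",
    "\n",
    "# Direct lookup by index\n",
    "direct = emb(idx)\n",
    "\n",
    "# One-hot route: [0, 1, 0] @ weight picks out row 1\n",
    "one_hot = F.one_hot(idx, num_classes=3).float()\n",
    "via_matmul = one_hot @ emb.weight\n",
    "\n",
    "print(torch.allclose(direct, via_matmul))  # True"
   ]
  },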
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[-0.1604,  0.7571,  0.0956, -0.5087,  2.5918],\n",
      "        [-0.6639,  0.1369, -0.1471, -0.6938,  0.7911]],\n",
      "       grad_fn=<EmbeddingBackward0>)\n"
     ]
    }
   ],
   "source": [
    "word_to_ix = {\"hello\": 0, \"world\": 1, \"pytorch\": 2}\n",
    "embeds = nn.Embedding(3, 5)  # 3 words in vocab, 5 dimensional embeddings\n",
    "lookup_tensor = torch.tensor([word_to_ix[\"hello\"], word_to_ix[\"world\"]], dtype=torch.long)\n",
    "hello_embed = embeds(lookup_tensor)\n",
    "print(hello_embed)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([0, 1])"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "torch.tensor([word_to_ix[\"hello\"], word_to_ix[\"world\"]], dtype=torch.long)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Parameter containing:\n",
       "tensor([[-0.1604,  0.7571,  0.0956, -0.5087,  2.5918],\n",
       "        [-0.6639,  0.1369, -0.1471, -0.6938,  0.7911],\n",
       "        [-0.8335, -2.7482, -1.6768,  0.1432,  0.3586]], requires_grad=True)"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "embeds.weight"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[-0.1604,  0.7571,  0.0956, -0.5087,  2.5918],\n",
       "        [-0.6639,  0.1369, -0.1471, -0.6938,  0.7911]],\n",
       "       grad_fn=<EmbeddingBackward0>)"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "embeds(torch.tensor([word_to_ix[\"hello\"], word_to_ix[\"world\"]], dtype=torch.long))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[-0.1604,  0.7571,  0.0956, -0.5087,  2.5918, -0.6639,  0.1369, -0.1471,\n",
       "         -0.6938,  0.7911]], grad_fn=<ViewBackward0>)"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "embeds(torch.tensor([word_to_ix[\"hello\"], word_to_ix[\"world\"]], dtype=torch.long)).view((1, -1))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ 0.8123,  0.4575, -0.9634,  0.3268, -0.0092,  0.8424, -0.2896, -0.2440,\n",
       "         -0.5013, -0.4043]], grad_fn=<ViewBackward0>)"
      ]
     },
     "execution_count": 33,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "hello_embed.view((1, -1))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Parameter containing:\n",
       "tensor([[ 0.8123,  0.4575, -0.9634,  0.3268, -0.0092],\n",
       "        [ 0.8424, -0.2896, -0.2440, -0.5013, -0.4043]], requires_grad=True)"
      ]
     },
     "execution_count": 32,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "embeds.weight"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "# nn.Embedding\n",
    "embed = nn.Embedding(10, 64)  # vocabulary of 10 words, embedding dimension 64\n",
    "input = torch.LongTensor([1, 5, 8])  # word indices as input\n",
    "embed_vector = embed(input)  # map the word indices to word vectors\n",
    "\n",
    "# nn.Linear\n",
    "fc = nn.Linear(32, 10)  # input dimension 32, output dimension 10\n",
    "input = torch.randn(8, 32)  # 8 inputs of dimension 32\n",
    "output = fc(input)  # fully connected layer mapping to a 10-dimensional output"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'tool', 'the', 'away', 'useful', 'worker', 'a', 'an', 'old', 'english', 'he', 'is', 'far', 'cinema'}\n"
     ]
    }
   ],
   "source": [
    "corpus = [\"he is an old worker\", \"english is a useful tool\", \"the cinema is far away\"]\n",
    "word_list = []\n",
    "for i in corpus:\n",
    "    for j in i.split():\n",
    "        word_list.append(j)\n",
    "word_set = set(word_list)\n",
    "print(word_set)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'tool': 0,\n",
       " 'the': 1,\n",
       " 'away': 2,\n",
       " 'useful': 3,\n",
       " 'worker': 4,\n",
       " 'a': 5,\n",
       " 'an': 6,\n",
       " 'old': 7,\n",
       " 'english': 8,\n",
       " 'he': 9,\n",
       " 'is': 10,\n",
       " 'far': 11,\n",
       " 'cinema': 12}"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "word_to_ix = {}\n",
    "for i, j in enumerate(word_set):\n",
    "    word_to_ix[j] = i\n",
    "word_to_ix"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### From corpus to features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[ 1.5263, -0.4624, -0.4639,  2.2752,  0.1722],\n",
      "        [-0.1584, -0.3063, -0.8228, -0.3899,  0.8475],\n",
      "        [-0.1584, -0.3063, -0.8228, -0.3899,  0.8475]],\n",
      "       grad_fn=<EmbeddingBackward0>)\n",
      "tensor([[ 1.5263, -0.4624, -0.4639,  2.2752,  0.1722, -0.1584, -0.3063, -0.8228,\n",
      "         -0.3899,  0.8475, -0.1584, -0.3063, -0.8228, -0.3899,  0.8475]],\n",
      "       grad_fn=<ViewBackward0>)\n"
     ]
    }
   ],
   "source": [
    "# Define a small corpus\n",
    "corpus = [\"he is an old worker\", \"english is a useful tool\", \"the cinema is far away\"]\n",
    "word_list = []\n",
    "for i in corpus:\n",
    "    # Split each sentence into words\n",
    "    for j in i.split():\n",
    "        # Collect every word\n",
    "        word_list.append(j)\n",
    "# Convert the list to a set to remove duplicate words\n",
    "word_set = set(word_list)\n",
    "# Map each word to an index\n",
    "word_to_ix = {word: i for i, word in enumerate(word_set)}\n",
    "word_to_ix\n",
    "\n",
    "# Define an embedding layer: input dimension = vocabulary size, output dimension = 5\n",
    "embeds = nn.Embedding(len(word_set), 5)\n",
    "# A tensor holding the indices of the words to look up\n",
    "lookup_tensor = torch.tensor([word_to_ix[\"an\"], word_to_ix[\"he\"], word_to_ix[\"he\"]], dtype=torch.long)\n",
    "# Look up the embedding vectors\n",
    "hello_embed = embeds(lookup_tensor)\n",
    "print(hello_embed)\n",
    "# Flatten the embedding vectors into one row\n",
    "print(hello_embed.view((1, -1)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## classwork 1\n",
    "\n",
    "* Build a DataFrame of products with fields such as name, price, and category (choose the values yourself), one-hot encode the category field, and print the encoded DataFrame.\n",
    "\n",
    "* Complete the corpus-to-features pipeline above: 1) build the vocabulary and index table; 2) define an embedding layer, feed it a feature made of several words, and output that feature's embedding vectors."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## N-gram language models\n",
    "\n",
    "The N-gram language model is a classic language model in natural language processing: it models language through the joint probability of sequences of N consecutive words.\n",
    "\n",
    "Concretely, an N-gram model assumes a word depends only on the previous N-1 words. In a bigram (N=2) model, the conditional probability of word $w_i$ is:\n",
    "\n",
    "$P(w_i|w_{i-1})$\n",
    "\n",
    "In a trigram (N=3) model it is:\n",
    "\n",
    "$P(w_i|w_{i-1},w_{i-2})$\n",
    "\n",
    "and in general, for an N-gram model:\n",
    "\n",
    "$P(w_i|w_{i-1},...,w_{i-N+1})$\n",
    "\n",
    "Applying the chain rule together with this Markov assumption, the joint probability of the whole word sequence factorizes as:\n",
    "\n",
    "$P(w_1, ..., w_M) = \\prod_{i=1}^{M} P(w_i|w_{i-1},...,w_{i-N+1})$\n",
    "\n",
    "where M is the length of the sequence.\n",
    "\n",
    "An N-gram model estimates the conditional probability $P(w_i|w_{i-1},...,w_{i-N+1})$ from the co-occurrence frequencies of N-word windows in the corpus, typically via maximum-likelihood estimation combined with smoothing techniques to handle data sparsity.\n",
    "\n",
    "N-gram models are simple to implement and capture local word-order patterns effectively, but they cannot model long-range dependencies.\n",
    "\n",
    "### Data preprocessing for the language model"
   ]
  },
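  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before moving to the neural-network version, the count-based maximum-likelihood estimate described above can be computed by simple counting: $P(w_i|w_{i-1}) \\approx \\mathrm{count}(w_{i-1}, w_i) / \\mathrm{count}(w_{i-1})$ in the bigram (N=2) case. The toy corpus below is invented purely for illustration."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from collections import Counter\n",
    "\n",
    "toy_corpus = \"the cat sat on the mat the cat ate\".split()\n",
    "\n",
    "# count(w_{i-1}): every token except the last can serve as a history\n",
    "unigram = Counter(toy_corpus[:-1])\n",
    "# count(w_{i-1}, w_i): adjacent pairs\n",
    "bigram = Counter(zip(toy_corpus, toy_corpus[1:]))\n",
    "\n",
    "def p(word, prev):\n",
    "    \"\"\"Maximum-likelihood estimate of P(word | prev).\"\"\"\n",
    "    return bigram[(prev, word)] / unigram[prev]\n",
    "\n",
    "# \"the\" occurs 3 times as a history and is followed by \"cat\" twice\n",
    "print(p(\"cat\", \"the\"))"
   ]
  },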
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[(['forty', 'When'], 'winters'), (['winters', 'forty'], 'shall'), (['shall', 'winters'], 'besiege')]\n"
     ]
    }
   ],
   "source": [
    "CONTEXT_SIZE = 2\n",
    "EMBEDDING_DIM = 10\n",
    "torch.manual_seed(1)\n",
    "test_sentence = \"\"\"When forty winters shall besiege thy brow,\n",
    "And dig deep trenches in thy beauty's field,\n",
    "Thy youth's proud livery so gazed on now,\n",
    "Will be a totter'd weed of small worth held:\n",
    "Then being asked, where all thy beauty lies,\n",
    "Where all the treasure of thy lusty days;\n",
    "To say, within thine own deep sunken eyes,\n",
    "Were an all-eating shame, and thriftless praise.\n",
    "How much more praise deserv'd thy beauty's use,\n",
    "If thou couldst answer 'This fair child of mine\n",
    "Shall sum my count, and make my old excuse,'\n",
    "Proving his beauty by succession thine!\n",
    "This were to be new made when thou art old,\n",
    "And see thy blood warm when thou feel'st it cold.\"\"\".split()\n",
    "\n",
    "ngrams = [\n",
    "    ([test_sentence[i - j - 1] for j in range(CONTEXT_SIZE)], test_sentence[i])\n",
    "    for i in range(CONTEXT_SIZE, len(test_sentence))\n",
    "]\n",
    "print(ngrams[:3])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[(['forty', 'When'], 'winters'),\n",
       " (['winters', 'forty'], 'shall'),\n",
       " (['shall', 'winters'], 'besiege')]"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "ngrams[:3]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [],
   "source": [
    "vocab = set(test_sentence)\n",
    "word_to_ix = {word: i for i, word in enumerate(vocab)}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'say,': 0,\n",
       " 'winters': 1,\n",
       " 'shame,': 2,\n",
       " 'This': 3,\n",
       " 'brow,': 4,\n",
       " 'fair': 5,\n",
       " 'his': 6,\n",
       " 'the': 7,\n",
       " 'And': 8,\n",
       " 'Thy': 9,\n",
       " 'much': 10,\n",
       " 'asked,': 11,\n",
       " 'Shall': 12,\n",
       " 'deep': 13,\n",
       " 'thy': 14,\n",
       " 'own': 15,\n",
       " 'so': 16,\n",
       " 'couldst': 17,\n",
       " 'all-eating': 18,\n",
       " 'thine!': 19,\n",
       " 'where': 20,\n",
       " 'thriftless': 21,\n",
       " \"feel'st\": 22,\n",
       " 'it': 23,\n",
       " 'field,': 24,\n",
       " 'answer': 25,\n",
       " 'trenches': 26,\n",
       " 'count,': 27,\n",
       " 'worth': 28,\n",
       " 'old': 29,\n",
       " 'beauty': 30,\n",
       " 'all': 31,\n",
       " 'see': 32,\n",
       " 'being': 33,\n",
       " \"totter'd\": 34,\n",
       " 'Were': 35,\n",
       " 'blood': 36,\n",
       " 'by': 37,\n",
       " 'proud': 38,\n",
       " 'lusty': 39,\n",
       " 'eyes,': 40,\n",
       " 'within': 41,\n",
       " 'weed': 42,\n",
       " \"deserv'd\": 43,\n",
       " 'an': 44,\n",
       " 'old,': 45,\n",
       " 'of': 46,\n",
       " 'when': 47,\n",
       " 'livery': 48,\n",
       " 'use,': 49,\n",
       " 'succession': 50,\n",
       " 'made': 51,\n",
       " \"excuse,'\": 52,\n",
       " 'besiege': 53,\n",
       " 'Where': 54,\n",
       " 'held:': 55,\n",
       " 'Proving': 56,\n",
       " 'gazed': 57,\n",
       " 'were': 58,\n",
       " 'days;': 59,\n",
       " 'lies,': 60,\n",
       " 'treasure': 61,\n",
       " 'thou': 62,\n",
       " 'my': 63,\n",
       " 'praise': 64,\n",
       " \"beauty's\": 65,\n",
       " 'cold.': 66,\n",
       " 'forty': 67,\n",
       " \"'This\": 68,\n",
       " 'art': 69,\n",
       " 'new': 70,\n",
       " 'a': 71,\n",
       " 'Will': 72,\n",
       " 'on': 73,\n",
       " 'sunken': 74,\n",
       " 'thine': 75,\n",
       " 'sum': 76,\n",
       " 'child': 77,\n",
       " 'When': 78,\n",
       " 'small': 79,\n",
       " 'How': 80,\n",
       " 'in': 81,\n",
       " 'now,': 82,\n",
       " 'praise.': 83,\n",
       " 'to': 84,\n",
       " 'and': 85,\n",
       " 'warm': 86,\n",
       " 'be': 87,\n",
       " 'make': 88,\n",
       " 'more': 89,\n",
       " 'dig': 90,\n",
       " \"youth's\": 91,\n",
       " 'Then': 92,\n",
       " 'If': 93,\n",
       " 'shall': 94,\n",
       " 'mine': 95,\n",
       " 'To': 96}"
      ]
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "word_to_ix"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {},
   "outputs": [],
   "source": [
    "log_probs = torch.randn(3, 2).log_softmax(dim=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[-0.4371, -1.0383],\n",
       "        [-0.1610, -1.9057],\n",
       "        [-0.7871, -0.6073]])"
      ]
     },
     "execution_count": 37,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "log_probs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(['forty', 'When'], 'winters')"
      ]
     },
     "execution_count": 24,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "ngrams[0]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### The NLLLoss function\n",
    "\n",
    "`nn.NLLLoss` is a PyTorch loss function that measures the negative log-likelihood (NLL) of the model's output. It is typically used for classification, specifically when the model's output layer applies log_softmax.\n",
    "\n",
    "Given a sample $ x $ with correct label $ y $, suppose the model assigns probability $ p(i) $ to each of the $ C $ classes, so the log_softmax output is the vector of log-probabilities $ \\log p(i) $. The negative log-likelihood loss $ L $ is then:\n",
    "\n",
    "$$ L(x, y) = -\\log(p(y)) $$\n",
    "\n",
    "i.e. with log-probabilities as input, `nn.NLLLoss` simply negates the entry at the target index. For a batch of data, the loss is either the mean or the sum over the individual samples:\n",
    "\n",
    "$$ L_{\\text{batch}} = \\frac{1}{N} \\sum_{n=1}^{N} L(x_n, y_n) \\quad \\text{(mean reduction)} $$\n",
    "\n",
    "or\n",
    "\n",
    "$$ L_{\\text{batch}} = \\sum_{n=1}^{N} L(x_n, y_n) \\quad \\text{(sum reduction)} $$\n",
    "\n",
    "where $ N $ is the batch size.\n",
    "\n",
    "In the example below:\n",
    "- `log_probs` is a tensor of shape `(3, 2)`: a batch of three samples with log-probabilities over two classes.\n",
    "- `targets` is a 1-D tensor of shape `(3)` holding each sample's true label index.\n",
    "- `loss.item()` returns the scalar loss value.\n",
    "\n",
    "This small example shows how to define `NLLLoss` and use it to compute a classification loss."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Loss: 0.5339012742042542\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "# Create an NLLLoss object\n",
    "criterion = nn.NLLLoss()\n",
    "\n",
    "# Assume two classes, i.e. C = 2\n",
    "# Simulate a batch passed through log_softmax: a random 3x2 tensor\n",
    "log_probs = torch.randn(3, 2).log_softmax(dim=1)\n",
    "\n",
    "# The true label of each sample\n",
    "targets = torch.tensor([1, 0, 1], dtype=torch.long)  # 0 and 1 are the class labels\n",
    "\n",
    "# Compute the loss\n",
    "loss = criterion(log_probs, targets)\n",
    "\n",
    "# Print the loss value\n",
    "print(\"Loss:\", loss.item())"
   ]
  },
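  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A useful sanity check on the definition above: applying nn.NLLLoss to log_softmax outputs gives the same value as applying nn.CrossEntropyLoss to the raw logits, because cross-entropy fuses the two steps."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "\n",
    "torch.manual_seed(0)\n",
    "logits = torch.randn(3, 2)\n",
    "targets = torch.tensor([1, 0, 1], dtype=torch.long)\n",
    "\n",
    "nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), targets)\n",
    "ce = nn.CrossEntropyLoss()(logits, targets)\n",
    "\n",
    "print(torch.allclose(nll, ce))  # True"
   ]
  },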
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Model implementation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[(['forty', 'When'], 'winters'), (['winters', 'forty'], 'shall'), (['shall', 'winters'], 'besiege')]\n",
      "[523.6641716957092, 520.8901543617249, 518.1371247768402, 515.4032278060913, 512.6889622211456, 509.9923801422119, 507.3103280067444, 504.6435902118683, 501.99337005615234, 499.3561701774597, 496.73104524612427, 494.1173939704895, 491.5132586956024, 488.91930198669434, 486.3346767425537, 483.75973081588745, 481.193528175354, 478.6359317302704, 476.0870225429535, 473.54870867729187]\n",
      "tensor([ 0.7903,  1.3658, -0.8506,  0.5156,  1.0474, -0.3156,  0.1405,  2.3403,\n",
      "        -0.6116,  0.8145], grad_fn=<SelectBackward0>)\n"
     ]
    }
   ],
   "source": [
    "CONTEXT_SIZE = 2\n",
    "EMBEDDING_DIM = 10\n",
    "torch.manual_seed(1)\n",
    "# We will use Shakespeare Sonnet 2\n",
    "test_sentence = \"\"\"When forty winters shall besiege thy brow,\n",
    "And dig deep trenches in thy beauty's field,\n",
    "Thy youth's proud livery so gazed on now,\n",
    "Will be a totter'd weed of small worth held:\n",
    "Then being asked, where all thy beauty lies,\n",
    "Where all the treasure of thy lusty days;\n",
    "To say, within thine own deep sunken eyes,\n",
    "Were an all-eating shame, and thriftless praise.\n",
    "How much more praise deserv'd thy beauty's use,\n",
    "If thou couldst answer 'This fair child of mine\n",
    "Shall sum my count, and make my old excuse,'\n",
    "Proving his beauty by succession thine!\n",
    "This were to be new made when thou art old,\n",
    "And see thy blood warm when thou feel'st it cold.\"\"\".split()\n",
    "# we should tokenize the input, but we will ignore that for now\n",
    "# build a list of tuples.\n",
    "# Each tuple is ([ word_i-CONTEXT_SIZE, ..., word_i-1 ], target word)\n",
    "ngrams = [\n",
    "    (\n",
    "        [test_sentence[i - j - 1] for j in range(CONTEXT_SIZE)],\n",
    "        test_sentence[i]\n",
    "    )\n",
    "    for i in range(CONTEXT_SIZE, len(test_sentence))\n",
    "]\n",
    "# Print the first 3, just so you can see what they look like.\n",
    "print(ngrams[:3])\n",
    "\n",
    "vocab = set(test_sentence)\n",
    "word_to_ix = {word: i for i, word in enumerate(vocab)}\n",
    "\n",
    "\n",
    "class NGramLanguageModeler(nn.Module):\n",
    "\n",
    "    def __init__(self, vocab_size, embedding_dim, context_size):\n",
    "        super(NGramLanguageModeler, self).__init__()\n",
    "        self.embeddings = nn.Embedding(vocab_size, embedding_dim)\n",
    "        self.linear1 = nn.Linear(context_size * embedding_dim, 128)\n",
    "        self.linear2 = nn.Linear(128, vocab_size)\n",
    "\n",
    "    def forward(self, inputs):\n",
    "        embeds = self.embeddings(inputs).view((1, -1))\n",
    "        out = F.relu(self.linear1(embeds))\n",
    "        out = self.linear2(out)\n",
    "        log_probs = F.log_softmax(out, dim=1)\n",
    "        return log_probs\n",
    "\n",
    "\n",
    "losses = []\n",
    "loss_function = nn.NLLLoss()\n",
    "model = NGramLanguageModeler(len(vocab), EMBEDDING_DIM, CONTEXT_SIZE)\n",
    "optimizer = optim.SGD(model.parameters(), lr=0.001)\n",
    "\n",
    "for epoch in range(20):\n",
    "    total_loss = 0\n",
    "    for context, target in ngrams:\n",
    "        context_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)\n",
    "        model.zero_grad()\n",
    "        log_probs = model(context_idxs)\n",
    "        loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long))\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "        total_loss += loss.item()\n",
    "    losses.append(total_loss)\n",
    "print(losses)  # The loss decreased every iteration over the training data!\n",
    "\n",
    "# To get the embedding of a particular word, e.g. \"beauty\"\n",
    "print(model.embeddings.weight[word_to_ix[\"beauty\"]])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['forty', 'When'] winters\n"
     ]
    }
   ],
   "source": [
    "for context, target in ngrams:\n",
    "    print(context,target)\n",
    "    break"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([35,  8, 30, 33])"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### classwork2\n",
    "\n",
    "1. Given the test_sentence corpus, build the vocabulary and index table, then construct the N-gram feature matrix and its target vector\n",
    "\n",
    "2. Train the N-gram model's neural network\n",
    "\n",
    "3. Complete the data preprocessing and training for the CBOW model below"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## CBOW (Continuous Bag of Words) and skip-gram\n",
    "\n",
    "CBOW (Continuous Bag of Words) and skip-gram are both word-vector training algorithms; their main differences are:\n",
    "\n",
    "**CBOW**\n",
    "\n",
    "- Input is the context words; the target word is predicted\n",
    "- The projection from input layer to hidden layer is a continuous bag of words (word order is ignored)\n",
    "- Goal: predict the current word from its context\n",
    "\n",
    "**skip-gram**\n",
    "\n",
    "- Input is the center word; the context words are predicted\n",
    "- Maps the input word vector to output word vectors\n",
    "- Goal: predict the context from the current word\n",
    "\n",
    "More precisely:\n",
    "\n",
    "**CBOW**\n",
    "\n",
    "Given a word sequence $(w_1, w_2, w_3, ..., w_T)$, CBOW maximizes the probability of the current word $w_t$ given its context:\n",
    "\n",
    "$P(w_t | w_{t-k}, ..., w_{t+k})$\n",
    "\n",
    "where $w_{t-k}, ..., w_{t+k}$ is the context window around $w_t$.\n",
    "\n",
    "**skip-gram**\n",
    "\n",
    "Given a word sequence $(w_1, w_2, w_3, ..., w_T)$, skip-gram maximizes the probability of each context word $w_j$ given the current word $w_t$:\n",
    "\n",
    "$P(w_j | w_t)$\n",
    "\n",
    "where $w_j$ ranges over the context of $w_t$.\n",
    "\n",
    "CBOW predicts the current word from its context and thus emphasizes the contextual semantics; skip-gram predicts the context from the current word and thus emphasizes the center word's semantics. The two learn the semantics of word vectors from complementary directions."
   ]
  },
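  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The notebook only implements CBOW-style data preparation below; as a contrast, here is a minimal sketch of how skip-gram training pairs could be generated. The helper name `skipgram_pairs` and the window size are illustrative assumptions, not part of the original notebook:\n",
    "\n",
    "```python\n",
    "# Illustrative sketch: generate (center, context) skip-gram training pairs\n",
    "def skipgram_pairs(tokens, window=2):\n",
    "    pairs = []\n",
    "    for i, center in enumerate(tokens):\n",
    "        # every word within `window` positions of the center is a context word\n",
    "        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):\n",
    "            if j != i:\n",
    "                pairs.append((center, tokens[j]))\n",
    "    return pairs\n",
    "\n",
    "print(skipgram_pairs(\"we conjure the spirits\".split())[:4])\n",
    "```\n",
    "\n",
    "Note the asymmetry with CBOW: each position yields several (center, context) pairs here, whereas CBOW collapses the whole window into a single training example."
   ]
  },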
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<img src=\"27.png\" width=\"800\"/>"
      ],
      "text/plain": [
       "<IPython.core.display.Image object>"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "Image(url= \"27.png\",width=800)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[(['are', 'We', 'to', 'study'], 'about'), (['about', 'are', 'study', 'the'], 'to'), (['to', 'about', 'the', 'idea'], 'study'), (['study', 'to', 'idea', 'of'], 'the'), (['the', 'study', 'of', 'a'], 'idea')]\n"
     ]
    }
   ],
   "source": [
    "CONTEXT_SIZE = 2  # 2 words to the left, 2 to the right\n",
    "raw_text = \"\"\"We are about to study the idea of a computational process.\n",
    "Computational processes are abstract beings that inhabit computers.\n",
    "As they evolve, processes manipulate other abstract things called data.\n",
    "The evolution of a process is directed by a pattern of rules\n",
    "called a program. People create programs to direct processes. In effect,\n",
    "we conjure the spirits of the computer with our spells.\"\"\".split()\n",
    "\n",
    "# By deriving a set from `raw_text`, we deduplicate the array\n",
    "vocab = set(raw_text)\n",
    "vocab_size = len(vocab)\n",
    "\n",
    "word_to_ix = {word: i for i, word in enumerate(vocab)}\n",
    "data = []\n",
    "for i in range(CONTEXT_SIZE, len(raw_text) - CONTEXT_SIZE):\n",
    "    context = (\n",
    "        [raw_text[i - j - 1] for j in range(CONTEXT_SIZE)]\n",
    "        + [raw_text[i + j + 1] for j in range(CONTEXT_SIZE)]\n",
    "    )\n",
    "    target = raw_text[i]\n",
    "    data.append((context, target))\n",
    "print(data[:5])\n"
   ]
  },
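  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For classwork task 3, one possible CBOW model sums the context embeddings before the linear layers, so the input size does not depend on the window width. This is a minimal sketch: the class name `CBOW` is an assumption for illustration, and the hidden size 128 mirrors the `NGramLanguageModeler` used elsewhere in this notebook:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "\n",
    "class CBOW(nn.Module):  # illustrative sketch, not the unique solution\n",
    "    def __init__(self, vocab_size, embedding_dim):\n",
    "        super().__init__()\n",
    "        self.embeddings = nn.Embedding(vocab_size, embedding_dim)\n",
    "        self.linear1 = nn.Linear(embedding_dim, 128)\n",
    "        self.linear2 = nn.Linear(128, vocab_size)\n",
    "\n",
    "    def forward(self, inputs):\n",
    "        # Sum the context embeddings: CBOW ignores word order\n",
    "        embeds = self.embeddings(inputs).sum(dim=0).view(1, -1)\n",
    "        out = F.relu(self.linear1(embeds))\n",
    "        return F.log_softmax(self.linear2(out), dim=1)\n",
    "```\n",
    "\n",
    "Training then follows the same loop as the N-gram model: convert each context to an index tensor, compute `NLLLoss` against the target index, and step the optimizer."
   ]
  },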
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([41, 15,  5, 27])"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "context_idxs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[226.42033410072327, 224.91781544685364, 223.42698192596436, 221.94492840766907, 220.47052597999573, 219.00309443473816, 217.54419946670532, 216.09297513961792, 214.6481363773346, 213.210031747818]\n",
      "Parameter containing:\n",
      "tensor([[-5.8585e-01, -1.3563e+00,  6.6131e-01,  2.8232e-02,  6.3295e-01,\n",
      "          1.0595e+00,  1.0390e+00, -6.1170e-02,  7.2656e-01,  1.3601e-01],\n",
      "        [-1.4624e+00, -1.0491e-01,  2.4525e-01,  1.8917e+00, -1.5939e-01,\n",
      "          4.2361e-01,  3.2680e-01, -1.3162e-01,  6.4910e-01, -1.6663e+00],\n",
      "        [ 5.7637e-01,  8.9270e-01, -1.2334e+00,  1.4321e+00, -1.0238e+00,\n",
      "         -1.3552e+00,  6.8550e-01,  4.5344e-01, -6.2759e-01, -3.5659e-01],\n",
      "        [-6.8798e-01,  2.1950e+00,  1.6118e+00, -9.2169e-01,  1.4742e+00,\n",
      "          2.0857e+00,  7.5543e-01,  9.2551e-01,  1.6955e+00, -5.4772e-01],\n",
      "        [ 1.0931e+00,  1.2244e+00, -5.8556e-01, -9.4666e-01, -7.2124e-01,\n",
      "         -3.4694e-01, -2.8819e+00, -3.9432e-01,  4.3597e-02, -9.7642e-01],\n",
      "        [-6.5408e-01,  6.8744e-01,  5.6649e-01,  2.6556e-01,  2.1217e-01,\n",
      "          3.1361e-02,  8.9488e-01, -9.0929e-01, -8.4579e-01, -4.5862e-01],\n",
      "        [-4.1915e-01, -4.6693e-01, -6.3167e-02,  3.8787e-01, -5.1636e-01,\n",
      "         -4.5139e-01,  3.6657e-01, -2.8147e-01,  2.5557e-01,  8.8716e-02],\n",
      "        [-4.7644e-01,  5.8995e-02,  5.0463e-01,  8.4133e-01,  9.3743e-01,\n",
      "         -2.2637e-01, -1.1707e+00,  1.5536e+00,  1.4893e+00, -8.0451e-01],\n",
      "        [ 9.1552e-01,  6.3963e-01,  1.1316e-01, -1.3244e+00,  3.8864e-01,\n",
      "         -1.2031e-01, -1.0175e+00,  2.2992e+00, -4.2096e-01,  1.5401e+00],\n",
      "        [-5.1850e-01,  1.4545e+00, -1.0549e+00, -1.5781e+00, -3.0002e-01,\n",
      "         -9.9528e-01, -3.1539e-01,  1.0379e+00, -5.4274e-01, -7.7343e-01],\n",
      "        [ 1.7671e+00, -2.5786e-01, -2.7391e-01, -1.1049e-01, -7.4519e-01,\n",
      "         -9.0208e-01,  1.5015e-01,  3.6776e-01, -5.1059e-01,  6.9297e-01],\n",
      "        [ 1.5943e+00,  4.3123e-01, -1.0434e+00, -1.1133e+00,  4.5514e-01,\n",
      "         -8.9976e-01, -6.2263e-01,  2.4672e-01,  6.4921e-01,  1.3772e+00],\n",
      "        [ 1.6378e-01,  6.1044e-01,  6.5194e-01, -1.2174e-01, -5.0687e-01,\n",
      "         -7.5336e-01, -1.1743e+00, -2.0763e+00,  8.8445e-01,  4.1646e-01],\n",
      "        [ 8.7729e-01, -1.2814e+00, -6.7208e-02,  8.9684e-01, -5.8162e-01,\n",
      "          1.1271e+00, -9.3154e-01,  1.6596e-01,  6.5691e-01, -1.6049e+00],\n",
      "        [-9.9853e-01, -6.2103e-01,  1.5418e+00,  1.4944e+00,  4.4327e-01,\n",
      "          1.1302e-01, -1.1708e+00,  4.6137e-01,  1.3887e+00,  6.8911e-01],\n",
      "        [ 6.8418e-01,  8.0202e-01,  4.3287e-01, -5.2533e-01, -3.2338e+00,\n",
      "          9.7604e-02, -1.1002e+00,  3.4943e-01,  1.7683e+00, -1.0224e-01],\n",
      "        [-1.4385e+00, -1.5692e+00, -6.0626e-01,  2.0555e+00,  5.9634e-01,\n",
      "         -1.5527e-01,  1.2103e+00,  2.0149e-01, -1.6659e-01,  1.1965e+00],\n",
      "        [-8.0166e-01, -1.8357e-01,  1.3912e+00,  2.0369e+00,  4.7942e-01,\n",
      "          1.5357e-03,  3.0455e-01,  5.3629e-01,  3.5253e-01,  1.4419e-01],\n",
      "        [-1.6639e-01, -2.1597e-01, -2.6277e-01, -1.7133e+00, -2.3083e-01,\n",
      "         -7.5714e-02, -1.0846e+00, -9.3778e-01,  2.4387e-01,  1.1619e+00],\n",
      "        [ 5.0767e-02, -1.1034e+00, -7.1236e-01,  4.3501e-01, -9.5200e-01,\n",
      "         -6.6140e-02,  8.3317e-01,  1.8942e-01,  7.0936e-01, -1.8484e+00],\n",
      "        [-1.1327e+00, -6.2986e-01, -1.8803e-01, -5.7981e-01,  2.9746e-01,\n",
      "         -3.5582e-01,  9.6126e-01, -9.0157e-02, -1.3345e+00,  2.7479e+00],\n",
      "        [ 8.8450e-01, -1.2104e-01, -6.0326e-01,  4.9418e-01, -4.1563e-01,\n",
      "         -6.4194e-01,  7.2745e-01, -3.4836e-01,  1.4419e-03,  3.5625e-01],\n",
      "        [-1.5395e-01, -1.1836e+00,  5.1297e-02, -1.4790e+00, -1.1271e+00,\n",
      "         -3.6444e-01, -2.1755e-01,  2.9579e+00,  5.3785e-02,  1.3622e+00],\n",
      "        [-3.2366e-01,  1.3531e+00,  2.0866e-02, -5.0266e-01,  1.5914e+00,\n",
      "          8.2188e-01, -7.8115e-01,  1.6094e+00, -8.9620e-02,  1.1777e+00],\n",
      "        [ 6.5238e-01,  1.2309e+00, -1.5889e+00, -4.3997e-01, -2.5850e-01,\n",
      "         -3.1382e+00,  9.0910e-01, -5.3692e-01,  5.7371e-02,  1.5096e+00],\n",
      "        [-9.9642e-01,  7.6031e-01, -1.2445e+00,  5.2465e-01,  8.0510e-01,\n",
      "         -1.6564e+00, -7.1241e-01, -4.2491e-01,  9.6077e-02, -1.7751e+00],\n",
      "        [-1.9640e+00, -4.3194e-01,  5.4566e-01, -1.0636e+00,  5.1320e-01,\n",
      "          1.1626e+00,  3.1663e-01, -2.3496e+00,  6.2295e-01,  3.4678e-01],\n",
      "        [ 2.0751e-01,  1.5718e+00,  1.4774e+00, -1.0513e+00, -9.1963e-01,\n",
      "          4.7410e-01,  1.0305e+00, -3.2099e+00, -4.0597e-01, -8.3613e-01],\n",
      "        [-1.4355e+00, -1.5760e+00, -3.1999e-01,  4.2555e-01,  4.2307e-01,\n",
      "         -6.6225e-01,  4.9707e-01,  9.4435e-02,  8.8910e-01, -2.9557e-01],\n",
      "        [ 3.8333e-01, -1.8821e+00, -3.8180e-02,  1.1619e+00, -1.7543e+00,\n",
      "         -2.1373e+00,  6.1566e-01,  1.1337e+00, -5.7556e-01, -4.1502e-01],\n",
      "        [ 3.4671e-01, -7.5505e-02,  2.7767e-01, -1.3378e+00,  5.2857e-01,\n",
      "         -4.9737e-01, -1.0186e+00, -1.2259e+00, -1.8462e-01, -1.1985e-01],\n",
      "        [ 3.6767e-01,  2.4021e-02,  1.6728e+00,  4.1518e-01, -6.6032e-01,\n",
      "          4.4475e-01,  1.5302e+00,  9.0582e-01,  1.4593e+00,  9.9419e-01],\n",
      "        [ 9.8646e-01, -8.1917e-01,  4.4842e-01, -1.4279e+00, -7.2617e-02,\n",
      "          3.5900e-01,  9.6456e-03, -7.4676e-01,  2.5282e-02,  4.6752e-01],\n",
      "        [ 9.5548e-01,  6.0414e-01,  6.5918e-01,  3.7736e-01, -9.8834e-01,\n",
      "         -4.8218e-01,  1.3600e-01,  4.5575e-01,  1.1010e+00,  1.7058e+00],\n",
      "        [ 8.3583e-01,  1.8281e-01, -6.0617e-01,  7.6926e-01,  1.8572e-01,\n",
      "          7.7760e-01,  8.3808e-01,  1.3193e+00,  2.6310e-01,  7.9553e-01],\n",
      "        [ 1.0595e+00,  1.2500e+00, -1.6636e+00, -6.1276e-01, -1.9481e+00,\n",
      "          1.1384e+00,  1.7051e+00,  5.9329e-01,  9.6591e-01, -1.5616e+00],\n",
      "        [ 6.6970e-01, -2.2620e-01, -8.7575e-01, -7.1926e-01,  1.2399e+00,\n",
      "          6.8593e-01,  3.8390e-01,  1.1939e+00, -1.2215e+00, -6.5624e-01],\n",
      "        [ 3.9173e-01, -1.3326e+00,  5.8796e-01, -2.8019e-02, -1.2660e+00,\n",
      "          1.2255e+00, -2.5688e-01, -1.8697e+00, -1.1492e-01,  5.3675e-01],\n",
      "        [-2.1959e-01, -1.5587e+00,  3.9163e-01, -4.5634e-01,  1.3653e-01,\n",
      "          7.6006e-02, -5.6619e-01, -1.1175e+00,  3.6297e-01,  1.1963e+00],\n",
      "        [-4.6137e-01, -9.4553e-01,  8.9634e-02, -1.2162e+00, -3.6686e-01,\n",
      "          2.4467e+00,  3.3693e-01,  2.0985e-01,  1.4508e+00,  1.2708e-01],\n",
      "        [ 2.4357e+00,  9.0952e-01,  1.6108e+00, -1.3859e+00, -1.2958e+00,\n",
      "          5.1417e-01,  9.4030e-01,  3.5608e-01, -5.3708e-01,  1.0430e+00],\n",
      "        [-1.4075e+00, -1.3149e-01, -8.6653e-01,  8.4824e-01, -1.0450e-01,\n",
      "         -1.1301e+00, -1.7683e+00, -6.0489e-01, -1.0546e+00, -3.8920e-01],\n",
      "        [-5.9921e-02, -1.8406e+00,  7.0502e-01, -1.2640e+00,  8.8273e-01,\n",
      "          1.9282e-01, -8.9629e-01,  1.5600e-01, -1.1231e+00, -7.1041e-01],\n",
      "        [ 4.6507e-01,  1.6997e-01, -1.4701e-01,  3.7801e-01,  1.0855e-01,\n",
      "         -6.9618e-02, -1.9974e-01, -2.6743e-01, -2.0306e-01, -1.2250e+00],\n",
      "        [-9.0113e-01, -1.5668e+00, -5.6131e-01, -2.1152e+00,  2.9461e-01,\n",
      "         -6.8438e-01, -6.1624e-01,  1.3969e+00, -6.3617e-01, -3.4625e-01],\n",
      "        [-4.0315e-01,  1.1688e-01,  7.9457e-01, -1.0947e+00, -1.4717e+00,\n",
      "         -5.1447e-01,  1.2559e+00, -1.0177e+00,  7.3963e-01, -8.8111e-01],\n",
      "        [ 2.4721e-01, -1.3858e+00, -4.3190e-01,  1.2524e+00,  1.1526e+00,\n",
      "          8.9850e-01,  1.0021e+00,  2.6968e+00, -6.2299e-01,  1.1838e+00],\n",
      "        [-9.6367e-01,  3.8785e-01,  1.8939e+00, -4.1751e-01,  1.5089e-01,\n",
      "          1.8499e-01,  1.9132e+00, -1.3410e-01, -2.2101e+00,  4.2131e-01],\n",
      "        [-1.3070e+00, -5.6961e-01,  5.3642e-01,  1.6019e+00,  4.7930e-01,\n",
      "         -6.2263e-01, -7.4787e-01,  1.1536e-01,  3.8318e-01,  7.6997e-01]],\n",
      "       requires_grad=True)\n"
     ]
    }
   ],
   "source": [
    "CONTEXT_SIZE = 2  # 2 words to the left, 2 to the right\n",
    "raw_text = \"\"\"We are about to study the idea of a computational process.\n",
    "Computational processes are abstract beings that inhabit computers.\n",
    "As they evolve, processes manipulate other abstract things called data.\n",
    "The evolution of a process is directed by a pattern of rules\n",
    "called a program. People create programs to direct processes. In effect,\n",
    "we conjure the spirits of the computer with our spells.\"\"\".split()\n",
    "\n",
    "# By deriving a set from `raw_text`, we deduplicate the array\n",
    "vocab = set(raw_text)\n",
    "vocab_size = len(vocab)\n",
    "\n",
    "word_to_ix = {word: i for i, word in enumerate(vocab)}\n",
    "data = []\n",
    "for i in range(CONTEXT_SIZE, len(raw_text) - CONTEXT_SIZE):\n",
    "    context = (\n",
    "        [raw_text[i - j - 1] for j in range(CONTEXT_SIZE)]\n",
    "        + [raw_text[i + j + 1] for j in range(CONTEXT_SIZE)]\n",
    "    )\n",
    "    target = raw_text[i]\n",
    "    data.append((context, target))\n",
    "\n",
    "ngrams=data\n",
    "\n",
    "class NGramLanguageModeler(nn.Module):\n",
    "\n",
    "    def __init__(self, vocab_size, embedding_dim, context_size):\n",
    "        super(NGramLanguageModeler, self).__init__()\n",
    "        self.embeddings = nn.Embedding(vocab_size, embedding_dim)\n",
    "        self.linear1 = nn.Linear(context_size * embedding_dim, 128)\n",
    "        self.linear2 = nn.Linear(128, vocab_size)\n",
    "\n",
    "    def forward(self, inputs):\n",
    "        embeds = self.embeddings(inputs).view((1, -1))\n",
    "        out = F.relu(self.linear1(embeds))\n",
    "        out = self.linear2(out)\n",
    "        log_probs = F.log_softmax(out, dim=1)\n",
    "        return log_probs\n",
    "\n",
    "\n",
    "losses = []\n",
    "loss_function = nn.NLLLoss()\n",
    "model = NGramLanguageModeler(len(vocab), EMBEDDING_DIM, CONTEXT_SIZE*2)\n",
    "optimizer = optim.SGD(model.parameters(), lr=0.001)\n",
    "\n",
    "for epoch in range(10):\n",
    "    total_loss = 0\n",
    "    for context, target in ngrams:\n",
    "        context_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)\n",
    "        model.zero_grad()\n",
    "        log_probs = model(context_idxs)\n",
    "        loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long))\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "        total_loss += loss.item()\n",
    "    losses.append(total_loss)\n",
    "print(losses)  # The loss decreased every iteration over the training data!\n",
    "\n",
    "# To get the embedding of a particular word, e.g. \"beauty\"\n",
    "print(model.embeddings.weight)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "torch24",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.19"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
