{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "adbbe443-a480-4084-80ca-b7b802a193a5",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:06:48.420231Z",
     "iopub.status.busy": "2024-10-11T08:06:48.419608Z",
     "iopub.status.idle": "2024-10-11T08:06:48.425782Z",
     "shell.execute_reply": "2024-10-11T08:06:48.424459Z",
     "shell.execute_reply.started": "2024-10-11T08:06:48.420195Z"
    },
    "tags": []
   },
   "source": [
    "# Chapter 2 - Tokens and Embeddings"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "12dc9c75-80bb-4e4b-b2a8-c6a4dc46761d",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:08:50.341781Z",
     "iopub.status.busy": "2024-10-11T08:08:50.341293Z",
     "iopub.status.idle": "2024-10-11T08:08:50.345958Z",
     "shell.execute_reply": "2024-10-11T08:08:50.345143Z",
     "shell.execute_reply.started": "2024-10-11T08:08:50.341740Z"
    },
    "tags": []
   },
   "source": [
    "## Download and run an open-source model\n",
    "\n",
    "As in the previous chapter, we load the pieces separately and explain what each part does."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "4e59808b-e458-4b81-8e02-5ad50d607ffb",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:10:03.367780Z",
     "iopub.status.busy": "2024-10-11T08:10:03.367043Z",
     "iopub.status.idle": "2024-10-11T08:10:08.325328Z",
     "shell.execute_reply": "2024-10-11T08:10:08.324841Z",
     "shell.execute_reply.started": "2024-10-11T08:10:03.367741Z"
    },
    "tags": []
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.\n"
     ]
    }
   ],
   "source": [
    "## Step 1: load the model\n",
    "\n",
    "from transformers import AutoModelForCausalLM, AutoTokenizer\n",
    "\n",
    "# Load the model and tokenizer\n",
    "model = AutoModelForCausalLM.from_pretrained(\n",
    "    \"Qwen/Qwen2.5-0.5B-Instruct\",\n",
    "    device_map=\"cuda\",\n",
    "    torch_dtype=\"auto\",\n",
    "    trust_remote_code=True,\n",
    ")\n",
    "tokenizer = AutoTokenizer.from_pretrained(\"Qwen/Qwen2.5-0.5B-Instruct\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "62d4ddce-2f15-4554-b1a9-3d5f76521628",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:17:05.874371Z",
     "iopub.status.busy": "2024-10-11T08:17:05.873880Z",
     "iopub.status.idle": "2024-10-11T08:17:05.883891Z",
     "shell.execute_reply": "2024-10-11T08:17:05.882202Z",
     "shell.execute_reply.started": "2024-10-11T08:17:05.874329Z"
    },
    "tags": []
   },
   "source": [
    "> Why does the book append an <|assistant|> tag here? Because the book uses microsoft/Phi-3-mini-4k-instruct, which marks the start of the AI's answer with <|assistant|>. Its prompt format looks like this:\n",
    "> ```\n",
    "> <|system|>\n",
    "> You are a helpful assistant.<|end|>\n",
    "> <|user|>\n",
    "> How to explain Internet for a medieval knight?<|end|>\n",
    "> <|assistant|>\n",
    "> ```\n",
    "\n",
    "\n",
    "```python\n",
    "\n",
    "# From the book (adapted): the trailing <|assistant|> tells the model to start generating its answer\n",
    "prompt = \"帮我写一个请假条，原因是自己生病了。<|assistant|>\"\n",
    "\n",
    "# Tokenize the input prompt\n",
    "input_ids = tokenizer(prompt, return_tensors=\"pt\").input_ids.to(\"cuda\")\n",
    "```\n",
    "\n",
    "How does Qwen's template differ? It uses <|im_start|>/<|im_end|> markers instead:\n",
    "```\n",
    "<|im_start|>user\n",
    "讲一个猫有关的笑话？<|im_end|>\n",
    "<|im_start|>assistant\n",
    "```\n"
   ]
  },
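  {
   "cell_type": "markdown",
   "id": "3a9b1c2d-4e5f-4a6b-8c7d-9e0f1a2b3c4d",
   "metadata": {},
   "source": [
    "The ChatML wrapping Qwen expects can be sketched in plain Python. The helper below is purely illustrative (in practice, `tokenizer.apply_chat_template` in transformers builds this string for you):\n",
    "\n",
    "```python\n",
    "def build_chatml(messages):\n",
    "    # Wrap each message in <|im_start|>role ... <|im_end|> markers,\n",
    "    # then open an assistant turn so the model starts answering.\n",
    "    lines = []\n",
    "    for m in messages:\n",
    "        lines.append('<|im_start|>' + m['role'])\n",
    "        lines.append(m['content'] + '<|im_end|>')\n",
    "    lines.append('<|im_start|>assistant')\n",
    "    return '\\n'.join(lines) + '\\n'\n",
    "\n",
    "print(build_chatml([{'role': 'user', 'content': '讲一个猫有关的笑话？'}]))\n",
    "```"
   ]
  },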
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "fa357893-bac3-4146-abbb-06472530317a",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:22:55.991097Z",
     "iopub.status.busy": "2024-10-11T08:22:55.990640Z",
     "iopub.status.idle": "2024-10-11T08:22:56.401340Z",
     "shell.execute_reply": "2024-10-11T08:22:56.400841Z",
     "shell.execute_reply.started": "2024-10-11T08:22:55.991062Z"
    },
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "帮我写一个请教条，原因是自己生病了。<|im_start|>assistant：尊敬的老师、同学们：\n",
      "\n",
      "您好！我最近因为身体不适而感到非常疲劳和难受\n"
     ]
    }
   ],
   "source": [
    "## So for Qwen we can do the equivalent?\n",
    "### Chinese\n",
    "prompt = \"帮我写一个请假条，原因是自己生病了。<|im_start|>assistant\"\n",
    "\n",
    "# Tokenize the input prompt\n",
    "input_ids = tokenizer(prompt, return_tensors=\"pt\").input_ids.to(\"cuda\")\n",
    "# Generate the text\n",
    "generation_output = model.generate(\n",
    "  input_ids=input_ids,\n",
    "  max_new_tokens=20\n",
    ")\n",
    "\n",
    "# Print the output\n",
    "print(tokenizer.decode(generation_output[0]))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "3756cdca-0626-44e7-bcd4-726251009c15",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:23:30.407269Z",
     "iopub.status.busy": "2024-10-11T08:23:30.406802Z",
     "iopub.status.idle": "2024-10-11T08:23:30.421172Z",
     "shell.execute_reply": "2024-10-11T08:23:30.420664Z",
     "shell.execute_reply.started": "2024-10-11T08:23:30.407234Z"
    },
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[108965,  61443,  46944, 116069,  38989,   3837, 107711,  99283, 109281,\n",
      "          34187,   1773, 151644,  77091]], device='cuda:0')\n"
     ]
    }
   ],
   "source": [
    "print(input_ids)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "99c2d8dd-d5ed-4024-9fe3-deadda8787b2",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:24:49.545360Z",
     "iopub.status.busy": "2024-10-11T08:24:49.544792Z",
     "iopub.status.idle": "2024-10-11T08:24:49.558999Z",
     "shell.execute_reply": "2024-10-11T08:24:49.558495Z",
     "shell.execute_reply.started": "2024-10-11T08:24:49.545323Z"
    },
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "帮我\n",
      "写\n",
      "一个\n",
      "请教\n",
      "条\n",
      "，\n",
      "原因是\n",
      "自己\n",
      "生病\n",
      "了\n",
      "。\n",
      "<|im_start|>\n",
      "assistant\n"
     ]
    }
   ],
   "source": [
    "for id_ in input_ids[0]:\n",
    "    print(tokenizer.decode(id_))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "c755519f-0431-4b91-bf33-a2df70d4edd0",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:26:41.069165Z",
     "iopub.status.busy": "2024-10-11T08:26:41.068664Z",
     "iopub.status.idle": "2024-10-11T08:26:41.467020Z",
     "shell.execute_reply": "2024-10-11T08:26:41.466515Z",
     "shell.execute_reply.started": "2024-10-11T08:26:41.069128Z"
    },
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Write an email apologizing to Sarah for the tragic gardening mishap. Explain how it happened.<|assistant|> Dear Sarah,\n",
      "\n",
      "I hope this message finds you well.\n",
      "\n",
      "I wanted to apologize for the gardening mishap\n"
     ]
    }
   ],
   "source": [
    "## English\n",
    "prompt = \"Write an email apologizing to Sarah for the tragic gardening mishap. Explain how it happened.<|assistant|>\"\n",
    "\n",
    "# Tokenize the input prompt\n",
    "input_ids = tokenizer(prompt, return_tensors=\"pt\").input_ids.to(\"cuda\")\n",
    "# Generate the text\n",
    "generation_output = model.generate(\n",
    "  input_ids=input_ids,\n",
    "  max_new_tokens=20\n",
    ")\n",
    "\n",
    "# Print the output\n",
    "print(tokenizer.decode(generation_output[0]))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "fa947d6c-cdb5-424c-8561-0856d9a7e595",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:27:55.948426Z",
     "iopub.status.busy": "2024-10-11T08:27:55.947978Z",
     "iopub.status.idle": "2024-10-11T08:27:55.960386Z",
     "shell.execute_reply": "2024-10-11T08:27:55.959931Z",
     "shell.execute_reply.started": "2024-10-11T08:27:55.948391Z"
    },
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Write\n",
      " an\n",
      " email\n",
      " apolog\n",
      "izing\n",
      " to\n",
      " Sarah\n",
      " for\n",
      " the\n",
      " tragic\n",
      " gardening\n",
      " mish\n",
      "ap\n",
      ".\n",
      " Explain\n",
      " how\n",
      " it\n",
      " happened\n",
      ".<\n",
      "|\n",
      "assistant\n",
      "|\n",
      ">\n"
     ]
    }
   ],
   "source": [
    "for id_ in input_ids[0]:\n",
    "    print(tokenizer.decode(id_))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b770c2aa-8cc9-4c9a-8b4f-cb4d582b7a2f",
   "metadata": {},
   "source": [
    "## Comparing trained LLM tokenizers"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "0b43064c-afed-485a-82c9-b3b52cf9195e",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:29:16.577506Z",
     "iopub.status.busy": "2024-10-11T08:29:16.576738Z",
     "iopub.status.idle": "2024-10-11T08:29:16.584120Z",
     "shell.execute_reply": "2024-10-11T08:29:16.583156Z",
     "shell.execute_reply.started": "2024-10-11T08:29:16.577469Z"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "from transformers import AutoModelForCausalLM, AutoTokenizer\n",
    "\n",
    "colors_list = [\n",
    "    '102;194;165', '252;141;98', '141;160;203',\n",
    "    '231;138;195', '166;216;84', '255;217;47'\n",
    "]\n",
    "\n",
    "def show_tokens(sentence, tokenizer_name):\n",
    "    tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)\n",
    "    token_ids = tokenizer(sentence).input_ids\n",
    "    for idx, t in enumerate(token_ids):\n",
    "        print(\n",
    "            f'\\x1b[0;30;48;2;{colors_list[idx % len(colors_list)]}m' +\n",
    "            tokenizer.decode(t) +\n",
    "            '\\x1b[0m',\n",
    "            end=' '\n",
    "        )\n",
    "text = \"\"\"\n",
    "English and CAPITALIZATION\n",
    "🎵 鸟\n",
    "show_tokens False None elif == >= else: two tabs:\"    \" Three tabs: \"       \"\n",
    "12.0*50=600\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "9e863362-1c55-433c-abd2-e55220a9fd69",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:29:28.237371Z",
     "iopub.status.busy": "2024-10-11T08:29:28.236734Z",
     "iopub.status.idle": "2024-10-11T08:29:31.519687Z",
     "shell.execute_reply": "2024-10-11T08:29:31.519145Z",
     "shell.execute_reply.started": "2024-10-11T08:29:28.237329Z"
    },
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "5cad96cd68f84f21a840995626b0303f",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer_config.json:   0%|          | 0.00/48.0 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/output/envs/hands-on-llm/lib/python3.10/site-packages/huggingface_hub/file_download.py:1142: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.\n",
      "  warnings.warn(\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "b1c50099b14b410f9e51387669f6a356",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "config.json:   0%|          | 0.00/570 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "cf6228744f9940aa890cfded11d0f2bf",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "vocab.txt:   0%|          | 0.00/232k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "8151c598a39c448994a010b611b7a149",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer.json:   0%|          | 0.00/466k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[0;30;48;2;102;194;165m[CLS]\u001b[0m \u001b[0;30;48;2;252;141;98menglish\u001b[0m \u001b[0;30;48;2;141;160;203mand\u001b[0m \u001b[0;30;48;2;231;138;195mcapital\u001b[0m \u001b[0;30;48;2;166;216;84m##ization\u001b[0m \u001b[0;30;48;2;255;217;47m[UNK]\u001b[0m \u001b[0;30;48;2;102;194;165m[UNK]\u001b[0m \u001b[0;30;48;2;252;141;98mshow\u001b[0m \u001b[0;30;48;2;141;160;203m_\u001b[0m \u001b[0;30;48;2;231;138;195mtoken\u001b[0m \u001b[0;30;48;2;166;216;84m##s\u001b[0m \u001b[0;30;48;2;255;217;47mfalse\u001b[0m \u001b[0;30;48;2;102;194;165mnone\u001b[0m \u001b[0;30;48;2;252;141;98meli\u001b[0m \u001b[0;30;48;2;141;160;203m##f\u001b[0m \u001b[0;30;48;2;231;138;195m=\u001b[0m \u001b[0;30;48;2;166;216;84m=\u001b[0m \u001b[0;30;48;2;255;217;47m>\u001b[0m \u001b[0;30;48;2;102;194;165m=\u001b[0m \u001b[0;30;48;2;252;141;98melse\u001b[0m \u001b[0;30;48;2;141;160;203m:\u001b[0m \u001b[0;30;48;2;231;138;195mtwo\u001b[0m \u001b[0;30;48;2;166;216;84mtab\u001b[0m \u001b[0;30;48;2;255;217;47m##s\u001b[0m \u001b[0;30;48;2;102;194;165m:\u001b[0m \u001b[0;30;48;2;252;141;98m\"\u001b[0m \u001b[0;30;48;2;141;160;203m\"\u001b[0m \u001b[0;30;48;2;231;138;195mthree\u001b[0m \u001b[0;30;48;2;166;216;84mtab\u001b[0m \u001b[0;30;48;2;255;217;47m##s\u001b[0m \u001b[0;30;48;2;102;194;165m:\u001b[0m \u001b[0;30;48;2;252;141;98m\"\u001b[0m \u001b[0;30;48;2;141;160;203m\"\u001b[0m \u001b[0;30;48;2;231;138;195m12\u001b[0m \u001b[0;30;48;2;166;216;84m.\u001b[0m \u001b[0;30;48;2;255;217;47m0\u001b[0m \u001b[0;30;48;2;102;194;165m*\u001b[0m \u001b[0;30;48;2;252;141;98m50\u001b[0m \u001b[0;30;48;2;141;160;203m=\u001b[0m \u001b[0;30;48;2;231;138;195m600\u001b[0m \u001b[0;30;48;2;166;216;84m[SEP]\u001b[0m "
     ]
    }
   ],
   "source": [
    "show_tokens(text, \"bert-base-uncased\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "edd113f8-2520-4a34-9683-6a9954381bcf",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:29:41.739704Z",
     "iopub.status.busy": "2024-10-11T08:29:41.738714Z",
     "iopub.status.idle": "2024-10-11T08:29:46.105615Z",
     "shell.execute_reply": "2024-10-11T08:29:46.105134Z",
     "shell.execute_reply.started": "2024-10-11T08:29:41.739658Z"
    },
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "4bc8e59946d241adbaef42af6ffe7321",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer_config.json:   0%|          | 0.00/26.0 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "5744056f7b6f4b42b169ce858f3bd572",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "config.json:   0%|          | 0.00/665 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "ef638e44958646b6b35e0c305549f01f",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "vocab.json:   0%|          | 0.00/1.04M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "886a7742e8414bf69db0e5ddbb7abf55",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "merges.txt:   0%|          | 0.00/456k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "b8741a4c5e5d4050878d17d619164b77",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer.json:   0%|          | 0.00/1.36M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[0;30;48;2;102;194;165m\n",
      "\u001b[0m \u001b[0;30;48;2;252;141;98mEnglish\u001b[0m \u001b[0;30;48;2;141;160;203m and\u001b[0m \u001b[0;30;48;2;231;138;195m CAP\u001b[0m \u001b[0;30;48;2;166;216;84mITAL\u001b[0m \u001b[0;30;48;2;255;217;47mIZ\u001b[0m \u001b[0;30;48;2;102;194;165mATION\u001b[0m \u001b[0;30;48;2;252;141;98m\n",
      "\u001b[0m \u001b[0;30;48;2;141;160;203m�\u001b[0m \u001b[0;30;48;2;231;138;195m�\u001b[0m \u001b[0;30;48;2;166;216;84m�\u001b[0m \u001b[0;30;48;2;255;217;47m �\u001b[0m \u001b[0;30;48;2;102;194;165m�\u001b[0m \u001b[0;30;48;2;252;141;98m�\u001b[0m \u001b[0;30;48;2;141;160;203m\n",
      "\u001b[0m \u001b[0;30;48;2;231;138;195mshow\u001b[0m \u001b[0;30;48;2;166;216;84m_\u001b[0m \u001b[0;30;48;2;255;217;47mt\u001b[0m \u001b[0;30;48;2;102;194;165mok\u001b[0m \u001b[0;30;48;2;252;141;98mens\u001b[0m \u001b[0;30;48;2;141;160;203m False\u001b[0m \u001b[0;30;48;2;231;138;195m None\u001b[0m \u001b[0;30;48;2;166;216;84m el\u001b[0m \u001b[0;30;48;2;255;217;47mif\u001b[0m \u001b[0;30;48;2;102;194;165m ==\u001b[0m \u001b[0;30;48;2;252;141;98m >=\u001b[0m \u001b[0;30;48;2;141;160;203m else\u001b[0m \u001b[0;30;48;2;231;138;195m:\u001b[0m \u001b[0;30;48;2;166;216;84m two\u001b[0m \u001b[0;30;48;2;255;217;47m tabs\u001b[0m \u001b[0;30;48;2;102;194;165m:\"\u001b[0m \u001b[0;30;48;2;252;141;98m \u001b[0m \u001b[0;30;48;2;141;160;203m \u001b[0m \u001b[0;30;48;2;231;138;195m \u001b[0m \u001b[0;30;48;2;166;216;84m \"\u001b[0m \u001b[0;30;48;2;255;217;47m Three\u001b[0m \u001b[0;30;48;2;102;194;165m tabs\u001b[0m \u001b[0;30;48;2;252;141;98m:\u001b[0m \u001b[0;30;48;2;141;160;203m \"\u001b[0m \u001b[0;30;48;2;231;138;195m \u001b[0m \u001b[0;30;48;2;166;216;84m \u001b[0m \u001b[0;30;48;2;255;217;47m \u001b[0m \u001b[0;30;48;2;102;194;165m \u001b[0m \u001b[0;30;48;2;252;141;98m \u001b[0m \u001b[0;30;48;2;141;160;203m \u001b[0m \u001b[0;30;48;2;231;138;195m \"\u001b[0m \u001b[0;30;48;2;166;216;84m\n",
      "\u001b[0m \u001b[0;30;48;2;255;217;47m12\u001b[0m \u001b[0;30;48;2;102;194;165m.\u001b[0m \u001b[0;30;48;2;252;141;98m0\u001b[0m \u001b[0;30;48;2;141;160;203m*\u001b[0m \u001b[0;30;48;2;231;138;195m50\u001b[0m \u001b[0;30;48;2;166;216;84m=\u001b[0m \u001b[0;30;48;2;255;217;47m600\u001b[0m \u001b[0;30;48;2;102;194;165m\n",
      "\u001b[0m "
     ]
    }
   ],
   "source": [
    "show_tokens(text, \"gpt2\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "bf25e2a0-d62d-4d31-af5a-dbbd0946f0ad",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:29:53.956299Z",
     "iopub.status.busy": "2024-10-11T08:29:53.955911Z",
     "iopub.status.idle": "2024-10-11T08:29:54.410542Z",
     "shell.execute_reply": "2024-10-11T08:29:54.410086Z",
     "shell.execute_reply.started": "2024-10-11T08:29:53.956269Z"
    },
    "tags": []
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[0;30;48;2;102;194;165m\n",
      "\u001b[0m \u001b[0;30;48;2;252;141;98mEnglish\u001b[0m \u001b[0;30;48;2;141;160;203m and\u001b[0m \u001b[0;30;48;2;231;138;195m CAPITAL\u001b[0m \u001b[0;30;48;2;166;216;84mIZATION\u001b[0m \u001b[0;30;48;2;255;217;47m\n",
      "\u001b[0m \u001b[0;30;48;2;102;194;165m🎵\u001b[0m \u001b[0;30;48;2;252;141;98m �\u001b[0m \u001b[0;30;48;2;141;160;203m�\u001b[0m \u001b[0;30;48;2;231;138;195m�\u001b[0m \u001b[0;30;48;2;166;216;84m\n",
      "\u001b[0m \u001b[0;30;48;2;255;217;47mshow\u001b[0m \u001b[0;30;48;2;102;194;165m_tokens\u001b[0m \u001b[0;30;48;2;252;141;98m False\u001b[0m \u001b[0;30;48;2;141;160;203m None\u001b[0m \u001b[0;30;48;2;231;138;195m elif\u001b[0m \u001b[0;30;48;2;166;216;84m ==\u001b[0m \u001b[0;30;48;2;255;217;47m >=\u001b[0m \u001b[0;30;48;2;102;194;165m else\u001b[0m \u001b[0;30;48;2;252;141;98m:\u001b[0m \u001b[0;30;48;2;141;160;203m two\u001b[0m \u001b[0;30;48;2;231;138;195m tabs\u001b[0m \u001b[0;30;48;2;166;216;84m:\"\u001b[0m \u001b[0;30;48;2;255;217;47m   \u001b[0m \u001b[0;30;48;2;102;194;165m \"\u001b[0m \u001b[0;30;48;2;252;141;98m Three\u001b[0m \u001b[0;30;48;2;141;160;203m tabs\u001b[0m \u001b[0;30;48;2;231;138;195m:\u001b[0m \u001b[0;30;48;2;166;216;84m \"\u001b[0m \u001b[0;30;48;2;255;217;47m      \u001b[0m \u001b[0;30;48;2;102;194;165m \"\n",
      "\u001b[0m \u001b[0;30;48;2;252;141;98m1\u001b[0m \u001b[0;30;48;2;141;160;203m2\u001b[0m \u001b[0;30;48;2;231;138;195m.\u001b[0m \u001b[0;30;48;2;166;216;84m0\u001b[0m \u001b[0;30;48;2;255;217;47m*\u001b[0m \u001b[0;30;48;2;102;194;165m5\u001b[0m \u001b[0;30;48;2;252;141;98m0\u001b[0m \u001b[0;30;48;2;141;160;203m=\u001b[0m \u001b[0;30;48;2;231;138;195m6\u001b[0m \u001b[0;30;48;2;166;216;84m0\u001b[0m \u001b[0;30;48;2;255;217;47m0\u001b[0m \u001b[0;30;48;2;102;194;165m\n",
      "\u001b[0m "
     ]
    }
   ],
   "source": [
    "show_tokens(text, \"Qwen/Qwen2.5-0.5B-Instruct\")"
   ]
  },
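  {
   "cell_type": "markdown",
   "id": "b2c3d4e5-2222-4bcd-9aef-161718192021",
   "metadata": {},
   "source": [
    "Both the GPT-2 and Qwen tokenizers above are byte-pair-encoding (BPE) based (BERT uses WordPiece, hence the `##` continuation tokens). A toy sketch of a single BPE merge step, using a hypothetical `merge_pair` helper, shows the core idea of merging frequent adjacent pairs:\n",
    "\n",
    "```python\n",
    "def merge_pair(tokens, pair, merged):\n",
    "    # Replace every adjacent occurrence of `pair` with the merged symbol.\n",
    "    out, i = [], 0\n",
    "    while i < len(tokens):\n",
    "        if i < len(tokens) - 1 and (tokens[i], tokens[i + 1]) == pair:\n",
    "            out.append(merged)\n",
    "            i += 2\n",
    "        else:\n",
    "            out.append(tokens[i])\n",
    "            i += 1\n",
    "    return out\n",
    "\n",
    "print(merge_pair(list('banana'), ('a', 'n'), 'an'))  # ['b', 'an', 'an', 'a']\n",
    "```\n",
    "\n",
    "A real BPE tokenizer learns thousands of such merges from data and applies them in priority order."
   ]
  },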
  {
   "cell_type": "markdown",
   "id": "fe00a8c0-4bed-4c43-b952-798beb85856d",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:42:29.450389Z",
     "iopub.status.busy": "2024-10-11T08:42:29.449981Z",
     "iopub.status.idle": "2024-10-11T08:42:29.454554Z",
     "shell.execute_reply": "2024-10-11T08:42:29.453680Z",
     "shell.execute_reply.started": "2024-10-11T08:42:29.450354Z"
    },
    "tags": []
   },
   "source": [
    "## Understanding embeddings"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "55538ab8-82d1-41e1-b800-b19b979a682b",
   "metadata": {
    "tags": []
   },
   "source": [
    "### Classic word2vec-style static embeddings (unrelated to LLMs)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "id": "1b0cc16d-e386-4cdc-807d-e8ac81612339",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:39:39.323387Z",
     "iopub.status.busy": "2024-10-11T08:39:39.322944Z",
     "iopub.status.idle": "2024-10-11T08:39:57.810310Z",
     "shell.execute_reply": "2024-10-11T08:39:57.809573Z",
     "shell.execute_reply.started": "2024-10-11T08:39:39.323351Z"
    },
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[==================================================] 100.0% 66.0/66.0MB downloaded\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[('king', 1.0000001192092896),\n",
       " ('prince', 0.8236179351806641),\n",
       " ('queen', 0.7839043140411377),\n",
       " ('ii', 0.7746230363845825),\n",
       " ('emperor', 0.7736247777938843),\n",
       " ('son', 0.766719400882721),\n",
       " ('uncle', 0.7627150416374207),\n",
       " ('kingdom', 0.7542161345481873),\n",
       " ('throne', 0.7539914846420288),\n",
       " ('brother', 0.7492411136627197),\n",
       " ('ruler', 0.7434253692626953)]"
      ]
     },
     "execution_count": 34,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import gensim.downloader as api\n",
    "\n",
    "# Download embeddings (66MB, glove, trained on wikipedia, vector size: 50)\n",
    "# Other options include \"word2vec-google-news-300\"\n",
    "# More options at https://github.com/RaRe-Technologies/gensim-data\n",
    "model = api.load(\"glove-wiki-gigaword-50\")\n",
    "\n",
    "model.most_similar([model['king']], topn=11)"
   ]
  },
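  {
   "cell_type": "markdown",
   "id": "c3d4e5f6-3333-4cde-9bf0-222324252627",
   "metadata": {},
   "source": [
    "`most_similar` ranks neighbors by cosine similarity between embedding vectors. A minimal NumPy sketch of that measure (the 3-d vectors are toy stand-ins, not real GloVe values):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def cosine_similarity(a, b):\n",
    "    # Cosine of the angle between two vectors: 1.0 means identical direction.\n",
    "    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))\n",
    "\n",
    "v_king = np.array([0.5, 0.7, 0.1])\n",
    "v_queen = np.array([0.45, 0.8, 0.2])\n",
    "print(cosine_similarity(v_king, v_queen))\n",
    "```"
   ]
  },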
  {
   "cell_type": "markdown",
   "id": "704965f4-e3a9-4bd3-b5e9-20b91010b51e",
   "metadata": {},
   "source": [
    "\n",
    "### Contextualized embeddings\n",
    "Contextualized word embeddings come from a language model (like BERT); ELMo was probably the earliest, followed by BERT.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "d33cc3af-878a-4142-96be-6bc26e934853",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:36:28.354306Z",
     "iopub.status.busy": "2024-10-11T08:36:28.354008Z",
     "iopub.status.idle": "2024-10-11T08:36:48.344769Z",
     "shell.execute_reply": "2024-10-11T08:36:48.344186Z",
     "shell.execute_reply.started": "2024-10-11T08:36:28.354289Z"
    },
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "362a545f31bb4f90b4caf51ebfc30856",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "model.safetensors:   0%|          | 0.00/440M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from transformers import AutoModel, AutoTokenizer\n",
    "\n",
    "# Load a tokenizer\n",
    "tokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\n",
    "\n",
    "# Load a language model\n",
    "model = AutoModel.from_pretrained(\"bert-base-uncased\")\n",
    "\n",
    "# Tokenize the sentence\n",
    "tokens = tokenizer('Hello world', return_tensors='pt')\n",
    "\n",
    "# Process the tokens\n",
    "output = model(**tokens)[0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "id": "c8b141ec-2a79-4a68-85c7-7a52b9bcd29d",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:36:52.335480Z",
     "iopub.status.busy": "2024-10-11T08:36:52.335048Z",
     "iopub.status.idle": "2024-10-11T08:36:52.340668Z",
     "shell.execute_reply": "2024-10-11T08:36:52.340273Z",
     "shell.execute_reply.started": "2024-10-11T08:36:52.335445Z"
    },
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[-0.1689,  0.1361, -0.1394,  ..., -0.6251,  0.0522,  0.3671],\n",
       "         [-0.3633,  0.1412,  0.8800,  ...,  0.1043,  0.2888,  0.3727],\n",
       "         [-0.6986, -0.6988,  0.0645,  ..., -0.2210,  0.0099, -0.5940],\n",
       "         [ 0.8310,  0.1237, -0.1512,  ...,  0.1031, -0.6779, -0.2629]]],\n",
       "       grad_fn=<NativeLayerNormBackward0>)"
      ]
     },
     "execution_count": 31,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "output"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "id": "e550b222-d9c2-4f38-9d2f-49b5d72a0fe8",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:36:48.346054Z",
     "iopub.status.busy": "2024-10-11T08:36:48.345858Z",
     "iopub.status.idle": "2024-10-11T08:36:48.349548Z",
     "shell.execute_reply": "2024-10-11T08:36:48.349142Z",
     "shell.execute_reply.started": "2024-10-11T08:36:48.346037Z"
    },
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([1, 4, 768])"
      ]
     },
     "execution_count": 29,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "output.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "id": "30ac1fa0-2c79-4c48-9134-6f3dd6030339",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:36:48.350161Z",
     "iopub.status.busy": "2024-10-11T08:36:48.350024Z",
     "iopub.status.idle": "2024-10-11T08:36:48.353002Z",
     "shell.execute_reply": "2024-10-11T08:36:48.352643Z",
     "shell.execute_reply.started": "2024-10-11T08:36:48.350147Z"
    },
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[CLS]\n",
      "hello\n",
      "world\n",
      "[SEP]\n"
     ]
    }
   ],
   "source": [
    "for token in tokens['input_ids'][0]:\n",
    "    print(tokenizer.decode(token))"
   ]
  },
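  {
   "cell_type": "markdown",
   "id": "d4e5f6a7-4444-4def-9c01-282930313233",
   "metadata": {},
   "source": [
    "One common way to collapse the [1, 4, 768] token embeddings above into a single sentence vector is mean pooling: averaging over the token axis. A NumPy sketch with random stand-in values (in the real case, the `output` tensor would be used instead):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "token_embeddings = rng.normal(size=(1, 4, 768))  # stand-in for `output`\n",
    "\n",
    "# Average over the token axis (axis=1): one 768-d vector per sentence\n",
    "sentence_embedding = token_embeddings.mean(axis=1)\n",
    "print(sentence_embedding.shape)  # (1, 768)\n",
    "```\n",
    "\n",
    "Libraries like sentence-transformers apply pooling of this kind (plus normalization) under the hood."
   ]
  },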
  {
   "cell_type": "markdown",
   "id": "7518b35b-75fa-4f16-a73c-137b7bf7e1d5",
   "metadata": {},
   "source": [
    "### Text embeddings (sentence vectors)\n",
    "Text embeddings (for sentences and whole documents)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "id": "df6bf774-6b78-427f-8cea-c4776d7f5026",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:38:38.518761Z",
     "iopub.status.busy": "2024-10-11T08:38:38.518321Z",
     "iopub.status.idle": "2024-10-11T08:39:04.196637Z",
     "shell.execute_reply": "2024-10-11T08:39:04.196034Z",
     "shell.execute_reply.started": "2024-10-11T08:38:38.518725Z"
    },
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "bc091db001e6495eb01e14795d3ba568",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "modules.json:   0%|          | 0.00/349 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from sentence_transformers import SentenceTransformer\n",
    "\n",
    "# Load model\n",
    "model = SentenceTransformer('sentence-transformers/all-distilroberta-v1')\n",
    "\n",
    "# Convert text to text embeddings\n",
    "vector = model.encode(\"Testing the embedding capability of a small model\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "id": "a5a357a4-8b07-4380-b94c-7f28b45e4138",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:39:04.198959Z",
     "iopub.status.busy": "2024-10-11T08:39:04.198300Z",
     "iopub.status.idle": "2024-10-11T08:39:04.204634Z",
     "shell.execute_reply": "2024-10-11T08:39:04.203864Z",
     "shell.execute_reply.started": "2024-10-11T08:39:04.198925Z"
    },
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(768,)"
      ]
     },
     "execution_count": 33,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "vector.shape"
   ]
  },
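  {
   "cell_type": "markdown",
   "id": "f3a1c2d4-5e6b-4a7c-8d9e-0a1b2c3d4e5f",
   "metadata": {},
   "source": [
    "Text embeddings like the 768-d vector above are usually compared with cosine similarity. A minimal, self-contained numpy sketch of that comparison (toy vectors, not the `model.encode` output):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1b2c3d4-1111-4222-8333-444455556666",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def cosine_similarity(a, b):\n",
    "    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)\n",
    "    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))\n",
    "\n",
    "# Parallel toy vectors -> 1.0; orthogonal toy vectors -> 0.0\n",
    "print(cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))\n",
    "print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))"
   ]
  },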
  {
   "cell_type": "markdown",
   "id": "d1f0b5e8-8648-462a-8d6c-79e6baaf4d7e",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:40:36.154257Z",
     "iopub.status.busy": "2024-10-11T08:40:36.153605Z",
     "iopub.status.idle": "2024-10-11T08:40:36.157233Z",
     "shell.execute_reply": "2024-10-11T08:40:36.156584Z",
     "shell.execute_reply.started": "2024-10-11T08:40:36.154226Z"
    },
    "tags": []
   },
   "source": [
    "### Embedding-based song recommendation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "id": "c768f230-95aa-4576-9989-f3c9ada247fa",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:43:05.123480Z",
     "iopub.status.busy": "2024-10-11T08:43:05.122503Z",
     "iopub.status.idle": "2024-10-11T08:43:09.629853Z",
     "shell.execute_reply": "2024-10-11T08:43:09.629218Z",
     "shell.execute_reply.started": "2024-10-11T08:43:05.123434Z"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "from urllib import request\n",
    "\n",
    "# Get the playlist dataset file\n",
    "data = request.urlopen('https://storage.googleapis.com/maps-premium/dataset/yes_complete/train.txt')\n",
    "\n",
    "# Parse the playlist dataset file. Skip the first two lines as\n",
    "# they only contain metadata\n",
    "lines = data.read().decode(\"utf-8\").split('\\n')[2:]\n",
    "\n",
    "# Remove playlists with only one song\n",
    "playlists = [s.rstrip().split() for s in lines if len(s.split()) > 1]\n",
    "\n",
    "# Load song metadata\n",
    "songs_file = request.urlopen('https://storage.googleapis.com/maps-premium/dataset/yes_complete/song_hash.txt')\n",
    "songs_file = songs_file.read().decode(\"utf-8\").split('\\n')\n",
    "songs = [s.rstrip().split('\\t') for s in songs_file]\n",
    "songs_df = pd.DataFrame(data=songs, columns = ['id', 'title', 'artist'])\n",
    "songs_df = songs_df.set_index('id')"
   ]
  },
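  {
   "cell_type": "markdown",
   "id": "c7d8e9f0-2233-4455-8677-889900112233",
   "metadata": {},
   "source": [
    "The cell above applies two parsing rules: skip the two metadata lines at the top of `train.txt`, and drop any playlist with fewer than two songs. The same rules applied to a tiny inline string (hypothetical data):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d8e9f0a1-3344-4566-8788-990011223344",
   "metadata": {},
   "outputs": [],
   "source": [
    "raw = \"meta line 1\\nmeta line 2\\n0 1 2\\n5\\n3 4\\n\"\n",
    "\n",
    "# Skip the two metadata lines, then drop single-song playlists\n",
    "lines = raw.split('\\n')[2:]\n",
    "toy_playlists = [s.rstrip().split() for s in lines if len(s.split()) > 1]\n",
    "print(toy_playlists)"
   ]
  },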
  {
   "cell_type": "code",
   "execution_count": 39,
   "id": "75a673c0-1d53-4da0-acfd-f7a1f658b3b5",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:43:19.312184Z",
     "iopub.status.busy": "2024-10-11T08:43:19.311781Z",
     "iopub.status.idle": "2024-10-11T08:43:19.316367Z",
     "shell.execute_reply": "2024-10-11T08:43:19.315786Z",
     "shell.execute_reply.started": "2024-10-11T08:43:19.312154Z"
    },
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Playlist #1:\n",
      "  ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '22', '23', '24', '25', '26', '27', '28', '29', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '40', '41', '2', '42', '43', '44', '45', '46', '47', '48', '20', '49', '8', '50', '51', '52', '53', '54', '55', '56', '57', '25', '58', '59', '60', '61', '62', '3', '63', '64', '65', '66', '46', '47', '67', '2', '48', '68', '69', '70', '57', '50', '71', '72', '53', '73', '25', '74', '59', '20', '46', '75', '76', '77', '59', '20', '43'] \n",
      "\n",
      "Playlist #2:\n",
      "  ['78', '79', '80', '3', '62', '81', '14', '82', '48', '83', '84', '17', '85', '86', '87', '88', '74', '89', '90', '91', '4', '73', '62', '92', '17', '53', '59', '93', '94', '51', '50', '27', '95', '48', '96', '97', '98', '99', '100', '57', '101', '102', '25', '103', '3', '104', '105', '106', '107', '47', '108', '109', '110', '111', '112', '113', '25', '63', '62', '114', '115', '84', '116', '117', '118', '119', '120', '121', '122', '123', '50', '70', '71', '124', '17', '85', '14', '82', '48', '125', '47', '46', '72', '53', '25', '73', '4', '126', '59', '74', '20', '43', '127', '128', '129', '13', '82', '48', '130', '131', '132', '133', '134', '135', '136', '137', '59', '46', '138', '43', '20', '139', '140', '73', '57', '70', '141', '3', '1', '74', '142', '143', '144', '145', '48', '13', '25', '146', '50', '147', '126', '59', '20', '148', '149', '150', '151', '152', '56', '153', '154', '155', '156', '157', '158', '159', '160', '161', '162', '163', '164', '165', '166', '167', '168', '169', '170', '171', '172', '173', '174', '175', '60', '176', '51', '177', '178', '179', '180', '181', '182', '183', '184', '185', '57', '186', '187', '188', '189', '190', '191', '46', '192', '193', '194', '195', '196', '197', '198', '25', '199', '200', '49', '201', '100', '202', '203', '204', '205', '206', '207', '32', '208', '209', '210']\n"
     ]
    }
   ],
   "source": [
    "print('Playlist #1:\\n ', playlists[0], '\\n')\n",
    "print('Playlist #2:\\n ', playlists[1])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "id": "9a16504f-ebf8-4a62-8a6f-3cff34ff95bf",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:43:26.541384Z",
     "iopub.status.busy": "2024-10-11T08:43:26.540618Z",
     "iopub.status.idle": "2024-10-11T08:43:41.172210Z",
     "shell.execute_reply": "2024-10-11T08:43:41.171480Z",
     "shell.execute_reply.started": "2024-10-11T08:43:26.541348Z"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "from gensim.models import Word2Vec\n",
    "\n",
    "# Train our Word2Vec model\n",
    "model = Word2Vec(\n",
    "    playlists, vector_size=32, window=20, negative=50, min_count=1, workers=4\n",
    ")"
   ]
  },
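  {
   "cell_type": "markdown",
   "id": "e9f0a1b2-4455-4677-8899-001122334455",
   "metadata": {},
   "source": [
    "`model.wv.most_similar` ranks every song in the vocabulary by cosine similarity between its learned vector and the query song's vector. A self-contained numpy sketch of that ranking over a hypothetical 4-song vocabulary (illustrative values, not the trained model):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f0a1b2c3-5566-4788-8900-112233445566",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Hypothetical 3-d vectors for a 4-song vocabulary\n",
    "vocab = ['0', '1', '2', '3']\n",
    "vectors = np.array([\n",
    "    [1.0, 0.0, 0.0],\n",
    "    [0.9, 0.1, 0.0],  # points almost the same way as song '0'\n",
    "    [0.0, 1.0, 0.0],\n",
    "    [0.0, 0.0, 1.0],\n",
    "])\n",
    "\n",
    "def most_similar(query_id, topn=2):\n",
    "    q = vectors[vocab.index(query_id)]\n",
    "    # Cosine similarity of the query vector against every song vector\n",
    "    sims = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))\n",
    "    ranked = np.argsort(-sims)\n",
    "    # Drop the query itself and keep the topn best matches\n",
    "    return [(vocab[i], float(sims[i])) for i in ranked if vocab[i] != query_id][:topn]\n",
    "\n",
    "print(most_similar('0'))"
   ]
  },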
  {
   "cell_type": "code",
   "execution_count": 41,
   "id": "c4b4af87-2faa-4d9f-8e3a-7bcb10741426",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:43:41.173516Z",
     "iopub.status.busy": "2024-10-11T08:43:41.173305Z",
     "iopub.status.idle": "2024-10-11T08:43:41.208176Z",
     "shell.execute_reply": "2024-10-11T08:43:41.207654Z",
     "shell.execute_reply.started": "2024-10-11T08:43:41.173499Z"
    },
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[('3167', 0.9990279674530029),\n",
       " ('3094', 0.9980515241622925),\n",
       " ('2976', 0.9978846907615662),\n",
       " ('10084', 0.9973754286766052),\n",
       " ('2704', 0.9972402453422546),\n",
       " ('6624', 0.9970847368240356),\n",
       " ('5586', 0.9969120621681213),\n",
       " ('2640', 0.9966724514961243),\n",
       " ('6658', 0.9966153502464294),\n",
       " ('2849', 0.9962793588638306)]"
      ]
     },
     "execution_count": 41,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "song_id = 2172\n",
    "\n",
    "# Ask the model for songs similar to song #2172\n",
    "model.wv.most_similar(positive=str(song_id))\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "id": "6f5c8beb-2ce4-4eb1-826e-6f977e6ac7fa",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:43:41.541249Z",
     "iopub.status.busy": "2024-10-11T08:43:41.540934Z",
     "iopub.status.idle": "2024-10-11T08:43:41.544766Z",
     "shell.execute_reply": "2024-10-11T08:43:41.544368Z",
     "shell.execute_reply.started": "2024-10-11T08:43:41.541231Z"
    },
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "title     Fade To Black\n",
      "artist        Metallica\n",
      "Name: 2172 , dtype: object\n"
     ]
    }
   ],
   "source": [
    "# Look the song up by its string id (the DataFrame index)\n",
    "print(songs_df.loc['2172'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "id": "294f8559-735a-4a86-9e74-f0e57d2f0327",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:43:56.876531Z",
     "iopub.status.busy": "2024-10-11T08:43:56.876070Z",
     "iopub.status.idle": "2024-10-11T08:43:56.964994Z",
     "shell.execute_reply": "2024-10-11T08:43:56.964393Z",
     "shell.execute_reply.started": "2024-10-11T08:43:56.876496Z"
    },
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>title</th>\n",
       "      <th>artist</th>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>id</th>\n",
       "      <th></th>\n",
       "      <th></th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>3167</th>\n",
       "      <td>Unchained</td>\n",
       "      <td>Van Halen</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3094</th>\n",
       "      <td>Breaking The Law</td>\n",
       "      <td>Judas Priest</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2976</th>\n",
       "      <td>I Don't Know</td>\n",
       "      <td>Ozzy Osbourne</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>10084</th>\n",
       "      <td>Detroit Rock City</td>\n",
       "      <td>Kiss</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2704</th>\n",
       "      <td>Over The Mountain</td>\n",
       "      <td>Ozzy Osbourne</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                    title         artist\n",
       "id                                      \n",
       "3167            Unchained      Van Halen\n",
       "3094     Breaking The Law   Judas Priest\n",
       "2976         I Don't Know  Ozzy Osbourne\n",
       "10084   Detroit Rock City           Kiss\n",
       "2704    Over The Mountain  Ozzy Osbourne"
      ]
     },
     "execution_count": 43,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "def print_recommendations(song_id):\n",
    "    similar_songs = np.array(\n",
    "        model.wv.most_similar(positive=str(song_id),topn=5)\n",
    "    )[:,0]\n",
    "    # songs_df is indexed by string song ids, so use label-based .loc\n",
    "    return songs_df.loc[similar_songs]\n",
    "\n",
    "# Extract recommendations\n",
    "print_recommendations(2172)"
   ]
  },
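  {
   "cell_type": "markdown",
   "id": "a1b2c3d4-6677-4899-8011-223344556677",
   "metadata": {},
   "source": [
    "Note that `songs_df` is indexed by the *string* song ids, so lookups by id are label-based (`.loc`), while `.iloc` is purely positional and expects integers. A toy frame (hypothetical data) makes the distinction concrete:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b2c3d4e5-7788-4900-8122-334455667788",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "# Toy frame indexed by string ids, mirroring songs_df's structure\n",
    "df = pd.DataFrame(\n",
    "    {\"title\": [\"Song A\", \"Song B\", \"Song C\"],\n",
    "     \"artist\": [\"X\", \"Y\", \"Z\"]},\n",
    "    index=[\"0\", \"1\", \"2\"],\n",
    ")\n",
    "\n",
    "print(df.loc[[\"2\", \"0\"]])   # label-based lookup by string id\n",
    "print(df.iloc[[2, 0]])      # positional lookup by integer row number"
   ]
  },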
  {
   "cell_type": "code",
   "execution_count": 47,
   "id": "72e22b13-e869-44fb-95ed-c692aeaecc42",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-10-11T08:45:09.075637Z",
     "iopub.status.busy": "2024-10-11T08:45:09.074844Z",
     "iopub.status.idle": "2024-10-11T08:45:09.103418Z",
     "shell.execute_reply": "2024-10-11T08:45:09.102867Z",
     "shell.execute_reply.started": "2024-10-11T08:45:09.075593Z"
    },
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "title     California Love (w\\/ Dr. Dre & Roger Troutman)\n",
      "artist                                              2Pac\n",
      "Name: 842 , dtype: object\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>title</th>\n",
       "      <th>artist</th>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>id</th>\n",
       "      <th></th>\n",
       "      <th></th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>330</th>\n",
       "      <td>Hate It Or Love It (w\\/ 50 Cent)</td>\n",
       "      <td>The Game</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>5788</th>\n",
       "      <td>Drop It Like It's Hot (w\\/ Pharrell)</td>\n",
       "      <td>Snoop Dogg</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>886</th>\n",
       "      <td>Heartless</td>\n",
       "      <td>Kanye West</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>413</th>\n",
       "      <td>If I Ruled The World (Imagine That) (w\\/ Laury...</td>\n",
       "      <td>Nas</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>5668</th>\n",
       "      <td>How We Do (w\\/ 50 Cent)</td>\n",
       "      <td>The Game</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                                   title      artist\n",
       "id                                                                  \n",
       "330                     Hate It Or Love It (w\\/ 50 Cent)    The Game\n",
       "5788                Drop It Like It's Hot (w\\/ Pharrell)  Snoop Dogg\n",
       "886                                            Heartless  Kanye West\n",
       "413    If I Ruled The World (Imagine That) (w\\/ Laury...         Nas\n",
       "5668                             How We Do (w\\/ 50 Cent)    The Game"
      ]
     },
     "execution_count": 47,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "print(songs_df.loc['842'])\n",
    "\n",
    "print_recommendations(842)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "hands-on-llm",
   "language": "python",
   "name": "hands-on-llm"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.15"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
