{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "c2a0b654-1e45-40d3-ac5b-98b716ec8054",
   "metadata": {},
   "source": [
    "## Competition Background\n",
    "\n",
    "Amid the explosive growth of livestream e-commerce, short-video platforms have accumulated massive amounts of product-promotion videos and user-interaction data. These data are not only direct consumer feedback on product experience, but also a source of insight for business decisions. Against this backdrop, user-insight analysis based on the comments of product-promotion videos has become a key lever for brands to optimize product selection and to evaluate the effectiveness of influencer marketing.\n",
    "\n",
    "The core of such analysis is the joint mining of video content and comment data. By identifying the product promoted in a video and aggregating the sentiment and opinions expressed in its comment section, companies can capture consumers' real attitudes and pain points. This approach reveals multi-dimensional evaluations of a product's features, price, and service, and, through sentiment-based clustering, builds consumer-preference profiles that support product-selection decisions and influencer-partnership assessments.\n",
    "\n",
    "This challenge covers the full pipeline of \"product identification - sentiment analysis - clustering insight\": participants first link each video to a product, then extract sentiment from the unstructured comments, and finally distill structured insights through clustering and summarization. This pipeline turns fragmented user comments into quantifiable business intelligence, helping brands understand consumer psychology and assess an influencer's content-seeding effect and conversion potential across the whole chain from content marketing to purchase decisions. As competition in livestream e-commerce intensifies, this kind of analysis capability is becoming a core differentiator for businesses.\n",
    "\n",
    "## Competition Tasks\n",
    "\n",
    "Based on the provided video texts and comment texts, participants must complete three analysis stages:\n",
    "\n",
    "- **Product identification**: accurately identify the promoted product;\n",
    "\n",
    "- **Sentiment analysis**: perform multi-dimensional sentiment analysis on the comment texts (the dimensions are listed in the data description);\n",
    "\n",
    "- **Comment clustering**: for each specified dimension, cluster the relevant comments by product and distill a theme phrase for each cluster.\n",
    "\n",
    "## Review Rules\n",
    "\n",
    "### 1. Platform\n",
    "\n",
    "Participants must complete the tasks using the Spark 4.0 Ultra large language model, the Spark text-embedding model, or other open-source models; fine-tuning open-source models for the insight analysis is allowed.\n",
    "\n",
    "For Spark 4.0 Ultra and the text-embedding model, the organizing committee will grant API credits to registered participants. Log in to the iFLYTEK Open Platform console with your competition account at https://console.xfyun.cn/app/myapp , open your application, and check the available API capabilities and interface documentation.\n",
    "\n",
    "Fine-tuning credits are not granted automatically. To fine-tune on the iFLYTEK Xingchen MaaS platform during the competition, go to https://maas.xfyun.cn/modelSquare?activityId=2512065084115968 , complete real-name verification, claim the fine-tuning vouchers, and start the quiz. Be sure to log in to the MaaS platform with your competition account; accounts that joined this activity before the competition cannot claim the vouchers again. Once the vouchers are exhausted, continued use is pay-as-you-go at the participant's own expense.\n",
    "\n",
    "### 2. Data\n",
    "\n",
    "This challenge provides 85 anonymized product-promotion videos and 6,477 comment texts, split into a small training set with manual annotations (product identification and sentiment analysis only) and an unlabeled test set. All data has been anonymized to protect privacy. The formats are as follows:\n",
    "\n",
    "- Video content text\n",
    "\n",
    "| No. | Variable     | Type   | Description           |\n",
    "| --- | ------------ | ------ | --------------------- |\n",
    "| 1   | video_id     | string | video id              |\n",
    "| 2   | video_desc   | string | video description     |\n",
    "| 3   | video_tags   | string | video tags            |\n",
    "| 4   | product_name | string | promoted product name |\n",
    "\n",
    "Note: product_name must be extracted from the provided video information and must be one item from the product list **[Xfaiyx Smart Translator, Xfaiyx Smart Recorder]**.\n",
    "\n",
    "- Comment text\n",
    "\n",
    "| No. | Variable                 | Type   | Description                                                     |\n",
    "| --- | ------------------------ | ------ | --------------------------------------------------------------- |\n",
    "| 1   | video_id                 | string | video id                                                        |\n",
    "| 2   | comment_id               | string | comment id                                                      |\n",
    "| 3   | comment_text             | string | comment text                                                    |\n",
    "| 4   | sentiment_category       | int    | sentiment polarity toward the product                           |\n",
    "| 5   | user_scenario            | int    | whether the comment involves a usage scenario (0 = no, 1 = yes) |\n",
    "| 6   | user_question            | int    | whether the comment raises a question (0 = no, 1 = yes)         |\n",
    "| 7   | user_suggestion          | int    | whether the comment makes a suggestion (0 = no, 1 = yes)        |\n",
    "| 8   | positive_cluster_theme   | string | theme phrase of the positive-sentiment cluster                  |\n",
    "| 9   | negative_cluster_theme   | string | theme phrase of the negative-sentiment cluster                  |\n",
    "| 10  | scenario_cluster_theme   | string | theme phrase of the usage-scenario cluster                      |\n",
    "| 11  | question_cluster_theme   | string | theme phrase of the user-question cluster                       |\n",
    "| 12  | suggestion_cluster_theme | string | theme phrase of the user-suggestion cluster                     |\n",
    "\n",
    "Notes:\n",
    "\n",
    "a. The fields requiring sentiment analysis are sentiment_category, user_scenario, user_question, and user_suggestion. Part of the training set is already annotated; the test set must be predicted. The values of sentiment_category are defined as follows:\n",
    "\n",
    "| Value   | 1        | 2        | 3     | 4       | 5          |\n",
    "| ------- | -------- | -------- | ----- | ------- | ---------- |\n",
    "| Meaning | Positive | Negative | Mixed | Neutral | Irrelevant |\n",
    "\n",
    "b. The fields requiring clustering are:\n",
    "\n",
    "- positive_cluster_theme: cluster the positive comments (sentiment_category = 1 or sentiment_category = 3) from the training and test sets and distill a theme phrase for each cluster; use 5 to 8 clusters.\n",
    "- negative_cluster_theme: cluster the negative comments (sentiment_category = 2 or sentiment_category = 3) from the training and test sets and distill a theme phrase for each cluster; use 5 to 8 clusters.\n",
    "- scenario_cluster_theme: cluster the usage-scenario comments (user_scenario = 1) from the training and test sets and distill a theme phrase for each cluster; use 5 to 8 clusters.\n",
    "- question_cluster_theme: cluster the user-question comments (user_question = 1) from the training and test sets and distill a theme phrase for each cluster; use 5 to 8 clusters.\n",
    "- suggestion_cluster_theme: cluster the user-suggestion comments (user_suggestion = 1) from the training and test sets and distill a theme phrase for each cluster; use 5 to 8 clusters.\n",
    "\n",
    "**Note: the clustering sample for each dimension consists of all comments, from both the training and test sets, that satisfy the corresponding condition.**\n",
    "\n",
    "## Evaluation Metrics\n",
    "\n",
    "Submissions are scored stage by stage with different metrics; the final score is the sum of the three stage scores, for a total of 300 points. The criteria are as follows:\n",
    "\n",
    "- Product identification (100 points)\n",
    "\n",
    "Scored by exact match: each correctly identified product earns 1 point and each incorrect one earns 0. The stage score is computed as:\n",
    "\n",
    "![img](https://openres.xfyun.cn/xfyundoc/2025-06-09/dfd86997-835a-4acb-aff5-3c4ca4bd45e9/1749469134692/579-1.bmp)\n",
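    "\n",
    "As a minimal sketch of this exact-match scoring (the two lists of gold and predicted product names below are made up for illustration):\n",
    "\n",
    "```python\n",
    "gold = [\"Xfaiyx Smart Translator\", \"Xfaiyx Smart Recorder\"]\n",
    "pred = [\"Xfaiyx Smart Translator\", \"Xfaiyx Smart Translator\"]\n",
    "\n",
    "# One point per exact string match between prediction and ground truth.\n",
    "score = sum(g == p for g, p in zip(gold, pred))\n",
    "print(score)  # 1\n",
    "```\n",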
    "\n",
    "- Sentiment analysis (100 points)\n",
    "\n",
    "Scored with the weighted-average F1-score, which measures the overall performance of the classifiers. The stage score is computed as:\n",
    "\n",
    "![img](https://openres.xfyun.cn/xfyundoc/2025-06-09/483768ba-8485-436c-b777-cbb120b2c5ea/1749469379306/579-2.bmp)\n",
    "\n",
    "where F1ᵢ is the weighted F1-score of dimension i and N is the number of sentiment-analysis dimensions.\n",
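    "\n",
    "A hedged sketch of the weighted F1 computation with scikit-learn (the label arrays below are made-up examples, not competition data):\n",
    "\n",
    "```python\n",
    "from sklearn.metrics import f1_score\n",
    "\n",
    "y_true = [1, 2, 3, 4, 5, 1, 2, 1]\n",
    "y_pred = [1, 2, 3, 4, 5, 2, 2, 1]\n",
    "\n",
    "# average=\"weighted\" averages per-class F1 weighted by class support.\n",
    "weighted_f1 = f1_score(y_true, y_pred, average=\"weighted\")\n",
    "print(round(weighted_f1, 3))  # 0.875\n",
    "```\n",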
    "\n",
    "- Comment clustering (100 points)\n",
    "\n",
    "Scored with the silhouette coefficient, which measures cluster cohesion and separation (computed only on comments whose product identification and sentiment analysis are both correct). The stage score is computed as:\n",
    "\n",
    "![img](https://openres.xfyun.cn/xfyundoc/2025-06-09/06d30dd5-471b-4762-95fa-ce097e9ca912/1749469510941/579-3.bmp)\n",
    "\n",
    "where Silhouette coefficientᵢ is the silhouette coefficient of the clustering for dimension i and M is the number of clustered dimensions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "c87fa707-a62d-4d4e-abd8-346b93dcbdde",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "video_data = pd.read_csv(\"origin_videos_data.csv\")\n",
    "comments_data = pd.read_csv(\"origin_comments_data.csv\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "e6a4784f-f5a4-4587-b88a-c86ca1fe43a7",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>video_id</th>\n",
       "      <th>video_desc</th>\n",
       "      <th>video_tags</th>\n",
       "      <th>product_name</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>34</th>\n",
       "      <td>vVS_NZR</td>\n",
       "      <td>Patient in distress but don't speak their lang...</td>\n",
       "      <td>ad;;Xfaiyx;;translator;;nurse;;nursing;;tiktok...</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>29</th>\n",
       "      <td>vz-3GxD</td>\n",
       "      <td>99%老外惊呼我们被骗了! 国外摆摊手抓饭揭秘真实新疆</td>\n",
       "      <td>德国;;中国;;中餐;;外国人吃中餐;;老外;;中国美食;;美食;;摆摊;;试吃;;路人;;...</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>64</th>\n",
       "      <td>v5cquLT</td>\n",
       "      <td>Xfaiyx Smart Translator told me Spanish was th...</td>\n",
       "      <td>tiktokbacktoschool;;tiktokshopfinds;;Xfaiyxtra...</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>60</th>\n",
       "      <td>vgIkaeR</td>\n",
       "      <td>ESCAPE THE BUSY LONDON AND GO ON A DAY TRIP TO...</td>\n",
       "      <td>vlog;;nhan ta;;nhân;;nhan's diaries;;living al...</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>61</th>\n",
       "      <td>vb-Daa_</td>\n",
       "      <td>MEDITERRANEAN CRUISE on P&amp;O Arvia! Ship Tour, ...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>77</th>\n",
       "      <td>v_Ur2e7</td>\n",
       "      <td>Check out the new Xfaiyx Smart Translator! #Xf...</td>\n",
       "      <td>Xfaiyxtranslator;;Xfaiyxtranslatordevice;;Xfai...</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>26</th>\n",
       "      <td>vJOK_jc</td>\n",
       "      <td>New Canada PR pathways for International Stude...</td>\n",
       "      <td>sandy talks canada;;canada work visa;;ircc upd...</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>66</th>\n",
       "      <td>vOqoEVw</td>\n",
       "      <td>Accroccando case 🏡🤣|| Gemmina</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>vUt-uCX</td>\n",
       "      <td>Xfaiyx SMART TRANSLATOR  Amazon link in link t...</td>\n",
       "      <td>Xfaiyx;;Xfaiyxtranslator;;instantvoicetranslat...</td>\n",
       "      <td>Xfaiyx Smart Translator</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>53</th>\n",
       "      <td>v4Htv2Q</td>\n",
       "      <td>We WONT Travel to China Without This! 🇨🇳</td>\n",
       "      <td>On tour with dridgers;;#otwd;;China;;China tra...</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "   video_id                                         video_desc  \\\n",
       "34  vVS_NZR  Patient in distress but don't speak their lang...   \n",
       "29  vz-3GxD                        99%老外惊呼我们被骗了! 国外摆摊手抓饭揭秘真实新疆   \n",
       "64  v5cquLT  Xfaiyx Smart Translator told me Spanish was th...   \n",
       "60  vgIkaeR  ESCAPE THE BUSY LONDON AND GO ON A DAY TRIP TO...   \n",
       "61  vb-Daa_  MEDITERRANEAN CRUISE on P&O Arvia! Ship Tour, ...   \n",
       "77  v_Ur2e7  Check out the new Xfaiyx Smart Translator! #Xf...   \n",
       "26  vJOK_jc  New Canada PR pathways for International Stude...   \n",
       "66  vOqoEVw                      Accroccando case 🏡🤣|| Gemmina   \n",
       "1   vUt-uCX  Xfaiyx SMART TRANSLATOR  Amazon link in link t...   \n",
       "53  v4Htv2Q           We WONT Travel to China Without This! 🇨🇳   \n",
       "\n",
       "                                           video_tags             product_name  \n",
       "34  ad;;Xfaiyx;;translator;;nurse;;nursing;;tiktok...                      NaN  \n",
       "29  德国;;中国;;中餐;;外国人吃中餐;;老外;;中国美食;;美食;;摆摊;;试吃;;路人;;...                      NaN  \n",
       "64  tiktokbacktoschool;;tiktokshopfinds;;Xfaiyxtra...                      NaN  \n",
       "60  vlog;;nhan ta;;nhân;;nhan's diaries;;living al...                      NaN  \n",
       "61                                                NaN                      NaN  \n",
       "77  Xfaiyxtranslator;;Xfaiyxtranslatordevice;;Xfai...                      NaN  \n",
       "26  sandy talks canada;;canada work visa;;ircc upd...                      NaN  \n",
       "66                                                NaN                      NaN  \n",
       "1   Xfaiyx;;Xfaiyxtranslator;;instantvoicetranslat...  Xfaiyx Smart Translator  \n",
       "53  On tour with dridgers;;#otwd;;China;;China tra...                      NaN  "
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "video_data.sample(10)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "8357ddfe-abbe-4dfe-83d1-c1aaec2bb084",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>video_id</th>\n",
       "      <th>comment_id</th>\n",
       "      <th>comment_text</th>\n",
       "      <th>sentiment_category</th>\n",
       "      <th>user_scenario</th>\n",
       "      <th>user_question</th>\n",
       "      <th>user_suggestion</th>\n",
       "      <th>positive_cluster_theme</th>\n",
       "      <th>negative_cluster_theme</th>\n",
       "      <th>scenario_cluster_theme</th>\n",
       "      <th>question_cluster_theme</th>\n",
       "      <th>suggestion_cluster_theme</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>v8TdcdK</td>\n",
       "      <td>c-0KPeWgS</td>\n",
       "      <td>Pro just develop bro codes</td>\n",
       "      <td>5.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>vB3iY9I</td>\n",
       "      <td>c-0uMbk_C</td>\n",
       "      <td>Frivolous lawsuits are against the law!!!!!&lt;br...</td>\n",
       "      <td>5.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>v8TdcdK</td>\n",
       "      <td>c-4-XnVYb</td>\n",
       "      <td>Part 2 two pls I want to know what there punis...</td>\n",
       "      <td>5.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>v8TdcdK</td>\n",
       "      <td>c-7CvRdAr</td>\n",
       "      <td>I tried using the Xfaiyx Translator during my ...</td>\n",
       "      <td>1.0</td>\n",
       "      <td>1.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>vWN5LPe</td>\n",
       "      <td>c-9SGtvvf</td>\n",
       "      <td>I’ve been using the Xfaiyx Smart Translator du...</td>\n",
       "      <td>1.0</td>\n",
       "      <td>1.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "  video_id comment_id                                       comment_text  \\\n",
       "0  v8TdcdK  c-0KPeWgS                         Pro just develop bro codes   \n",
       "1  vB3iY9I  c-0uMbk_C  Frivolous lawsuits are against the law!!!!!<br...   \n",
       "2  v8TdcdK  c-4-XnVYb  Part 2 two pls I want to know what there punis...   \n",
       "3  v8TdcdK  c-7CvRdAr  I tried using the Xfaiyx Translator during my ...   \n",
       "4  vWN5LPe  c-9SGtvvf  I’ve been using the Xfaiyx Smart Translator du...   \n",
       "\n",
       "   sentiment_category  user_scenario  user_question  user_suggestion  \\\n",
       "0                 5.0            0.0            0.0              0.0   \n",
       "1                 5.0            0.0            0.0              0.0   \n",
       "2                 5.0            0.0            0.0              0.0   \n",
       "3                 1.0            1.0            0.0              0.0   \n",
       "4                 1.0            1.0            0.0              0.0   \n",
       "\n",
       "   positive_cluster_theme  negative_cluster_theme  scenario_cluster_theme  \\\n",
       "0                     NaN                     NaN                     NaN   \n",
       "1                     NaN                     NaN                     NaN   \n",
       "2                     NaN                     NaN                     NaN   \n",
       "3                     NaN                     NaN                     NaN   \n",
       "4                     NaN                     NaN                     NaN   \n",
       "\n",
       "   question_cluster_theme  suggestion_cluster_theme  \n",
       "0                     NaN                       NaN  \n",
       "1                     NaN                       NaN  \n",
       "2                     NaN                       NaN  \n",
       "3                     NaN                       NaN  \n",
       "4                     NaN                       NaN  "
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "comments_data.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "ecf68f7e-2a86-4be4-a26a-8cec72873f00",
   "metadata": {},
   "outputs": [],
   "source": [
    "video_data[\"text\"] = video_data[\"video_desc\"].fillna(\"\") + \" \" + video_data[\"video_tags\"].fillna(\"\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "id": "0fd65630-d75f-4d95-a809-fef578d6b47e",
   "metadata": {},
   "outputs": [],
   "source": [
    "import jieba\n",
    "from sklearn.feature_extraction.text import TfidfVectorizer\n",
    "from sklearn.svm import LinearSVC\n",
    "from sklearn.cluster import KMeans\n",
    "from sklearn.pipeline import make_pipeline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "id": "52b089e3-9235-4627-93b0-163d96f05b57",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/opt/miniconda3/envs/py310/lib/python3.10/site-packages/sklearn/feature_extraction/text.py:517: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None'\n",
      "  warnings.warn(\n"
     ]
    }
   ],
   "source": [
    "product_name_predictor = make_pipeline(\n",
    "    TfidfVectorizer(tokenizer=jieba.lcut, token_pattern=None), LinearSVC()\n",
    ")\n",
    "product_name_predictor.fit(\n",
    "    video_data[~video_data[\"product_name\"].isnull()][\"text\"],\n",
    "    video_data[~video_data[\"product_name\"].isnull()][\"product_name\"],\n",
    ")\n",
    "video_data[\"product_name\"] = product_name_predictor.predict(video_data[\"text\"])"
   ]
  },
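  {
   "cell_type": "markdown",
   "id": "f0a1b2c3-d4e5-4f60-8a1b-2c3d4e5f6071",
   "metadata": {},
   "source": [
    "An optional sanity check: cross-validation on the labeled subset gives a rough estimate of how reliable the product-name classifier is. The cell below is a self-contained sketch on a tiny made-up corpus (the texts and labels are illustrative, not competition data); in practice you would pass the labeled `text` and `product_name` columns instead."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1b2c3d4-e5f6-4071-8b2c-3d4e5f607182",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.feature_extraction.text import TfidfVectorizer\n",
    "from sklearn.model_selection import cross_val_score\n",
    "from sklearn.pipeline import make_pipeline\n",
    "from sklearn.svm import LinearSVC\n",
    "\n",
    "# Made-up corpus for illustration; substitute the labeled rows of\n",
    "# video_data[\"text\"] / video_data[\"product_name\"] in practice.\n",
    "toy_texts = [\"smart translator for travel\", \"voice recorder for meetings\",\n",
    "             \"translate speech instantly\", \"record lectures clearly\"] * 3\n",
    "toy_labels = [\"translator\", \"recorder\", \"translator\", \"recorder\"] * 3\n",
    "\n",
    "toy_clf = make_pipeline(TfidfVectorizer(), LinearSVC())\n",
    "scores = cross_val_score(toy_clf, toy_texts, toy_labels, cv=3)\n",
    "print(scores.mean())"
   ]
  },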
  {
   "cell_type": "code",
   "execution_count": 35,
   "id": "565eee40-0c0d-4fdf-8e4b-3f5771c5dce6",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Index(['video_id', 'comment_id', 'comment_text', 'sentiment_category',\n",
       "       'user_scenario', 'user_question', 'user_suggestion',\n",
       "       'positive_cluster_theme', 'negative_cluster_theme',\n",
       "       'scenario_cluster_theme', 'question_cluster_theme',\n",
       "       'suggestion_cluster_theme'],\n",
       "      dtype='object')"
      ]
     },
     "execution_count": 35,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "comments_data.columns"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "id": "b795c601-afb5-4aa8-946e-5f621780e03f",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/opt/miniconda3/envs/py310/lib/python3.10/site-packages/sklearn/feature_extraction/text.py:517: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None'\n",
      "  warnings.warn(\n",
      "/opt/miniconda3/envs/py310/lib/python3.10/site-packages/sklearn/feature_extraction/text.py:517: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None'\n",
      "  warnings.warn(\n",
      "/opt/miniconda3/envs/py310/lib/python3.10/site-packages/sklearn/feature_extraction/text.py:517: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None'\n",
      "  warnings.warn(\n",
      "/opt/miniconda3/envs/py310/lib/python3.10/site-packages/sklearn/feature_extraction/text.py:517: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None'\n",
      "  warnings.warn(\n"
     ]
    }
   ],
   "source": [
    "for col in ['sentiment_category',\n",
    "       'user_scenario', 'user_question', 'user_suggestion']:\n",
    "    predictor = make_pipeline(\n",
    "        TfidfVectorizer(tokenizer=jieba.lcut, token_pattern=None), LinearSVC()\n",
    "    )\n",
    "    predictor.fit(\n",
    "        comments_data[~comments_data[col].isnull()][\"comment_text\"],\n",
    "        comments_data[~comments_data[col].isnull()][col],\n",
    "    )\n",
    "    comments_data[col] = predictor.predict(comments_data[\"comment_text\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "id": "ae9a39ca-5a63-4562-840c-8d6d421d44de",
   "metadata": {},
   "outputs": [],
   "source": [
    "top_n_words = 10  # number of top TF-IDF terms forming each cluster's theme phrase\n",
    "kmeans_predictor = make_pipeline(\n",
    "    TfidfVectorizer(tokenizer=jieba.lcut, token_pattern=None), KMeans(n_clusters=8, random_state=42)\n",
    ")\n",
    "\n",
    "kmeans_predictor.fit(comments_data[comments_data[\"sentiment_category\"].isin([1, 3])][\"comment_text\"])\n",
    "kmeans_cluster_label = kmeans_predictor.predict(comments_data[comments_data[\"sentiment_category\"].isin([1, 3])][\"comment_text\"])\n",
    "\n",
    "kmeans_top_word = []\n",
    "tfidf_vectorizer = kmeans_predictor.named_steps['tfidfvectorizer']\n",
    "kmeans_model = kmeans_predictor.named_steps['kmeans']\n",
    "feature_names = tfidf_vectorizer.get_feature_names_out()\n",
    "cluster_centers = kmeans_model.cluster_centers_\n",
    "for i in range(kmeans_model.n_clusters):\n",
    "    top_feature_indices = cluster_centers[i].argsort()[::-1]\n",
    "    top_word = ' '.join([feature_names[idx] for idx in top_feature_indices[:top_n_words]])\n",
    "    kmeans_top_word.append(top_word)\n",
    "\n",
    "comments_data.loc[comments_data[\"sentiment_category\"].isin([1, 3]), \"positive_cluster_theme\"] = [kmeans_top_word[x] for x in kmeans_cluster_label]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 61,
   "id": "bbc02eb3-46f4-4482-9669-d83cf647c3b4",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/opt/miniconda3/envs/py310/lib/python3.10/site-packages/sklearn/feature_extraction/text.py:517: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None'\n",
      "  warnings.warn(\n"
     ]
    }
   ],
   "source": [
    "top_n_words = 10  # number of top TF-IDF terms forming each cluster's theme phrase\n",
    "kmeans_predictor = make_pipeline(\n",
    "    TfidfVectorizer(tokenizer=jieba.lcut, token_pattern=None), KMeans(n_clusters=8, random_state=42)\n",
    ")\n",
    "\n",
    "kmeans_predictor.fit(comments_data[comments_data[\"sentiment_category\"].isin([2, 3])][\"comment_text\"])\n",
    "kmeans_cluster_label = kmeans_predictor.predict(comments_data[comments_data[\"sentiment_category\"].isin([2, 3])][\"comment_text\"])\n",
    "\n",
    "kmeans_top_word = []\n",
    "tfidf_vectorizer = kmeans_predictor.named_steps['tfidfvectorizer']\n",
    "kmeans_model = kmeans_predictor.named_steps['kmeans']\n",
    "feature_names = tfidf_vectorizer.get_feature_names_out()\n",
    "cluster_centers = kmeans_model.cluster_centers_\n",
    "for i in range(kmeans_model.n_clusters):\n",
    "    top_feature_indices = cluster_centers[i].argsort()[::-1]\n",
    "    top_word = ' '.join([feature_names[idx] for idx in top_feature_indices[:top_n_words]])\n",
    "    kmeans_top_word.append(top_word)\n",
    "\n",
    "comments_data.loc[comments_data[\"sentiment_category\"].isin([2, 3]), \"negative_cluster_theme\"] = [kmeans_top_word[x] for x in kmeans_cluster_label]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 62,
   "id": "e0f21f45-b535-4173-b924-38d87a00f479",
   "metadata": {},
   "outputs": [],
   "source": [
    "top_n_words = 10  # number of top TF-IDF terms forming each cluster's theme phrase\n",
    "kmeans_predictor = make_pipeline(\n",
    "    TfidfVectorizer(tokenizer=jieba.lcut, token_pattern=None), KMeans(n_clusters=8, random_state=42)\n",
    ")\n",
    "\n",
    "kmeans_predictor.fit(comments_data[comments_data[\"user_scenario\"].isin([1])][\"comment_text\"])\n",
    "kmeans_cluster_label = kmeans_predictor.predict(comments_data[comments_data[\"user_scenario\"].isin([1])][\"comment_text\"])\n",
    "\n",
    "kmeans_top_word = []\n",
    "tfidf_vectorizer = kmeans_predictor.named_steps['tfidfvectorizer']\n",
    "kmeans_model = kmeans_predictor.named_steps['kmeans']\n",
    "feature_names = tfidf_vectorizer.get_feature_names_out()\n",
    "cluster_centers = kmeans_model.cluster_centers_\n",
    "for i in range(kmeans_model.n_clusters):\n",
    "    top_feature_indices = cluster_centers[i].argsort()[::-1]\n",
    "    top_word = ' '.join([feature_names[idx] for idx in top_feature_indices[:top_n_words]])\n",
    "    kmeans_top_word.append(top_word)\n",
    "\n",
    "comments_data.loc[comments_data[\"user_scenario\"].isin([1]), \"scenario_cluster_theme\"] = [kmeans_top_word[x] for x in kmeans_cluster_label]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 64,
   "id": "e0130512-8c16-49fc-819c-bc4e34d5ca85",
   "metadata": {},
   "outputs": [],
   "source": [
    "top_n_words = 10  # number of top TF-IDF terms forming each cluster's theme phrase\n",
    "kmeans_predictor = make_pipeline(\n",
    "    TfidfVectorizer(tokenizer=jieba.lcut, token_pattern=None), KMeans(n_clusters=8, random_state=42)\n",
    ")\n",
    "\n",
    "kmeans_predictor.fit(comments_data[comments_data[\"user_question\"].isin([1])][\"comment_text\"])\n",
    "kmeans_cluster_label = kmeans_predictor.predict(comments_data[comments_data[\"user_question\"].isin([1])][\"comment_text\"])\n",
    "\n",
    "kmeans_top_word = []\n",
    "tfidf_vectorizer = kmeans_predictor.named_steps['tfidfvectorizer']\n",
    "kmeans_model = kmeans_predictor.named_steps['kmeans']\n",
    "feature_names = tfidf_vectorizer.get_feature_names_out()\n",
    "cluster_centers = kmeans_model.cluster_centers_\n",
    "for i in range(kmeans_model.n_clusters):\n",
    "    top_feature_indices = cluster_centers[i].argsort()[::-1]\n",
    "    top_word = ' '.join([feature_names[idx] for idx in top_feature_indices[:top_n_words]])\n",
    "    kmeans_top_word.append(top_word)\n",
    "\n",
    "comments_data.loc[comments_data[\"user_question\"].isin([1]), \"question_cluster_theme\"] = [kmeans_top_word[x] for x in kmeans_cluster_label]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 66,
   "id": "2d91aa69-cefb-4501-a961-7929a50d1dbc",
   "metadata": {},
   "outputs": [],
   "source": [
    "top_n_words = 10  # number of top TF-IDF terms forming each cluster's theme phrase\n",
    "kmeans_predictor = make_pipeline(\n",
    "    TfidfVectorizer(tokenizer=jieba.lcut, token_pattern=None), KMeans(n_clusters=8, random_state=42)\n",
    ")\n",
    "\n",
    "kmeans_predictor.fit(comments_data[comments_data[\"user_suggestion\"].isin([1])][\"comment_text\"])\n",
    "kmeans_cluster_label = kmeans_predictor.predict(comments_data[comments_data[\"user_suggestion\"].isin([1])][\"comment_text\"])\n",
    "\n",
    "kmeans_top_word = []\n",
    "tfidf_vectorizer = kmeans_predictor.named_steps['tfidfvectorizer']\n",
    "kmeans_model = kmeans_predictor.named_steps['kmeans']\n",
    "feature_names = tfidf_vectorizer.get_feature_names_out()\n",
    "cluster_centers = kmeans_model.cluster_centers_\n",
    "for i in range(kmeans_model.n_clusters):\n",
    "    top_feature_indices = cluster_centers[i].argsort()[::-1]\n",
    "    top_word = ' '.join([feature_names[idx] for idx in top_feature_indices[:top_n_words]])\n",
    "    kmeans_top_word.append(top_word)\n",
    "\n",
    "comments_data.loc[comments_data[\"user_suggestion\"].isin([1]), \"suggestion_cluster_theme\"] = [kmeans_top_word[x] for x in kmeans_cluster_label]"
   ]
  },
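  {
   "cell_type": "markdown",
   "id": "c3d4e5f6-0718-4293-a4b5-c6d7e8f90a1b",
   "metadata": {},
   "source": [
    "Since the clustering stage is scored with the silhouette coefficient, a quick self-contained sketch of `silhouette_score` on two made-up, well-separated 2-D blobs (not the competition data) shows what the metric rewards; with the real data you would pass the TF-IDF matrix and the predicted cluster labels instead."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4e5f607-1829-43a4-b5c6-d7e8f90a1b2c",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.cluster import KMeans\n",
    "from sklearn.metrics import silhouette_score\n",
    "\n",
    "# Two well-separated toy blobs; values close to 1 indicate compact,\n",
    "# well-separated clusters, values near 0 indicate overlapping clusters.\n",
    "X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],\n",
    "              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])\n",
    "labels = KMeans(n_clusters=2, random_state=42, n_init=10).fit_predict(X)\n",
    "print(silhouette_score(X, labels))"
   ]
  },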
  {
   "cell_type": "code",
   "execution_count": 67,
   "id": "dbded330-9ba7-456e-88ca-faa7d9da3a8f",
   "metadata": {},
   "outputs": [],
   "source": [
    "!mkdir -p submit"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 75,
   "id": "1c7325ad-cd3c-48a0-b0ae-949c8c0210a1",
   "metadata": {},
   "outputs": [],
   "source": [
    "video_data[[\"video_id\", \"product_name\"]].to_csv(\"submit/submit_videos.csv\", index=False)\n",
    "comments_data[['video_id', 'comment_id', 'sentiment_category',\n",
    "       'user_scenario', 'user_question', 'user_suggestion',\n",
    "       'positive_cluster_theme', 'negative_cluster_theme',\n",
    "       'scenario_cluster_theme', 'question_cluster_theme',\n",
    "       'suggestion_cluster_theme']].to_csv(\"submit/submit_comments.csv\", index=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 77,
   "id": "f144e5b4-ac93-425c-85a5-06e250a04e5f",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "  adding: submit/ (stored 0%)\n",
      "  adding: submit/submit_videos.csv (deflated 78%)\n",
      "  adding: submit/submit_comments.csv (deflated 80%)\n"
     ]
    }
   ],
   "source": [
    "!zip -r submit.zip submit/"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c8db79b4-e9dd-4194-bc9c-469dfd8ce429",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
