{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 【开源实习】针对任务类型Text Ranking，开发可在香橙派AIpro开发板运行的应用\n",
    "任务编号：#ICJ8RF\n",
    "任务链接：[【开源实习】针对任务类型Text Ranking，开发可在香橙派AIpro开发板运行的应用](https://gitee.com/mindspore/community/issues/ICJ8RF)  \n",
    "\n",
    "\n",
    "## 环境准备\n",
    "开发者拿到香橙派开发板后，首先需要进行硬件资源确认，镜像烧录及CANN和MindSpore版本的升级，才可运行该案例，具体如下：\n",
    "\n",
    "开发板：香橙派Aipro或其他同硬件开发板  \n",
    "开发板镜像: Ubuntu镜像  \n",
    "CANN Toolkit/Kernels：8.0.0.beta1  \n",
    "MindSpore: 2.6.0  \n",
    "MindSpore NLP: 0.4.1  \n",
    "Python: 3.9\n",
    "\n",
    "### 镜像烧录\n",
    "运行该案例需要烧录香橙派官网ubuntu镜像，烧录流程参考[昇思MindSpore官网--香橙派开发专区--环境搭建指南--镜像烧录](https://www.mindspore.cn/tutorials/zh-CN/r2.7.0rc1/orange_pi/environment_setup.html) 章节。\n",
    "\n",
    "### CANN升级\n",
    "CANN升级参考[昇思MindSpore官网--香橙派开发专区--环境搭建指南--CANN升级](https://www.mindspore.cn/tutorials/zh-CN/r2.7.0rc1/orange_pi/environment_setup.html)章节。\n",
    "\n",
    "### MindSpore升级\n",
    "MindSpore升级参考[昇思MindSpore官网--香橙派开发专区--环境搭建指南--MindSpore升级](https://www.mindspore.cn/tutorials/zh-CN/r2.7.0rc1/orange_pi/environment_setup.html)章节。\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple\n",
      "Requirement already satisfied: mindnlp==0.4.1 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (0.4.1)\n",
      "Requirement already satisfied: jieba==0.42.1 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (0.42.1)\n",
      "Requirement already satisfied: sympy==1.14.0 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (1.14.0)\n",
      "Requirement already satisfied: mindspore>=2.2.14 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from mindnlp==0.4.1) (2.6.0)\n",
      "Requirement already satisfied: tqdm in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from mindnlp==0.4.1) (4.67.1)\n",
      "Requirement already satisfied: requests in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from mindnlp==0.4.1) (2.32.4)\n",
      "Requirement already satisfied: datasets in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from mindnlp==0.4.1) (4.0.0)\n",
      "Requirement already satisfied: evaluate in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from mindnlp==0.4.1) (0.4.5)\n",
      "Requirement already satisfied: tokenizers==0.19.1 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from mindnlp==0.4.1) (0.19.1)\n",
      "Requirement already satisfied: safetensors in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from mindnlp==0.4.1) (0.6.2)\n",
      "Requirement already satisfied: sentencepiece in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from mindnlp==0.4.1) (0.2.0)\n",
      "Requirement already satisfied: regex in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from mindnlp==0.4.1) (2025.7.34)\n",
      "Requirement already satisfied: addict in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from mindnlp==0.4.1) (2.4.0)\n",
      "Requirement already satisfied: ml-dtypes in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from mindnlp==0.4.1) (0.5.3)\n",
      "Requirement already satisfied: pyctcdecode in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from mindnlp==0.4.1) (0.5.0)\n",
      "Requirement already satisfied: pytest==7.2.0 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from mindnlp==0.4.1) (7.2.0)\n",
      "Requirement already satisfied: pillow>=10.0.0 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from mindnlp==0.4.1) (11.3.0)\n",
      "Requirement already satisfied: mpmath<1.4,>=1.1.0 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from sympy==1.14.0) (1.3.0)\n",
      "Requirement already satisfied: attrs>=19.2.0 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from pytest==7.2.0->mindnlp==0.4.1) (25.3.0)\n",
      "Requirement already satisfied: iniconfig in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from pytest==7.2.0->mindnlp==0.4.1) (2.1.0)\n",
      "Requirement already satisfied: packaging in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from pytest==7.2.0->mindnlp==0.4.1) (25.0)\n",
      "Requirement already satisfied: pluggy<2.0,>=0.12 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from pytest==7.2.0->mindnlp==0.4.1) (1.6.0)\n",
      "Requirement already satisfied: exceptiongroup>=1.0.0rc8 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from pytest==7.2.0->mindnlp==0.4.1) (1.3.0)\n",
      "Requirement already satisfied: tomli>=1.0.0 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from pytest==7.2.0->mindnlp==0.4.1) (2.2.1)\n",
      "Requirement already satisfied: huggingface-hub<1.0,>=0.16.4 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from tokenizers==0.19.1->mindnlp==0.4.1) (0.34.4)\n",
      "Requirement already satisfied: filelock in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from huggingface-hub<1.0,>=0.16.4->tokenizers==0.19.1->mindnlp==0.4.1) (3.18.0)\n",
      "Requirement already satisfied: fsspec>=2023.5.0 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from huggingface-hub<1.0,>=0.16.4->tokenizers==0.19.1->mindnlp==0.4.1) (2025.3.0)\n",
      "Requirement already satisfied: pyyaml>=5.1 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from huggingface-hub<1.0,>=0.16.4->tokenizers==0.19.1->mindnlp==0.4.1) (6.0.2)\n",
      "Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from huggingface-hub<1.0,>=0.16.4->tokenizers==0.19.1->mindnlp==0.4.1) (4.14.1)\n",
      "Requirement already satisfied: hf-xet<2.0.0,>=1.1.3 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from huggingface-hub<1.0,>=0.16.4->tokenizers==0.19.1->mindnlp==0.4.1) (1.1.7)\n",
      "Requirement already satisfied: numpy<2.0.0,>=1.20.0 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from mindspore>=2.2.14->mindnlp==0.4.1) (1.26.4)\n",
      "Requirement already satisfied: protobuf>=3.13.0 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from mindspore>=2.2.14->mindnlp==0.4.1) (6.31.1)\n",
      "Requirement already satisfied: asttokens>=2.0.4 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from mindspore>=2.2.14->mindnlp==0.4.1) (3.0.0)\n",
      "Requirement already satisfied: scipy>=1.5.4 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from mindspore>=2.2.14->mindnlp==0.4.1) (1.13.1)\n",
      "Requirement already satisfied: psutil>=5.6.1 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from mindspore>=2.2.14->mindnlp==0.4.1) (5.9.0)\n",
      "Requirement already satisfied: astunparse>=1.6.3 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from mindspore>=2.2.14->mindnlp==0.4.1) (1.6.3)\n",
      "Requirement already satisfied: dill>=0.3.7 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from mindspore>=2.2.14->mindnlp==0.4.1) (0.3.8)\n",
      "Requirement already satisfied: wheel<1.0,>=0.23.0 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from astunparse>=1.6.3->mindspore>=2.2.14->mindnlp==0.4.1) (0.45.1)\n",
      "Requirement already satisfied: six<2.0,>=1.6.1 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from astunparse>=1.6.3->mindspore>=2.2.14->mindnlp==0.4.1) (1.17.0)\n",
      "Requirement already satisfied: pyarrow>=15.0.0 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from datasets->mindnlp==0.4.1) (21.0.0)\n",
      "Requirement already satisfied: pandas in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from datasets->mindnlp==0.4.1) (2.3.1)\n",
      "Requirement already satisfied: xxhash in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from datasets->mindnlp==0.4.1) (3.5.0)\n",
      "Requirement already satisfied: multiprocess<0.70.17 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from datasets->mindnlp==0.4.1) (0.70.16)\n",
      "Requirement already satisfied: aiohttp!=4.0.0a0,!=4.0.0a1 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from fsspec[http]<=2025.3.0,>=2023.1.0->datasets->mindnlp==0.4.1) (3.12.15)\n",
      "Requirement already satisfied: aiohappyeyeballs>=2.5.0 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.3.0,>=2023.1.0->datasets->mindnlp==0.4.1) (2.6.1)\n",
      "Requirement already satisfied: aiosignal>=1.4.0 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.3.0,>=2023.1.0->datasets->mindnlp==0.4.1) (1.4.0)\n",
      "Requirement already satisfied: async-timeout<6.0,>=4.0 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.3.0,>=2023.1.0->datasets->mindnlp==0.4.1) (5.0.1)\n",
      "Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.3.0,>=2023.1.0->datasets->mindnlp==0.4.1) (1.7.0)\n",
      "Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.3.0,>=2023.1.0->datasets->mindnlp==0.4.1) (6.6.3)\n",
      "Requirement already satisfied: propcache>=0.2.0 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.3.0,>=2023.1.0->datasets->mindnlp==0.4.1) (0.3.2)\n",
      "Requirement already satisfied: yarl<2.0,>=1.17.0 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.3.0,>=2023.1.0->datasets->mindnlp==0.4.1) (1.20.1)\n",
      "Requirement already satisfied: idna>=2.0 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from yarl<2.0,>=1.17.0->aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.3.0,>=2023.1.0->datasets->mindnlp==0.4.1) (3.10)\n",
      "Requirement already satisfied: charset_normalizer<4,>=2 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from requests->mindnlp==0.4.1) (3.4.3)\n",
      "Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from requests->mindnlp==0.4.1) (2.5.0)\n",
      "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from requests->mindnlp==0.4.1) (2025.8.3)\n",
      "Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from pandas->datasets->mindnlp==0.4.1) (2.9.0.post0)\n",
      "Requirement already satisfied: pytz>=2020.1 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from pandas->datasets->mindnlp==0.4.1) (2025.2)\n",
      "Requirement already satisfied: tzdata>=2022.7 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from pandas->datasets->mindnlp==0.4.1) (2025.2)\n",
      "Requirement already satisfied: pygtrie<3.0,>=2.1 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from pyctcdecode->mindnlp==0.4.1) (2.5.0)\n",
      "Requirement already satisfied: hypothesis<7,>=6.14 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from pyctcdecode->mindnlp==0.4.1) (6.137.1)\n",
      "Requirement already satisfied: sortedcontainers<3.0.0,>=2.1.0 in /usr/local/miniconda3/envs/text/lib/python3.9/site-packages (from hypothesis<7,>=6.14->pyctcdecode->mindnlp==0.4.1) (2.4.0)\n",
      "\u001b[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager, possibly rendering your system unusable. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv. Use the --root-user-action option if you know what you are doing and want to suppress this warning.\u001b[0m\u001b[33m\n",
      "\u001b[0m"
     ]
    }
   ],
   "source": [
    "# 安装必要的库\n",
    "!pip install mindnlp==0.4.1 jieba==0.42.1 sympy==1.14.0"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/usr/local/miniconda3/envs/text/lib/python3.9/site-packages/numpy/core/getlimits.py:549: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.\n",
      "  setattr(self, word, getattr(machar, word).flat[0])\n",
      "/usr/local/miniconda3/envs/text/lib/python3.9/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.\n",
      "  return self._float_to_str(self.smallest_subnormal)\n",
      "/usr/local/miniconda3/envs/text/lib/python3.9/site-packages/numpy/core/getlimits.py:549: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.\n",
      "  setattr(self, word, getattr(machar, word).flat[0])\n",
      "/usr/local/miniconda3/envs/text/lib/python3.9/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.\n",
      "  return self._float_to_str(self.smallest_subnormal)\n",
      "Building prefix dict from the default dictionary ...\n",
      "Loading model from cache /tmp/jieba.cache\n",
      "Loading model cost 2.350 seconds.\n",
      "Prefix dict has been built successfully.\n"
     ]
    }
   ],
   "source": [
    "# 导入必要的库\n",
    "import time\n",
    "import re\n",
    "import os\n",
    "import numpy as np\n",
    "from mindspore import context\n",
    "from mindspore import Tensor\n",
    "from mindnlp.transformers import BertTokenizer, BertModel\n",
    "from docx import Document"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 加载模型和分词器"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "b59ba455d08f414293f0d114ba0f6859",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0.00/319 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "55339d77264646a59095864ab1d77723",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "0.00B [00:00, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "28703433ad00409ba2a045f773c01ee2",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0.00/112 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/usr/local/miniconda3/envs/text/lib/python3.9/site-packages/mindnlp/transformers/tokenization_utils_base.py:1526: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted, and will be then set to `False` by default. \n",
      "  warnings.warn(\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "35b72ce6d5cc4d878abe3aa9bace9bc4",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0.00/434 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "796cda6233174f939674e0cfbf575faf",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0.00/390M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[MS_ALLOC_CONF]Runtime config:  enable_vmm:True  vmm_align_size:2MB\n",
      "模型加载完成! 耗时: 52.52秒\n"
     ]
    }
   ],
   "source": [
    "\n",
    "start_time = time.time()\n",
    "tokenizer = BertTokenizer.from_pretrained('shibing624/text2vec-base-chinese')\n",
    "model = BertModel.from_pretrained('shibing624/text2vec-base-chinese')\n",
    "model.set_train(False)  # 设置为评估模式\n",
    "load_time = time.time() - start_time\n",
    "print(f\"模型加载完成! 耗时: {load_time:.2f}秒\")\n",
    "\n",
    "# 参数设置\n",
    "TOP_K = 3\n",
    "DAMPING = 0.85\n",
    "MAX_ITER = 100\n",
    "SIM_THRESHOLD = 0.5\n",
    "MAX_TEXT_LENGTH = 5000"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 加载Text Rank方法"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "def get_sentence_embedding(sentence):\n",
    "    \"\"\"获取句子嵌入向量\"\"\"\n",
    "    inputs = tokenizer(sentence, \n",
    "                      max_length=128, \n",
    "                      padding='max_length', \n",
    "                      return_tensors='ms')\n",
    "    outputs = model(**inputs)\n",
    "    return outputs['pooler_output'].asnumpy().squeeze()\n",
    "\n",
    "def cosine_similarity(a, b):\n",
    "    \"\"\"计算余弦相似度\"\"\"\n",
    "    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)\n",
    "\n",
    "def build_similarity_matrix(sentences):\n",
    "    \"\"\"构建句子相似度矩阵\"\"\"\n",
    "    print(f\"处理 {len(sentences)} 个句子...\")\n",
    "    start_time = time.time()\n",
    "    embeddings = []\n",
    "    \n",
    "    batch_size = 10\n",
    "    for i in range(0, len(sentences), batch_size):\n",
    "        batch = sentences[i:i+batch_size]\n",
    "        batch_embeddings = []\n",
    "        \n",
    "        for sent in batch:\n",
    "            batch_embeddings.append(get_sentence_embedding(sent))\n",
    "            \n",
    "        embeddings.extend(batch_embeddings)\n",
    "        \n",
    "    n = len(sentences)\n",
    "    matrix = np.zeros((n, n))\n",
    "    \n",
    "    for i in range(n):\n",
    "        for j in range(n):\n",
    "            if i != j:\n",
    "                sim = cosine_similarity(embeddings[i], embeddings[j])\n",
    "                if sim > SIM_THRESHOLD:\n",
    "                    matrix[i][j] = sim\n",
    "                \n",
    "        if i % 10 == 0 or i == n-1:\n",
    "            print(f\"进度: {i+1}/{n}\")\n",
    "            \n",
    "    build_time = time.time() - start_time\n",
    "    return matrix\n",
    "\n",
    "def textrank(matrix):\n",
    "    \"\"\"TextRank算法实现\"\"\"\n",
    "    start_time = time.time()\n",
    "    \n",
    "    d = DAMPING\n",
    "    n = matrix.shape[0]\n",
    "    weights = np.ones(n) / n  \n",
    "    \n",
    "    row_sums = matrix.sum(axis=1)\n",
    "    norm_matrix = matrix / (row_sums[:, np.newaxis] + 1e-8)\n",
    "    \n",
    "    for iter in range(MAX_ITER):\n",
    "        new_weights = (1 - d) + d * np.dot(norm_matrix.T, weights)\n",
    "        diff = np.linalg.norm(new_weights - weights)\n",
    "        \n",
    "        if diff < 1e-5:\n",
    "            break\n",
    "        weights = new_weights\n",
    "\n",
    "   \n",
    "    algo_time = time.time() - start_time\n",
    "    return weights\n",
    "\n",
    "def preprocess_text(text):\n",
    "    \"\"\"预处理文本：清理、截断等\"\"\"\n",
    "    text = re.sub(r'\\s+', ' ', text).strip()\n",
    "    \n",
    "    # 截断过长的文本\n",
    "    if len(text) > MAX_TEXT_LENGTH:\n",
    "        print(f\"警告: 文本过长({len(text)}字符)，将截取前{MAX_TEXT_LENGTH}字符\")\n",
    "        text = text[:MAX_TEXT_LENGTH]\n",
    "        \n",
    "    return text\n",
    "\n",
    "def summarize(text, top_k=TOP_K):\n",
    "    \"\"\"生成rank评分前top_k个\"\"\"\n",
    "    total_start = time.time()\n",
    "    \n",
    "    text = preprocess_text(text)\n",
    "    \n",
    "    sentences = re.split(r'[。！？；\\n]', text)\n",
    "    sentences = [s.strip() for s in sentences if len(s.strip()) > 5]\n",
    "    \n",
    "    print(f\"原始文本句子数: {len(sentences)}\")\n",
    "    \n",
    "    if len(sentences) <= 1:\n",
    "        return \"文本过短\"\n",
    "    \n",
    "\n",
    "    sim_matrix = build_similarity_matrix(sentences)\n",
    "    scores = textrank(sim_matrix)\n",
    "    \n",
    "    top_indices = scores.argsort()[-top_k:][::-1]\n",
    "    top_indices.sort()\n",
    "    \n",
    "    # 构建top_k输出\n",
    "    summary = ''.join([sentences[i] + '。' for i in top_indices])\n",
    "    \n",
    "    total_time = time.time() - total_start\n",
    "    \n",
    "    # 打印每个句子的详细TextRank情况\n",
    "    print(\"\\n\" + \"=\"*80)\n",
    "    print(\"句子TextRank评分详情:\")\n",
    "    for i, (sentence, score) in enumerate(zip(sentences, scores)):\n",
    "        # 对长句子进行截断，以便在控制台显示\n",
    "        display_sentence = sentence\n",
    "        if len(display_sentence) > 60:\n",
    "            display_sentence = display_sentence[:57] + \"...\"\n",
    "        \n",
    "        # 标记被选中的句子\n",
    "        mark = \" ★\" if i in top_indices else \"\"\n",
    "        \n",
    "        print(f\"句子 {i+1} (得分: {score:.6f}{mark}):\")\n",
    "        print(f\"内容: {display_sentence}\")\n",
    "        print(\"-\"*80)\n",
    "    \n",
    "    return summary\n"
   ]
  },
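  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check, the update rule used by `textrank` can be run on a hand-built toy matrix (the similarity values below are made up; no model is needed). Sentence 0 is similar to both other sentences, so it should end up with the highest score:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Toy similarity matrix: sentence 0 is linked to sentences 1 and 2,\n",
    "# which are not linked to each other.\n",
    "toy = np.array([[0.0, 0.9, 0.8],\n",
    "                [0.9, 0.0, 0.0],\n",
    "                [0.8, 0.0, 0.0]])\n",
    "\n",
    "# Same power iteration as textrank(), inlined so this cell runs standalone.\n",
    "d, n = 0.85, toy.shape[0]\n",
    "weights = np.ones(n) / n\n",
    "norm = toy / (toy.sum(axis=1)[:, np.newaxis] + 1e-8)\n",
    "for _ in range(100):\n",
    "    new_weights = (1 - d) + d * np.dot(norm.T, weights)\n",
    "    if np.linalg.norm(new_weights - weights) < 1e-5:\n",
    "        break\n",
    "    weights = new_weights\n",
    "\n",
    "print(weights.argmax())  # 0: the sentence connected to both others wins\n",
    "```\n"
   ]
  },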
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 加载文件处理方法"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "def read_txt(file_path):\n",
    "    \"\"\"读取TXT文件\"\"\"\n",
    "    with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:\n",
    "        return f.read()\n",
    "\n",
    "def read_docx(file_path):\n",
    "    \"\"\"读取DOCX文件\"\"\"\n",
    "    doc = Document(file_path)\n",
    "    full_text = []\n",
    "    for para in doc.paragraphs:\n",
    "        full_text.append(para.text)\n",
    "    return '\\n'.join(full_text)\n"
   ]
  },
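  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick round-trip check of the TXT reader using a temporary file (the `open` call from `read_txt` is inlined so the snippet runs standalone):\n",
    "\n",
    "```python\n",
    "import os\n",
    "import tempfile\n",
    "\n",
    "text = '山不在高，有仙则名'\n",
    "with tempfile.NamedTemporaryFile('w', suffix='.txt', encoding='utf-8',\n",
    "                                 delete=False) as f:\n",
    "    f.write(text)\n",
    "    path = f.name\n",
    "\n",
    "# Same reading logic as read_txt()\n",
    "with open(path, 'r', encoding='utf-8', errors='ignore') as f:\n",
    "    restored = f.read()\n",
    "\n",
    "print(restored == text)  # True\n",
    "os.remove(path)\n",
    "```\n"
   ]
  },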
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 加载主函数"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "def main(top_k=TOP_K):\n",
    "    \"\"\"处理输入的文件\"\"\"\n",
    "    print(\"=\"*80)\n",
    "    print(\"说明:\")\n",
    "    print(\"- 支持TXT和DOCX格式文件\")\n",
    "    print(\"- 输入文件路径进行处理\")\n",
    "    print(\"- 输入'exit'退出程序\")\n",
    "    print(\"=\"*80)\n",
    "    \n",
    "    while True:\n",
    "        file_path = input(\"\\n请输入文本文件路径(TXT或DOCX格式, 输入'exit'退出): \").strip()\n",
    "        \n",
    "        if file_path.lower() == 'exit':\n",
    "            print(\"程序已退出。\")\n",
    "            break\n",
    "            \n",
    "        if not os.path.exists(file_path):\n",
    "            print(f\"文件不存在: {file_path}\")\n",
    "            continue\n",
    "            \n",
    "        file_ext = os.path.splitext(file_path)[1].lower()\n",
    "        \n",
    "        if file_ext not in ['.txt', '.docx']:\n",
    "            print(\"不支持的文件格式! 仅支持TXT和DOCX文件。\")\n",
    "            continue\n",
    "            \n",
    "        print(f\"\\n处理文件: {file_path}\")\n",
    "        \n",
    "        # 读取文件内容\n",
    "        try:\n",
    "            if file_ext == '.txt':\n",
    "                text = read_txt(file_path)\n",
    "            else:  # .docx\n",
    "                text = read_docx(file_path)\n",
    "            \n",
    "            print(f\"读取文本长度: {len(text)} 字符\")\n",
    "        \n",
    "            summary = summarize(text, top_k)\n",
    "            \n",
    "            # 显示结果\n",
    "            print(\"\\n\" + \"=\"*80)\n",
    "            print(\"text ranking top3 结果:\")\n",
    "            print(\"=\"*80)\n",
    "            print(summary)\n",
    "            print(\"=\"*80)\n",
    "            \n",
    "            \n",
    "        except Exception as e:\n",
    "            print(f\"处理文件时出错: {str(e)}\")\n",
    "\n"
   ]
  },
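  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The selection step inside `summarize` can be checked in isolation: `argsort` picks the `top_k` best scores, and the final `sort` restores document order (toy sentences and scores, no model needed):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "sentences = ['a1', 'b2', 'c3', 'd4']\n",
    "scores = np.array([0.2, 0.9, 0.1, 0.8])\n",
    "\n",
    "top_k = 2\n",
    "top_indices = scores.argsort()[-top_k:][::-1]  # best first: [1, 3]\n",
    "top_indices.sort()                             # document order: [1, 3]\n",
    "print(''.join(sentences[i] + '。' for i in top_indices))  # b2。d4。\n",
    "```\n"
   ]
  },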
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 运行程序"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "================================================================================\n",
      "说明:\n",
      "- 支持TXT和DOCX格式文件\n",
      "- 输入文件路径进行处理\n",
      "- 输入'exit'退出程序\n",
      "================================================================================\n",
      "\n",
      "处理文件: /opt/image2text/shixi/orange-pi-mindspore/Online/community/18-Text-Ranking/test_01.txt\n",
      "读取文本长度: 144 字符\n",
      "原始文本句子数: 9\n",
      "处理 9 个句子...\n",
      "进度: 1/9\n",
      "进度: 9/9\n",
      "\n",
      "================================================================================\n",
      "句子TextRank评分详情:\n",
      "句子 1 (得分: 0.964721):\n",
      "内容: 水陆草木之花，可爱者甚蕃\n",
      "--------------------------------------------------------------------------------\n",
      "句子 2 (得分: 0.985499):\n",
      "内容: 晋陶渊明独爱菊\n",
      "--------------------------------------------------------------------------------\n",
      "句子 3 (得分: 1.006646):\n",
      "内容: 自李唐来，世人甚爱牡丹\n",
      "--------------------------------------------------------------------------------\n",
      "句子 4 (得分: 1.007241):\n",
      "内容: 予独爱莲之出淤泥而不染，濯清涟而不妖，中通外直，不蔓不枝，香远益清，亭亭净植，可远观而不可亵玩焉\n",
      "--------------------------------------------------------------------------------\n",
      "句子 5 (得分: 1.007288):\n",
      "内容: 予谓菊，花之隐逸者也\n",
      "--------------------------------------------------------------------------------\n",
      "句子 6 (得分: 1.008366 ★):\n",
      "内容: 牡丹，花之富贵者也\n",
      "--------------------------------------------------------------------------------\n",
      "句子 7 (得分: 1.021729 ★):\n",
      "内容: 莲，花之君子者也\n",
      "--------------------------------------------------------------------------------\n",
      "句子 8 (得分: 1.019221 ★):\n",
      "内容: 菊之爱，陶后鲜有闻\n",
      "--------------------------------------------------------------------------------\n",
      "句子 9 (得分: 0.979114):\n",
      "内容: 莲之爱，同予者何人?牡丹之爱，宜乎众矣!\n",
      "--------------------------------------------------------------------------------\n",
      "\n",
      "================================================================================\n",
      "text ranking top3 结果:\n",
      "================================================================================\n",
      "牡丹，花之富贵者也。莲，花之君子者也。菊之爱，陶后鲜有闻。\n",
      "================================================================================\n",
      "\n",
      "处理文件: /opt/image2text/shixi/orange-pi-mindspore/Online/community/18-Text-Ranking/test.docx\n",
      "读取文本长度: 102 字符\n",
      "原始文本句子数: 9\n",
      "处理 9 个句子...\n",
      "进度: 1/9\n",
      "进度: 9/9\n",
      "\n",
      "================================================================================\n",
      "句子TextRank评分详情:\n",
      "句子 1 (得分: 1.025204):\n",
      "内容: 山不在高，有仙则名\n",
      "--------------------------------------------------------------------------------\n",
      "句子 2 (得分: 0.884524):\n",
      "内容: 水不在深，有龙则灵\n",
      "--------------------------------------------------------------------------------\n",
      "句子 3 (得分: 0.976087):\n",
      "内容: 斯是陋室，惟吾德馨\n",
      "--------------------------------------------------------------------------------\n",
      "句子 4 (得分: 1.016263):\n",
      "内容: 苔痕上阶绿，草色入帘青\n",
      "--------------------------------------------------------------------------------\n",
      "句子 5 (得分: 1.046990 ★):\n",
      "内容: 谈笑有鸿儒，往来无白丁\n",
      "--------------------------------------------------------------------------------\n",
      "句子 6 (得分: 1.049686 ★):\n",
      "内容: 可以调素琴，阅金经\n",
      "--------------------------------------------------------------------------------\n",
      "句子 7 (得分: 1.018093):\n",
      "内容: 无丝竹之乱耳，无案牍之劳形\n",
      "--------------------------------------------------------------------------------\n",
      "句子 8 (得分: 1.062738 ★):\n",
      "内容: 南阳诸葛庐，西蜀子云亭\n",
      "--------------------------------------------------------------------------------\n",
      "句子 9 (得分: 0.920238):\n",
      "内容: 孔子云：“何陋之有\n",
      "--------------------------------------------------------------------------------\n",
      "\n",
      "================================================================================\n",
      "text ranking top3 结果:\n",
      "================================================================================\n",
      "谈笑有鸿儒，往来无白丁。可以调素琴，阅金经。南阳诸葛庐，西蜀子云亭。\n",
      "================================================================================\n",
      "程序已退出。\n"
     ]
    }
   ],
   "source": [
    "main(3)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "text",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.23"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
