{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "References:\n",
     "\n",
     "- https://python.langchain.com/docs/integrations/llms/ollama/\n",
     "- https://python.langchain.com/docs/integrations/text_embedding/ollama/\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
     "# The question: \"Heart patients should avoid lifting objects above how many kilograms?\"\n",
     "question = \"心脏病人避免提多少公斤以上的重物\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "model_name = \"llama3.2\"\n",
    "# model_name = \"qwen2.5\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### 1. Answering with the Bare LLM (Naive LLM)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "---------- 1 ----------\n",
      "通常建议心脏病人避免提超过15-20公斤以上的重物。\n",
      "---------- 2 ----------\n",
      "通常建议心脏病人避免提超过4-5公斤以上的重物。\n",
      "---------- 3 ----------\n",
      "心脏病人应避免提超过15公斤以上的重物。\n",
      "---------- 4 ----------\n",
      "心脏病人应避免提超过4公斤以上的重物。\n",
      "---------- 5 ----------\n",
      "通常建议心脏病人避免超过10-15公斤以上的重物。\n"
     ]
    }
   ],
   "source": [
     "from langchain_ollama.llms import OllamaLLM\n",
     "from langchain_core.prompts import ChatPromptTemplate\n",
     "\n",
     "# Prompt: \"Answer the question below without excessive elaboration\"\n",
     "template = \"\"\"回答下面这个问题，不要进行过多的叙述\n",
     "\n",
     "{question}\"\"\"\n",
     "prompt = ChatPromptTemplate.from_template(template)\n",
     "\n",
     "model = OllamaLLM(model=model_name)\n",
     "chain = prompt | model\n",
     "\n",
     "# Ask the same question 5 times to show run-to-run variance\n",
     "for i in range(5):\n",
     "    print(\"-\" * 10 + \" \" + str(i + 1) + \" \" + \"-\" * 10)\n",
     "    result = chain.invoke({\"question\": question})\n",
     "    print(result)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### 2. Answering with Retrieval-Augmented Generation (RAG)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### 2.1. Document Loading and Chunking"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "from langchain_community.document_loaders import PyPDFLoader\n",
     "\n",
     "path = \"../data/Advice-for-Heart-Attack-Patients.pdf\"\n",
     "loader = PyPDFLoader(path)\n",
     "# Load the PDF; each page becomes one Document\n",
     "documents = loader.load()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Chunk 1:\n",
      "©新加坡陈笃生医院，2020年版权所有。截至2020年5月，所有信息均属正确无误。未经本医院\n",
      "许可，不得对本文 档的任何部分进行复制、影印、反序编辑、改编、散布，或用作商业用途，\n",
      "\n",
      "Chunk 2:\n",
      "许可，不得对本文 档的任何部分进行复制、影印、反序编辑、改编、散布，或用作商业用途，\n",
      "不得以任何形式将任何内容，存储于数 据库和检索系统中，或者以任何形式进行传输。本文档\n",
      "\n",
      "Chunk 3:\n",
      "不得以任何形式将任何内容，存储于数 据库和检索系统中，或者以任何形式进行传输。本文档\n",
      "中的所有信息和材料仅供参考，不得替代医生和其他医疗专 家的医嘱。\n",
      "\n"
     ]
    }
   ],
   "source": [
     "from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
     "\n",
     "# Split the documents with RecursiveCharacterTextSplitter\n",
     "text_splitter = RecursiveCharacterTextSplitter(\n",
     "    chunk_size=100,   # maximum characters per chunk\n",
     "    chunk_overlap=50  # characters shared between adjacent chunks\n",
     ")\n",
     "\n",
     "chunks = text_splitter.split_documents(documents)\n",
     "\n",
     "# Inspect the first 3 chunks\n",
     "for i, chunk in enumerate(chunks[:3]):\n",
     "    print(f\"Chunk {i+1}:\\n{chunk.page_content}\\n\")\n"
   ]
  },
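   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The overlap is visible above: each chunk repeats the tail of the previous one, so sentences that straddle a boundary stay retrievable from either side. A minimal stdlib sketch of fixed-size splitting with overlap (illustrative only; the real `RecursiveCharacterTextSplitter` additionally prefers to break on separators such as paragraphs and sentences, and the helper name here is made up):"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "def split_with_overlap(text, chunk_size=100, chunk_overlap=50):\n",
     "    # Slide a window of chunk_size characters, stepping by\n",
     "    # chunk_size - chunk_overlap, so consecutive chunks share text.\n",
     "    step = chunk_size - chunk_overlap\n",
     "    return [text[i:i + chunk_size] for i in range(0, len(text), step)]\n",
     "\n",
     "print([len(c) for c in split_with_overlap(\"x\" * 250)])  # [100, 100, 100, 100, 50]\n"
    ]
   },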
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### 2.2. Indexing"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_ollama import OllamaEmbeddings\n",
    "from langchain_core.vectorstores import InMemoryVectorStore\n",
    "\n",
    "embedding_model_name = \"bge-m3\"\n",
    "embeddings = OllamaEmbeddings(\n",
    "    model=embedding_model_name,\n",
    ")\n",
    "\n",
    "# Index chunks\n",
    "vector_store = InMemoryVectorStore(embeddings)\n",
    "_ = vector_store.add_documents(documents=chunks)"
   ]
  },
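   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "`add_documents` embeds every chunk with `bge-m3` and keeps the resulting vectors in memory. In miniature, such an index is just (text, vector) pairs; `fake_embed` below is a made-up stand-in for the real embedding model:"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "def fake_embed(text):\n",
     "    # Stand-in for the embedding model: real embeddings are dense\n",
     "    # vectors of fixed dimension produced by bge-m3.\n",
     "    return [len(text), text.count(\"a\"), text.count(\"e\")]\n",
     "\n",
     "# Indexing in miniature: embed each chunk once, keep (text, vector) pairs\n",
     "store = [(t, fake_embed(t)) for t in [\"heart\", \"lifting heavy objects\"]]\n",
     "print(store[0])  # ('heart', [5, 1, 1])\n"
    ]
   },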
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### 2.3. Retrieval"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'动\\n• 避免提重物 （5公斤或以上) 以防止胸部疼痛或不适\\n• 遵照医护人员的建议，选择适当的运动或活动\\n4. 其它注意事项\\n• 不要吸烟\\n• 减少压力\\n• 避免极冷或极热的环境'"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "retrieved_docs = vector_store.similarity_search(question)\n",
    "sim_top1 = retrieved_docs[0].page_content\n",
    "sim_top1"
   ]
  },
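   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "`similarity_search` embeds the query and returns the chunks whose vectors score highest against it. A stdlib sketch of that ranking step (the vectors here are made up; real scores come from `bge-m3` embeddings, and `InMemoryVectorStore` computes the similarity internally):"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "import math\n",
     "\n",
     "def cosine(a, b):\n",
     "    # Cosine similarity: dot product divided by the product of norms\n",
     "    dot = sum(x * y for x, y in zip(a, b))\n",
     "    na = math.sqrt(sum(x * x for x in a))\n",
     "    nb = math.sqrt(sum(y * y for y in b))\n",
     "    return dot / (na * nb)\n",
     "\n",
     "# Toy index: chunk text -> made-up embedding vector\n",
     "index = {\n",
     "    \"avoid lifting heavy objects (5 kg or above)\": [0.9, 0.1, 0.2],\n",
     "    \"do not smoke\": [0.1, 0.8, 0.3],\n",
     "}\n",
     "query_vec = [0.85, 0.15, 0.25]  # pretend embedding of the question\n",
     "best = max(index, key=lambda text: cosine(query_vec, index[text]))\n",
     "print(best)  # avoid lifting heavy objects (5 kg or above)\n"
    ]
   },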
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### 2.4. Generation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "---------- 1 ----------\n",
      "5公斤以上\n",
      "---------- 2 ----------\n",
      "5公斤以上\n",
      "---------- 3 ----------\n",
      "5公斤以上的重物\n",
      "---------- 4 ----------\n",
      "5公斤以上的重物。\n",
      "---------- 5 ----------\n",
      "五公斤或以上\n"
     ]
    }
   ],
   "source": [
     "from langchain_core.prompts import ChatPromptTemplate\n",
     "from langchain_ollama.llms import OllamaLLM\n",
     "\n",
     "# Prompt: \"Based on the facts below ... answer the question without excessive elaboration\"\n",
     "template = \"\"\"请根据如下事实：\n",
     "\n",
     "```\n",
     "{content}\n",
     "```\n",
     "\n",
     "回答下面这个问题，不要进行过多的叙述\n",
     "\n",
     "{question}\"\"\"\n",
     "\n",
     "prompt = ChatPromptTemplate.from_template(template)\n",
     "model = OllamaLLM(model=model_name)\n",
     "chain = prompt | model\n",
     "\n",
     "# Ask 5 times; the retrieved chunk grounds the answer, so it is now consistent\n",
     "for i in range(5):\n",
     "    print(\"-\" * 10 + \" \" + str(i + 1) + \" \" + \"-\" * 10)\n",
     "    result = chain.invoke({\"question\": question, \"content\": sim_top1})\n",
     "    print(result)"
   ]
  },
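   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "Before the model is called, the chain only has to fill the template's placeholders with the retrieved chunk and the question. That substitution step in miniature (English stand-in strings, not the notebook's actual `sim_top1`):"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# str.format-style substitution, which is what the prompt template does\n",
     "template = \"Given the following facts:\\n\\n{content}\\n\\nAnswer briefly: {question}\"\n",
     "filled = template.format(\n",
     "    content=\"Avoid lifting heavy objects (5 kg or above)\",  # stand-in for sim_top1\n",
     "    question=\"How many kg should heart patients avoid lifting?\",\n",
     ")\n",
     "print(filled)\n"
    ]
   },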
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "base",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
