{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "5687fc8c",
   "metadata": {},
   "source": [
    "# Ollama: Local Large Language Models\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a8c1f63d",
   "metadata": {},
   "source": [
    "## 1. Environment Setup\n",
    "\n",
    "- Install Ollama (https://ollama.com/) and start the local service\n",
    "- Pull a model, e.g. `ollama pull qwen3:8b`\n",
    "- Install the Python dependency: `pip install requests`"
   ]
  },
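  {
   "cell_type": "markdown",
   "id": "f1a2b3c4",
   "metadata": {},
   "source": [
    "Before moving on, it can help to confirm the local service is reachable. A minimal sketch, assuming the default port 11434 (the `/api/tags` endpoint lists the models you have pulled):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e5d6c7b8",
   "metadata": {},
   "outputs": [],
   "source": [
    "import requests\n",
    "\n",
    "# Sanity check: Ollama listens on port 11434 by default\n",
    "try:\n",
    "    r = requests.get(\"http://localhost:11434/api/tags\", timeout=5)\n",
    "    r.raise_for_status()\n",
    "    models = [m[\"name\"] for m in r.json().get(\"models\", [])]\n",
    "    print(\"Ollama is running. Pulled models:\", models)\n",
    "except requests.exceptions.ConnectionError:\n",
    "    print(\"Cannot reach Ollama - start it with `ollama serve` first.\")"
   ]
  },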
  {
   "cell_type": "markdown",
   "id": "6f40c00f",
   "metadata": {},
   "source": [
    "## 2. Calling a Local Model via the Ollama API"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "bea4a523",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model reply: Ollama is an open-source framework mainly used for training and deploying large language models (such as the LLaMA series). Its main uses include:\n",
      "\n",
      "1. **Model training**: tools and optimizations that simplify training large language models.  \n",
      "2. **Model deployment**: efficient deployment so models can run in different settings (e.g. servers, edge devices).  \n",
      "3. **Fine-tuning and inference**: users can fine-tune pretrained models and apply them to downstream tasks such as text generation and question answering.  \n",
      "4. **Open-source community**: open source code encourages collaboration between developers and researchers.\n",
      "\n",
      "In short, Ollama is a tool for developers and researchers that lowers the barrier to building and using large models.\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "\n",
    "# Local Ollama service address\n",
    "OLLAMA_URL = \"http://localhost:11434/api/generate\"\n",
    "\n",
    "# Choose a model (e.g. llama3, qwen3:8b; it must be pulled in Ollama first)\n",
    "model = \"qwen3:8b\"\n",
    "\n",
    "prompt = \"Briefly describe the main uses of Ollama.\"\n",
    "\n",
    "payload = {\n",
    "    \"model\": model,\n",
    "    \"prompt\": prompt,\n",
    "    \"stream\": False  # return the full reply in one response\n",
    "}\n",
    "\n",
    "# A generous timeout helps on first use, when the model may still be loading\n",
    "response = requests.post(OLLAMA_URL, json=payload, timeout=120)\n",
    "response.raise_for_status()\n",
    "result = response.json()\n",
    "print(\"Model reply:\", result.get(\"response\", \"no reply\"))"
   ]
  },
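  {
   "cell_type": "markdown",
   "id": "a9b8c7d6",
   "metadata": {},
   "source": [
    "The request above sets `\"stream\": False`, so the call blocks until the full reply is ready. With `\"stream\": True` the endpoint instead returns newline-delimited JSON, one fragment per line, which can be printed as tokens arrive. A minimal sketch, assuming the same `qwen3:8b` model:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4c3b2a1",
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "import requests\n",
    "\n",
    "payload = {\n",
    "    \"model\": \"qwen3:8b\",\n",
    "    \"prompt\": \"Briefly introduce Ollama.\",\n",
    "    \"stream\": True\n",
    "}\n",
    "\n",
    "# Each line of the streamed body is a JSON object with a \"response\"\n",
    "# fragment; the final object has \"done\": true\n",
    "with requests.post(\"http://localhost:11434/api/generate\", json=payload,\n",
    "                   stream=True, timeout=120) as resp:\n",
    "    resp.raise_for_status()\n",
    "    for line in resp.iter_lines():\n",
    "        if line:\n",
    "            chunk = json.loads(line)\n",
    "            print(chunk.get(\"response\", \"\"), end=\"\", flush=True)\n",
    "    print()"
   ]
  },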
  {
   "cell_type": "markdown",
   "id": "097c5af7",
   "metadata": {},
   "source": [
    "## 3. Chat and Multi-turn Conversations"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "4409d3eb",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Ollama qwen3:8b chat reply: Ollama is an open-source framework mainly used to simplify the training, deployment, and application of machine learning models. Its main uses include:  \n",
      "1. **Model training**: supports model development for many machine learning tasks (classification, regression, clustering, and so on).  \n",
      "2. **Model deployment**: tools for integrating trained models into real applications, such as mobile devices or servers.  \n",
      "3. **Cross-platform support**: compatible with different hardware (e.g. GPU/TPU) and operating systems.  \n",
      "4. **Ease of use**: simplified code flows and interfaces lower the barrier to model development.  \n",
      "\n",
      "Suited to scenarios that need machine learning solutions built and deployed quickly, such as data analysis and prediction systems.\n"
     ]
    }
   ],
   "source": [
    "# Run inference with Ollama's qwen3:8b model via the /api/chat endpoint\n",
    "import requests\n",
    "\n",
    "messages = [\n",
    "    {\"role\": \"user\", \"content\": \"What are the main uses of Ollama? Please explain briefly.\"}\n",
    "]\n",
    "\n",
    "payload = {\n",
    "    \"model\": \"qwen3:8b\",\n",
    "    \"messages\": messages,\n",
    "    \"stream\": False\n",
    "}\n",
    "\n",
    "response = requests.post(\"http://localhost:11434/api/chat\", json=payload, timeout=120)\n",
    "response.raise_for_status()\n",
    "result = response.json()\n",
    "print(\"Ollama qwen3:8b chat reply:\", result.get(\"message\", {}).get(\"content\", \"no reply\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8d29337c",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Multi-turn conversation: earlier turns are sent back as context on each request\n",
    "import requests\n",
    "\n",
    "model = \"qwen3:8b\"\n",
    "\n",
    "messages = [\n",
    "    {\"role\": \"user\", \"content\": \"Who are you?\"},\n",
    "    {\"role\": \"assistant\", \"content\": \"I am a locally deployed large language model.\"},\n",
    "    {\"role\": \"user\", \"content\": \"What can you do?\"}\n",
    "]\n",
    "\n",
    "payload = {\n",
    "    \"model\": model,\n",
    "    \"messages\": messages,\n",
    "    \"stream\": False\n",
    "}\n",
    "\n",
    "response = requests.post(\"http://localhost:11434/api/chat\", json=payload, timeout=120)\n",
    "response.raise_for_status()\n",
    "result = response.json()\n",
    "print(\"Multi-turn reply:\", result.get(\"message\", {}).get(\"content\", \"no reply\"))"
   ]
  }
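,
  {
   "cell_type": "markdown",
   "id": "c0ffee01",
   "metadata": {},
   "source": [
    "To keep context across more turns, append each assistant reply to the message list before sending the next user turn. A minimal sketch (the `chat` helper below is illustrative, not part of any Ollama client library):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c0ffee02",
   "metadata": {},
   "outputs": [],
   "source": [
    "import requests\n",
    "\n",
    "def chat(messages, model=\"qwen3:8b\"):\n",
    "    # Illustrative helper: send the full history to /api/chat and\n",
    "    # return the assistant's reply text\n",
    "    resp = requests.post(\n",
    "        \"http://localhost:11434/api/chat\",\n",
    "        json={\"model\": model, \"messages\": messages, \"stream\": False},\n",
    "        timeout=120,\n",
    "    )\n",
    "    resp.raise_for_status()\n",
    "    return resp.json().get(\"message\", {}).get(\"content\", \"\")\n",
    "\n",
    "history = [{\"role\": \"user\", \"content\": \"Who are you?\"}]\n",
    "reply = chat(history)\n",
    "print(\"Turn 1:\", reply)\n",
    "\n",
    "# Feed the reply back so the model sees the whole conversation\n",
    "history.append({\"role\": \"assistant\", \"content\": reply})\n",
    "history.append({\"role\": \"user\", \"content\": \"Summarize that in one sentence.\"})\n",
    "print(\"Turn 2:\", chat(history))"
   ]
  }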
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "base",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.13.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
