{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "44f8c410",
   "metadata": {},
   "source": [
    "# Downloading Qwen/Qwen3-14B-GGUF via the Hugging Face Hub\n",
    "\n",
    "This notebook shows how to download the GGUF model files (.gguf) of Qwen/Qwen3-14B-GGUF from Hugging Face:\n",
    "- List the available quantization files (e.g. Q4_K_M, Q5_K_M)\n",
    "- Choose a quantization and a download directory\n",
    "- Download with `snapshot_download` from huggingface_hub, which resumes interrupted downloads\n",
    "- Optionally, load the file with llama-cpp-python (if installed)\n",
    "\n",
    "Note: GGUF files are intended for engines such as llama.cpp / llama-cpp-python. Transformers has only limited GGUF support (recent versions can dequantize some architectures); llama.cpp is the intended loading path."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "01e6fd1e",
   "metadata": {},
   "source": [
    "## 1. Environment setup and account configuration\n",
    "- Requires huggingface_hub (Transformers also uses this library internally)\n",
    "- If the model is gated or private, log in first with `huggingface-cli login`, or set the `HF_TOKEN` (or legacy `HUGGINGFACE_HUB_TOKEN`) environment variable"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1abd725b",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "from pathlib import Path\n",
    "\n",
    "try:\n",
    "    import huggingface_hub as hfh\n",
    "    from huggingface_hub import HfApi, snapshot_download\n",
    "    print('huggingface_hub version:', hfh.__version__)\n",
    "except ImportError:\n",
    "    print('huggingface_hub is missing; install it first: pip install huggingface_hub')\n",
    "    raise\n",
    "\n",
    "# For private / gated models, make sure you are logged in:\n",
    "# - run in a terminal: huggingface-cli login\n",
    "# - or set the HF_TOKEN (or legacy HUGGINGFACE_HUB_TOKEN) environment variable\n",
    "\n",
    "print('HF token set:', bool(os.getenv('HF_TOKEN') or os.getenv('HUGGINGFACE_HUB_TOKEN')))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "97dacb3d",
   "metadata": {},
   "source": [
    "## 2. List the available GGUF files\n",
    "Query the file list of the `Qwen/Qwen3-14B-GGUF` repository for .gguf files so you can pick a quantization."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "99d58985",
   "metadata": {},
   "outputs": [],
   "source": [
    "REPO_ID = \"Qwen/Qwen3-14B-GGUF\"\n",
    "api = HfApi()\n",
    "files = api.list_repo_files(REPO_ID, repo_type=\"model\")\n",
    "\n",
    "gguf_files = sorted(f for f in files if f.lower().endswith('.gguf'))\n",
    "print('GGUF files found:', len(gguf_files))\n",
    "for i, f in enumerate(gguf_files[:50], 1):\n",
    "    print(i, f)\n",
    "\n",
    "# If the list is long, filter by a quantization keyword (e.g. Q4_K_M, Q5_K_M)\n",
    "preferred = [f for f in gguf_files if 'q4_k_m' in f.lower()]\n",
    "print('\\nCandidates containing Q4_K_M:', len(preferred))\n",
    "for f in preferred[:10]:\n",
    "    print(' -', f)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dc57ad48",
   "metadata": {},
   "source": [
    "## 3. Configure the download\n",
    "- allow_patterns: download only the matching GGUF files (saves bandwidth)\n",
    "- local_dir: destination path (pick a disk with enough free space)\n",
    "- Interrupted downloads resume automatically on recent huggingface_hub versions; on versions before 0.23 pass `resume_download=True` explicitly"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "57010f6e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Pick a quantization keyword (e.g. 'Q4_K_M', 'Q5_K_M', 'Q8_0'), or paste a full filename\n",
    "QUANT_KEY = os.getenv('QWEN_GGUF_QUANT', 'Q4_K_M')\n",
    "# Download directory, E:\\huggingface_models by default; override via the environment variable\n",
    "LOCAL_DIR = Path(os.getenv('QWEN_GGUF_DIR', 'E:/huggingface_models')).resolve()\n",
    "LOCAL_DIR.mkdir(parents=True, exist_ok=True)\n",
    "\n",
    "# Glob pattern: only download .gguf files whose name contains the keyword\n",
    "allow_patterns = [f\"*{QUANT_KEY}*.gguf\"]\n",
    "print('Download to:', LOCAL_DIR)\n",
    "print('Allow patterns:', allow_patterns)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f695e6c1",
   "metadata": {},
   "source": [
    "## 4. Run the download (resumable)\n",
    "Use snapshot_download to fetch the matching GGUF files."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a4ae68b0",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Note: these files are several GB each; make sure you have enough disk space and bandwidth\n",
    "snapshot_path = snapshot_download(\n",
    "    repo_id=REPO_ID,\n",
    "    repo_type=\"model\",\n",
    "    local_dir=str(LOCAL_DIR),\n",
    "    allow_patterns=allow_patterns,\n",
    "    # On huggingface_hub < 0.23, also pass local_dir_use_symlinks=False and\n",
    "    # resume_download=True; newer versions copy real files and resume by default.\n",
    ")\n",
    "print('Snapshot local path:', snapshot_path)\n",
    "\n",
    "# List the downloaded .gguf files\n",
    "for p in sorted(LOCAL_DIR.rglob('*.gguf')):\n",
    "    print(f\"{p.name}\\t{p.stat().st_size/1024/1024:.2f} MB\")"
   ]
  },
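  {
   "cell_type": "markdown",
   "id": "b3f1c2d4",
   "metadata": {},
   "source": [
    "### Optional: download a single file with `hf_hub_download`\n",
    "A minimal sketch of an alternative to `snapshot_download`: if step 2 already told you the exact filename you want, `hf_hub_download` fetches that one file. The index `gguf_files[0]` below is an assumption for illustration; substitute any name printed in step 2."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c4d5e6f7",
   "metadata": {},
   "outputs": [],
   "source": [
    "from huggingface_hub import hf_hub_download\n",
    "\n",
    "# Assumes gguf_files from step 2 is non-empty; gguf_files[0] is just a placeholder choice.\n",
    "# This downloads a single (multi-GB) file into LOCAL_DIR.\n",
    "single_path = hf_hub_download(\n",
    "    repo_id=REPO_ID,\n",
    "    filename=gguf_files[0],\n",
    "    local_dir=str(LOCAL_DIR),\n",
    ")\n",
    "print('Downloaded to:', single_path)"
   ]
  },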
  {
   "cell_type": "markdown",
   "id": "c030e56f",
   "metadata": {},
   "source": [
    "## 5. Optional: verify the file with llama-cpp-python (if installed)\n",
    "This step only demonstrates loading a GGUF file; it is skipped if llama-cpp-python is not installed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "67090b0a",
   "metadata": {},
   "outputs": [],
   "source": [
    "try:\n",
    "    from llama_cpp import Llama\n",
    "    gguf_list = sorted(LOCAL_DIR.rglob(f\"*{QUANT_KEY}*.gguf\"))\n",
    "    if gguf_list:\n",
    "        model_path = str(gguf_list[0])\n",
    "        print('Loading with llama.cpp:', model_path)\n",
    "        llm = Llama(model_path=model_path, n_ctx=2048)\n",
    "        out = llm(\"Hello, introduce Qwen3 in one sentence.\", max_tokens=128)\n",
    "        print(out[\"choices\"][0][\"text\"])\n",
    "    else:\n",
    "        print('No matching GGUF file found; skipping the loading example.')\n",
    "except Exception as e:\n",
    "    print('llama-cpp-python is not installed or loading failed:', e)"
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
