{
    "cells": [
     {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
       "# Local Model Inference with the Nexa SDK\n",
       "\n",
       "Nexa SDK is an on-device inference framework that supports ONNX and GGML models across text generation, image generation, vision-language models (VLM), audio-language models, speech-to-text (ASR), and text-to-speech (TTS). Supported devices include CPU, GPU (CUDA, Metal, ROCm), and iOS. Its main use cases are:\n",
       "\n",
       "- Local model inference with ONNX and GGML models. Models can be downloaded from the Nexa On-Device AI Hub, or directly from ModelScope or HuggingFace.\n",
       "- Model conversion: converting ModelScope or HuggingFace models into the quantized GGUF format.\n",
       "- Local server deployment, so models can be called through an API.\n",
       "\n",
       "This tutorial starts with environment setup, then covers model inference, model conversion, and local server deployment with the Nexa SDK. All commands in this tutorial are best run in a terminal."
      ]
     },
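     {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
       "As a preview of the server use case, the sketch below assumes the `nexa server` subcommand described in the Nexa SDK README, which by default serves on localhost:8000 with interactive API docs at `/docs`; the model name and default port here are assumptions and may differ across versions:\n",
       "\n",
       "```shell\n",
       "# Start a local API server for a model (model name is a hypothetical example)\n",
       "nexa server llama2\n",
       "# Then open http://localhost:8000/docs to explore and call the API\n",
       "```\n"
      ]
     },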
     {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
       "### Environment Setup\n",
       "\n",
       "For installation, refer to the [Nexa SDK documentation](https://github.com/NexaAI/nexa-sdk.git): prebuilt packages are available for different devices, or you can build and install from source. This tutorial builds from source.\n"
      ]
     },
     {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
       "ExecutionIndicator": {
        "show": false
       },
       "tags": [],
       "vscode": {
        "languageId": "shellscript"
       }
      },
      "outputs": [],
      "source": [
       "!pip install torch==2.2.2 torchvision==0.17.2 torchaudio==2.2.2 --index-url https://download.pytorch.org/whl/cu121\n",
       "!pip install modelscope\n",
       "!git clone https://github.com/NexaAI/nexa-sdk.git\n",
       "%cd nexa-sdk\n",
       "!git submodule update --init --recursive\n",
       "!pip install -e \".[convert]\""
      ]
     },
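     {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
       "After installation, a quick sanity check confirms that the `nexa` entry point is on your PATH (the exact help output depends on the installed version):\n",
       "\n",
       "```shell\n",
       "nexa -h\n",
       "```\n"
      ]
     },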
     {
      "cell_type": "markdown",
      "metadata": {
       "tags": []
      },
      "source": [
       "### Running Model Inference\n",
       "\n",
       "Run a model with the nexa command line, selecting ModelScope as the model source. Since `nexa run` is interactive, it is best run in a terminal.\n",
       "\n",
       "```shell\n",
       "nexa run -ms Qwen/Qwen2.5-Coder-7B-Instruct-GGUF\n",
       "```\n",
       "\n",
       "You will be prompted with the GGUF model files available in the `Qwen/Qwen2.5-Coder-7B-Instruct-GGUF` repo; choose one of them, e.g. `qwen2.5-coder-7b-instruct-fp16.gguf`.\n"
      ]
     },
     {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
       "### Model Conversion\n",
       "\n",
       "Use the nexa conversion tool to convert a model into the quantized GGUF format; the converted model can then be run with `nexa run`.\n"
      ]
     },
     {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
       "execution": {
        "iopub.execute_input": "2024-11-20T11:46:37.585864Z",
        "iopub.status.busy": "2024-11-20T11:46:37.585511Z",
        "iopub.status.idle": "2024-11-20T11:52:18.019957Z",
        "shell.execute_reply": "2024-11-20T11:52:17.955986Z",
        "shell.execute_reply.started": "2024-11-20T11:46:37.585840Z"
       },
       "tags": [],
       "vscode": {
        "languageId": "shellscript"
       }
      },
      "outputs": [
       {
        "name": "stdout",
        "output_type": "stream",
        "text": [
         "\n",
         "Select model type:\n",
         "1. NLP (text generation)\n",
         "2. COMPUTER_VISION (image generation)\n",
         "\n",
         "Select model type (enter number): \n",
         "Available quantization types:\n",
         "1. q4_0\n",
         "2. q4_1\n",
         "3. q5_0\n",
         "4. q5_1\n",
         "5. q8_0\n",
         "6. q2_k\n",
         "7. q3_k_s\n",
         "8. q3_k_m\n",
         "9. q3_k_l\n",
         "10. q4_k_s\n",
         "11. q4_k_m\n",
         "12. q5_k_s\n",
         "13. q5_k_m\n",
         "14. q6_k\n",
         "15. iq2_xxs\n",
         "16. iq2_xs\n",
         "17. q2_k_s\n",
         "18. iq3_xs\n",
         "19. iq3_xxs\n",
         "20. iq1_s\n",
         "21. iq4_nl\n",
         "22. iq3_s\n",
         "23. iq3_m\n",
         "24. iq2_s\n",
         "25. iq2_m\n",
         "26. iq4_xs\n",
         "27. iq1_m\n",
         "28. f16\n",
         "29. f32\n",
         "30. bf16\n",
         "31. q4_0_4_4\n",
         "32. q4_0_4_8\n",
         "33. q4_0_8_8\n",
         "34. tq1_0\n",
         "35. tq2_0\n",
         "\n"
        ]
       },
       {
        "name": "stderr",
        "output_type": "stream",
        "text": [
         "2024-11-20 19:46:59.615480: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
         "2024-11-20 19:47:11.936118: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
         "To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n",
         "2024-11-20 19:47:17.175198: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n",
         "2024-11-20 19:47:31,931 - modelscope - WARNING - Using branch: master as version is unstable, use with caution\n"
        ]
       },
       {
        "name": "stdout",
        "output_type": "stream",
        "text": [
         "Select quantization type (enter number): Downloading Model to directory: /mnt/workspace/.cache/modelscope/hub/Qwen/Qwen2.5-7B-Instruct\n"
        ]
       },
       {
        "name": "stderr",
        "output_type": "stream",
        "text": [
         "Downloading [config.json]: 100%|██████████| 663/663 [00:00<00:00, 1.40kB/s]\n",
         "Downloading [configuration.json]: 100%|██████████| 2.00/2.00 [00:00<00:00, 3.43B/s]\n",
         "Downloading [generation_config.json]: 100%|██████████| 243/243 [00:00<00:00, 433B/s]\n",
         "Downloading [LICENSE]: 100%|██████████| 11.1k/11.1k [00:00<00:00, 13.0kB/s]\n",
         "Downloading [merges.txt]: 100%|██████████| 1.59M/1.59M [00:00<00:00, 2.17MB/s]\n",
         "Downloading [model-00001-of-00004.safetensors]: 100%|██████████| 3.67G/3.67G [00:13<00:00, 295MB/s] \n",
         "Downloading [model-00002-of-00004.safetensors]: 100%|██████████| 3.60G/3.60G [00:15<00:00, 253MB/s] \n",
         "Downloading [model-00003-of-00004.safetensors]: 100%|██████████| 3.60G/3.60G [00:21<00:00, 179MB/s] \n",
         "Downloading [model-00004-of-00004.safetensors]: 100%|██████████| 3.31G/3.31G [00:12<00:00, 282MB/s] \n",
         "Downloading [model.safetensors.index.json]: 100%|██████████| 27.1k/27.1k [00:01<00:00, 26.0kB/s]\n",
         "Downloading [README.md]: 100%|██████████| 5.85k/5.85k [00:00<00:00, 13.1kB/s]\n",
         "Downloading [tokenizer.json]: 100%|██████████| 6.71M/6.71M [00:00<00:00, 7.82MB/s]\n",
         "Downloading [tokenizer_config.json]: 100%|██████████| 7.13k/7.13k [00:00<00:00, 11.6kB/s]\n",
         "Downloading [vocab.json]: 100%|██████████| 2.65M/2.65M [00:00<00:00, 3.17MB/s]\n",
         "2024-11-20 19:49:17,122 - modelscope - INFO - Creating symbolic link /mnt/workspace/.cache/modelscope/hub/Qwen/Qwen2___5-7B-Instruct -> /mnt/workspace/.cache/modelscope/hub/Qwen/Qwen2.5-7B-Instruct.\n",
         "2024-11-20 19:49:17,124 - modelscope - WARNING - Failed to create symbolic link /mnt/workspace/.cache/modelscope/hub/Qwen/Qwen2___5-7B-Instruct -> /mnt/workspace/.cache/modelscope/hub/Qwen/Qwen2.5-7B-Instruct: [Errno 2] No such file or directory: '/mnt/workspace/.cache/modelscope/hub/Qwen/Qwen2___5-7B-Instruct' -> '/mnt/workspace/.cache/modelscope/hub/Qwen/Qwen2.5-7B-Instruct'\n",
         "2024-11-20 19:49:17,155 - INFO - Loading model: Qwen2.5-7B-Instruct\n",
         "2024-11-20 19:49:17,162 - INFO - gguf: This GGUF file is for Little Endian only\n",
         "2024-11-20 19:49:17,162 - INFO - Exporting model...\n",
         "2024-11-20 19:49:17,163 - INFO - gguf: loading model weight map from 'model.safetensors.index.json'\n",
         "2024-11-20 19:49:17,163 - INFO - gguf: loading model part 'model-00001-of-00004.safetensors'\n",
         "2024-11-20 19:49:17,205 - INFO - token_embd.weight,         torch.bfloat16 --> F16, shape = {3584, 152064}\n",
         "2024-11-20 19:49:17,205 - INFO - blk.0.attn_norm.weight,    torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,205 - INFO - blk.0.ffn_down.weight,     torch.bfloat16 --> F16, shape = {18944, 3584}\n",
         "2024-11-20 19:49:17,205 - INFO - blk.0.ffn_gate.weight,     torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,205 - INFO - blk.0.ffn_up.weight,       torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,206 - INFO - blk.0.ffn_norm.weight,     torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,206 - INFO - blk.0.attn_k.bias,         torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,206 - INFO - blk.0.attn_k.weight,       torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,206 - INFO - blk.0.attn_output.weight,  torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,206 - INFO - blk.0.attn_q.bias,         torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,206 - INFO - blk.0.attn_q.weight,       torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,207 - INFO - blk.0.attn_v.bias,         torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,207 - INFO - blk.0.attn_v.weight,       torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,207 - INFO - blk.1.attn_norm.weight,    torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,207 - INFO - blk.1.ffn_down.weight,     torch.bfloat16 --> F16, shape = {18944, 3584}\n",
         "2024-11-20 19:49:17,207 - INFO - blk.1.ffn_gate.weight,     torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,207 - INFO - blk.1.ffn_up.weight,       torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,207 - INFO - blk.1.ffn_norm.weight,     torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,208 - INFO - blk.1.attn_k.bias,         torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,208 - INFO - blk.1.attn_k.weight,       torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,208 - INFO - blk.1.attn_output.weight,  torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,208 - INFO - blk.1.attn_q.bias,         torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,208 - INFO - blk.1.attn_q.weight,       torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,208 - INFO - blk.1.attn_v.bias,         torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,208 - INFO - blk.1.attn_v.weight,       torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,209 - INFO - blk.2.attn_norm.weight,    torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,209 - INFO - blk.2.ffn_down.weight,     torch.bfloat16 --> F16, shape = {18944, 3584}\n",
         "2024-11-20 19:49:17,209 - INFO - blk.2.ffn_gate.weight,     torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,209 - INFO - blk.2.ffn_up.weight,       torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,209 - INFO - blk.2.ffn_norm.weight,     torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,209 - INFO - blk.2.attn_k.bias,         torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,210 - INFO - blk.2.attn_k.weight,       torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,210 - INFO - blk.2.attn_output.weight,  torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,210 - INFO - blk.2.attn_q.bias,         torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,210 - INFO - blk.2.attn_q.weight,       torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,210 - INFO - blk.2.attn_v.bias,         torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,210 - INFO - blk.2.attn_v.weight,       torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,210 - INFO - blk.3.attn_norm.weight,    torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,210 - INFO - blk.3.ffn_down.weight,     torch.bfloat16 --> F16, shape = {18944, 3584}\n",
         "2024-11-20 19:49:17,211 - INFO - blk.3.ffn_gate.weight,     torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,211 - INFO - blk.3.ffn_up.weight,       torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,211 - INFO - blk.3.ffn_norm.weight,     torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,211 - INFO - blk.3.attn_k.bias,         torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,211 - INFO - blk.3.attn_k.weight,       torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,211 - INFO - blk.3.attn_output.weight,  torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,211 - INFO - blk.3.attn_q.bias,         torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,211 - INFO - blk.3.attn_q.weight,       torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,212 - INFO - blk.3.attn_v.bias,         torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,212 - INFO - blk.3.attn_v.weight,       torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,212 - INFO - blk.4.attn_norm.weight,    torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,212 - INFO - blk.4.ffn_down.weight,     torch.bfloat16 --> F16, shape = {18944, 3584}\n",
         "2024-11-20 19:49:17,212 - INFO - blk.4.ffn_gate.weight,     torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,212 - INFO - blk.4.ffn_up.weight,       torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,212 - INFO - blk.4.ffn_norm.weight,     torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,212 - INFO - blk.4.attn_k.bias,         torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,213 - INFO - blk.4.attn_k.weight,       torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,213 - INFO - blk.4.attn_output.weight,  torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,213 - INFO - blk.4.attn_q.bias,         torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,213 - INFO - blk.4.attn_q.weight,       torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,213 - INFO - blk.4.attn_v.bias,         torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,213 - INFO - blk.4.attn_v.weight,       torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,213 - INFO - blk.5.attn_norm.weight,    torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,214 - INFO - blk.5.ffn_down.weight,     torch.bfloat16 --> F16, shape = {18944, 3584}\n",
         "2024-11-20 19:49:17,214 - INFO - blk.5.ffn_gate.weight,     torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,214 - INFO - blk.5.ffn_up.weight,       torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,214 - INFO - blk.5.ffn_norm.weight,     torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,214 - INFO - blk.5.attn_k.bias,         torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,214 - INFO - blk.5.attn_k.weight,       torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,214 - INFO - blk.5.attn_output.weight,  torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,214 - INFO - blk.5.attn_q.bias,         torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,214 - INFO - blk.5.attn_q.weight,       torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,215 - INFO - blk.5.attn_v.bias,         torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,215 - INFO - blk.5.attn_v.weight,       torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,215 - INFO - blk.6.attn_norm.weight,    torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,215 - INFO - blk.6.ffn_norm.weight,     torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,215 - INFO - blk.6.attn_k.bias,         torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,216 - INFO - blk.6.attn_k.weight,       torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,216 - INFO - blk.6.attn_output.weight,  torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,216 - INFO - blk.6.attn_q.bias,         torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,216 - INFO - blk.6.attn_q.weight,       torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,216 - INFO - blk.6.attn_v.bias,         torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,216 - INFO - blk.6.attn_v.weight,       torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,216 - INFO - gguf: loading model part 'model-00002-of-00004.safetensors'\n",
         "2024-11-20 19:49:17,218 - INFO - blk.10.attn_norm.weight,   torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,219 - INFO - blk.10.ffn_down.weight,    torch.bfloat16 --> F16, shape = {18944, 3584}\n",
         "2024-11-20 19:49:17,219 - INFO - blk.10.ffn_gate.weight,    torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,219 - INFO - blk.10.ffn_up.weight,      torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,219 - INFO - blk.10.ffn_norm.weight,    torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,219 - INFO - blk.10.attn_k.bias,        torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,220 - INFO - blk.10.attn_k.weight,      torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,220 - INFO - blk.10.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,220 - INFO - blk.10.attn_q.bias,        torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,220 - INFO - blk.10.attn_q.weight,      torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,220 - INFO - blk.10.attn_v.bias,        torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,221 - INFO - blk.10.attn_v.weight,      torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,221 - INFO - blk.11.attn_norm.weight,   torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,221 - INFO - blk.11.ffn_down.weight,    torch.bfloat16 --> F16, shape = {18944, 3584}\n",
         "2024-11-20 19:49:17,221 - INFO - blk.11.ffn_gate.weight,    torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,221 - INFO - blk.11.ffn_up.weight,      torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,221 - INFO - blk.11.ffn_norm.weight,    torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,222 - INFO - blk.11.attn_k.bias,        torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,222 - INFO - blk.11.attn_k.weight,      torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,222 - INFO - blk.11.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,222 - INFO - blk.11.attn_q.bias,        torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,222 - INFO - blk.11.attn_q.weight,      torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,223 - INFO - blk.11.attn_v.bias,        torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,223 - INFO - blk.11.attn_v.weight,      torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,223 - INFO - blk.12.attn_norm.weight,   torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,223 - INFO - blk.12.ffn_down.weight,    torch.bfloat16 --> F16, shape = {18944, 3584}\n",
         "2024-11-20 19:49:17,223 - INFO - blk.12.ffn_gate.weight,    torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,223 - INFO - blk.12.ffn_up.weight,      torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,224 - INFO - blk.12.ffn_norm.weight,    torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,224 - INFO - blk.12.attn_k.bias,        torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,224 - INFO - blk.12.attn_k.weight,      torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,224 - INFO - blk.12.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,224 - INFO - blk.12.attn_q.bias,        torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,225 - INFO - blk.12.attn_q.weight,      torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,225 - INFO - blk.12.attn_v.bias,        torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,225 - INFO - blk.12.attn_v.weight,      torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,225 - INFO - blk.13.attn_norm.weight,   torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,225 - INFO - blk.13.ffn_down.weight,    torch.bfloat16 --> F16, shape = {18944, 3584}\n",
         "2024-11-20 19:49:17,225 - INFO - blk.13.ffn_gate.weight,    torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,226 - INFO - blk.13.ffn_up.weight,      torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,226 - INFO - blk.13.ffn_norm.weight,    torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,226 - INFO - blk.13.attn_k.bias,        torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,226 - INFO - blk.13.attn_k.weight,      torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,226 - INFO - blk.13.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,227 - INFO - blk.13.attn_q.bias,        torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,227 - INFO - blk.13.attn_q.weight,      torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,227 - INFO - blk.13.attn_v.bias,        torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,227 - INFO - blk.13.attn_v.weight,      torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,227 - INFO - blk.14.attn_norm.weight,   torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,227 - INFO - blk.14.ffn_up.weight,      torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,227 - INFO - blk.14.ffn_norm.weight,    torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,228 - INFO - blk.14.attn_k.bias,        torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,228 - INFO - blk.14.attn_k.weight,      torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,228 - INFO - blk.14.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,228 - INFO - blk.14.attn_q.bias,        torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,228 - INFO - blk.14.attn_q.weight,      torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,228 - INFO - blk.14.attn_v.bias,        torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,229 - INFO - blk.14.attn_v.weight,      torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,229 - INFO - blk.6.ffn_down.weight,     torch.bfloat16 --> F16, shape = {18944, 3584}\n",
         "2024-11-20 19:49:17,229 - INFO - blk.6.ffn_gate.weight,     torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,230 - INFO - blk.6.ffn_up.weight,       torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,230 - INFO - blk.7.attn_norm.weight,    torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,230 - INFO - blk.7.ffn_down.weight,     torch.bfloat16 --> F16, shape = {18944, 3584}\n",
         "2024-11-20 19:49:17,230 - INFO - blk.7.ffn_gate.weight,     torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,230 - INFO - blk.7.ffn_up.weight,       torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,231 - INFO - blk.7.ffn_norm.weight,     torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,231 - INFO - blk.7.attn_k.bias,         torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,231 - INFO - blk.7.attn_k.weight,       torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,231 - INFO - blk.7.attn_output.weight,  torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,231 - INFO - blk.7.attn_q.bias,         torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,231 - INFO - blk.7.attn_q.weight,       torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,231 - INFO - blk.7.attn_v.bias,         torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,232 - INFO - blk.7.attn_v.weight,       torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,232 - INFO - blk.8.attn_norm.weight,    torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,232 - INFO - blk.8.ffn_down.weight,     torch.bfloat16 --> F16, shape = {18944, 3584}\n",
         "2024-11-20 19:49:17,232 - INFO - blk.8.ffn_gate.weight,     torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,232 - INFO - blk.8.ffn_up.weight,       torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,232 - INFO - blk.8.ffn_norm.weight,     torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,232 - INFO - blk.8.attn_k.bias,         torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,232 - INFO - blk.8.attn_k.weight,       torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,233 - INFO - blk.8.attn_output.weight,  torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,233 - INFO - blk.8.attn_q.bias,         torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,233 - INFO - blk.8.attn_q.weight,       torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,233 - INFO - blk.8.attn_v.bias,         torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,233 - INFO - blk.8.attn_v.weight,       torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,233 - INFO - blk.9.attn_norm.weight,    torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,233 - INFO - blk.9.ffn_down.weight,     torch.bfloat16 --> F16, shape = {18944, 3584}\n",
         "2024-11-20 19:49:17,234 - INFO - blk.9.ffn_gate.weight,     torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,234 - INFO - blk.9.ffn_up.weight,       torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,234 - INFO - blk.9.ffn_norm.weight,     torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,234 - INFO - blk.9.attn_k.bias,         torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,234 - INFO - blk.9.attn_k.weight,       torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,234 - INFO - blk.9.attn_output.weight,  torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,234 - INFO - blk.9.attn_q.bias,         torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,234 - INFO - blk.9.attn_q.weight,       torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,235 - INFO - blk.9.attn_v.bias,         torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,235 - INFO - blk.9.attn_v.weight,       torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,235 - INFO - gguf: loading model part 'model-00003-of-00004.safetensors'\n",
         "2024-11-20 19:49:17,235 - INFO - blk.14.ffn_down.weight,    torch.bfloat16 --> F16, shape = {18944, 3584}\n",
         "2024-11-20 19:49:17,236 - INFO - blk.14.ffn_gate.weight,    torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,236 - INFO - blk.15.attn_norm.weight,   torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,236 - INFO - blk.15.ffn_down.weight,    torch.bfloat16 --> F16, shape = {18944, 3584}\n",
         "2024-11-20 19:49:17,236 - INFO - blk.15.ffn_gate.weight,    torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,236 - INFO - blk.15.ffn_up.weight,      torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,236 - INFO - blk.15.ffn_norm.weight,    torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,236 - INFO - blk.15.attn_k.bias,        torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,236 - INFO - blk.15.attn_k.weight,      torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,237 - INFO - blk.15.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,237 - INFO - blk.15.attn_q.bias,        torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,237 - INFO - blk.15.attn_q.weight,      torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,237 - INFO - blk.15.attn_v.bias,        torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,237 - INFO - blk.15.attn_v.weight,      torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,237 - INFO - blk.16.attn_norm.weight,   torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,237 - INFO - blk.16.ffn_down.weight,    torch.bfloat16 --> F16, shape = {18944, 3584}\n",
         "2024-11-20 19:49:17,237 - INFO - blk.16.ffn_gate.weight,    torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,238 - INFO - blk.16.ffn_up.weight,      torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,238 - INFO - blk.16.ffn_norm.weight,    torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,238 - INFO - blk.16.attn_k.bias,        torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,238 - INFO - blk.16.attn_k.weight,      torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,238 - INFO - blk.16.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,238 - INFO - blk.16.attn_q.bias,        torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,238 - INFO - blk.16.attn_q.weight,      torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,238 - INFO - blk.16.attn_v.bias,        torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,239 - INFO - blk.16.attn_v.weight,      torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "...\n",
         "2024-11-20 19:49:17,248 - INFO - blk.22.attn_norm.weight,   torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,248 - INFO - blk.22.ffn_gate.weight,    torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,248 - INFO - blk.22.ffn_up.weight,      torch.bfloat16 --> F16, shape = {3584, 18944}\n",
         "2024-11-20 19:49:17,249 - INFO - blk.22.ffn_norm.weight,    torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,249 - INFO - blk.22.attn_k.bias,        torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,249 - INFO - blk.22.attn_k.weight,      torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,249 - INFO - blk.22.attn_output.weight, torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,249 - INFO - blk.22.attn_q.bias,        torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,250 - INFO - blk.22.attn_q.weight,      torch.bfloat16 --> F16, shape = {3584, 3584}\n",
         "2024-11-20 19:49:17,250 - INFO - blk.22.attn_v.bias,        torch.bfloat16 --> F32, shape = {512}\n",
         "2024-11-20 19:49:17,250 - INFO - blk.22.attn_v.weight,      torch.bfloat16 --> F16, shape = {3584, 512}\n",
         "2024-11-20 19:49:17,250 - INFO - gguf: loading model part 'model-00004-of-00004.safetensors'\n",
         "2024-11-20 19:49:17,250 - INFO - output.weight,             torch.bfloat16 --> F16, shape = {3584, 152064}\n",
         "2024-11-20 19:49:17,251 - INFO - blk.22.ffn_down.weight,    torch.bfloat16 --> F16, shape = {18944, 3584}\n",
         "...\n",
         "2024-11-20 19:49:17,261 - INFO - output_norm.weight,        torch.bfloat16 --> F32, shape = {3584}\n",
         "2024-11-20 19:49:17,264 - INFO - Set meta model\n",
         "2024-11-20 19:49:17,264 - INFO - Set model parameters\n",
         "2024-11-20 19:49:17,264 - INFO - gguf: context length = 32768\n",
         "2024-11-20 19:49:17,264 - INFO - gguf: embedding length = 3584\n",
         "2024-11-20 19:49:17,264 - INFO - gguf: feed forward length = 18944\n",
         "2024-11-20 19:49:17,264 - INFO - gguf: head count = 28\n",
         "2024-11-20 19:49:17,264 - INFO - gguf: key-value head count = 4\n",
         "2024-11-20 19:49:17,264 - INFO - gguf: rope theta = 1000000.0\n",
         "2024-11-20 19:49:17,264 - INFO - gguf: rms norm epsilon = 1e-06\n",
         "2024-11-20 19:49:17,264 - INFO - gguf: file type = 1\n",
         "2024-11-20 19:49:17,264 - INFO - Set model tokenizer\n",
         "2024-11-20 19:49:18,312 - INFO - Adding 151387 merge(s).\n",
         "2024-11-20 19:49:18,312 - INFO - Setting special token type eos to 151645\n",
         "2024-11-20 19:49:18,312 - INFO - Setting special token type pad to 151643\n",
         "2024-11-20 19:49:18,312 - INFO - Setting special token type bos to 151643\n",
         "2024-11-20 19:49:18,312 - INFO - Setting add_bos_token to False\n",
         "2024-11-20 19:49:18,312 - INFO - Setting chat_template to {%- if tools %}\n",
         "    {{- '<|im_start|>system\\n' }}\n",
         "    {%- if messages[0]['role'] == 'system' %}\n",
         "        {{- messages[0]['content'] }}\n",
         "    {%- else %}\n",
         "        {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}\n",
         "    {%- endif %}\n",
         "    {{- \"\\n\\n# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n",
         "    {%- for tool in tools %}\n",
         "        {{- \"\\n\" }}\n",
         "        {{- tool | tojson }}\n",
         "    {%- endfor %}\n",
         "    {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n",
         "{%- else %}\n",
         "    {%- if messages[0]['role'] == 'system' %}\n",
         "        {{- '<|im_start|>system\\n' + messages[0]['content'] + '<|im_end|>\\n' }}\n",
         "    {%- else %}\n",
         "        {{- '<|im_start|>system\\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\\n' }}\n",
         "    {%- endif %}\n",
         "{%- endif %}\n",
         "{%- for message in messages %}\n",
         "    {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) or (message.role == \"assistant\" and not message.tool_calls) %}\n",
         "        {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n",
         "    {%- elif message.role == \"assistant\" %}\n",
         "        {{- '<|im_start|>' + message.role }}\n",
         "        {%- if message.content %}\n",
         "            {{- '\\n' + message.content }}\n",
         "        {%- endif %}\n",
         "        {%- for tool_call in message.tool_calls %}\n",
         "            {%- if tool_call.function is defined %}\n",
         "                {%- set tool_call = tool_call.function %}\n",
         "            {%- endif %}\n",
         "            {{- '\\n<tool_call>\\n{\"name\": \"' }}\n",
         "            {{- tool_call.name }}\n",
         "            {{- '\", \"arguments\": ' }}\n",
         "            {{- tool_call.arguments | tojson }}\n",
         "            {{- '}\\n</tool_call>' }}\n",
         "        {%- endfor %}\n",
         "        {{- '<|im_end|>\\n' }}\n",
         "    {%- elif message.role == \"tool\" %}\n",
         "        {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != \"tool\") %}\n",
         "            {{- '<|im_start|>user' }}\n",
         "        {%- endif %}\n",
         "        {{- '\\n<tool_response>\\n' }}\n",
         "        {{- message.content }}\n",
         "        {{- '\\n</tool_response>' }}\n",
         "        {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n",
         "            {{- '<|im_end|>\\n' }}\n",
         "        {%- endif %}\n",
         "    {%- endif %}\n",
         "{%- endfor %}\n",
         "{%- if add_generation_prompt %}\n",
         "    {{- '<|im_start|>assistant\\n' }}\n",
         "{%- endif %}\n",
         "\n",
         "2024-11-20 19:49:18,312 - INFO - Set model quantization version\n",
         "2024-11-20 19:49:18,312 - INFO - Writing the following files:\n",
         "2024-11-20 19:49:18,313 - INFO - /root/.cache/nexa/tmp_models/Qwen2.5-7B-Instruct-f16.gguf: n_tensors = 339, total_size = 15.2G\n"
        ]
       },
       {
        "name": "stdout",
        "output_type": "stream",
        "text": [
         "Successfully downloaded repository 'Qwen/Qwen2.5-7B-Instruct' to /root/.cache/nexa/hub/modelscope/Qwen/Qwen2.5-7B-Instruct\n"
        ]
       },
       {
        "name": "stderr",
        "output_type": "stream",
        "text": [
         "Writing: 100%|██████████| 15.2G/15.2G [01:56<00:00, 131Mbyte/s] \n",
         "2024-11-20 19:51:15,076 - INFO - Model successfully exported to /root/.cache/nexa/tmp_models/Qwen2.5-7B-Instruct-f16.gguf\n",
         "2024-11-20 19:51:15,089 - INFO - Starting quantization of /root/.cache/nexa/tmp_models/Qwen2.5-7B-Instruct-f16.gguf\n",
         "2024-11-20 19:51:15,089 - INFO - Output file: /mnt/workspace/nexa-sdk/Qwen2.5-7B-Instruct-q4_0.gguf\n",
         "llama_model_loader: loaded meta data with 34 key-value pairs and 339 tensors from /root/.cache/nexa/tmp_models/Qwen2.5-7B-Instruct-f16.gguf (version GGUF V3 (latest))\n",
         "llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.\n",
         "llama_model_loader: - kv   0:                       general.architecture str              = qwen2\n",
         "llama_model_loader: - kv   1:                               general.type str              = model\n",
         "llama_model_loader: - kv   2:                               general.name str              = Qwen2.5 7B Instruct\n",
         "llama_model_loader: - kv   3:                           general.finetune str              = Instruct\n",
         "llama_model_loader: - kv   4:                           general.basename str              = Qwen2.5\n",
         "llama_model_loader: - kv   5:                         general.size_label str              = 7B\n",
         "llama_model_loader: - kv   6:                            general.license str              = apache-2.0\n",
         "llama_model_loader: - kv   7:                       general.license.link str              = https://huggingface.co/Qwen/Qwen2.5-7...\n",
         "llama_model_loader: - kv   8:                   general.base_model.count u32              = 1\n",
         "llama_model_loader: - kv   9:                  general.base_model.0.name str              = Qwen2.5 7B\n",
         "llama_model_loader: - kv  10:          general.base_model.0.organization str              = Qwen\n",
         "llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen2.5-7B\n",
         "llama_model_loader: - kv  12:                               general.tags arr[str,2]       = [\"chat\", \"text-generation\"]\n",
         "llama_model_loader: - kv  13:                          general.languages arr[str,1]       = [\"en\"]\n",
         "llama_model_loader: - kv  14:                          qwen2.block_count u32              = 28\n",
         "llama_model_loader: - kv  15:                       qwen2.context_length u32              = 32768\n",
         "llama_model_loader: - kv  16:                     qwen2.embedding_length u32              = 3584\n",
         "llama_model_loader: - kv  17:                  qwen2.feed_forward_length u32              = 18944\n",
         "llama_model_loader: - kv  18:                 qwen2.attention.head_count u32              = 28\n",
         "llama_model_loader: - kv  19:              qwen2.attention.head_count_kv u32              = 4\n",
         "llama_model_loader: - kv  20:                       qwen2.rope.freq_base f32              = 1000000.000000\n",
         "llama_model_loader: - kv  21:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001\n",
         "llama_model_loader: - kv  22:                          general.file_type u32              = 1\n",
         "llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2\n",
         "llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = qwen2\n",
         "llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,152064]  = [\"!\", \"\\\"\", \"#\", \"$\", \"%\", \"&\", \"'\", ...\n",
         "llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...\n",
         "llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,151387]  = [\"Ġ Ġ\", \"ĠĠ ĠĠ\", \"i n\", \"Ġ t\",...\n",
         "llama_model_loader: - kv  28:                tokenizer.ggml.eos_token_id u32              = 151645\n",
         "llama_model_loader: - kv  29:            tokenizer.ggml.padding_token_id u32              = 151643\n",
         "llama_model_loader: - kv  30:                tokenizer.ggml.bos_token_id u32              = 151643\n",
         "llama_model_loader: - kv  31:               tokenizer.ggml.add_bos_token bool             = false\n",
         "llama_model_loader: - kv  32:                    tokenizer.chat_template str              = {%- if tools %}\\n    {{- '<|im_start|>...\n",
         "llama_model_loader: - kv  33:               general.quantization_version u32              = 2\n",
         "llama_model_loader: - type  f32:  141 tensors\n",
         "llama_model_loader: - type  f16:  198 tensors\n",
         "[   1/ 339]                    token_embd.weight - [ 3584, 152064,     1,     1], type =    f16, converting to q4_0 .. size =  1039.50 MiB ->   292.36 MiB\n",
         "[   2/ 339]               blk.0.attn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[   3/ 339]                blk.0.ffn_down.weight - [18944,  3584,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[   4/ 339]                blk.0.ffn_gate.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[   5/ 339]                  blk.0.ffn_up.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[   6/ 339]                blk.0.ffn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[   7/ 339]                    blk.0.attn_k.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[   8/ 339]                  blk.0.attn_k.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[   9/ 339]             blk.0.attn_output.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[  10/ 339]                    blk.0.attn_q.bias - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[  11/ 339]                  blk.0.attn_q.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[  12/ 339]                    blk.0.attn_v.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[  13/ 339]                  blk.0.attn_v.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "...\n",
         "[  56/ 339]                  blk.4.attn_k.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[  57/ 339]             blk.4.attn_output.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[  58/ 339]                    blk.4.attn_q.bias - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[  59/ 339]                  blk.4.attn_q.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[  60/ 339]                    blk.4.attn_v.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[  61/ 339]                  blk.4.attn_v.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[  62/ 339]               blk.5.attn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[  63/ 339]                blk.5.ffn_down.weight - [18944,  3584,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[  64/ 339]                blk.5.ffn_gate.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[  65/ 339]                  blk.5.ffn_up.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[  66/ 339]                blk.5.ffn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[  67/ 339]                    blk.5.attn_k.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[  68/ 339]                  blk.5.attn_k.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[  69/ 339]             blk.5.attn_output.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[  70/ 339]                    blk.5.attn_q.bias - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[  71/ 339]                  blk.5.attn_q.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[  72/ 339]                    blk.5.attn_v.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[  73/ 339]                  blk.5.attn_v.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[  74/ 339]               blk.6.attn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[  75/ 339]                blk.6.ffn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[  76/ 339]                    blk.6.attn_k.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[  77/ 339]                  blk.6.attn_k.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[  78/ 339]             blk.6.attn_output.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[  79/ 339]                    blk.6.attn_q.bias - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[  80/ 339]                  blk.6.attn_q.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[  81/ 339]                    blk.6.attn_v.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[  82/ 339]                  blk.6.attn_v.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[  83/ 339]              blk.10.attn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[  84/ 339]               blk.10.ffn_down.weight - [18944,  3584,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[  85/ 339]               blk.10.ffn_gate.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[  86/ 339]                 blk.10.ffn_up.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[  87/ 339]               blk.10.ffn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[  88/ 339]                   blk.10.attn_k.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[  89/ 339]                 blk.10.attn_k.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[  90/ 339]            blk.10.attn_output.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[  91/ 339]                   blk.10.attn_q.bias - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[  92/ 339]                 blk.10.attn_q.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[  93/ 339]                   blk.10.attn_v.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[  94/ 339]                 blk.10.attn_v.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[  95/ 339]              blk.11.attn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[  96/ 339]               blk.11.ffn_down.weight - [18944,  3584,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[  97/ 339]               blk.11.ffn_gate.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[  98/ 339]                 blk.11.ffn_up.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[  99/ 339]               blk.11.ffn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 100/ 339]                   blk.11.attn_k.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 101/ 339]                 blk.11.attn_k.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 102/ 339]            blk.11.attn_output.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 103/ 339]                   blk.11.attn_q.bias - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 104/ 339]                 blk.11.attn_q.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 105/ 339]                   blk.11.attn_v.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 106/ 339]                 blk.11.attn_v.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 107/ 339]              blk.12.attn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 108/ 339]               blk.12.ffn_down.weight - [18944,  3584,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 109/ 339]               blk.12.ffn_gate.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 110/ 339]                 blk.12.ffn_up.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 111/ 339]               blk.12.ffn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 112/ 339]                   blk.12.attn_k.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 113/ 339]                 blk.12.attn_k.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 114/ 339]            blk.12.attn_output.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 115/ 339]                   blk.12.attn_q.bias - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 116/ 339]                 blk.12.attn_q.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 117/ 339]                   blk.12.attn_v.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 118/ 339]                 blk.12.attn_v.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 119/ 339]              blk.13.attn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 120/ 339]               blk.13.ffn_down.weight - [18944,  3584,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 121/ 339]               blk.13.ffn_gate.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 122/ 339]                 blk.13.ffn_up.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 123/ 339]               blk.13.ffn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 124/ 339]                   blk.13.attn_k.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 125/ 339]                 blk.13.attn_k.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 126/ 339]            blk.13.attn_output.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 127/ 339]                   blk.13.attn_q.bias - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 128/ 339]                 blk.13.attn_q.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 129/ 339]                   blk.13.attn_v.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 130/ 339]                 blk.13.attn_v.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 131/ 339]              blk.14.attn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 132/ 339]                 blk.14.ffn_up.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 133/ 339]               blk.14.ffn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 134/ 339]                   blk.14.attn_k.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 135/ 339]                 blk.14.attn_k.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 136/ 339]            blk.14.attn_output.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 137/ 339]                   blk.14.attn_q.bias - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 138/ 339]                 blk.14.attn_q.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 139/ 339]                   blk.14.attn_v.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 140/ 339]                 blk.14.attn_v.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 141/ 339]                blk.6.ffn_down.weight - [18944,  3584,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 142/ 339]                blk.6.ffn_gate.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 143/ 339]                  blk.6.ffn_up.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 144/ 339]               blk.7.attn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 145/ 339]                blk.7.ffn_down.weight - [18944,  3584,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 146/ 339]                blk.7.ffn_gate.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 147/ 339]                  blk.7.ffn_up.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 148/ 339]                blk.7.ffn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 149/ 339]                    blk.7.attn_k.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 150/ 339]                  blk.7.attn_k.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 151/ 339]             blk.7.attn_output.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 152/ 339]                    blk.7.attn_q.bias - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 153/ 339]                  blk.7.attn_q.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 154/ 339]                    blk.7.attn_v.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 155/ 339]                  blk.7.attn_v.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 156/ 339]               blk.8.attn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 157/ 339]                blk.8.ffn_down.weight - [18944,  3584,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 158/ 339]                blk.8.ffn_gate.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 159/ 339]                  blk.8.ffn_up.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 160/ 339]                blk.8.ffn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 161/ 339]                    blk.8.attn_k.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 162/ 339]                  blk.8.attn_k.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 163/ 339]             blk.8.attn_output.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 164/ 339]                    blk.8.attn_q.bias - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 165/ 339]                  blk.8.attn_q.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 166/ 339]                    blk.8.attn_v.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 167/ 339]                  blk.8.attn_v.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 168/ 339]               blk.9.attn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 169/ 339]                blk.9.ffn_down.weight - [18944,  3584,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 170/ 339]                blk.9.ffn_gate.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 171/ 339]                  blk.9.ffn_up.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 172/ 339]                blk.9.ffn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 173/ 339]                    blk.9.attn_k.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 174/ 339]                  blk.9.attn_k.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 175/ 339]             blk.9.attn_output.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 176/ 339]                    blk.9.attn_q.bias - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 177/ 339]                  blk.9.attn_q.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 178/ 339]                    blk.9.attn_v.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 179/ 339]                  blk.9.attn_v.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 180/ 339]               blk.14.ffn_down.weight - [18944,  3584,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 181/ 339]               blk.14.ffn_gate.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 182/ 339]              blk.15.attn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 183/ 339]               blk.15.ffn_down.weight - [18944,  3584,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 184/ 339]               blk.15.ffn_gate.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 185/ 339]                 blk.15.ffn_up.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 186/ 339]               blk.15.ffn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 187/ 339]                   blk.15.attn_k.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 188/ 339]                 blk.15.attn_k.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 189/ 339]            blk.15.attn_output.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 190/ 339]                   blk.15.attn_q.bias - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 191/ 339]                 blk.15.attn_q.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 192/ 339]                   blk.15.attn_v.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 193/ 339]                 blk.15.attn_v.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 194/ 339]              blk.16.attn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 195/ 339]               blk.16.ffn_down.weight - [18944,  3584,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 196/ 339]               blk.16.ffn_gate.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 197/ 339]                 blk.16.ffn_up.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 198/ 339]               blk.16.ffn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 199/ 339]                   blk.16.attn_k.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 200/ 339]                 blk.16.attn_k.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 201/ 339]            blk.16.attn_output.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 202/ 339]                   blk.16.attn_q.bias - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 203/ 339]                 blk.16.attn_q.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 204/ 339]                   blk.16.attn_v.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 205/ 339]                 blk.16.attn_v.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 206/ 339]              blk.17.attn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 207/ 339]               blk.17.ffn_down.weight - [18944,  3584,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 208/ 339]               blk.17.ffn_gate.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 209/ 339]                 blk.17.ffn_up.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 210/ 339]               blk.17.ffn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 211/ 339]                   blk.17.attn_k.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 212/ 339]                 blk.17.attn_k.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 213/ 339]            blk.17.attn_output.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 214/ 339]                   blk.17.attn_q.bias - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 215/ 339]                 blk.17.attn_q.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 216/ 339]                   blk.17.attn_v.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 217/ 339]                 blk.17.attn_v.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 218/ 339]              blk.18.attn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 219/ 339]               blk.18.ffn_down.weight - [18944,  3584,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 220/ 339]               blk.18.ffn_gate.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 221/ 339]                 blk.18.ffn_up.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 222/ 339]               blk.18.ffn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 223/ 339]                   blk.18.attn_k.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 224/ 339]                 blk.18.attn_k.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 225/ 339]            blk.18.attn_output.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 226/ 339]                   blk.18.attn_q.bias - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 227/ 339]                 blk.18.attn_q.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 228/ 339]                   blk.18.attn_v.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 229/ 339]                 blk.18.attn_v.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 230/ 339]              blk.19.attn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 231/ 339]               blk.19.ffn_down.weight - [18944,  3584,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 232/ 339]               blk.19.ffn_gate.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 233/ 339]                 blk.19.ffn_up.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 234/ 339]               blk.19.ffn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 235/ 339]                   blk.19.attn_k.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 236/ 339]                 blk.19.attn_k.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 237/ 339]            blk.19.attn_output.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 238/ 339]                   blk.19.attn_q.bias - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 239/ 339]                 blk.19.attn_q.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 240/ 339]                   blk.19.attn_v.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 241/ 339]                 blk.19.attn_v.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 242/ 339]              blk.20.attn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 243/ 339]               blk.20.ffn_down.weight - [18944,  3584,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 244/ 339]               blk.20.ffn_gate.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 245/ 339]                 blk.20.ffn_up.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 246/ 339]               blk.20.ffn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 247/ 339]                   blk.20.attn_k.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 248/ 339]                 blk.20.attn_k.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 249/ 339]            blk.20.attn_output.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 250/ 339]                   blk.20.attn_q.bias - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 251/ 339]                 blk.20.attn_q.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 252/ 339]                   blk.20.attn_v.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 253/ 339]                 blk.20.attn_v.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 254/ 339]              blk.21.attn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 255/ 339]               blk.21.ffn_down.weight - [18944,  3584,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 256/ 339]               blk.21.ffn_gate.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 257/ 339]                 blk.21.ffn_up.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 258/ 339]               blk.21.ffn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 259/ 339]                   blk.21.attn_k.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 260/ 339]                 blk.21.attn_k.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 261/ 339]            blk.21.attn_output.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 262/ 339]                   blk.21.attn_q.bias - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 263/ 339]                 blk.21.attn_q.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 264/ 339]                   blk.21.attn_v.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 265/ 339]                 blk.21.attn_v.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 266/ 339]              blk.22.attn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 267/ 339]               blk.22.ffn_gate.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 268/ 339]                 blk.22.ffn_up.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 269/ 339]               blk.22.ffn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 270/ 339]                   blk.22.attn_k.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 271/ 339]                 blk.22.attn_k.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 272/ 339]            blk.22.attn_output.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 273/ 339]                   blk.22.attn_q.bias - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 274/ 339]                 blk.22.attn_q.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 275/ 339]                   blk.22.attn_v.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 276/ 339]                 blk.22.attn_v.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 277/ 339]                        output.weight - [ 3584, 152064,     1,     1], type =    f16, size = 1039.500 MB\n",
         "[ 278/ 339]               blk.22.ffn_down.weight - [18944,  3584,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 279/ 339]              blk.23.attn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 280/ 339]               blk.23.ffn_down.weight - [18944,  3584,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 281/ 339]               blk.23.ffn_gate.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 282/ 339]                 blk.23.ffn_up.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 283/ 339]               blk.23.ffn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 284/ 339]                   blk.23.attn_k.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 285/ 339]                 blk.23.attn_k.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 286/ 339]            blk.23.attn_output.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 287/ 339]                   blk.23.attn_q.bias - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 288/ 339]                 blk.23.attn_q.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 289/ 339]                   blk.23.attn_v.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 290/ 339]                 blk.23.attn_v.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 291/ 339]              blk.24.attn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 292/ 339]               blk.24.ffn_down.weight - [18944,  3584,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 293/ 339]               blk.24.ffn_gate.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 294/ 339]                 blk.24.ffn_up.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 295/ 339]               blk.24.ffn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 296/ 339]                   blk.24.attn_k.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 297/ 339]                 blk.24.attn_k.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 298/ 339]            blk.24.attn_output.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 299/ 339]                   blk.24.attn_q.bias - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 300/ 339]                 blk.24.attn_q.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 301/ 339]                   blk.24.attn_v.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 302/ 339]                 blk.24.attn_v.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 303/ 339]              blk.25.attn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 304/ 339]               blk.25.ffn_down.weight - [18944,  3584,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 305/ 339]               blk.25.ffn_gate.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 306/ 339]                 blk.25.ffn_up.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 307/ 339]               blk.25.ffn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 308/ 339]                   blk.25.attn_k.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 309/ 339]                 blk.25.attn_k.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 310/ 339]            blk.25.attn_output.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 311/ 339]                   blk.25.attn_q.bias - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 312/ 339]                 blk.25.attn_q.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 313/ 339]                   blk.25.attn_v.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 314/ 339]                 blk.25.attn_v.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 315/ 339]              blk.26.attn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 316/ 339]               blk.26.ffn_down.weight - [18944,  3584,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 317/ 339]               blk.26.ffn_gate.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 318/ 339]                 blk.26.ffn_up.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 319/ 339]               blk.26.ffn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 320/ 339]                   blk.26.attn_k.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 321/ 339]                 blk.26.attn_k.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 322/ 339]            blk.26.attn_output.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 323/ 339]                   blk.26.attn_q.bias - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 324/ 339]                 blk.26.attn_q.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 325/ 339]                   blk.26.attn_v.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 326/ 339]                 blk.26.attn_v.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 327/ 339]              blk.27.attn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 328/ 339]               blk.27.ffn_down.weight - [18944,  3584,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 329/ 339]               blk.27.ffn_gate.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 330/ 339]                 blk.27.ffn_up.weight - [ 3584, 18944,     1,     1], type =    f16, converting to q4_0 .. size =   129.50 MiB ->    36.42 MiB\n",
         "[ 331/ 339]               blk.27.ffn_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 332/ 339]                   blk.27.attn_k.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 333/ 339]                 blk.27.attn_k.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 334/ 339]            blk.27.attn_output.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 335/ 339]                   blk.27.attn_q.bias - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "[ 336/ 339]                 blk.27.attn_q.weight - [ 3584,  3584,     1,     1], type =    f16, converting to q4_0 .. size =    24.50 MiB ->     6.89 MiB\n",
         "[ 337/ 339]                   blk.27.attn_v.bias - [  512,     1,     1,     1], type =    f32, size =    0.002 MB\n",
         "[ 338/ 339]                 blk.27.attn_v.weight - [ 3584,   512,     1,     1], type =    f16, converting to q4_0 .. size =     3.50 MiB ->     0.98 MiB\n",
         "[ 339/ 339]                   output_norm.weight - [ 3584,     1,     1,     1], type =    f32, size =    0.014 MB\n",
         "llama_model_quantize_internal: model size  = 14526.27 MB\n",
         "llama_model_quantize_internal: quant size  =  4833.57 MB\n"
        ]
       },
       {
        "name": "stdout",
        "output_type": "stream",
        "text": [
         "\n",
         "Conversion completed successfully. Output file: /mnt/workspace/nexa-sdk/Qwen2.5-7B-Instruct-q4_0.gguf\n",
         "\n",
         "Would you like to store this model in nexa list so you can run it with `nexa run <model_name>` anywhere and anytime? (y/N): \n",
         "Would you like to run the converted model? (y/N): Exiting without running the model.\n",
         "\n",
         "Converted model stored at /mnt/workspace/nexa-sdk/Qwen2.5-7B-Instruct-q4_0.gguf\n",
         "\n",
         "You can run the converted model with command: nexa run /mnt/workspace/nexa-sdk/Qwen2.5-7B-Instruct-q4_0.gguf -lp -mt NLP\n"
        ]
       }
      ],
      "source": [
       "%%bash\n",
       "(echo \"1\"; echo \"1\"; echo \"N\"; echo \"N\") | nexa convert -ms Qwen/Qwen2.5-7B-Instruct"
      ]
     },
     {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "After selecting the model type (`NLP (text generation)`), choose one of the offered quantization types, for example `q4_0`; the quantization then runs. Afterwards, run the quantized model locally. As before, `nexa run` is best executed in a terminal environment.\n",
       "\n",
       "```shell\n",
       "nexa run /mnt/workspace/nexa-sdk/Qwen2.5-7B-Instruct-q4_0.gguf -lp -mt NLP\n",
       "```"
      ]
     },
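     {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
       "As a quick sanity check on the conversion log above, `llama_model_quantize_internal` reports the model shrinking from 14526.27 MB (f16) to 4833.57 MB (q4_0), roughly a 3x reduction. This is consistent with quantized tensors going from 16 bits per weight to about 4.5 bits (q4_0 stores 4-bit weights plus a per-block scale), while the large `output.weight` tensor stays in f16, which keeps the overall ratio below the per-tensor 3.6x:"
      ]
     },
     {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
       "# Sizes reported by llama_model_quantize_internal in the conversion log above\n",
       "f16_size_mb = 14526.27\n",
       "q4_0_size_mb = 4833.57\n",
       "\n",
       "ratio = f16_size_mb / q4_0_size_mb\n",
       "print(f\"compression ratio: {ratio:.2f}x\")  # roughly 3x overall\n"
      ]
     },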
     {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### Local Server Deployment and API Calls\n",
        "\n",
        "Use the `nexa server` command to deploy the model behind a local server, which can then be queried through API calls.\n",
        "\n",
        "Run the following command and select one of the available GGUF model files to download, for example `qwen2.5-coder-7b-instruct-q4_0.gguf` (option 10 in the list below)."
      ]
     },
     {
      "cell_type": "code",
      "execution_count": 6,
      "metadata": {
       "ExecutionIndicator": {
        "show": false
       },
       "execution": {
        "iopub.execute_input": "2024-11-21T02:20:17.916921Z",
        "iopub.status.busy": "2024-11-21T02:20:17.916596Z",
        "iopub.status.idle": "2024-11-21T02:21:22.165958Z",
        "shell.execute_reply": "2024-11-21T02:21:22.165115Z",
        "shell.execute_reply.started": "2024-11-21T02:20:17.916900Z"
       },
       "tags": [],
       "vscode": {
        "languageId": "shellscript"
       }
      },
      "outputs": [
       {
        "name": "stderr",
        "output_type": "stream",
        "text": [
         "INFO:     Started server process [1811]\n",
         "INFO:     Waiting for application startup.\n",
         "2024-11-21 10:20:21.137103: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
         "2024-11-21 10:20:21.175952: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
         "To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n",
         "2024-11-21 10:20:21.829693: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n"
        ]
       },
       {
        "name": "stdout",
        "output_type": "stream",
        "text": [
         "No model type specified. Running with default model type: NLP\n",
         "You can specify a different model type using the -mt flag\n",
         "Available gguf models in the repository:\n",
         "1. qwen2.5-coder-7b-instruct-fp16-00001-of-00004.gguf\n",
         "2. qwen2.5-coder-7b-instruct-fp16-00002-of-00004.gguf\n",
         "3. qwen2.5-coder-7b-instruct-fp16-00003-of-00004.gguf\n",
         "4. qwen2.5-coder-7b-instruct-fp16-00004-of-00004.gguf\n",
         "5. qwen2.5-coder-7b-instruct-fp16.gguf\n",
         "6. qwen2.5-coder-7b-instruct-q2_k.gguf\n",
         "7. qwen2.5-coder-7b-instruct-q3_k_m.gguf\n",
         "8. qwen2.5-coder-7b-instruct-q4_0-00001-of-00002.gguf\n",
         "9. qwen2.5-coder-7b-instruct-q4_0-00002-of-00002.gguf\n",
         "10. qwen2.5-coder-7b-instruct-q4_0.gguf\n",
         "11. qwen2.5-coder-7b-instruct-q4_k_m-00001-of-00002.gguf\n",
         "12. qwen2.5-coder-7b-instruct-q4_k_m-00002-of-00002.gguf\n",
         "13. qwen2.5-coder-7b-instruct-q4_k_m.gguf\n",
         "14. qwen2.5-coder-7b-instruct-q5_0-00001-of-00002.gguf\n",
         "15. qwen2.5-coder-7b-instruct-q5_0-00002-of-00002.gguf\n",
         "16. qwen2.5-coder-7b-instruct-q5_0.gguf\n",
         "17. qwen2.5-coder-7b-instruct-q5_k_m-00001-of-00002.gguf\n",
         "18. qwen2.5-coder-7b-instruct-q5_k_m-00002-of-00002.gguf\n",
         "19. qwen2.5-coder-7b-instruct-q5_k_m.gguf\n",
         "20. qwen2.5-coder-7b-instruct-q6_k-00001-of-00002.gguf\n",
         "21. qwen2.5-coder-7b-instruct-q6_k-00002-of-00002.gguf\n",
         "22. qwen2.5-coder-7b-instruct-q6_k.gguf\n",
         "23. qwen2.5-coder-7b-instruct-q8_0-00001-of-00003.gguf\n",
         "24. qwen2.5-coder-7b-instruct-q8_0-00002-of-00003.gguf\n",
         "25. qwen2.5-coder-7b-instruct-q8_0-00003-of-00003.gguf\n",
         "26. qwen2.5-coder-7b-instruct-q8_0.gguf\n"
        ]
       },
       {
        "name": "stderr",
        "output_type": "stream",
        "text": [
         "INFO:     Application startup complete.\n",
         "INFO:     Uvicorn running on http://localhost:8085 (Press CTRL+C to quit)\n",
         "INFO:     Shutting down\n"
        ]
       },
       {
        "name": "stdout",
        "output_type": "stream",
        "text": [
         "Error while terminating subprocess (pid=1808): \n"
        ]
       },
       {
        "name": "stderr",
        "output_type": "stream",
        "text": [
         "INFO:     Waiting for application shutdown.\n",
         "INFO:     Application shutdown complete.\n",
         "INFO:     Finished server process [1811]\n"
        ]
       },
       {
        "name": "stdout",
        "output_type": "stream",
        "text": [
         "Please enter the number of the model you want to download and use: You have selected: qwen2.5-coder-7b-instruct-q4_0.gguf\n",
         "Successfully pulled model Qwen/Qwen2.5-Coder-7B-Instruct-GGUF:qwen2.5-coder-7b-instruct-q4_0.gguf to /root/.cache/nexa/hub/modelscope/Qwen/Qwen2.5-Coder-7B-Instruct-GGUF/qwen2.5-coder-7b-instruct-q4_0.gguf\n",
         "model_type: NLP\n"
        ]
       }
      ],
      "source": [
       "%%bash\n",
       "echo \"10\" | nexa server -ms Qwen/Qwen2.5-Coder-7B-Instruct-GGUF --port 8085"
      ]
     },
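     {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
       "With the server listening on port 8085, the OpenAI-style `/v1/chat/completions` endpoint can also be exercised from the command line. A minimal sketch using `curl`, assuming the server started in the previous cell is still running (the request fields mirror those used in the Python example below):\n",
       "\n",
       "```shell\n",
       "curl -s http://localhost:8085/v1/chat/completions \\\n",
       "  -H 'Content-Type: application/json' \\\n",
       "  -d '{\"messages\": [{\"role\": \"user\", \"content\": \"Tell me a story\"}], \"max_tokens\": 128, \"temperature\": 0.1, \"stream\": false}'\n",
       "```"
      ]
     },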
     {
      "cell_type": "code",
      "execution_count": 8,
      "metadata": {
       "execution": {
        "iopub.execute_input": "2024-11-21T02:22:08.625518Z",
        "iopub.status.busy": "2024-11-21T02:22:08.625212Z",
        "iopub.status.idle": "2024-11-21T02:22:28.749811Z",
        "shell.execute_reply": "2024-11-21T02:22:28.749330Z",
        "shell.execute_reply.started": "2024-11-21T02:22:08.625496Z"
       },
       "tags": []
      },
      "outputs": [
       {
        "name": "stdout",
        "output_type": "stream",
        "text": [
         "Response: {'id': '0e82fa3e-58be-4a0a-b2f2-fe5c81fc4240', 'object': 'chat.completion', 'created': 1732155748.7462492, 'choices': [{'message': {'role': 'assistant', 'content': 'Once upon a time, in a small village nestled among rolling hills, there lived a young girl named Lily. Lily was known for her kind heart and her love for nature. She spent most of her days exploring the nearby forest, collecting flowers, and helping her grandmother with her chores.\\n\\nOne sunny morning, as Lily was wandering through the forest, she stumbled upon a hidden clearing. In the center of the clearing stood a beautiful, ancient tree. The tree was unlike any other she had ever seen; its bark was smooth and its leaves shimmered in the sunlight.\\n\\nAs Lily approached the tree, she noticed a small, glowing object lying on'}, 'logprobs': None}]}\n"
        ]
       }
      ],
      "source": [
       "import requests\n",
       "import json\n",
       "\n",
        "# Define the request URL\n",
       "url = \"http://localhost:8085/v1/chat/completions\"\n",
       "\n",
        "# Define the request body\n",
       "request_body = {\n",
       "  \"messages\": [\n",
       "    {\n",
       "      \"role\": \"user\",\n",
       "      \"content\": \"Tell me a story\"\n",
       "    }\n",
       "  ],\n",
       "  \"max_tokens\": 128,\n",
       "  \"temperature\": 0.1,\n",
       "  \"stream\": False,\n",
       "  \"stop_words\": []\n",
       "}\n",
        "# Serialize the request body to JSON\n",
       "json_data = json.dumps(request_body)\n",
       "\n",
        "# Send the POST request\n",
       "response = requests.post(url, data=json_data, headers={'Content-Type': 'application/json'})\n",
       "\n",
        "# Check the response status code\n",
       "if response.status_code == 200:\n",
        "    # Parse the response body\n",
       "    response_data = response.json()\n",
       "    print(\"Response:\", response_data)\n",
       "else:\n",
       "    print(f\"Error: {response.status_code} - {response.text}\")"
      ]
     }
    ],
    "metadata": {
     "kernelspec": {
      "display_name": "Python 3 (ipykernel)",
      "language": "python",
      "name": "python3"
     },
     "language_info": {
      "codemirror_mode": {
       "name": "ipython",
       "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.10.14"
     }
    },
    "nbformat": 4,
    "nbformat_minor": 4
   }
   