{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# HDM Train Kaggle\n",
    "Created by [licyk](https://github.com/licyk)\n",
    "\n",
    "Jupyter Notebook 仓库：[licyk/sd-webui-all-in-one](https://github.com/licyk/sd-webui-all-in-one)\n",
    "\n",
    "\n",
    "## 简介\n",
    "一个在 [Kaggle](https://www.kaggle.com) 部署 [HDM](https://github.com/KohakuBlueleaf/HDM) 的 Jupyter Notebook，可用于 HDM 模型的训练。[Colab](https://colab.research.google.com) 同样也可以使用。\n",
    "\n",
    "还有这个只是写来玩的，可能有 BUG 跑不了。\n",
    "\n",
    "<h2 style=\"color: red;\">切记，不要在 Kaggle 使用包含 NSFW 的训练集，这将导致 Kaggle 账号被封禁！！！</h2>\n",
    "\n",
    "## 不同运行单元的功能\n",
    "该 Notebook 分为以下几个单元：\n",
    "\n",
    "- [功能初始化](#功能初始化)\n",
    "- [参数配置](#参数配置)\n",
    "- [安装环境](#安装环境)\n",
    "- [模型训练](#模型训练)\n",
    "- [模型上传](#模型上传)\n",
    "\n",
    "使用时请按顺序运行笔记单元。\n",
    "\n",
    "通常情况下[功能初始化](#功能初始化)和[模型上传](#模型上传)单元的内容无需修改，其他单元包含不同功能的注释，可阅读注释获得帮助。\n",
    "\n",
    "[参数配置](#参数配置)单元用于修改安装，训练，上传模型时的配置。\n",
    "\n",
    "[安装](#安装)单元执行安装训练环境的命令和下载模型 / 训练集的命令，可根据需求进行修改。\n",
    "\n",
    "[模型训练](#模型训练)执行训练模型的命令，需要根据自己的需求进行修改，该单元也提供一些训练参数的例子，可在例子的基础上进行修改。\n",
    "\n",
    "如果需要快速取消注释，可以选中代码，按下`Ctrl + /`取消注释。\n",
    "\n",
    "\n",
    "## 提示\n",
    "1. 不同单元中包含注释, 可阅读注释获得帮助。\n",
    "2. 训练代码的部分需要根据自己的需求进行更改。\n",
    "3. 推荐使用 Kaggle 的 `Save Version` 的功能运行笔记，可让 Kaggle 笔记在无人值守下保持运行，直至所有单元运行完成。\n",
    "4. 如果有 [HuggingFace](https://huggingface.co) 账号或者 [ModelScope](https://modelscope.cn) 账号，可通过填写 Token 和仓库名后实现自动上传训练好的模型，仓库需要手动创建。\n",
    "5. 进入 Kaggle 笔记后，在 Kaggle 的右侧栏可以调整 kaggle 笔记的设置，也可以上传训练集等。注意，在 Kaggle 笔记的`Session options`->`ACCELERATOR`中，需要选择`GPU T4 x 2`，才能使用 GPU 进行模型训练。\n",
    "6. 使用 Kaggle 进行模型训练时，训练集中最好没有 NSFW 内容，否则可能会导致 Kaggle 账号被封禁。\n",
    "7. 不同单元的标题下方包含快捷跳转链接，可使用跳转链接翻阅 Notebook。\n",
    "8. 该 Notebook 的使用方法可阅读：</br>[使用 HuggingFace / ModelScope 保存和下载文件 - licyk的小窝](https://licyk.netlify.app/2025/01/16/use-huggingface-or-modelscope-to-save-file/)</br>[使用 Kaggle 进行模型训练 - licyk的小窝](https://licyk.netlify.app/2025/01/16/use-kaggle-to-training-sd-model)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 功能初始化\n",
    "通常不需要修改该单元的内容  \n",
    "1. [[下一个单元 →](#参数配置)]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# SD WebUI All In One 功能初始化部分, 通常不需要修改\n",
    "# 如果需要查看完整代码实现, 可阅读: https://github.com/licyk/sd-webui-all-in-one/blob/main/sd_webui_all_in_one\n",
    "#################################################################################################################\n",
    "# SD_WEBUI_ALL_IN_ONE_URL, FORCE_DOWNLOAD_CORE 参数可根据需求修改, 通常保持默认即可\n",
    "SD_WEBUI_ALL_IN_ONE_URL = \"https://github.com/licyk/sd-webui-all-in-one@main\" # SD WebUI All In One 核心下载地址\n",
    "FORCE_DOWNLOAD_CORE = False # 设置为 True 时, 即使 SD WebUI All In One 已存在也会重新下载\n",
    "#################################################################################################################\n",
    "import os\n",
    "from pathlib import Path\n",
    "\n",
    "os.environ[\"SD_WEBUI_ALL_IN_ONE_LOGGER_COLOR\"] = \"0\"\n",
    "try:\n",
    "    _ = JUPYTER_ROOT_PATH  # type: ignore # noqa: F821\n",
    "except Exception:\n",
    "    JUPYTER_ROOT_PATH = os.getcwd()\n",
    "!python -c \"import sd_webui_all_in_one\" &> /dev/null && [ \"{FORCE_DOWNLOAD_CORE}\" != \"True\" ] || python -m pip install \"git+{SD_WEBUI_ALL_IN_ONE_URL}\"\n",
    "from sd_webui_all_in_one import git_warpper, logger, VERSION, BaseManager\n",
    "from sd_webui_all_in_one.env import config_wandb_token, configure_pip\n",
    "from sd_webui_all_in_one.env_manager import install_manager_depend, install_pytorch, pip_install\n",
    "from sd_webui_all_in_one.mirror_manager import set_mirror\n",
    "from sd_webui_all_in_one.utils import check_gpu\n",
    "logger.info(\"SD WebUI All In One 核心模块初始化完成, 版本: %s\", VERSION)\n",
    "#################################################################################################################\n",
    "\n",
    "\n",
    "class HDMTrainManager(BaseManager):\n",
    "    def install(\n",
    "        self,\n",
    "        torch_ver: str | list | None = None,\n",
    "        xformers_ver: str | list | None = None,\n",
    "        model_path: str | Path = None,\n",
    "        model_list: list[str, int] | None = None,\n",
    "        use_uv: bool | None = True,\n",
    "        pypi_index_mirror: str | None = None,\n",
    "        pypi_extra_index_mirror: str | None = None,\n",
    "        pypi_find_links_mirror: str | None = None,\n",
    "        github_mirror: str | list | None = None,\n",
    "        huggingface_mirror: str | None = None,\n",
    "        pytorch_mirror: str | None = None,\n",
    "        hdm_repo: str | None = None,\n",
    "        retry: int | None = 3,\n",
    "        huggingface_token: str | None = None,\n",
    "        modelscope_token: str | None = None,\n",
    "        wandb_token: str | None = None,\n",
    "        git_username: str | None = None,\n",
    "        git_email: str | None = None,\n",
    "        check_avaliable_gpu: bool | None = False,\n",
    "        enable_tcmalloc: bool | None = True,\n",
    "    ) -> None:\n",
    "        \"\"\"安装 HDM 和其余环境\n",
    "\n",
    "        :param torch_ver`(str|None)`: 指定的 PyTorch 软件包包名, 并包括版本号\n",
    "        :param xformers_ver`(str|None)`: 指定的 xFormers 软件包包名, 并包括版本号\n",
    "        :param model_path`(str|Path|None)`: 指定模型下载的路径\n",
    "        :param model_list`(list[str|int]|None)`: 模型下载列表\n",
    "        :param use_uv`(bool|None)`: 使用 uv 替代 Pip 进行 Python 软件包的安装\n",
    "        :param pypi_index_mirror`(str|None)`: PyPI Index 镜像源链接\n",
    "        :param pypi_extra_index_mirror`(str|None)`: PyPI Extra Index 镜像源链接\n",
    "        :param pypi_find_links_mirror`(str|None)`: PyPI Find Links 镜像源链接\n",
    "        :param github_mirror`(str|list|None)`: Github 镜像源链接或者镜像源链接列表\n",
    "        :param huggingface_mirror`(str|None)`: HuggingFace 镜像源链接\n",
    "        :param pytorch_mirror`(str|None)`: PyTorch 镜像源链接\n",
    "        :param hdm_repo`(str|None)`: HDM 仓库地址, 未指定时默认为`https://github.com/KohakuBlueleaf/HDM`\n",
    "        :param retry`(int|None)`: 设置下载模型失败时重试次数\n",
    "        :param huggingface_token`(str|None)`: 配置 HuggingFace Token\n",
    "        :param modelscope_tokenn`(str|None)`: 配置 ModelScope Token\n",
    "        :param wandb_token`(str|None)`: 配置 WandB Token\n",
    "        :param git_username`(str|None)`: Git 用户名\n",
    "        :param git_email`(str|None)`: Git 邮箱\n",
    "        :param check_avaliable_gpu`(bool|None)`: 检查是否有可用的 GPU, 当 GPU 不可用时引发`Exception`\n",
    "        :param enable_tcmalloc`(bool|None)`: 启用 TCMalloc 内存优化\n",
    "        :notes\n",
    "            self.install() 将会以下几件事\n",
    "            1. 配置 PyPI / Github / HuggingFace 镜像源\n",
    "            2. 配置 Pip / uv\n",
    "            3. 安装管理工具自身依赖\n",
    "            4. 安装 HDM\n",
    "            5. 安装 PyTorch / xFormers\n",
    "            6. 安装 HDM 的依赖\n",
    "            7. 下载模型\n",
    "            8. 配置 HuggingFace / ModelScope / WandB Token 环境变量\n",
    "            9. 配置其他工具\n",
    "        \"\"\"\n",
    "        logger.info(\"配置 HDM 环境中\")\n",
    "        os.chdir(self.workspace)\n",
    "        hdm_path = self.workspace / self.workfolder\n",
    "        hdm_repo = hdm_repo if hdm_repo is not None else \"https://github.com/KohakuBlueleaf/HDM\"\n",
    "        model_path = model_path if model_path is not None else (\n",
    "            self.workspace / \"hdm-models\")\n",
    "        model_list = model_list if model_list else []\n",
    "        # 检查是否有可用的 GPU\n",
    "        if check_avaliable_gpu and not check_gpu():\n",
    "            raise Exception(\n",
    "                \"没有可用的 GPU, 请在 kaggle -> Notebook -> Session options -> ACCELERATOR 选择 GPU T4 x 2\\n如果不能使用 GPU, 请检查 Kaggle 账号是否绑定了手机号或者尝试更换账号!\")\n",
    "        # 配置镜像源\n",
    "        set_mirror(\n",
    "            pypi_index_mirror=pypi_index_mirror,\n",
    "            pypi_extra_index_mirror=pypi_extra_index_mirror,\n",
    "            pypi_find_links_mirror=pypi_find_links_mirror,\n",
    "            github_mirror=github_mirror,\n",
    "            huggingface_mirror=huggingface_mirror\n",
    "        )\n",
    "        configure_pip()  # 配置 Pip / uv\n",
    "        install_manager_depend(use_uv)  # 准备 Notebook 的运行依赖\n",
    "        # 下载 HDM\n",
    "        git_warpper.clone(\n",
    "            repo=hdm_repo,\n",
    "            path=hdm_path,\n",
    "        )\n",
    "        git_warpper.update(hdm_path)  # 更新 HDM\n",
    "        # 安装 HDM 的依赖\n",
    "        pip_install(\n",
    "            \"-e\",\n",
    "            f\"{hdm_path}[finetune,tipo,liger]\",\n",
    "            use_uv=use_uv,\n",
    "        )\n",
    "        # 安装 PyTorch 和 xFormers\n",
    "        install_pytorch(\n",
    "            torch_package=torch_ver,\n",
    "            xformers_package=xformers_ver,\n",
    "            pytorch_mirror=pytorch_mirror,\n",
    "            use_uv=use_uv\n",
    "        )\n",
    "        # 更新 urllib3\n",
    "        try:\n",
    "            pip_install(\n",
    "                \"urllib3\",\n",
    "                \"--upgrade\",\n",
    "                use_uv=False\n",
    "            )\n",
    "        except Exception as e:\n",
    "            logger.error(\"更新 urllib3 时发生错误: %s\", e)\n",
    "        try:\n",
    "            pip_install(\n",
    "                \"numpy==1.26.4\",\n",
    "                use_uv=use_uv\n",
    "            )\n",
    "        except Exception as e:\n",
    "            logger.error(\"降级 numpy 时发生错误: %s\", e)\n",
    "        self.get_model_from_list(\n",
    "            path=model_path,\n",
    "            model_list=model_list,\n",
    "            retry=retry\n",
    "        )\n",
    "        self.restart_repo_manager(\n",
    "            hf_token=huggingface_token,\n",
    "            ms_token=modelscope_token,\n",
    "        )\n",
    "        config_wandb_token(wandb_token)\n",
    "        git_warpper.set_git_config(\n",
    "            username=git_username,\n",
    "            email=git_email,\n",
    "        )\n",
    "        enable_tcmalloc and self.tcmalloc.configure_tcmalloc()\n",
    "        logger.info(\"HDM 环境配置完成\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 参数配置\n",
    "设置必要的参数, 根据注释说明进行修改  \n",
    "2. [[← 上一个单元](#功能初始化)|[下一个单元 →](#安装环境)]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 环境设置\n",
    "WORKSPACE = \"/kaggle\" # 工作路径, 通常不需要修改\n",
    "WORKFOLDER = \"HDM\" # 工作路径中文件夹名称, 通常不需要修改\n",
    "HDM_REPO = \"https://github.com/KohakuBlueleaf/HDM\" # HDM 仓库地址\n",
    "TORCH_VER = \"\" # PyTorch 版本\n",
    "XFORMERS_VER = \"\" # xFormers 版本\n",
    "USE_UV = True # 使用 uv 加速 Python 软件包安装, 修改为 True 为启用, False 为禁用\n",
    "PIP_INDEX_MIRROR = \"https://pypi.python.org/simple\" # PyPI 主镜像源\n",
    "PIP_EXTRA_INDEX_MIRROR = \"https://download.pytorch.org/whl/cu128\" # PyPI 扩展镜像源\n",
    "PYTORCH_MIRROR = \"https://download.pytorch.org/whl/cu124\" # 用于下载 PyTorch 的镜像源\n",
    "PIP_FIND_LINKS_MIRROR = \"https://download.pytorch.org/whl/cu121/torch_stable.html\" # PyPI 扩展镜像源\n",
    "HUGGINGFACE_MIRROR = \"https://hf-mirror.com\" # HuggingFace 镜像源\n",
    "GITHUB_MIRROR = [ # Github 镜像源\n",
    "    \"https://ghfast.top/https://github.com\",\n",
    "    \"https://mirror.ghproxy.com/https://github.com\",\n",
    "    \"https://ghproxy.net/https://github.com\",\n",
    "    \"https://gh.api.99988866.xyz/https://github.com\",\n",
    "    \"https://gh-proxy.com/https://github.com\",\n",
    "    \"https://ghps.cc/https://github.com\",\n",
    "    \"https://gh.idayer.com/https://github.com\",\n",
    "    \"https://ghproxy.1888866.xyz/github.com\",\n",
    "    \"https://slink.ltd/https://github.com\",\n",
    "    \"https://github.boki.moe/github.com\",\n",
    "    \"https://github.moeyy.xyz/https://github.com\",\n",
    "    \"https://gh-proxy.net/https://github.com\",\n",
    "    \"https://gh-proxy.ygxz.in/https://github.com\",\n",
    "    \"https://wget.la/https://github.com\",\n",
    "    \"https://kkgithub.com\",\n",
    "    \"https://gitclone.com/github.com\",\n",
    "]\n",
    "CHECK_AVALIABLE_GPU = False # 检查可用的 GPU, 当 GPU 不可用时强制终止安装进程\n",
    "RETRY = 3 # 重试下载次数\n",
    "DOWNLOAD_THREAD = 16 # 下载线程\n",
    "ENABLE_TCMALLOC = True # 启用 TCMalloc 内存优化\n",
    "UPDATE_CORE = True # 更新内核\n",
    "\n",
    "##############################################################################\n",
    "\n",
    "# 模型上传设置, 使用 HuggingFace / ModelScope 上传训练好的模型\n",
    "# HuggingFace: https://huggingface.co\n",
    "# ModelScope: https://modelscope.cn\n",
    "USE_HF_TO_SAVE_MODEL = False # 使用 HuggingFace 保存训练好的模型, 修改为 True 为启用, False 为禁用 (True / False)\n",
    "USE_MS_TO_SAVE_MODEL = False # 使用 ModelScope 保存训练好的模型, 修改为 True 为启用, False 为禁用 (True / False)\n",
    "\n",
    "# Token 配置, 用于上传 / 下载模型 (部分模型下载需要 Token 进行验证)\n",
    "# HuggingFace Token 在 Account -> Settings -> Access Tokens 中获取\n",
    "HF_TOKEN = \"\" # HuggingFace Token\n",
    "# ModelScope Token 在 首页 -> 访问令牌 -> SDK 令牌 中获取\n",
    "MS_TOKEN = \"\" # ModelScope Token\n",
    "\n",
    "# 用于上传模型的 HuggingFace 模型仓库的 ID, 当仓库不存在时则尝试新建一个\n",
    "HF_REPO_ID = \"username/reponame\" # HuggingFace 仓库的 ID (格式: \"用户名/仓库名\")\n",
    "HF_REPO_TYPE = \"model\" # HuggingFace 仓库的种类 (可选的类型为: model / dataset / space), 如果在 HuggingFace 新建的仓库为模型仓库则不需要修改\n",
    "# HuggingFace 仓库类型和对应名称:\n",
    "# model: 模型仓库\n",
    "# dataset: 数据集仓库\n",
    "# space: 在线运行空间仓库\n",
    "\n",
    "# 用于上传模型的 ModelScope 模型仓库的 ID, 当仓库不存在时则尝试新建一个\n",
    "MS_REPO_ID = \"username/reponame\" # ModelScope 仓库的 ID (格式: \"用户名/仓库名\")\n",
    "MS_REPO_TYPE = \"model\" # ModelScope 仓库的种类 (model / dataset / space), 如果在 ModelScope 新建的仓库为模型仓库则不需要修改\n",
    "# ModelScope 仓库类型和对应名称:\n",
    "# model: 模型仓库\n",
    "# dataset: 数据集仓库\n",
    "# space: 创空间仓库\n",
    "\n",
    "# 设置自动创建仓库时仓库的可见性, False 为私有仓库(不可见), True 为公有仓库(可见), 通常保持默认即可\n",
    "HF_REPO_VISIBILITY = False # 设置新建的 HuggingFace 仓库可见性 (True / False)\n",
    "MS_REPO_VISIBILITY = False # 设置新建的 ModelScope 仓库可见性 (True / False)\n",
    "\n",
    "# Git 信息设置, 可以使用默认值\n",
    "GIT_USER_EMAIL = \"username@example.com\" # Git 的邮箱\n",
    "GIT_USER_NAME = \"username\" # Git 的用户名\n",
    "\n",
    "##############################################################################\n",
    "\n",
    "# 训练日志设置, 使用 WandB 记录训练日志, 使用 WandB 可远程查看实时训练日志\n",
    "# WandB Token 可在 https://wandb.ai/authorize 中获取\n",
    "WANDB_TOKEN = \"\" # WandB Token\n",
    "\n",
    "##############################################################################\n",
    "\n",
    "# 路径设置, 通常保持默认即可\n",
    "INPUT_DATASET_PATH = \"/kaggle/dataset\" # 训练集保存的路径\n",
    "OUTPUT_PATH = \"/kaggle/working/model\" # 训练时模型保存的路径\n",
    "HDM_MODEL_PATH = \"/kaggle/hdm-models\" # 模型下载到的路径\n",
    "KAGGLE_INPUT_PATH = \"/kaggle/input\" # Kaggle Input 的路径\n",
    "\n",
    "##############################################################################\n",
    "\n",
    "# 训练模型设置, 在安装时将会下载选择的模型\n",
    "# 下面举个例子:\n",
    "# HDM_MODEL = [\n",
    "#     [\"https://huggingface.co/licyk/sd-model/resolve/main/sd_1.5/v1-5-pruned-emaonly.safetensors\", 0],\n",
    "#     [\"https://huggingface.co/licyk/sd-model/resolve/main/sd_1.5/animefull-final-pruned.safetensors\", 1],\n",
    "#     [\"https://huggingface.co/licyk/sd-model/resolve/main/sd_1.5/Counterfeit-V3.0_fp16.safetensors\", 0],\n",
    "#     [\"https://huggingface.co/licyk/sd-model/resolve/main/sdxl_1.0/Illustrious-XL-v0.1.safetensors\", 1, \"Illustrious.safetensors\"]\n",
    "# ]\n",
    "# \n",
    "# 在这个例子中, 第一个参数指定了模型的下载链接, 第二个参数设置了是否要下载这个模型, 当这个值为 1 时则下载该模型\n",
    "# 第三个参数是可选参数, 用于指定下载到本地后的文件名称\n",
    "# \n",
    "# 则上面的例子中\n",
    "# https://huggingface.co/licyk/sd-model/resolve/main/sd_1.5/animefull-final-pruned.safetensors 和 \n",
    "# https://huggingface.co/licyk/sd-model/resolve/main/sdxl_1.0/Illustrious-XL-v0.1.safetensors 下载链接所指的文件将被下载\n",
    "# https://huggingface.co/licyk/sd-model/resolve/main/sd_1.5/animefull-final-pruned.safetensors 的文件下载到本地后名称为 animefull-final-pruned.safetensors\n",
    "# 并且 https://huggingface.co/licyk/sd-model/resolve/main/sdxl_1.0/Illustrious-XL-v0.1.safetensors 所指的文件将被重命名为 Illustrious.safetensors\n",
    "\n",
    "HDM_MODEL = [\n",
    "    [\"https://huggingface.co/KBlueLeaf/HDM-xut-340M-anime/resolve/main/hdm-xut-340M-512px-note.safetensors\", 1],\n",
    "    [\"https://huggingface.co/KBlueLeaf/HDM-xut-340M-anime/resolve/main/hdm-xut-340M-768px-note.safetensors\", 1],\n",
    "    [\"https://huggingface.co/KBlueLeaf/HDM-xut-340M-anime/resolve/main/hdm-xut-340M-1024px-note.safetensors\", 1],\n",
    "]\n",
    "\n",
    "##############################################################################\n",
    "# 下面为初始化参数部分, 不需要修改\n",
    "INSTALL_PARAMS = {\n",
    "    \"torch_ver\": TORCH_VER or None,\n",
    "    \"xformers_ver\": XFORMERS_VER or None,\n",
    "    \"model_path\": HDM_MODEL_PATH or None,\n",
    "    \"model_list\": HDM_MODEL,\n",
    "    \"use_uv\": USE_UV,\n",
    "    \"pypi_index_mirror\": PIP_INDEX_MIRROR or None,\n",
    "    \"pypi_extra_index_mirror\": PIP_EXTRA_INDEX_MIRROR or None,\n",
    "    # Kaggle 的环境暂不需要以下镜像源\n",
    "    # \"pypi_find_links_mirror\": PIP_FIND_LINKS_MIRROR or None,\n",
    "    # \"github_mirror\": GITHUB_MIRROR or None,\n",
    "    # \"huggingface_mirror\": HUGGINGFACE_MIRROR or None,\n",
    "    \"pytorch_mirror\": PYTORCH_MIRROR or None,\n",
    "    \"hdm_repo\": HDM_REPO or None,\n",
    "    \"retry\": RETRY,\n",
    "    \"huggingface_token\": HF_TOKEN or None,\n",
    "    \"modelscope_token\": MS_TOKEN or None,\n",
    "    \"wandb_token\": WANDB_TOKEN or None,\n",
    "    \"git_username\": GIT_USER_NAME or None,\n",
    "    \"git_email\": GIT_USER_EMAIL or None,\n",
    "    \"check_avaliable_gpu\": CHECK_AVALIABLE_GPU,\n",
    "    \"enable_tcmalloc\": ENABLE_TCMALLOC,\n",
    "    \"custom_sys_pkg_cmd\": None,\n",
    "    \"update_core\": UPDATE_CORE,\n",
    "}\n",
    "HF_REPO_UPLOADER_PARAMS = {\n",
    "    \"api_type\": \"huggingface\",\n",
    "    \"repo_id\": HF_REPO_ID,\n",
    "    \"repo_type\": HF_REPO_TYPE,\n",
    "    \"visibility\": HF_REPO_VISIBILITY,\n",
    "    \"upload_path\": OUTPUT_PATH,\n",
    "    \"retry\": RETRY,\n",
    "}\n",
    "MS_REPO_UPLOADER_PARAMS = {\n",
    "    \"api_type\": \"modelscope\",\n",
    "    \"repo_id\": MS_REPO_ID,\n",
    "    \"repo_type\": MS_REPO_TYPE,\n",
    "    \"visibility\": MS_REPO_VISIBILITY,\n",
    "    \"upload_path\": OUTPUT_PATH,\n",
    "    \"retry\": RETRY,\n",
    "}\n",
    "os.makedirs(WORKSPACE, exist_ok=True) # 创建工作路径\n",
    "os.makedirs(OUTPUT_PATH, exist_ok=True) # 创建模型输出路径\n",
    "os.makedirs(INPUT_DATASET_PATH, exist_ok=True) # 创建训练集路径\n",
    "os.makedirs(HDM_MODEL_PATH, exist_ok=True) # 创建模型下载路径\n",
    "HDM_PATH = os.path.join(WORKSPACE, WORKFOLDER) # HDM 路径\n",
    "logger.info(\"参数设置完成\")"
   ]
  },
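  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal, hypothetical sketch: select_models is NOT part of HDM or this Notebook's manager API;\n",
    "# it only illustrates how the [url, flag, optional filename] entries in HDM_MODEL above are read.\n",
    "# Entries whose second element is 1 are downloaded; the optional third element renames the file,\n",
    "# otherwise the filename defaults to the last segment of the URL.\n",
    "def select_models(model_list: list) -> list:\n",
    "    \"\"\"Return (url, local filename) pairs for the entries marked for download\"\"\"\n",
    "    selected = []\n",
    "    for entry in model_list:\n",
    "        url, flag = entry[0], entry[1]\n",
    "        if flag != 1:\n",
    "            continue  # a flag of 0 means the model is skipped\n",
    "        filename = entry[2] if len(entry) > 2 else url.rsplit(\"/\", 1)[-1]\n",
    "        selected.append((url, filename))\n",
    "    return selected\n",
    "\n",
    "sample = [\n",
    "    [\"https://example.com/models/a.safetensors\", 1],\n",
    "    [\"https://example.com/models/b.safetensors\", 0],\n",
    "    [\"https://example.com/models/c.safetensors\", 1, \"renamed.safetensors\"],\n",
    "]\n",
    "select_models(sample)\n"
   ]
  },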
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 安装环境\n",
    "安装环境和下载模型和训练集, 根据注释的说明进行修改  \n",
    "3. [[← 上一个单元](#参数配置)|[下一个单元 →](#模型训练)]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 初始化部分参数并执行安装命令, 这一小部分不需要修改\n",
    "logger.info(\"开始安装 HDM\")\n",
    "hdm_manager = HDMTrainManager(WORKSPACE, WORKFOLDER)\n",
    "hdm_manager.install(**INSTALL_PARAMS)\n",
    "hdm_manager.import_kaggle_input(KAGGLE_INPUT_PATH, INPUT_DATASET_PATH)\n",
    "##########################################################################################\n",
    "# 下方可自行编写命令\n",
    "# 下方的命令示例可以根据自己的需求进行修改\n",
    "\n",
    "\n",
    "##### 1. 关于运行环境 #####\n",
    "\n",
    "# 如果需要安装某个软件包, 可以使用 %pip 命令\n",
    "# 下面是几个使用例子:\n",
    "# 1.\n",
    "# %pip install lycoris-lora==2.1.0.post3 dadaptation==3.1\n",
    "# \n",
    "# 这将安装 lycoris-lora==2.1.0.post3 和 dadaptation==3.1\n",
    "# \n",
    "# 2.\n",
    "# %pip uninstall tensorboard\n",
    "# \n",
    "# 这将卸载 tensorboard\n",
    "\n",
    "\n",
    "##########################################################################################\n",
    "\n",
    "\n",
    "##### 2. 关于模型导入 #####\n",
    "\n",
    "# 该 Kaggle 训练脚本支持 4 种方式导入模型, 如下:\n",
    "# 1. 使用 Kaggle Input 导入\n",
    "# 2. 使用模型下载链接导入\n",
    "# 3. 从 HuggingFace 仓库导入\n",
    "# 4. 从 ModelScope 仓库导入\n",
    "\n",
    "\n",
    "### 2.1. 使用 Kaggle Input 导入 ###\n",
    "# 在 Kaggle 右侧面板中, 点击 Notebook -> Input -> Upload -> New Model, 从此处导入模型\n",
    "\n",
    "\n",
    "### 2.2 使用模型下载链接导入 ###\n",
    "# 如果需要通过链接下载额外的模型, 可以使用 hdm_manager.get_model()\n",
    "# 使用参数:\n",
    "# hdm_manager.get_model(\n",
    "#     url=\"model_url\",                    # 模型下载链接\n",
    "#     path=HDM_MODEL_PATH,                 # 模型下载到本地的路径\n",
    "#     filename=\"filename.safetensors\",    # 模型的名称\n",
    "#     retry=RETRY,                        # 重试下载的次数\n",
    "# )\n",
    "# \n",
    "# 下面是几个使用例子:\n",
    "# 1.\n",
    "# hdm_manager.get_model(\n",
    "#     url=\"https://modelscope.cn/models/user/repo/resolve/master/your_model.safetensors\",\n",
    "#     path=HDM_MODEL_PATH,\n",
    "#     retry=RETRY,\n",
    "# )\n",
    "# 这将从 https://modelscope.cn/models/user/repo/resolve/master/your_model.safetensors 下载模型并保存到 HDM_MODEL_PATH 中\n",
    "# \n",
    "# hdm_manager.get_model(\n",
    "#     url=\"https://modelscope.cn/models/user/repo/resolve/master/your_model.safetensors\",\n",
    "#     path=HDM_MODEL_PATH,\n",
    "#     filename=\"rename_model.safetensors\",\n",
    "#     retry=RETRY,\n",
    "# )\n",
    "# 这将从 https://modelscope.cn/models/user/repo/resolve/master/your_model.safetensors 下载模型并保存到 HDM_MODEL_PATH 中, 并且重命名为 rename_model.safetensors\n",
    "\n",
    "\n",
    "### 2.3. 从 HuggingFace 仓库导入 ###\n",
    "# 如果需要从 HuggingFace 仓库下载模型, 可以使用 hdm_manager.repo.download_files_from_repo()\n",
    "# 使用参数:\n",
    "# hdm_manager.repo.download_files_from_repo(\n",
    "#     api_type=\"huggingface\",                 # 指定为 HuggingFace 的仓库\n",
    "#     local_dir=HDM_MODEL_PATH,                # 模型下载到本地的路径\n",
    "#     repo_id=\"usename/repo_id\",              # HuggingFace 仓库 ID\n",
    "#     repo_type=\"model\",                      # (可选参数) HuggingFace 仓库种类 (model / dataset / space)\n",
    "#     folder=\"path/in/repo/file.safetensors\", # (可选参数) 文件在 HuggingFace 仓库中的路径\n",
    "#     retry=RETRY,                            # (可选参数) 重试下载的次数, 默认为 3\n",
    "#     num_threads=DOWNLOAD_THREAD,            # (可选参数) 下载线程\n",
    "# )\n",
    "# \n",
    "# 例如要从 stabilityai/stable-diffusion-xl-base-1.0 (类型为 model) 下载 sd_xl_base_1.0_0.9vae.safetensors\n",
    "# hdm_manager.repo.download_files_from_repo(\n",
    "#     api_type=\"huggingface\",\n",
    "#     repo_id=\"stabilityai/stable-diffusion-xl-base-1.0\",\n",
    "#     repo_type=\"model\",\n",
    "#     folder=\"sd_xl_base_1.0_0.9vae.safetensors\",\n",
    "#     local_dir=HDM_MODEL_PATH,\n",
    "#     retry=RETRY,\n",
    "#     num_threads=DOWNLOAD_THREAD,\n",
    "# )\n",
    "# 则上述的命令将会从 stabilityai/stable-diffusion-xl-base-1.0 下载 sd_xl_base_1.0_0.9vae.safetensors 模型\n",
    "# 并将模型保存到 HDM_MODEL_PATH 中\n",
    "# 注意 folder 填的是文件在 HuggingFace 仓库中的路径, 如果上述例子中的文件在仓库的 checkpoint/sd_xl_base_1.0_0.9vae.safetensors 路径\n",
    "# 则 folder 填的内容为 checkpoint/sd_xl_base_1.0_0.9vae.safetensors\n",
    "#\n",
    "# 模型保存的路径与 local_dir 和 folder 参数有关\n",
    "# 对于上面的例子 local_dir 为 /kaggle/sd-models, folder 为 sd_xl_base_1.0_0.9vae.safetensors\n",
    "# 则最终保存的路径为 /kaggle/sd-models/sd_xl_base_1.0_0.9vae.safetensors\n",
    "# \n",
    "# 如果 folder 为 checkpoint/sd_xl_base_1.0_0.9vae.safetensors\n",
    "# 则最终保存的路径为 /kaggle/sd-models/checkpoint/sd_xl_base_1.0_0.9vae.safetensors\n",
    "# \n",
    "# folder 参数为可选参数, 即该参数可不指定, 在不指定的情况下将下载整个仓库中的文件\n",
    "# 比如将上面的例子改成:\n",
    "# hdm_manager.repo.download_files_from_repo(\n",
    "#     api_type=\"huggingface\",\n",
    "#     repo_id=\"stabilityai/stable-diffusion-xl-base-1.0\",\n",
    "#     repo_type=\"model\",\n",
    "#     local_dir=HDM_MODEL_PATH,\n",
    "#     retry=RETRY,\n",
    "#     num_threads=DOWNLOAD_THREAD,\n",
    "# )\n",
    "# 这时候将下载 stabilityai/stable-diffusion-xl-base-1.0 仓库中的所有文件\n",
    "# 对于可选参数, 可以进行省略, 此时将使用该参数的默认值进行运行, 上面的例子就可以简化成下面的:\n",
    "# hdm_manager.repo.download_files_from_repo(\n",
    "#     api_type=\"huggingface\",\n",
    "#     repo_id=\"stabilityai/stable-diffusion-xl-base-1.0\",\n",
    "#     repo_type=\"model\",\n",
    "#     folder=\"sd_xl_base_1.0_0.9vae.safetensors\",\n",
    "#     local_dir=HDM_MODEL_PATH,\n",
    "# )\n",
    "# 省略后仍然可以正常执行, 但对于一些重要的可选参数, 不推荐省略, 如 repo_type 参数\n",
    "# 该参数用于指定仓库类型, 不指定时则默认认为仓库为 model 类型\n",
    "# 若要下载的仓库为 dataset 类型, 不指定 repo_type 参数时默认就把仓库类型当做 model, 最终导致找不到要下载的仓库\n",
    "\n",
    "\n",
    "### 2.4. 从 ModelScope 仓库导入 ###\n",
    "# 如果需要从 ModelScope 仓库下载模型, 可以使用 hdm_manager.repo.download_files_from_repo()\n",
    "# 使用方法和 **2.3. 从 HuggingFace 仓库导入** 部分的类似, 只需要指定 api_type=\"modelscope\" 来指定使用 ModelScope 的仓库\n",
    "# 使用参数:\n",
    "# hdm_manager.repo.download_files_from_repo(\n",
    "#     api_type=\"modelscope\",                  # 指定为 ModelScope 的仓库\n",
    "#     local_dir=HDM_MODEL_PATH,                # 模型下载到本地的路径\n",
    "#     repo_id=\"usename/repo_id\",              # ModelScope 仓库 ID\n",
    "#     repo_type=\"model\",                      # (可选参数) ModelScope 仓库种类 (model / dataset / space)\n",
    "#     folder=\"path/in/repo/file.safetensors\", # (可选参数) 文件在 ModelScope 仓库中的路径\n",
    "#     retry=RETRY,                            # (可选参数) 重试下载的次数, 默认为 3\n",
    "#     num_threads=DOWNLOAD_THREAD,            # (可选参数) 下载线程\n",
    "# )\n",
    "# \n",
    "# 例如要从 stabilityai/stable-diffusion-xl-base-1.0 (类型为 model) 下载 sd_xl_base_1.0_0.9vae.safetensors\n",
    "# hdm_manager.dataset.get_single_file_from_ms(\n",
    "#     api_type=\"modelscope\",\n",
    "#     repo_id=\"stabilityai/stable-diffusion-xl-base-1.0\",\n",
    "#     repo_type=\"model\",\n",
    "#     folder=\"sd_xl_base_1.0_0.9vae.safetensors\",\n",
    "#     local_dir=HDM_MODEL_PATH,\n",
    "#     retry=RETRY,\n",
    "#     num_threads=DOWNLOAD_THREAD,\n",
    "# )\n",
    "# 则上述的命令将会从 stabilityai/stable-diffusion-xl-base-1.0 下载 sd_xl_base_1.0_0.9vae.safetensors 模型\n",
    "# 并将模型保存到 HDM_MODEL_PATH 中\n",
    "\n",
    "\n",
    "\n",
    "##########################################################################################\n",
    "\n",
    "\n",
    "##### 3. 关于训练集导入 #####\n",
    "\n",
    "# 该 Kaggle 训练脚本支持 4 种方式导入训练集, 如下:\n",
    "# 1. 使用 Kaggle Input 导入\n",
    "# 2. 使用训练集下载链接导入\n",
    "# 3. 从 HuggingFace 仓库导入\n",
    "# 4. 从 ModelScope 仓库导入\n",
    "\n",
    "\n",
    "### 3.1. 使用 Kaggle Input 导入 ###\n",
    "# 在 Kaggle 右侧面板中, 点击 Notebook -> Input -> Upload -> New Dataset, 从此处导入模型\n",
    "\n",
    "\n",
    "### 3.2. 使用训练集下载链接导入 ###\n",
    "# 如果将训练集压缩后保存在某个平台, 如 HuggingFace, ModelScope, 并且有下载链接\n",
    "# 可以使用 hdm_manager.utils.download_archive_and_unpack() 函数下载训练集\n",
    "# 使用参数:\n",
    "# hdm_manager.utils.download_archive_and_unpack(\n",
    "#     url=\"download_url\",             # 训练集压缩包的下载链接\n",
    "#     local_dir=INPUT_DATASET_PATH,   # 下载数据集到本地的路径\n",
    "#     name=\"filename.zip\",            # (可选参数) 将数据集压缩包进行重命名\n",
    "#     retry=RETRY,                    # (可选参数) 重试下载的次数\n",
    "# )\n",
    "# \n",
    "# 该函数在下载训练集压缩包完成后将解压到指定的本地路径\n",
    "# 压缩包格式仅支持 7z, zip, tar\n",
    "# \n",
    "# 下面是几个使用的例子:\n",
    "# 1.\n",
    "# hdm_manager.utils.download_archive_and_unpack(\n",
    "#     url=\"https://modelscope.cn/models/user/repo/resolve/master/data_1.7z\",\n",
    "#     local_dir=INPUT_DATASET_PATH,\n",
    "#     retry=RETRY,\n",
    "# )\n",
    "# 这将从 https://modelscope.cn/models/user/repo/resolve/master/data_1.7z 下载训练集压缩包并解压到 INPUT_DATASET_PATH 中\n",
    "# \n",
    "# 2.\n",
    "# hdm_manager.utils.download_archive_and_unpack(\n",
    "#     url=\"https://modelscope.cn/models/user/repo/resolve/master/data_1.7z\",\n",
    "#     local_dir=INPUT_DATASET_PATH,\n",
    "#     name=\"training_dataset.7z\",\n",
    "#     retry=RETRY,\n",
    "# )\n",
    "# 这将从 https://modelscope.cn/models/user/repo/resolve/master/data_1.7z 下载训练集压缩包并重命名成 training_dataset.7z\n",
    "# 再将 training_dataset.7z 中的文件解压到 INPUT_DATASET_PATH 中\n",
    "# \n",
    "# \n",
    "# 训练集的要求:\n",
    "# 需要将图片进行打标, 并调整训练集为指定的目结构, 例如:\n",
    "# Nachoneko\n",
    "#     └── 1_nachoneko\n",
    "#             ├── [メロンブックス (よろず)]Melonbooks Girls Collection 2019 winter 麗.png\n",
    "#             ├── [メロンブックス (よろず)]Melonbooks Girls Collection 2019 winter 麗.txt\n",
    "#             ├── [メロンブックス (よろず)]Melonbooks Girls Collection 2020 spring 彩 (オリジナル).png\n",
    "#             ├── [メロンブックス (よろず)]Melonbooks Girls Collection 2020 spring 彩 (オリジナル).txt\n",
    "#             ├── 0(8).txt\n",
    "#             ├── 0(8).webp\n",
    "#             ├── 001_2.png\n",
    "#             ├── 001_2.txt\n",
    "#             ├── 0b1c8893-c9aa-49e5-8769-f90c4b6866f5.png\n",
    "#             ├── 0b1c8893-c9aa-49e5-8769-f90c4b6866f5.txt\n",
    "#             ├── 0d5149dd-3bc1-484f-8c1e-a1b94bab3be5.png\n",
    "#             └── 0d5149dd-3bc1-484f-8c1e-a1b94bab3be5.txt\n",
    "# \n",
    "# 在 Nachoneko 文件夹新建一个文件夹, 格式为 <数字>_<名称>, 如 1_nachoneko, 前面的数字代表这部分的训练集的重复次数, 1_nachoneko 文件夹内则放图片和打标文件\n",
    "# \n",
    "# 训练集也可以分成多个部分组成, 例如:\n",
    "# Nachoneko\n",
    "#     ├── 1_nachoneko\n",
    "#     │       ├── [メロンブックス (よろず)]Melonbooks Girls Collection 2019 winter 麗.png\n",
    "#     │       ├── [メロンブックス (よろず)]Melonbooks Girls Collection 2019 winter 麗.txt\n",
    "#     │       ├── [メロンブックス (よろず)]Melonbooks Girls Collection 2020 spring 彩 (オリジナル).png\n",
    "#     │       └── [メロンブックス (よろず)]Melonbooks Girls Collection 2020 spring 彩 (オリジナル).txt\n",
    "#     ├── 2_nachoneko\n",
    "#     │       ├── 0(8).txt\n",
    "#     │       ├── 0(8).webp\n",
    "#     │       ├── 001_2.png\n",
    "#     │       └── 001_2.txt\n",
    "#     └── 4_nachoneko\n",
    "#             ├── 0b1c8893-c9aa-49e5-8769-f90c4b6866f5.png\n",
    "#             ├── 0b1c8893-c9aa-49e5-8769-f90c4b6866f5.txt\n",
    "#             ├── 0d5149dd-3bc1-484f-8c1e-a1b94bab3be5.png\n",
    "#             └── 0d5149dd-3bc1-484f-8c1e-a1b94bab3be5.txt\n",
    "# \n",
    "# 处理好训练集并调整好目录结构后可以将 Nachoneko 文件夹进行压缩了, 使用 zip / 7z / tar 格式进行压缩\n",
    "# 例如将上述的训练集压缩成 Nachoneko.7z, 此时需要检查一下压缩后在压缩包的目录结果是否和原来的一致(有些压缩软件在部分情况下会破坏原来的目录结构)\n",
    "# 确认没有问题后将该训练集上传到网盘, 推荐使用 HuggingFace / ModelScope\n",
    "\n",
    "\n",
    "### 3.3. 从 HuggingFace 仓库导入 ###\n",
    "# 如果训练集保存在 HuggingFace, 可以使用 hdm_manager.repo.download_files_from_repo() 函数从 HuggingFace 下载数据集\n",
    "# 使用方法和 **2.3. 从 HuggingFace 仓库导入** 部分类似, 部分说明可参考那部分的内容\n",
    "# 使用格式:\n",
    "# hdm_manager.repo.download_files_from_repo(\n",
    "#     api_type=\"huggingface\",                 # 指定为 HuggingFace 的仓库\n",
    "#     local_dir=INPUT_DATASET_PATH,           # 下载数据集到哪个路径\n",
    "#     repo_id=\"username/train_data\",          # HuggingFace 仓库 ID\n",
    "#     repo_type=\"dataset\",                    # (可选参数) HuggingFace 仓库的类型 (model / dataset / space)\n",
    "#     folder=\"folder_in_repo\",                # (可选参数) 指定要从 HuggingFace 仓库里下载哪个文件夹的内容\n",
    "#     retry=RETRY,                            # (可选参数) 重试下载的次数, 默认为 3\n",
    "#     num_threads=DOWNLOAD_THREAD,            # (可选参数) 下载线程\n",
    "# )\n",
    "# \n",
    "# 比如在 HuggingFace 的仓库为 username/train_data, 仓库类型为 dataset\n",
    "# 仓库的文件结构如下:\n",
    "# ├── Nachoneko\n",
    "# │   ├── 1_nachoneko\n",
    "# │   │       ├── [メロンブックス (よろず)]Melonbooks Girls Collection 2019 winter 麗.png\n",
    "# │   │       ├── [メロンブックス (よろず)]Melonbooks Girls Collection 2019 winter 麗.txt\n",
    "# │   │       ├── [メロンブックス (よろず)]Melonbooks Girls Collection 2020 spring 彩 (オリジナル).png\n",
    "# │   │       └── [メロンブックス (よろず)]Melonbooks Girls Collection 2020 spring 彩 (オリジナル).txt\n",
    "# │   ├── 2_nachoneko\n",
    "# │   │       ├── 0(8).txt\n",
    "# │   │       ├── 0(8).webp\n",
    "# │   │       ├── 001_2.png\n",
    "# │   │       └── 001_2.txt\n",
    "# │   └── 4_nachoneko\n",
    "# │           ├── 0b1c8893-c9aa-49e5-8769-f90c4b6866f5.png\n",
    "# │           ├── 0b1c8893-c9aa-49e5-8769-f90c4b6866f5.txt\n",
    "# │           ├── 0d5149dd-3bc1-484f-8c1e-a1b94bab3be5.png\n",
    "# │           └── 0d5149dd-3bc1-484f-8c1e-a1b94bab3be5.txt\n",
    "# └ aaaki\n",
    "#   ├── 1_aaaki\n",
    "#   │   ├── 1.png\n",
    "#   │   ├── 1.txt\n",
    "#   │   ├── 11.png\n",
    "#   │   ├── 11.txt\n",
    "#   │   ├── 12.png\n",
    "#   │   └── 12.txt\n",
    "#   └── 3_aaaki\n",
    "#       ├── 14.png\n",
    "#       ├── 14.txt\n",
    "#       ├── 16.png\n",
    "#       └── 16.txt\n",
    "#\n",
    "# 此时想要下载这个仓库中的 Nachoneko 文件夹的内容, 则下载命令为\n",
    "# hdm_manager.repo.download_files_from_repo(\n",
    "#     api_type=\"huggingface\",\n",
    "#     local_dir=INPUT_DATASET_PATH,\n",
    "#     repo_id=\"username/train_data\",\n",
    "#     repo_type=\"dataset\",\n",
    "#     folder=\"Nachoneko\",\n",
    "#     retry=RETRY,\n",
    "#     num_threads=DOWNLOAD_THREAD,\n",
    "# )\n",
    "# \n",
    "# 如果想下载整个仓库, 则移除 folder 参数, 命令修改为\n",
    "# hdm_manager.repo.download_files_from_repo(\n",
    "#     api_type=\"huggingface\",\n",
    "#     local_dir=INPUT_DATASET_PATH,\n",
    "#     repo_id=\"username/train_data\",\n",
    "#     repo_type=\"dataset\",\n",
    "#     retry=RETRY,\n",
    "#     num_threads=DOWNLOAD_THREAD,\n",
    "# )\n",
    "\n",
    "\n",
    "\n",
    "# 4. 从 ModelScope 仓库导入\n",
    "# 如果训练集保存在 ModelScope, 可以使用 hdm_manager.repo.download_files_from_repo() 函数从 ModelScope 下载数据集\n",
    "# 使用方法可参考 **3.2. 使用训练集下载链接导入** 部分的说明\n",
    "# 使用格式:\n",
    "# hdm_manager.repo.download_files_from_repo(\n",
    "#     api_type=\"modelscope\",          # 指定为 ModelScope 的仓库\n",
    "#     local_dir=INPUT_DATASET_PATH,   # 下载数据集到哪个路径\n",
    "#     repo_id=\"usename/repo_id\",      # ModelScope 仓库 ID\n",
    "#     repo_type=\"dataset\",            # (可选参数) ModelScope 仓库的类型 (model / dataset / space)\n",
    "#     folder=\"folder_in_repo\",        # (可选参数) 指定要从 ModelScope 仓库里下载哪个文件夹的内容\n",
    "#     retry=RETRY,                    # (可选参数) 重试下载的次数, 默认为 3\n",
    "#     num_threads=DOWNLOAD_THREAD,    # (可选参数) 下载线程\n",
    "# )\n",
    "# \n",
    "# 比如在 ModelScope 的仓库为 username/train_data, 仓库类型为 dataset\n",
    "# 仓库的文件结构如下:\n",
    "# ├── Nachoneko\n",
    "# │   ├── 1_nachoneko\n",
    "# │   │       ├── [メロンブックス (よろず)]Melonbooks Girls Collection 2019 winter 麗.png\n",
    "# │   │       ├── [メロンブックス (よろず)]Melonbooks Girls Collection 2019 winter 麗.txt\n",
    "# │   │       ├── [メロンブックス (よろず)]Melonbooks Girls Collection 2020 spring 彩 (オリジナル).png\n",
    "# │   │       └── [メロンブックス (よろず)]Melonbooks Girls Collection 2020 spring 彩 (オリジナル).txt\n",
    "# │   ├── 2_nachoneko\n",
    "# │   │       ├── 0(8).txt\n",
    "# │   │       ├── 0(8).webp\n",
    "# │   │       ├── 001_2.png\n",
    "# │   │       └── 001_2.txt\n",
    "# │   └── 4_nachoneko\n",
    "# │           ├── 0b1c8893-c9aa-49e5-8769-f90c4b6866f5.png\n",
    "# │           ├── 0b1c8893-c9aa-49e5-8769-f90c4b6866f5.txt\n",
    "# │           ├── 0d5149dd-3bc1-484f-8c1e-a1b94bab3be5.png\n",
    "# │           └── 0d5149dd-3bc1-484f-8c1e-a1b94bab3be5.txt\n",
    "# └ aaaki\n",
    "#   ├── 1_aaaki\n",
    "#   │   ├── 1.png\n",
    "#   │   ├── 1.txt\n",
    "#   │   ├── 11.png\n",
    "#   │   ├── 11.txt\n",
    "#   │   ├── 12.png\n",
    "#   │   └── 12.txt\n",
    "#   └── 3_aaaki\n",
    "#       ├── 14.png\n",
    "#       ├── 14.txt\n",
    "#       ├── 16.png\n",
    "#       └── 16.txt\n",
    "#\n",
    "# 此时想要下载这个仓库中的 Nachoneko 文件夹的内容, 则下载命令为\n",
    "# hdm_manager.repo.download_files_from_repo(\n",
    "#     api_type=\"modelscope\",\n",
    "#     local_dir=INPUT_DATASET_PATH,\n",
    "#     repo_id=\"username/train_data\",\n",
    "#     repo_type=\"dataset\",\n",
    "#     folder=\"Nachoneko\",\n",
    "# )\n",
    "# \n",
    "# 如果想下载整个仓库, 则移除 folder 参数, 命令修改为\n",
    "# hdm_manager.repo.download_files_from_repo(\n",
    "#     api_type=\"modelscope\",\n",
    "#     local_dir=INPUT_DATASET_PATH,\n",
    "#     repo_id=\"username/train_data\",\n",
    "#     repo_type=\"dataset\",\n",
    "# )\n",
    "\n",
    "\n",
    "\n",
    "# 下载训练集的技巧\n",
    "# 如果有个 character_aaaki 训练集上传到 HuggingFace 上时结构如下：\n",
    "# \n",
    "#\n",
    "# HuggingFace_Repo (licyk/sd_training_dataset)\n",
    "# ├── character_aaaki\n",
    "# │   ├── 1_aaaki\n",
    "# │   │   ├── 1.png\n",
    "# │   │   ├── 1.txt\n",
    "# │   │   ├── 3.png\n",
    "# │   │   └── 3.txt\n",
    "# │   └── 2_aaaki\n",
    "# │       ├── 4.png\n",
    "# │       └── 4.txt\n",
    "# ├── character_robin\n",
    "# │   └── 1_xxx\n",
    "# │       ├── 11.png\n",
    "# │       └── 11.txt\n",
    "# └── style_pvc\n",
    "#     └── 5_aaa\n",
    "#         ├── test.png\n",
    "#         └── test.txt\n",
    "#\n",
    "# \n",
    "# 可能有时候不想为训练集中每个子训练集设置不同的重复次数，又不想上传的时候再多套一层文件夹，就把训练集结构调整成了下面的：\n",
    "# \n",
    "#\n",
    "# HuggingFace_Repo (licyk/sd_training_dataset)\n",
    "# ├── character_aaaki\n",
    "# │   ├── 1.png\n",
    "# │   ├── 1.txt\n",
    "# │   ├── 3.png\n",
    "# │   ├── 3.txt\n",
    "# │   ├── 4.png\n",
    "# │   └── 4.txt\n",
    "# ├── character_robin\n",
    "# │   └── 1_xxx\n",
    "# │       ├── 11.png\n",
    "# │       └── 11.txt\n",
    "# └── style_pvc\n",
    "#     └── 5_aaa\n",
    "#         ├── test.png\n",
    "#         └── test.txt\n",
    "#\n",
    "# \n",
    "# 此时这个状态的训练集是缺少子训练集和重复次数的，如果直接使用 hdm_manager.repo.download_files_from_repo() 去下载训练集并用于训练将会导致报错\n",
    "# 不过可以自己再编写一个函数对 hdm_manager.repo.download_files_from_repo() 函数再次封装，自动加上子训练集并设置重复次数\n",
    "# \n",
    "#\n",
    "# def make_dataset(\n",
    "#     local_dir: str | Path,\n",
    "#     repo_id: str,\n",
    "#     repo_type: str,\n",
    "#     repeat: int,\n",
    "#     folder: str,\n",
    "# ) -> None:\n",
    "#     import os\n",
    "#     import shutil\n",
    "#     origin_dataset_path = os.path.join(local_dir, folder)\n",
    "#     tmp_dataset_path = os.path.join(local_dir, f\"{repeat}_{folder}\")\n",
    "#     new_dataset_path = os.path.join(origin_dataset_path, f\"{repeat}_{folder}\")\n",
    "#     hdm_manager.repo.download_files_from_repo(\n",
    "#         api_type=\"huggingface\",\n",
    "#         local_dir=local_dir,\n",
    "#         repo_id=repo_id,\n",
    "#         repo_type=repo_type,\n",
    "#         folder=folder,\n",
    "#     )\n",
    "#     if os.path.exists(origin_dataset_path):\n",
    "#         logger.info(\"设置 %s 训练集的重复次数为 %s\", folder, repeat)\n",
    "#         shutil.move(origin_dataset_path, tmp_dataset_path)\n",
    "#         shutil.move(tmp_dataset_path, new_dataset_path)\n",
    "#     else:\n",
    "#         logger.error(\"从 %s 下载 %s 失败\", repo_id, folder)\n",
    "#\n",
    "# \n",
    "# 编写好后，可以去调用这个函数\n",
    "# \n",
    "#\n",
    "# make_dataset(\n",
    "#     local_dir=INPUT_DATASET_PATH,\n",
    "#     repo_id=\"licyk/sd_training_dataset\",\n",
    "#     repo_type=\"dataset\",\n",
    "#     repeat=3,\n",
    "#     folder=\"character_aaaki\",\n",
    "# )\n",
    "#\n",
    "# \n",
    "# 该函数将会把 character_aaaki 训练集下载到 {INPUT_DATASET_PATH} 中，即 /kaggle/dataset\n",
    "# 文件夹名称为 character_aaaki，并且 character_aaaki 文件夹内继续创建了一个子文件夹作为子训练集，根据 repeat=3 将子训练集的重复次数设置为 3\n",
    "\n",
    "\n",
    "##########################################################################################\n",
    "logger.info(\"HDM 安装完成\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 模型训练\n",
    "需自行编写命令，下方有可参考的例子  \n",
    "4. [[← 上一个单元](#安装环境)|[下一个单元 →](#模型上传)]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "logger.info(\"进入 HDM 目录\")\n",
    "os.chdir(HDM_PATH)\n",
    "hdm_manager.display_model_and_dataset_dir(HDM_MODEL_PATH, INPUT_DATASET_PATH, recursive=False)\n",
    "logger.info(\"使用 HDM 进行模型训练\")\n",
    "##########################################################################################\n",
    "# 1.\n",
    "# 运行前需要根据自己的需求更改参数\n",
    "# \n",
    "# 训练参数的设置可参考：\n",
    "# https://github.com/KohakuBlueleaf/HDM/blob/main/config/train/hdm-xut-340M-ft.toml\n",
    "# \n",
    "# \n",
    "# 2.\n",
    "# 下方被注释的代码选择后使用 Ctrl + / 取消注释\n",
    "# \n",
    "# \n",
    "# 3.\n",
    "# 训练使用的底模会被下载到 HDM_MODEL_PATH, 即 /kaggle/sd-models\n",
    "# 填写底模路径时一般可以通过 --pretrained_model_name_or_path=\"{HDM_MODEL_PATH}/base_model.safetensors\" 指定\n",
    "# 如果需要外挂 VAE 模型可以通过 --vae=\"{HDM_MODEL_PATH}/vae.safetensors\" 指定\n",
    "# \n",
    "# 通过 Kaggle Inout 导入的训练集保存在 KAGGLE_INPUT_PATH, 即 /kaggle/input, 运行该笔记时将会把训练集复制进 INPUT_DATASET_PATH, 即 /kaggle/dataset\n",
    "# 该路径可通过 INPUT_DATASET_PATH 调整\n",
    "# 如果使用 hdm_manager.dataset.get_dataset() 函数下载训练集, 数据集一般会解压到 INPUT_DATASET_PATH, 这取决于函数第一个参数传入的路径\n",
    "# 训练集的路径通常要这种结构\n",
    "# $ tree /kaggle\n",
    "# kaggle\n",
    "# └── dataset\n",
    "#     └── Nachoneko\n",
    "#         └── 1_gan_cheng\n",
    "#             ├── [メロンブックス (よろず)]Melonbooks Girls Collection 2019 winter 麗.png\n",
    "#             ├── [メロンブックス (よろず)]Melonbooks Girls Collection 2019 winter 麗.txt\n",
    "#             ├── [メロンブックス (よろず)]Melonbooks Girls Collection 2020 spring 彩 (オリジナル).png\n",
    "#             ├── [メロンブックス (よろず)]Melonbooks Girls Collection 2020 spring 彩 (オリジナル).txt\n",
    "#             ├── 0(8).txt\n",
    "#             ├── 0(8).webp\n",
    "#             ├── 001_2.png\n",
    "#             ├── 001_2.txt\n",
    "#             ├── 0b1c8893-c9aa-49e5-8769-f90c4b6866f5.png\n",
    "#             ├── 0b1c8893-c9aa-49e5-8769-f90c4b6866f5.txt\n",
    "#             ├── 0d5149dd-3bc1-484f-8c1e-a1b94bab3be5.png\n",
    "#             └── 0d5149dd-3bc1-484f-8c1e-a1b94bab3be5.txt\n",
    "# 4 directories, 12 files\n",
    "# 在填写训练集路径时, 应使用 --train_data_dir=\"{INPUT_DATASET_PATH}/Nachoneko\"\n",
    "# \n",
    "# 模型保存的路径通常用 --output_dir=\"{OUTPUT_PATH}\" 指定, 如 --output_dir=\"{OUTPUT_PATH}/Nachoneko\", OUTPUT_PATH 默认设置为 /kaggle/working/model\n",
    "# 在 Kaggle 的 Output 中可以看到保存的模型, 前提是使用 Kaggle 的 Save Version 运行 Kaggle\n",
    "# OUTPUT_PATH 也指定了保存模型到 HuggingFace / ModelScope 的功能的上传路径\n",
    "# \n",
    "# --output_name 用于指定保存的模型名字, 如 --output_name=\"Nachoneko\"\n",
    "# \n",
    "# \n",
    "# 4.\n",
    "# Kaggle 的实例最长可运行 12 h, 要注意训练时长不要超过 12 h, 否则将导致训练被意外中断, 并且最后的模型保存功能将不会得到运行\n",
    "# 如果需要在模型被保存后立即上传到 HuggingFace 进行保存, 可使用启动参数为 sd-scripts 设置自动保存, 具体可阅读 sd-scripts 的帮助信息\n",
    "# 使用 python train_network.py -h 命令可查询可使用的启动参数, 命令中的 train_network.py 可替换成 sdxl_train_network.py 等\n",
    "# \n",
    "# \n",
    "# 5.\n",
    "# 训练命令的开头为英文的感叹号, 也就是 !, 后面就是 Shell Script 风格的命令\n",
    "# 每行的最后为反斜杠用于换行, 也就是用 \\ 来换行, 并且反斜杠的后面不允许有其他符号, 比如空格等\n",
    "# 训练命令的每一行之间不能有任何换行空出来, 最后一行不需要反斜杠, 因为最后一行的下一行已经没有训练参数\n",
    "# \n",
    "# \n",
    "# 6.\n",
    "# 如果训练参数是 toml 格式的, 比如从 Akegarasu/lora-scripts 训练器复制来的训练参数\n",
    "# 可以转换成对应的训练命令中的参数\n",
    "# 下面列举几种转换例子:\n",
    "# \n",
    "# (1)\n",
    "# toml 格式:\n",
    "# pretrained_model_name_or_path = \"{HDM_MODEL_PATH}/Illustrious-XL-v0.1.safetensors\"\n",
    "# 训练命令格式:\n",
    "# --pretrained_model_name_or_path=\"{HDM_MODEL_PATH}/Illustrious-XL-v0.1.safetensors\"\n",
    "# \n",
    "# (2)\n",
    "# toml 格式:\n",
    "# unet_lr = 0.0001\n",
    "# 训练命令格式:\n",
    "# --unet_lr=0.0001\n",
    "# \n",
    "# (3)\n",
    "# toml 格式:\n",
    "# network_args = [\n",
    "#     \"conv_dim=100000\",\n",
    "#     \"conv_alpha=100000\",\n",
    "#     \"algo=lokr\",\n",
    "#     \"dropout=0\",\n",
    "#     \"factor=8\",\n",
    "#     \"train_norm=True\",\n",
    "#     \"preset=full\",\n",
    "# ]\n",
    "# 训练命令格式:\n",
    "# --network_args \\\n",
    "#     conv_dim=100000 \\\n",
    "#     conv_alpha=100000 \\\n",
    "#     algo=lokr \\\n",
    "#     dropout=0 \\\n",
    "#     factor=8 \\\n",
    "#     train_norm=True \\\n",
    "#     preset=full \\\n",
    "# \n",
    "# (4)\n",
    "# toml 格式:\n",
    "# enable_bucket = true\n",
    "# 训练命令格式:\n",
    "# --enable_bucket\n",
    "# \n",
    "# (5)\n",
    "# toml 格式:\n",
    "# lowram = false\n",
    "# 训练命令格式:\n",
    "# 无对应的训练命令, 也就是不需要填, 因为这个参数的值为 false, 也就是无对应的参数, 如果值为 true, 则对应训练命令中的 --lowram\n",
    "# \n",
    "# 可以根据这个例子去转换 toml 格式的训练参数成训练命令的格式\n",
    "# \n",
    "# \n",
    "# 7.\n",
    "# 如果需要 toml 格式的配置文件来配置训练参数可以使用下面的代码来保存 toml 格式的训练参数\n",
    "# \n",
    "# toml_file_path = os.path.join(WORKSPACE, \"train_config.toml\")\n",
    "# toml_content = f\"\"\"\n",
    "# 这里使用 toml 格式编写训练参数, \n",
    "# 还可以结合 Python F-Strings 的用法使用前面配置好的变量\n",
    "# Python F-Strings 的说明: https://docs.python.org/zh-cn/3.13/reference/lexical_analysis.html#f-strings\n",
    "# toml 的语法可参考: https://toml.io/cn/v1.0.0\n",
    "# 下面展示训练命令里参数对应的 toml 格式转换\n",
    "# \n",
    "# \n",
    "# pretrained_model_name_or_path = \"{HDM_MODEL_PATH}/Illustrious-XL-v0.1.safetensors\"\n",
    "# 对应训练命令中的 --pretrained_model_name_or_path=\"{HDM_MODEL_PATH}/Illustrious-XL-v0.1.safetensors\"\n",
    "# \n",
    "# unet_lr = 0.0001\n",
    "# 对应训练命令中的 --unet_lr=0.0001\n",
    "# \n",
    "# network_args = [\n",
    "#     \"conv_dim=100000\",\n",
    "#     \"conv_alpha=100000\",\n",
    "#     \"algo=lokr\",\n",
    "#     \"dropout=0\",\n",
    "#     \"factor=8\",\n",
    "#     \"train_norm=True\",\n",
    "#     \"preset=full\",\n",
    "# ]\n",
    "# 对应下面训练命令中的\n",
    "# --network_args \\\n",
    "#     conv_dim=100000 \\\n",
    "#     conv_alpha=100000 \\\n",
    "#     algo=lokr \\\n",
    "#     dropout=0 \\\n",
    "#     factor=8 \\\n",
    "#     train_norm=True \\\n",
    "#     preset=full \\\n",
    "# \n",
    "# enable_bucket = true\n",
    "# 对应训练命令中的 --enable_bucket\n",
    "# \n",
    "# lowram = false\n",
    "# 这个参数的值为 false, 也就是无对应的参数, 如果值为 true, 则对应训练命令中的 --lowram\n",
    "# \"\"\".strip()\n",
    "# if not os.path.exists(os.path.dirname(toml_file_path)):\n",
    "#     os.makedirs(toml_file_path, exist_ok=True)\n",
    "# with open(toml_file_path, \"w\", encoding=\"utf8\") as file:\n",
    "#     file.write(toml_content)\n",
    "# \n",
    "# 使用上面的代码将会把训练参数的 toml 配置文件保存在 toml_file_path 路径中, 也就是 {WORKSPACE}/train_config.toml, 即 /kaggle/train_config.toml\n",
    "# 而原来的训练命令无需再写上训练参数, 只需指定该训练配置文件的路径即可\n",
    "# 使用 --config_file=\"{WORKSPACE}/train_config.toml\" 来指定\n",
    "# \n",
    "# \n",
    "# 8. \n",
    "# 如果要查看 sd-script 的命令行参数, 可以加上 -h 后再运行, 此时 sd-script 将显示所有可用的参数\n",
    "# \n",
    "# \n",
    "# 9.\n",
    "# 下方提供了一些训练参数, 可以直接使用, 使用时取消注释后根据需求修改部分参数即可\n",
    "# \n",
    "#              .,@@@@@@@@@].                                          ./@[`....`[\\\\.                 \n",
    "#             //\\`..  . ...,\\@].       .,]]]/O@@@@@@@@\\]...       .,]//............\\@`               \n",
    "#           .O`........ .......\\\\.]]@@@@@@@@[..........,[@@@@\\`.*/....=^............/@@`             \n",
    "#          .O........    .......@@/@@@/`.....               . ,\\@\\....\\`............O@`@             \n",
    "#          =^...`....          .O@@`.........            .........\\@`...[`.,@`....,@^/.@^            \n",
    "#         .OO`..\\....          =/..... ......            ..[@]....,\\@@]]]].@@]`..//..@=\\^            \n",
    "#          @O/@`,............=O/......    ...   ....       ...\\\\.....,@@@`=\\@\\@@[...=O`/^.           \n",
    "#          @@\\.,@]..]]//[,/@^O=@.............   .\\@^...........,@`.....\\@@/*\\o*O@\\.=/.@`             \n",
    "#          ,@/O`...[OOO`.,@O,\\/....././\\^....   ..@O` ..\\`.......=\\.....=\\\\@@@@@/\\@@//               \n",
    "#            ,@`\\].......O^o,/.....@`/=^.....,\\...,@^ ...=\\...    =\\.....,@,@@@@[/@@@/               \n",
    "#            ..,\\@\\]]]]O/@.*@.....=^/\\^......=@....\\O..^..@@`..  ..\\@.....,@.\\@@\\[[O`                \n",
    "#                .*=@@\\@@^.O^...../o^O.......O=^...=@..@..\\.\\\\.   . @@`....,@.\\@@@@`                 \n",
    "# .              ..=@O^=@`,@ .....@@=`......=^.O....@..@^.=^.=@.....=@@.....,\\.\\@@@@.                \n",
    "#                .,@@`,O@./^.....=@O/......./^.O... \\`.=^.=^..=@...  O=\\.....=^.\\@@@@`.              \n",
    "#                ./@`.=^@=@......=@O`....@,/@@.=^...=^.=^.=^.[[\\@....=^\\^.....@@.\\@@@@`              \n",
    "#               .,@^. @^O/@......=@O.]O`OO.=`\\^.....,^.=@.=^....=@...=\\.O.....@^\\`@@/`               \n",
    "#                =@ .=@..@^ .....=@/.../@^,/.=@......* =@.=^.....=\\..=@`=^....=^ \\/\\                 \n",
    "#                /^..=@.,@^ /`...=@.../O@.O...@........O=^=` ,`...@^.=@\\=^..] =@..@O`.               \n",
    "#               ,@...@/.=@. @^...=@../@\\@/OOO.=^......=^,O@[.]]@\\]/@ =^@`O..O.=@^ =@^                \n",
    "#               =^...@@.=@..O\\....@ //.O@O@@]..@....../^.OO@@@[[@@@@\\/^@^O .O.=@@ .@^                \n",
    "#               @^..=@@.,@.=^@....@@\\@@@[[[[[[[\\^@^..,/..O..,@@@\\..=@@//OO..O./^@. =@                \n",
    "#               @...=^@^.@.=^=^...=@@`/@@@@@`...*O\\..@...[.=@`,@@@`.@`=^@=`.O.@.@. =@.               \n",
    "#               @^..O.@^ \\^=@,\\..=@@ @\\,@@/@@`..=^..@`.....@@\\@@/@@...O.@=^,O=^.@^./@                \n",
    "#               @@..O.=@.,\\=@^O..=`\\/@^/@@OO@^..,`,O`.. .. @@/@@\\@@..=`=@../^O..@^/O^                \n",
    "#              .=@^ @..@`=@o@@=^..,.O@@/@@oO@........... ...@^.\\/@..=^=@/ =@O. .@\\@`.                \n",
    "#               .@@.@^.@^@^\\@@O^..=^,O@@*.................  .......=^/@@^=@@^ .=@`.                  \n",
    "#               .=O@O^,@^=^.\\@^o..=/\\,\\ .....   .... .....    ...]@O`=@O/@@^   =@                    \n",
    "#                .=O@O/^==@@`O@O^.=@.\\`\\`....      .  ........ ......//@.@`.   =^                    \n",
    "#                  ,O@@^.O..\\@@@^.=@...[O@`..      .  ........ .....//.@@,\\   .@^                    \n",
    "#                   .@/@@@]../@@^..@@\\........     ..,/`=@/@`.....,@^..=^ =` .=@.                    \n",
    "#                   =/..O... @^=\\. O@@@@\\....... ...//.,@..@O].,/@`=\\..=@..@..@^                     \n",
    "#                  ,/..O....=/=/@ .=@@@@@@@@].....//.,O@`.//.@@@@@..@^..@`.=\\/O`.                    \n",
    "#                 ,@../`....O=/.@...@@@@@@@@@@@@@/../\\@..//.=@@@@\\. @@..=\\..@^/.                     \n",
    "#                ,@`.=`....=@/..@...\\@@@@@@@@.. O..,@/..=@ .O=@@@^. @/@..@^..@`.                     \n",
    "#               ,@`.=^....,@/..=O^..,@@.[@@@@,@]@../^..=@../^=@@@...@.,\\.=@`..@`                     \n",
    "#              ,@..=/.   .@/...O=@...@@....,@@...,[@\\.,@`..@..@@/..,@..,\\.@@...@..                   \n",
    "#              @`.,/.....@/...=`O@^..,@...../`.......,\\@]..@..O@\\]]/\\`..,\\,@^..,@`                   \n",
    "#             =^..@...../@...,^/O@@...=@`...@............\\@` /`,\\,@`.=\\.,\\@=@`..,@.                  \n",
    "#            ,@../`....@\\`/@``,`]/@^...O,\\/\\@]..............\\\\..=@`\\\\ ,@@@@@@@....@`.                \n",
    "#            O`.=^....@^O@^.@@@@@@@\\....\\@^=@@@@@\\] ..........,@`\\@\\.@`=@@@@@@\\....@`..              \n",
    "#           =/..O...,@O/@^.@@@@@@@@@`...=/.@@@@@@@@@@@].........,@@/@`,@/@@@@O\\^....@`..             \n",
    "#           /^.O.../@O^@^./@@@@@@@@@\\...@`=@@@@@@@@@@@@@\\.......@`=\\//@`,@@@@@.@`....\\`...           \n",
    "#         .,@.=^../`@@\\@.=@@@@@@@@@@@^.=@.@@@@@@@@@@@@@@@@@`...@` =\\\\==`@`\\@@@^.@.....\\^...          \n",
    "#       ../\\^,@.,/.=@/=/,@@@@@@@@@@@@\\ @^=@@@@@@@@@@@@@@@@@@@]@@@@@@@@@o\\@`.@@\\.,@.....\\\\...         \n",
    "#       =/.@^@`/`..@@^@^=@@@@@@@@@@@@@\\@.@@@@@@@@@@@@@@@@@@@@@@@@O@@@@@\\/.,/.@@\\.,@.....,@`..        \n",
    "#     ,O`.,@=@/...=@@.@^O@@@@@@@@@@@@@@^=@@@@@@@@@@@@@@@@@@@@@@@@@@O@@@^ /@@.,@/@`.@`.....\\\\...      \n",
    "#  ..=^...=^@^....@OO,@^/@@@@@@@@@@@@\\@.@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\\@@@@@`=\\.\\^.\\`.....,@`..    \n",
    "# \n",
    "# 炼丹就不存在什么万能参数的, 可能下面给的参数里有的参数训练出来的效果很好, 有的效果就一般\n",
    "# 训练参数还看训练集呢, 同一套参数在不同训练集上效果都不一样, 可能好可能坏 (唉, 被这训练参数折磨过好多次)\n",
    "# 虽然说一直用同个效果不错的参数可能不会出现特别坏的结果吧\n",
    "# 还有好的训练集远比好的训练参数更重要\n",
    "# 好的训练集真的, 真的, 真的非常重要\n",
    "# 再好的参数, 训练集烂也救不回来\n",
    "# \n",
    "# \n",
    "# 10.\n",
    "# 建议先改改训练集路径的参数就开始训练, 跑通训练了再试着改其他参数\n",
    "# 还有我编写的训练参数不一定是最好的, 所以需要自己去摸索这些训练参数是什么作用的, 再去修改\n",
    "# 其实有些参数我自己也调不明白, 但是很多时候跑出来效果还不错\n",
    "# 为什么效果好, 分からない, 这东西像个黑盒, 有时候就觉得神奇呢\n",
    "##########################################################################################\n",
    "\n",
    "\n",
    "# 根据 https://github.com/KohakuBlueleaf/HDM/blob/main/config/train/hdm-xut-340M-ft.toml 修改的训练参数\n",
    "# \n",
    "# toml_file_path = os.path.join(WORKSPACE, \"train_config.toml\")\n",
    "# toml_content = f\"\"\"\n",
    "# [lightning]\n",
    "#     seed=20090220\n",
    "#     epochs=10\n",
    "#     batch_size=8\n",
    "#     dataloader_workers=4\n",
    "#     persistent_workers=true\n",
    "#     grad_acc=4\n",
    "#     devices=1\n",
    "#     precision=\"16-mixed\"\n",
    "#     grad_clip=1.0\n",
    "#     grad_ckpt=true\n",
    "# \n",
    "#     [lightning.imggencallback]\n",
    "#         id=\"HDM-xut-340M-finetune\"\n",
    "#         size=1024\n",
    "#         num=32\n",
    "#         preview_num=8\n",
    "#         batch_size=4\n",
    "#         steps=32\n",
    "#         period=128\n",
    "#     [lightning.logger]\n",
    "#         name=\"HDM-xut-340M-finetune\"\n",
    "#         project=\"HDM\"\n",
    "#         offline=true\n",
    "# \n",
    "# \n",
    "# [trainer]\n",
    "#     name=\"test\"\n",
    "#     lr=0.1 # We have muP scale, need higher LR here\n",
    "#     optimizer=\"torch.optim.AdamW\"\n",
    "#     opt_configs = {{\"weight_decay\"= 0.01, \"betas\"= [0.9, 0.95]}}\n",
    "#     lr_sch_configs = {{\"end\"= -1, \"mode\"= \"cosine\", \"warmup\"= 1000, \"min_value\"= 0.01}}\n",
    "#     te_use_normed_ctx=false\n",
    "# \n",
    "# \n",
    "# [dataset]\n",
    "#     [[dataset.datasets]]\n",
    "#         class = \"hdm.data.kohya.KohyaDataset\"\n",
    "#         [dataset.datasets.kwargs]\n",
    "#             size=1024\n",
    "#             dataset_folder = \"{INPUT_DATASET_PATH}/test_dataset\"\n",
    "#             keep_token_seperator=\"|||\"\n",
    "#             tag_seperator=\", \"\n",
    "#             seperator=\", \"\n",
    "#             group_seperator=\"%%\"\n",
    "#             tag_shuffle=true\n",
    "#             group_shuffle=false\n",
    "#             tag_dropout_rate=0.0\n",
    "#             group_dropout_rate=0.0\n",
    "#             use_cached_meta=true\n",
    "#             # For example:\n",
    "#             # \"xxx, zzz ||| aa $$ bb %% cc $$ dd\" -> \"xxx, zzz, aa, bb, dd, cc\"\n",
    "# \n",
    "# \n",
    "# [model]\n",
    "#     config=\"{HDM_PATH}/config/model/xut-qwen3-sm-tread.yaml\"\n",
    "#     model_path=\"{HDM_MODEL_PATH}/hdm-xut-340M-1024px-note.safetensors\"\n",
    "#     inference_dtype = \"torch.float16\"\n",
    "#     [model.lycoris]\n",
    "#         algo = \"lokr\"\n",
    "#         factor = 4\n",
    "#         full_matrix = true\n",
    "#         train_norm = true\n",
    "# \n",
    "# \"\"\".strip()\n",
    "# if not os.path.exists(os.path.dirname(toml_file_path)):\n",
    "#     os.makedirs(toml_file_path, exist_ok=True)\n",
    "# with open(toml_file_path, \"w\", encoding=\"utf8\") as file:\n",
    "#     file.write(toml_content)\n",
    "# !python \"{HDM_PATH}/scripts/train.py\" \"{toml_file_path}\"\n",
    "\n",
    "\n",
    "# 上面那个官方示例参数会报错, 不懂为什么, 这个是随便改过的, 倒是能跑\n",
    "# \n",
    "# toml_file_path = os.path.join(WORKSPACE, \"train_config.toml\")\n",
    "# toml_content = f\"\"\"\n",
    "# [lightning]\n",
    "#     seed=20090220\n",
    "#     epochs=10\n",
    "#     batch_size=8\n",
    "#     dataloader_workers=4\n",
    "#     persistent_workers=true\n",
    "#     grad_acc=4\n",
    "#     devices=1\n",
    "#     precision=\"16-mixed\"\n",
    "#     grad_clip=1.0\n",
    "#     grad_ckpt=true\n",
    "# \n",
    "#     [lightning.imggencallback]\n",
    "#         id=\"HDM-xut-340M-finetune\"\n",
    "#         size=1024\n",
    "#         num=32\n",
    "#         preview_num=8\n",
    "#         batch_size=4\n",
    "#         steps=32\n",
    "#         period=128\n",
    "#     [lightning.logger]\n",
    "#         name=\"HDM-xut-340M-finetune\"\n",
    "#         project=\"HDM\"\n",
    "#         offline=true\n",
    "# \n",
    "# \n",
    "# [trainer]\n",
    "#     name=\"test\"\n",
    "#     lr=0.1 # We have muP scale, need higher LR here\n",
    "#     optimizer=\"torch.optim.AdamW\"\n",
    "#     opt_configs = {{\"weight_decay\"= 0.01, \"betas\"= [0.9, 0.95]}}\n",
    "#     lr_sch_configs = {{\"mode\"= \"cosine\", \"warmup\"= 1000, \"min_value\"= 0.01}}\n",
    "#     te_use_normed_ctx=false\n",
    "# \n",
    "# \n",
    "# [dataset]\n",
    "#     [[dataset.datasets]]\n",
    "#         class = \"hdm.data.kohya.KohyaDataset\"\n",
    "#         [dataset.datasets.kwargs]\n",
    "#             size=1024\n",
    "#             dataset_folder = \"{INPUT_DATASET_PATH}/test_dataset\"\n",
    "#             keep_token_seperator=\"|||\"\n",
    "#             tag_seperator=\", \"\n",
    "#             seperator=\", \"\n",
    "#             group_seperator=\"%%\"\n",
    "#             tag_shuffle=true\n",
    "#             group_shuffle=false\n",
    "#             tag_dropout_rate=0.0\n",
    "#             group_dropout_rate=0.0\n",
    "#             use_cached_meta=true\n",
    "#             # For example:\n",
    "#             # \"xxx, zzz ||| aa $$ bb %% cc $$ dd\" -> \"xxx, zzz, aa, bb, dd, cc\"\n",
    "# \n",
    "# \n",
    "# [model]\n",
    "#     config=\"{HDM_PATH}/config/model/xut-qwen3-sm-tread.yaml\"\n",
    "#     model_path=\"{HDM_MODEL_PATH}/hdm-xut-340M-1024px-note.safetensors\"\n",
    "#     inference_dtype = \"torch.float16\"\n",
    "#     [model.lycoris]\n",
    "#         algo = \"lokr\"\n",
    "#         factor = 4\n",
    "#         full_matrix = true\n",
    "#         train_norm = true\n",
    "# \n",
    "# \"\"\".strip()\n",
    "# if not os.path.exists(os.path.dirname(toml_file_path)):\n",
    "#     os.makedirs(toml_file_path, exist_ok=True)\n",
    "# with open(toml_file_path, \"w\", encoding=\"utf8\") as file:\n",
    "#     file.write(toml_content)\n",
    "# !python \"{HDM_PATH}/scripts/train.py\" \"{toml_file_path}\"\n",
    "\n",
    "\n",
    "##########################################################################################\n",
    "os.chdir(WORKSPACE)\n",
    "logger.info(\"离开 HDM 目录\")\n",
    "logger.info(\"模型训练结束\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 模型上传\n",
    "通常不需要修改该单元内容，如果需要修改参数，建议通过上方的参数配置单元进行修改  \n",
    "5. [← 上一个单元](#模型训练)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 模型上传到 HuggingFace / ModelScope, 通常不需要修改, 修改参数建议通过上方的参数配置单元进行修改\n",
    "\n",
    "# 使用 HuggingFace 上传模型\n",
    "if USE_HF_TO_SAVE_MODEL:\n",
    "    logger.info(\"使用 HuggingFace 保存模型\")\n",
    "    hdm_manager.repo.upload_files_to_repo(**HF_REPO_UPLOADER_PARAMS)\n",
    "\n",
    "# 使用 ModelScope 上传模型\n",
    "if USE_MS_TO_SAVE_MODEL:\n",
    "    logger.info(\"使用 ModelScope 保存模型\")\n",
    "    hdm_manager.repo.upload_files_to_repo(**MS_REPO_UPLOADER_PARAMS)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
