{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Orb：一种快速、可扩展的神经网络势能模型\n",
    "\n",
    "## 模型简介\n",
    "\n",
    "Orb 是一种通用的机器学习原子间势（MLFF），采用可扩展的图神经网络架构，通过并行化设计实现高效的材料建模。该模型无需依赖旋转等变性约束，仅通过数据驱动学习原子间复杂相互作用，即可在几何优化、蒙特卡洛和分子动力学模拟等多种任务中达到从头算精度。在 Matbench Discovery 基准上发布时误差降低 31%，并在大系统规模下比现有开源模型快 3-6 倍，支持零样本高温度非周期分子模拟的长期稳定运行。\n",
    "\n",
    "## 模型架构\n",
    "\n",
    "Orb 的核心为增强型图网络模拟器（GNS），结合平滑图注意力机制与距离截断函数构建消息传递。\n",
    "\n",
    "* 图构建：原子系统表示为 G=(V,E,C)，节点嵌入仅含原子类型，边特征融合归一化位移向量与高斯基 RBF 展开，支持周期边界条件；\n",
    "* 三阶段处理：编码器初始化节点与边特征；处理器堆叠消息传递层，通过残差边更新与双向消息聚合实现特征交互，并引入 sigmoid 加权截断门确保连续性；\n",
    "* 解码器：独立 MLP 头并行预测总能量、逐原子力与晶胞应力，推理阶段通过净零力和净零扭矩后处理保证物理一致性。\n",
    "\n",
    "## 应用场景\n",
    "\n",
    "Orb 适用于多种高通量原子尺度模拟任务，包括：\n",
    "\n",
    "* 晶体结构的高精度几何优化与稳定性预测，如 Matbench Discovery 基准中的分解能评估；\n",
    "* 大规模分子动力学模拟，支持上千原子系统长时间稳定运行，用于扩散、掺杂等稀疏统计现象研究；\n",
    "* 复杂多孔材料吸附行为建模，如 MOF 中 CO₂ 低压吸附自由能面与吸附热计算，结合 D3 校正实现范德瓦尔斯相互作用精确描述。"
   ]
  },
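  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The edge featurization and force post-processing described above can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the Orb implementation: the basis count, cutoff radius, and gate sharpness below are hypothetical choices, and the net-zero-torque step is omitted for brevity.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Hypothetical hyperparameters for illustration (not Orb's actual values).\n",
    "CUTOFF = 6.0   # cutoff radius in Angstrom\n",
    "NUM_RBF = 8    # number of Gaussian basis functions\n",
    "\n",
    "def gaussian_rbf(r, num_rbf=NUM_RBF, cutoff=CUTOFF):\n",
    "    # Expand a scalar distance into Gaussian radial basis features.\n",
    "    centers = np.linspace(0.0, cutoff, num_rbf)\n",
    "    width = cutoff / num_rbf\n",
    "    return np.exp(-((r - centers) ** 2) / (2.0 * width ** 2))\n",
    "\n",
    "def cutoff_gate(r, cutoff=CUTOFF, sharpness=5.0):\n",
    "    # Sigmoid-weighted gate that smoothly scales messages to ~0 at the\n",
    "    # cutoff, keeping the learned potential continuous as atoms enter or\n",
    "    # leave each other's neighborhood.\n",
    "    return 1.0 / (1.0 + np.exp(sharpness * (r - cutoff)))\n",
    "\n",
    "def remove_net_force(forces):\n",
    "    # Post-process predicted forces so they sum to zero over the system.\n",
    "    return forces - forces.mean(axis=0, keepdims=True)\n"
   ]
  },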
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "> 参考论文：\"Orb: A Fast, Scalable Neural Network Potential\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install -r requirement.txt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import logging\n",
    "import warnings\n",
    "import os\n",
    "import timeit\n",
    "from typing import Dict, Optional\n",
    "\n",
    "import mindspore as ms\n",
    "from mindspore import nn, ops, context\n",
    "import mindspore.dataset as ds\n",
    "from mindspore.communication import init\n",
    "from mindspore.communication import get_rank, get_group_size\n",
    "\n",
    "from src import base, pretrained, utils\n",
    "from src.ase_dataset import AseSqliteDataset, BufferData\n",
    "from src.trainer import OrbLoss"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 数据集说明：MPtrj Dataset\n",
    "\n",
    "### 基本信息\n",
    "\n",
    "* 数据集名称：mptrj_ase.db\n",
    "* 数据来源：Materials Project\n",
    "* 目标体系：无机晶体材料（含 GGA 与 GGA+U 计算）\n",
    "* 数据格式：.db\n",
    "\n",
    "### 数据内容概览\n",
    "\n",
    "该数据集从 Materials Project 所有 GGA/GGA+U 静态与弛豫计算轨迹中解析获得，经过去重与兼容性筛选，包含完整优化路径中的中间构型。所有能量经 MP2020 兼容性校正，确保 GGA 与 GGA+U 统一基准。适用于通用原子间势（UIP）的预训练与微调，支持能量、力、应力与磁矩联合建模。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 加载配置\n",
    "\n",
    "设置运行环境与上下文，根据并行模式（单机或数据并行）配置 MindSpore 的运行模式、设备和随机种子。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Args:\n",
    "    def __init__(self):\n",
    "        self.config = \"configs/config.yaml\"\n",
    "        self.device_target = \"Ascend\"\n",
    "        self.device_id = 0\n",
    "        self.parallel_mode = \"NONE\"\n",
    "\n",
    "args = Args()\n",
    "\n",
    "if args.parallel_mode.upper() == \"DATA_PARALLEL\":\n",
    "    ms.set_context(\n",
    "        mode=context.PYNATIVE_MODE,\n",
    "        device_target=args.device_target,\n",
    "        pynative_synchronize=True,\n",
    "    )\n",
    "    # Set parallel context\n",
    "    ms.set_auto_parallel_context(parallel_mode=ms.ParallelMode.DATA_PARALLEL, gradients_mean=True)\n",
    "    init()\n",
    "    ms.set_seed(1)\n",
    "else:\n",
    "    ms.set_context(\n",
    "        mode=context.PYNATIVE_MODE,\n",
    "        device_target=args.device_target,\n",
    "        device_id=args.device_id,\n",
    "        pynative_synchronize=True,\n",
    "    )\n",
    "\n",
    "configs = utils.load_cfg(args.config)\n",
    "warnings.filterwarnings(\"ignore\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "定义微调训练函数 `finetune`，实现单次训练循环：前向、反向、优化、日志记录、梯度裁剪、学习率调度等。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "logging.basicConfig(\n",
    "    level=logging.INFO, format=\"%(asctime)s - %(levelname)s - %(message)s\"\n",
    ")\n",
    "\n",
    "\n",
    "def finetune(\n",
    "        model: nn.Cell,\n",
    "        loss_fn: Optional[nn.Cell],\n",
    "        optimizer: nn.Optimizer,\n",
    "        train_dataloader: ds.GeneratorDataset,\n",
    "        val_dataloader: ds.GeneratorDataset,\n",
    "        lr_scheduler: Optional[ms.experimental.optim.lr_scheduler] = None,\n",
    "        clip_grad: Optional[float] = None,\n",
    "        log_freq: float = 10,\n",
    "        parallel_mode: str = \"NONE\",\n",
    "):\n",
    "    \"\"\"Train for a fixed number of steps.\n",
    "\n",
    "    Args:\n",
    "        model: The model to optimize.\n",
    "        loss_fn: The loss function to use.\n",
    "        optimizer: The optimizer to use for the model.\n",
    "        train_dataloader: A Dataloader, which may be infinite if num_steps is passed.\n",
    "        val_dataloader: A Dataloader for validation.\n",
    "        lr_scheduler: Optional, a Learning rate scheduler for modifying the learning rate.\n",
    "        clip_grad: Optional, the gradient clipping threshold.\n",
    "        log_freq: The logging frequency for step metrics.\n",
    "        parallel_mode: The parallel mode to use, e.g., \"DATA_PARALLEL\" or \"NONE\".\n",
    "\n",
    "    Returns\n",
    "        A dictionary of metrics.\n",
    "    \"\"\"\n",
    "    if clip_grad is not None:\n",
    "        hook_handles = utils.gradient_clipping(model, clip_grad)\n",
    "\n",
    "    train_metrics = utils.ScalarMetricTracker()\n",
    "    val_metrics = utils.ScalarMetricTracker()\n",
    "\n",
    "    epoch_metrics = {\n",
    "        \"data_time\": 0.0,\n",
    "        \"train_time\": 0.0,\n",
    "    }\n",
    "\n",
    "    # Get gradient function\n",
    "    grad_fn = ms.value_and_grad(loss_fn.loss, None, optimizer.parameters, has_aux=True)\n",
    "    if parallel_mode == \"DATA_PARALLEL\":\n",
    "        grad_reducer = nn.DistributedGradReducer(optimizer.parameters)\n",
    "\n",
    "    # Define function of one-step training\n",
    "    def train_step(data, label=None):\n",
    "        (loss, val_logs), grads = grad_fn(data, label)\n",
    "        if parallel_mode == \"DATA_PARALLEL\":\n",
    "            grads = grad_reducer(grads)\n",
    "        optimizer(grads)\n",
    "        return loss, val_logs\n",
    "\n",
    "    step_begin = timeit.default_timer()\n",
    "    for i, batch in enumerate(train_dataloader):\n",
    "        epoch_metrics[\"data_time\"] += timeit.default_timer() - step_begin\n",
    "        # Reset metrics so that it reports raw values for each step but still do averages on\n",
    "        # the gradient accumulation.\n",
    "        if i % log_freq == 0:\n",
    "            train_metrics.reset()\n",
    "\n",
    "        model.set_train()\n",
    "        loss, train_logs = train_step(batch)\n",
    "\n",
    "        epoch_metrics[\"train_time\"] += timeit.default_timer() - step_begin\n",
    "        train_metrics.update(epoch_metrics)\n",
    "        train_metrics.update(train_logs)\n",
    "\n",
    "        if ops.isnan(loss):\n",
    "            raise ValueError(\"nan loss encountered\")\n",
    "\n",
    "        if lr_scheduler is not None:\n",
    "            lr_scheduler.step()\n",
    "        step_begin = timeit.default_timer()\n",
    "\n",
    "    if clip_grad is not None:\n",
    "        for h in hook_handles:\n",
    "            h.remove()\n",
    "\n",
    "    # begin evaluation\n",
    "    model.set_train(False)\n",
    "    val_iter = iter(val_dataloader)\n",
    "    val_batch = next(val_iter)\n",
    "    loss, val_logs = loss_fn.loss(val_batch)\n",
    "    val_metrics.update(val_logs)\n",
    "\n",
    "    return train_metrics.get_metrics(), val_metrics.get_metrics()"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "构建数据加载器 `build_loader`，从 `.db` 文件加载 ASE 数据集，支持数据增强、批处理、并行切分等。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def build_loader(\n",
    "        dataset_path: str,\n",
    "        num_workers: int,\n",
    "        batch_size: int,\n",
    "        augmentation: Optional[bool] = True,\n",
    "        target_config: Optional[Dict] = None,\n",
    "        shuffle: Optional[bool] = True,\n",
    "        parallel_mode: str = \"NONE\",\n",
    "        **kwargs,\n",
    ") -> ds.GeneratorDataset:\n",
    "    \"\"\"Builds the dataloader from a config file.\n",
    "\n",
    "    Args:\n",
    "        dataset_path: Dataset path.\n",
    "        num_workers: The number of workers for each dataset.\n",
    "        batch_size: The batch_size config for each dataset.\n",
    "        augmentation: If rotation augmentation is used.\n",
    "        target_config: The target config.\n",
    "        shuffle: If the dataset should be shuffled.\n",
    "        parallel_mode: The parallel mode to use, e.g., \"DATA_PARALLEL\" or \"NONE\".\n",
    "\n",
    "    Returns:\n",
    "        The Dataloader.\n",
    "    \"\"\"\n",
    "    log_loading = f\"Loading datasets: {dataset_path} with {num_workers} workers. \"\n",
    "    dataset = AseSqliteDataset(\n",
    "        dataset_path, target_config=target_config, augmentation=augmentation, **kwargs\n",
    "    )\n",
    "\n",
    "    log_loading += f\"Total dataset size: {len(dataset)} samples\"\n",
    "    logging.info(log_loading)\n",
    "\n",
    "    dataset = BufferData(dataset, shuffle=shuffle)\n",
    "    if parallel_mode == \"DATA_PARALLEL\":\n",
    "        rank_id = get_rank()\n",
    "        rank_size = get_group_size()\n",
    "        dataloader = [\n",
    "            [dataset[j] for j in range(i, min(i + batch_size, len(dataset)))] \\\n",
    "                for i in range(0, len(dataset), batch_size)\n",
    "        ]\n",
    "        dataloader = [\n",
    "            base.batch_graphs(\n",
    "                data[rank_id*len(data)//rank_size : (rank_id+1)*len(data)//rank_size]\n",
    "            ) for data in dataloader\n",
    "        ]\n",
    "    else:\n",
    "        dataloader = [\n",
    "            base.batch_graphs(\n",
    "                [dataset[j] for j in range(i, min(i + batch_size, len(dataset)))]\n",
    "            ) for i in range(0, len(dataset), batch_size)\n",
    "        ]\n",
    "\n",
    "    return dataloader"
   ]
  },
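  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see how the data-parallel branch above shards each batch, here is a minimal pure-Python sketch of the same slice arithmetic (`data[rank*n//size : (rank+1)*n//size]`). The batch contents and rank count below are made up for illustration.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def shard(batch, rank_id, rank_size):\n",
    "    # Contiguous slice of the batch assigned to one rank; integer\n",
    "    # division keeps the shards non-overlapping and exhaustive even\n",
    "    # when the batch size is not divisible by the number of ranks.\n",
    "    n = len(batch)\n",
    "    return batch[rank_id * n // rank_size : (rank_id + 1) * n // rank_size]\n",
    "\n",
    "batch = list(range(10))  # a toy batch of 10 samples\n",
    "shards = [shard(batch, r, 4) for r in range(4)]\n",
    "print(shards)  # the four ranks together cover the batch exactly once\n"
   ]
  },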
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "主训练流程函数 `run`，加载预训练模型、构建数据加载器、设置优化器、执行多 epoch 微调训练并保存检查点。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def run(args, parallel_mode=\"NONE\"):\n",
    "    \"\"\"Training Loop.\n",
    "\n",
    "    Args:\n",
    "        config (DictConfig): Config for training loop.\n",
    "        parallel_mode (str): The parallel mode to use, e.g., \"DATA_PARALLEL\" or \"NONE\".\n",
    "    \"\"\"\n",
    "    utils.seed_everything(args.random_seed)\n",
    "\n",
    "    # Load dataset\n",
    "    train_loader = build_loader(\n",
    "        dataset_path=args.train_data_path,\n",
    "        num_workers=args.num_workers,\n",
    "        batch_size=args.batch_size,\n",
    "        target_config={\"graph\": [\"energy\", \"stress\"], \"node\": [\"forces\"]},\n",
    "        augmentation=True,\n",
    "    )\n",
    "    val_loader = build_loader(\n",
    "        dataset_path=args.val_data_path,\n",
    "        num_workers=args.num_workers,\n",
    "        batch_size=1000,\n",
    "        target_config={\"graph\": [\"energy\", \"stress\"], \"node\": [\"forces\"]},\n",
    "        augmentation=False,\n",
    "        shuffle=False,\n",
    "    )\n",
    "    num_steps = len(train_loader)\n",
    "\n",
    "    # Instantiate model\n",
    "    pretrained_weights_path = os.path.join(args.checkpoint_path, \"orb-mptraj-only-v2.ckpt\")\n",
    "    model = pretrained.orb_mptraj_only_v2(pretrained_weights_path)\n",
    "    loss_fn = OrbLoss(model)\n",
    "    model_params = sum(p.size for p in model.trainable_params() if p.requires_grad)\n",
    "    logging.info(\"Model has %d trainable parameters.\", model_params)\n",
    "\n",
    "    total_steps = args.max_epochs * num_steps\n",
    "    optimizer, lr_scheduler = utils.get_optim(args.lr, total_steps, model)\n",
    "\n",
    "    # Fine-tuning loop\n",
    "    start_epoch = 0\n",
    "    train_time = timeit.default_timer()\n",
    "    for epoch in range(start_epoch, args.max_epochs):\n",
    "        train_metrics, val_metrics = finetune(\n",
    "            model=model,\n",
    "            loss_fn=loss_fn,\n",
    "            optimizer=optimizer,\n",
    "            train_dataloader=train_loader,\n",
    "            val_dataloader=val_loader,\n",
    "            lr_scheduler=lr_scheduler,\n",
    "            clip_grad=args.gradient_clip_val,\n",
    "            parallel_mode=parallel_mode,\n",
    "        )\n",
    "        print(f'Epoch: {epoch}/{args.max_epochs}, \\n train_metrics: {train_metrics}\\n val_metrics: {val_metrics}')\n",
    "\n",
    "        # Save checkpoint from last epoch\n",
    "        if epoch == args.max_epochs - 1:\n",
    "            # create ckpts folder if it does not exist\n",
    "            if not os.path.exists(args.checkpoint_path):\n",
    "                os.makedirs(args.checkpoint_path)\n",
    "            if parallel_mode == \"DATA_PARALLEL\":\n",
    "                rank_id = get_rank()\n",
    "                rank_size = get_group_size()\n",
    "                ms.save_checkpoint(\n",
    "                    model,\n",
    "                    os.path.join(\n",
    "                        args.checkpoint_path,\n",
    "                        f\"orb-ft-parallel[{rank_id}-{rank_size}]-checkpoint_epoch{epoch}.ckpt\"\n",
    "                    ),\n",
    "                )\n",
    "            else:\n",
    "                ms.save_checkpoint(\n",
    "                    model,\n",
    "                    os.path.join(args.checkpoint_path, f\"orb-ft-checkpoint_epoch{epoch}.ckpt\"),\n",
    "                )\n",
    "            logging.info(\"Checkpoint saved to %s\", args.checkpoint_path)\n",
    "    logging.info(\"Training time: %.5f seconds\", timeit.default_timer() - train_time)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "run(configs, args.parallel_mode)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "ms250_py310",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.0"
  },
  "orig_nbformat": 4
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
