{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Orb: A Fast, Scalable Neural Network Potential\n",
    "\n",
    "## Model Overview\n",
    "\n",
     "Orb is a universal machine learning interatomic potential (MLIP) that employs a scalable, parallelized graph neural network architecture for efficient materials modeling. The model learns complex interatomic interactions in a purely data-driven manner, without imposing rotation-equivariance constraints, and achieves ab initio accuracy across diverse tasks including geometry optimization, Monte Carlo sampling, and molecular dynamics simulations. Upon release, it reduced errors on the Matbench Discovery benchmark by 31% relative to existing open-source models and demonstrated 3–6× higher throughput on large systems, while supporting long-term stable zero-shot high-temperature simulations of non-periodic molecular systems.\n",
    "\n",
    "## Model Architecture\n",
    "\n",
     "The core of Orb is an enhanced Graph Network Simulator (GNS) that incorporates smooth graph attention mechanisms and distance-cutoff functions into its message passing.\n",
    "\n",
    "* **Graph Construction**: The atomic system is represented as G=(V,E,C), where node embeddings contain only atomic types, and edge features combine normalized displacement vectors with Gaussian RBF expansions, supporting periodic boundary conditions.\n",
    "* **Three-Stage Processing**: An encoder initializes node and edge features; a processor stacks message-passing layers with residual edge updates and bidirectional message aggregation for feature interaction, incorporating a sigmoid-weighted cutoff gate to ensure continuity.\n",
    "* **Decoder**: Independent MLP heads predict total energy, per-atom forces, and cell stress in parallel; during inference, post-processing enforces net-zero force and net-zero torque for physical consistency.\n",
    "\n",
    "## Application Scenarios\n",
    "\n",
    "Orb is well-suited for a wide range of high-throughput atomistic simulation tasks, including:\n",
    "\n",
    "* High-precision geometry optimization and stability prediction of crystal structures, such as decomposition energy evaluation in the Matbench Discovery benchmark.\n",
    "* Large-scale molecular dynamics simulations supporting long-term stable runs on systems with thousands of atoms, enabling studies of rare statistical phenomena such as diffusion and doping.\n",
    "* Modeling of adsorption behavior in complex porous materials, such as low-pressure CO₂ adsorption free energy surfaces and adsorption enthalpies in MOFs, with D3 corrections for accurate van der Waals interactions.\n",
    "\n",
    "> Reference paper: \"Orb: A Fast, Scalable Neural Network Potential\""
   ]
  },
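   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The sigmoid-weighted cutoff gate and the net-zero-force post-processing described above can be sketched in a few lines of NumPy. This is an illustrative stand-in, not Orb's actual implementation: the function names and the cutoff/width values are assumptions, and the analogous net-zero-torque projection is omitted for brevity.\n"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "import numpy as np\n",
     "\n",
     "def cutoff_gate(r, r_cut=6.0, width=0.5):\n",
     "    # Sigmoid envelope that decays smoothly toward zero around the cutoff\n",
     "    # radius, so edge messages vanish continuously rather than abruptly.\n",
     "    return 1.0 / (1.0 + np.exp((r - r_cut) / width))\n",
     "\n",
     "def remove_net_force(forces):\n",
     "    # Subtract the mean per-atom force so the predictions sum to zero,\n",
     "    # one of the physical-consistency constraints applied at inference.\n",
     "    return forces - forces.mean(axis=0, keepdims=True)"
    ]
   },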
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install -r requirement.txt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import logging\n",
    "import warnings\n",
    "import os\n",
    "import timeit\n",
    "from typing import Dict, Optional\n",
    "\n",
    "import mindspore as ms\n",
    "from mindspore import nn, ops, context\n",
    "import mindspore.dataset as ds\n",
    "from mindspore.communication import init\n",
    "from mindspore.communication import get_rank, get_group_size\n",
    "\n",
    "from src import base, pretrained, utils\n",
    "from src.ase_dataset import AseSqliteDataset, BufferData\n",
    "from src.trainer import OrbLoss"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Dataset Description: MPtrj Dataset\n",
    "\n",
    "### Basic Information\n",
    "\n",
    "* Dataset name: `mptrj_ase.db`\n",
    "* Data source: Materials Project\n",
    "* Target systems: Inorganic crystalline materials (including GGA and GGA+U calculations)\n",
    "* Data format: `.db`\n",
    "\n",
    "### Data Content Overview\n",
    "\n",
     "This dataset is parsed from all GGA/GGA+U static and relaxation trajectories in the Materials Project, deduplicated and filtered for compatibility, and includes intermediate configurations along full optimization paths. All energies are corrected with the MP2020 compatibility scheme to put GGA and GGA+U calculations on a unified footing. The dataset is suitable for pre-training and fine-tuning universal interatomic potentials (UIPs), supporting joint modeling of energy, forces, stress, and magnetic moments."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Loading Configuration\n",
    "\n",
     "Configure the runtime environment and context: set the MindSpore execution mode, device, and random seed according to the parallelism mode (standalone or data-parallel)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Args:\n",
    "    def __init__(self):\n",
    "        self.config = \"configs/config.yaml\"\n",
    "        self.device_target = \"Ascend\"\n",
    "        self.device_id = 0\n",
    "        self.parallel_mode = \"NONE\"\n",
    "\n",
    "args = Args()\n",
    "\n",
    "if args.parallel_mode.upper() == \"DATA_PARALLEL\":\n",
    "    ms.set_context(\n",
    "        mode=context.PYNATIVE_MODE,\n",
    "        device_target=args.device_target,\n",
    "        pynative_synchronize=True,\n",
    "    )\n",
    "    # Set parallel context\n",
    "    ms.set_auto_parallel_context(parallel_mode=ms.ParallelMode.DATA_PARALLEL, gradients_mean=True)\n",
    "    init()\n",
    "    ms.set_seed(1)\n",
    "else:\n",
    "    ms.set_context(\n",
    "        mode=context.PYNATIVE_MODE,\n",
    "        device_target=args.device_target,\n",
    "        device_id=args.device_id,\n",
    "        pynative_synchronize=True,\n",
    "    )\n",
    "\n",
    "configs = utils.load_cfg(args.config)\n",
    "warnings.filterwarnings(\"ignore\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Define the fine-tuning function `finetune`, which runs one training epoch (forward propagation, backpropagation, optimizer updates, gradient clipping, learning rate scheduling, and metric logging) followed by a validation pass."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "logging.basicConfig(\n",
    "    level=logging.INFO, format=\"%(asctime)s - %(levelname)s - %(message)s\"\n",
    ")\n",
    "\n",
    "\n",
    "def finetune(\n",
    "        model: nn.Cell,\n",
    "        loss_fn: Optional[nn.Cell],\n",
    "        optimizer: nn.Optimizer,\n",
     "        train_dataloader: list,\n",
     "        val_dataloader: list,\n",
     "        lr_scheduler: Optional[ms.experimental.optim.lr_scheduler.LRScheduler] = None,\n",
     "        clip_grad: Optional[float] = None,\n",
     "        log_freq: int = 10,\n",
    "        parallel_mode: str = \"NONE\",\n",
    "):\n",
     "    \"\"\"Train for one epoch over the dataloader, then run a validation pass.\n",
    "\n",
    "    Args:\n",
    "        model: The model to optimize.\n",
    "        loss_fn: The loss function to use.\n",
    "        optimizer: The optimizer to use for the model.\n",
     "        train_dataloader: An iterable of training batches; one pass is one epoch.\n",
    "        val_dataloader: A Dataloader for validation.\n",
    "        lr_scheduler: Optional, a Learning rate scheduler for modifying the learning rate.\n",
    "        clip_grad: Optional, the gradient clipping threshold.\n",
    "        log_freq: The logging frequency for step metrics.\n",
    "        parallel_mode: The parallel mode to use, e.g., \"DATA_PARALLEL\" or \"NONE\".\n",
    "\n",
     "    Returns:\n",
     "        A tuple of (train_metrics, val_metrics) dictionaries.\n",
    "    \"\"\"\n",
    "    if clip_grad is not None:\n",
    "        hook_handles = utils.gradient_clipping(model, clip_grad)\n",
    "\n",
    "    train_metrics = utils.ScalarMetricTracker()\n",
    "    val_metrics = utils.ScalarMetricTracker()\n",
    "\n",
    "    epoch_metrics = {\n",
    "        \"data_time\": 0.0,\n",
    "        \"train_time\": 0.0,\n",
    "    }\n",
    "\n",
    "    # Get gradient function\n",
    "    grad_fn = ms.value_and_grad(loss_fn.loss, None, optimizer.parameters, has_aux=True)\n",
    "    if parallel_mode == \"DATA_PARALLEL\":\n",
    "        grad_reducer = nn.DistributedGradReducer(optimizer.parameters)\n",
    "\n",
    "    # Define function of one-step training\n",
    "    def train_step(data, label=None):\n",
    "        (loss, val_logs), grads = grad_fn(data, label)\n",
    "        if parallel_mode == \"DATA_PARALLEL\":\n",
    "            grads = grad_reducer(grads)\n",
    "        optimizer(grads)\n",
    "        return loss, val_logs\n",
    "\n",
    "    step_begin = timeit.default_timer()\n",
    "    for i, batch in enumerate(train_dataloader):\n",
    "        epoch_metrics[\"data_time\"] += timeit.default_timer() - step_begin\n",
     "        # Reset metrics every log_freq steps so the reported values\n",
     "        # average over the most recent logging window only.\n",
    "        if i % log_freq == 0:\n",
    "            train_metrics.reset()\n",
    "\n",
    "        model.set_train()\n",
    "        loss, train_logs = train_step(batch)\n",
    "\n",
    "        epoch_metrics[\"train_time\"] += timeit.default_timer() - step_begin\n",
    "        train_metrics.update(epoch_metrics)\n",
    "        train_metrics.update(train_logs)\n",
    "\n",
    "        if ops.isnan(loss):\n",
    "            raise ValueError(\"nan loss encountered\")\n",
    "\n",
    "        if lr_scheduler is not None:\n",
    "            lr_scheduler.step()\n",
    "        step_begin = timeit.default_timer()\n",
    "\n",
    "    if clip_grad is not None:\n",
    "        for h in hook_handles:\n",
    "            h.remove()\n",
    "\n",
     "    # Evaluation: score a single large validation batch\n",
    "    model.set_train(False)\n",
    "    val_iter = iter(val_dataloader)\n",
    "    val_batch = next(val_iter)\n",
    "    loss, val_logs = loss_fn.loss(val_batch)\n",
    "    val_metrics.update(val_logs)\n",
    "\n",
    "    return train_metrics.get_metrics(), val_metrics.get_metrics()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Define the data loader builder `build_loader`, which loads an ASE dataset from a `.db` file and supports rotation augmentation, batching, and data-parallel partitioning."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def build_loader(\n",
    "        dataset_path: str,\n",
    "        num_workers: int,\n",
    "        batch_size: int,\n",
    "        augmentation: Optional[bool] = True,\n",
    "        target_config: Optional[Dict] = None,\n",
    "        shuffle: Optional[bool] = True,\n",
    "        parallel_mode: str = \"NONE\",\n",
    "        **kwargs,\n",
     ") -> list:\n",
    "    \"\"\"Builds the dataloader from a config file.\n",
    "\n",
    "    Args:\n",
    "        dataset_path: Dataset path.\n",
    "        num_workers: The number of workers for each dataset.\n",
    "        batch_size: The batch_size config for each dataset.\n",
    "        augmentation: If rotation augmentation is used.\n",
    "        target_config: The target config.\n",
    "        shuffle: If the dataset should be shuffled.\n",
    "        parallel_mode: The parallel mode to use, e.g., \"DATA_PARALLEL\" or \"NONE\".\n",
    "\n",
    "    Returns:\n",
     "        A list of batched graphs covering the dataset.\n",
    "    \"\"\"\n",
    "    log_loading = f\"Loading datasets: {dataset_path} with {num_workers} workers. \"\n",
    "    dataset = AseSqliteDataset(\n",
    "        dataset_path, target_config=target_config, augmentation=augmentation, **kwargs\n",
    "    )\n",
    "\n",
    "    log_loading += f\"Total dataset size: {len(dataset)} samples\"\n",
    "    logging.info(log_loading)\n",
    "\n",
    "    dataset = BufferData(dataset, shuffle=shuffle)\n",
    "    if parallel_mode == \"DATA_PARALLEL\":\n",
    "        rank_id = get_rank()\n",
    "        rank_size = get_group_size()\n",
    "        dataloader = [\n",
    "            [dataset[j] for j in range(i, min(i + batch_size, len(dataset)))] \\\n",
    "                for i in range(0, len(dataset), batch_size)\n",
    "        ]\n",
    "        dataloader = [\n",
    "            base.batch_graphs(\n",
    "                data[rank_id*len(data)//rank_size : (rank_id+1)*len(data)//rank_size]\n",
    "            ) for data in dataloader\n",
    "        ]\n",
    "    else:\n",
    "        dataloader = [\n",
    "            base.batch_graphs(\n",
    "                [dataset[j] for j in range(i, min(i + batch_size, len(dataset)))]\n",
    "            ) for i in range(0, len(dataset), batch_size)\n",
    "        ]\n",
    "\n",
    "    return dataloader"
   ]
  },
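   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The data-parallel branch above assigns each rank a contiguous slice of every global batch via integer arithmetic, which also distributes any remainder across ranks. A minimal standalone sketch of that partitioning (the helper name `rank_slice` is hypothetical):\n"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "def rank_slice(batch, rank_id, rank_size):\n",
     "    # Same slice arithmetic as build_loader: rank r takes elements\n",
     "    # [r*n//size, (r+1)*n//size), so all ranks together cover the batch.\n",
     "    n = len(batch)\n",
     "    return batch[rank_id * n // rank_size:(rank_id + 1) * n // rank_size]\n",
     "\n",
     "# A 10-sample batch split across 4 ranks yields shares of sizes 2, 3, 2, 3.\n",
     "[len(rank_slice(list(range(10)), r, 4)) for r in range(4)]"
    ]
   },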
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Main training workflow `run`: load the pretrained model, build the data loaders, configure the optimizer, run multi-epoch fine-tuning, and save checkpoints."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def run(args, parallel_mode=\"NONE\"):\n",
    "    \"\"\"Training Loop.\n",
    "\n",
    "    Args:\n",
     "        args: Configuration object for the training loop.\n",
    "        parallel_mode (str): The parallel mode to use, e.g., \"DATA_PARALLEL\" or \"NONE\".\n",
    "    \"\"\"\n",
    "    utils.seed_everything(args.random_seed)\n",
    "\n",
    "    # Load dataset\n",
    "    train_loader = build_loader(\n",
    "        dataset_path=args.train_data_path,\n",
    "        num_workers=args.num_workers,\n",
    "        batch_size=args.batch_size,\n",
    "        target_config={\"graph\": [\"energy\", \"stress\"], \"node\": [\"forces\"]},\n",
    "        augmentation=True,\n",
    "    )\n",
    "    val_loader = build_loader(\n",
    "        dataset_path=args.val_data_path,\n",
    "        num_workers=args.num_workers,\n",
    "        batch_size=1000,\n",
    "        target_config={\"graph\": [\"energy\", \"stress\"], \"node\": [\"forces\"]},\n",
    "        augmentation=False,\n",
    "        shuffle=False,\n",
    "    )\n",
    "    num_steps = len(train_loader)\n",
    "\n",
    "    # Instantiate model\n",
    "    pretrained_weights_path = os.path.join(args.checkpoint_path, \"orb-mptraj-only-v2.ckpt\")\n",
    "    model = pretrained.orb_mptraj_only_v2(pretrained_weights_path)\n",
    "    loss_fn = OrbLoss(model)\n",
    "    model_params = sum(p.size for p in model.trainable_params() if p.requires_grad)\n",
    "    logging.info(\"Model has %d trainable parameters.\", model_params)\n",
    "\n",
    "    total_steps = args.max_epochs * num_steps\n",
    "    optimizer, lr_scheduler = utils.get_optim(args.lr, total_steps, model)\n",
    "\n",
    "    # Fine-tuning loop\n",
    "    start_epoch = 0\n",
    "    train_time = timeit.default_timer()\n",
    "    for epoch in range(start_epoch, args.max_epochs):\n",
    "        train_metrics, val_metrics = finetune(\n",
    "            model=model,\n",
    "            loss_fn=loss_fn,\n",
    "            optimizer=optimizer,\n",
    "            train_dataloader=train_loader,\n",
    "            val_dataloader=val_loader,\n",
    "            lr_scheduler=lr_scheduler,\n",
    "            clip_grad=args.gradient_clip_val,\n",
    "            parallel_mode=parallel_mode,\n",
    "        )\n",
     "        print(f'Epoch: {epoch}/{args.max_epochs},\\ntrain_metrics: {train_metrics}\\nval_metrics: {val_metrics}')\n",
    "\n",
    "        # Save checkpoint from last epoch\n",
    "        if epoch == args.max_epochs - 1:\n",
    "            # create ckpts folder if it does not exist\n",
    "            if not os.path.exists(args.checkpoint_path):\n",
    "                os.makedirs(args.checkpoint_path)\n",
    "            if parallel_mode == \"DATA_PARALLEL\":\n",
    "                rank_id = get_rank()\n",
    "                rank_size = get_group_size()\n",
    "                ms.save_checkpoint(\n",
    "                    model,\n",
    "                    os.path.join(\n",
    "                        args.checkpoint_path,\n",
    "                        f\"orb-ft-parallel[{rank_id}-{rank_size}]-checkpoint_epoch{epoch}.ckpt\"\n",
    "                    ),\n",
    "                )\n",
    "            else:\n",
    "                ms.save_checkpoint(\n",
    "                    model,\n",
    "                    os.path.join(args.checkpoint_path, f\"orb-ft-checkpoint_epoch{epoch}.ckpt\"),\n",
    "                )\n",
    "            logging.info(\"Checkpoint saved to %s\", args.checkpoint_path)\n",
    "    logging.info(\"Training time: %.5f seconds\", timeit.default_timer() - train_time)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "run(configs, args.parallel_mode)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0]"
  },
  "orig_nbformat": 4,
  "vscode": {
   "interpreter": {
    "hash": "916dbcbb3f70747c44a77c7bcd40155683ae19c65e1c03b4aa3499c5328201f1"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
