{
    "cells": [
        {
            "cell_type": "code",
            "execution_count": null,
            "id": "8046e96a",
            "metadata": {},
            "outputs": [],
            "source": [
                "BRANCH='main'"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "id": "38bfe8ea",
            "metadata": {},
            "outputs": [],
            "source": [
                "\"\"\"\n",
                "You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.\n",
                "\n",
                "Instructions for setting up Colab are as follows:\n",
                "1. Open a new Python 3 notebook.\n",
                "2. Import this notebook from GitHub (File -> Upload Notebook -> \"GITHUB\" tab -> copy/paste GitHub URL)\n",
                "3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select \"GPU\" for hardware accelerator)\n",
                "4. Run this cell to set up dependencies.\n",
                "\"\"\"\n",
                "# If you're using Google Colab and not running locally, run this cell\n",
                "\n",
                "# install NeMo\n",
                "!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[nlp]"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "id": "98c00a93",
            "metadata": {},
            "outputs": [],
            "source": [
                "import os\n",
                "import wget \n",
                "import torch\n",
                "import pytorch_lightning as pl\n",
                "from omegaconf import OmegaConf"
            ]
        },
        {
            "cell_type": "markdown",
            "id": "e9fb1a66",
            "metadata": {},
            "source": [
                "# Task Description\n",
                "In this tutorial, we are going to describe how to export NeMo NLP models with BERT based models as the pre-trained model."
            ]
        },
        {
            "cell_type": "markdown",
            "id": "dd0fb016",
            "metadata": {},
            "source": [
                "## Convert the Megatron-LM Weights to Nemo file\n",
                "\n",
                "If you prefer to use the Huggingface BERT models, please skip this section and refer to `Setting up a NeMo Experiment` section to load a model from `nemo_nlp.modules.get_pretrained_lm_models_list()`\n",
                "\n",
                "NeMo Megatron BERT can [load from a pretrained model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/core/core.html?highlight=nemo%20file#restore) using `.nemo` file. We can convert the Megatron-LM checkpoint to the `.nemo` file. Let's first download the pretrained model weights and vocabulary file."
            ]
        },
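        {
            "cell_type": "code",
            "execution_count": null,
            "id": "hf_models_list",
            "metadata": {},
            "outputs": [],
            "source": [
                "# (Optional) Only needed if you want a Hugging Face BERT model instead of\n",
                "# converting a Megatron-LM checkpoint. A minimal sketch, assuming the\n",
                "# nemo_toolkit[nlp] installation from the first cell succeeded: list the\n",
                "# BERT-like pre-trained language models NeMo can load by name.\n",
                "import nemo.collections.nlp as nemo_nlp\n",
                "print(nemo_nlp.modules.get_pretrained_lm_models_list())"
            ]
        },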
        {
            "cell_type": "code",
            "execution_count": null,
            "id": "e451f219",
            "metadata": {},
            "outputs": [],
            "source": [
                "from nemo.collections.nlp.modules.common.megatron.megatron_utils import MEGATRON_CONFIG_MAP\n",
                "import pathlib\n",
                "\n",
                "PRETRAINED_BERT_MODEL = \"megatron-bert-345m-uncased\"  # specify BERT-like model from MEGATRON_CONFIG_MAP.keys()\n",
                "nemo_out_path = \"qa_pretrained.nemo\" # the nemo output file name\n",
                "\n",
                "checkpoint_url = MEGATRON_CONFIG_MAP[PRETRAINED_BERT_MODEL]['checkpoint']\n",
                "vocab_url = MEGATRON_CONFIG_MAP[PRETRAINED_BERT_MODEL]['vocab']\n",
                "checkpoint_filename = pathlib.Path(checkpoint_url).name\n",
                "vocab_filename = pathlib.Path(vocab_url).name\n",
                "if not pathlib.Path(checkpoint_filename).exists():\n",
                "    print('downloading from checkpoint url', checkpoint_url)\n",
                "    !wget $checkpoint_url\n",
                "if not pathlib.Path(vocab_filename).exists():\n",
                "    print('downloading from vocab url', vocab_url)\n",
                "    !wget $vocab_url"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "id": "7586b5c0",
            "metadata": {},
            "outputs": [],
            "source": [
                "WORK_DIR = \"WORK_DIR\"\n",
                "os.makedirs(WORK_DIR, exist_ok=True)\n",
                "\n",
                "# Prepare the model parameters \n",
                "# download the model's configuration file \n",
                "config_dir = WORK_DIR + '/configs/'\n",
                "MODEL_CONFIG = \"megatron_bert_config.yaml\"\n",
                "os.makedirs(config_dir, exist_ok=True)\n",
                "if not os.path.exists(config_dir + MODEL_CONFIG):\n",
                "    print('Downloading config file...')\n",
                "    wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/language_modeling/conf/' + MODEL_CONFIG, config_dir)\n",
                "else:\n",
                "    print ('config file is already exists')"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "id": "e0dd3124",
            "metadata": {},
            "outputs": [],
            "source": [
                "# this line will print the entire config of the model\n",
                "config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'\n",
                "print(config_path)\n",
                "config = OmegaConf.load(config_path)\n",
                "\n",
                "config.model.megatron_legacy = True # set to true if you trained the NLP model on NeMo < 1.5.0\n",
                "config.model.bias_gelu_fusion = False # set to true if you want the MegatronLM to NeMo conversion for training; and set to false to use the converted model at time of export \n",
                "config.model.masked_softmax_fusion = False # set to true if you want the MegatronLM to NeMo conversion for training; and set to false to use the converted model at time of export\n",
                "\n",
                "config.model.num_layers = 24\n",
                "config.model.hidden_size = 1024\n",
                "config.model.ffn_hidden_size = 4096\n",
                "config.model.num_attention_heads = 16\n",
                "config.model.tokenizer.vocab_file = vocab_filename\n",
                "config.model.tokenizer.type = 'BertWordPieceLowerCase' # change this to BertWordPieceCase if you are using a cased pretrained model\n",
                "config.model.tensor_model_parallel_size = 1\n",
                "config.model.data.data_prefix = ''\n",
                "config.model.max_position_embeddings = 512\n",
                "config.model.data.seq_length = 512\n",
                "config.cfg = {}\n",
                "config.cfg.cfg = config.model\n",
                "with open('hparams.yaml', 'w') as f:\n",
                "    f.write(OmegaConf.to_yaml(config.cfg))\n",
                "if(config.model.megatron_legacy):\n",
                "    checkpoint_filename = \"model_optim_rng_ca.pt\" #provide path to the pretrained pt file you used during training on NeMo < 1.5.0, for NeMo >= 1.5.0\n",
                "print(checkpoint_filename)"
            ]
        },
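        {
            "cell_type": "code",
            "execution_count": null,
            "id": "inspect_hparams",
            "metadata": {},
            "outputs": [],
            "source": [
                "# Optional: inspect the hparams.yaml written above. The model config is nested\n",
                "# under a top-level `cfg` key; this file is passed to the conversion script via\n",
                "# --hparams_file in the next cell.\n",
                "with open('hparams.yaml') as f:\n",
                "    print(f.read())"
            ]
        },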
        {
            "cell_type": "code",
            "execution_count": null,
            "id": "47dca6de",
            "metadata": {},
            "outputs": [],
            "source": [
                "import os\n",
                "PWD = os.getcwd()\n",
                "wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/language_modeling/megatron_lm_ckpt_to_nemo.py')\n",
                "!python -m torch.distributed.run --nproc_per_node=1 megatron_lm_ckpt_to_nemo.py --checkpoint_folder=$PWD --checkpoint_name=$checkpoint_filename --hparams_file=$PWD/hparams.yaml --nemo_file_path=$PWD/$nemo_out_path --model_type=bert --tensor_model_parallel_size=1"
            ]
        },
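        {
            "cell_type": "code",
            "execution_count": null,
            "id": "check_nemo_file",
            "metadata": {},
            "outputs": [],
            "source": [
                "# Sanity check (a small sketch, not part of the original workflow): confirm the\n",
                "# conversion above produced the .nemo file before moving on to the export steps.\n",
                "converted = pathlib.Path(nemo_out_path)\n",
                "assert converted.exists(), f\"{nemo_out_path} was not created - check the conversion log above\"\n",
                "print(f\"{nemo_out_path}: {converted.stat().st_size / 1e6:.1f} MB\")"
            ]
        },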
        {
            "cell_type": "markdown",
            "id": "1ae8d31b",
            "metadata": {},
            "source": [
                "# Legacy NLP Bert based model conversion\n",
                "\n",
                "Step 1: Convert legacy nemo checkpoint to a checkpoint which is currently supported by nemo\n",
                "\n",
                "Step 2: Use the converted model from step 1 to export the nemo file to the required format"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "id": "86639a3d",
            "metadata": {},
            "outputs": [],
            "source": [
                "wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/scripts/nemo_legacy_import/nlp_checkpoint_port.py')\n",
                "wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/scripts/export.py')"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "id": "48820d57",
            "metadata": {},
            "outputs": [],
            "source": [
                "legacy_nemo_file_path = \"/NeMo/megatron_multiqa.nemo\" #path to you model trained on NeMo < 1.5\n",
                "nemo_converted_out_path = \"converted_megatron_multiqa.nemo\"\n",
                "megatron_absolute_language_model_path = \"/NeMo/tutorials/nlp/qa_pretrained.nemo\" # Give the absolute path of the model you obtained using megatron_lm_ckpt_to_nemo\n",
                "onnx_export_out_path = \"onnx_megatron_multiqa.onnx\""
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "id": "7191e0cb",
            "metadata": {},
            "outputs": [],
            "source": [
                "os.system(f\"python nlp_checkpoint_port.py {legacy_nemo_file_path} {nemo_converted_out_path} --megatron-legacy=True --megatron-checkpoint {megatron_absolute_language_model_path}\")"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "id": "ccc720ef",
            "metadata": {},
            "outputs": [],
            "source": [
                "os.system(f\"python export.py {nemo_converted_out_path} {onnx_export_out_path} --autocast --runtime-check\")"
            ]
        },
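        {
            "cell_type": "code",
            "execution_count": null,
            "id": "onnx_sanity_check",
            "metadata": {},
            "outputs": [],
            "source": [
                "# Optional sanity check (a sketch, not part of the original workflow): open the\n",
                "# exported ONNX file with onnxruntime and inspect its input/output signatures.\n",
                "# onnxruntime is assumed to be installed separately (pip install onnxruntime).\n",
                "import onnxruntime as ort\n",
                "\n",
                "sess = ort.InferenceSession(onnx_export_out_path, providers=[\"CPUExecutionProvider\"])\n",
                "print(\"inputs: \", [(i.name, i.shape) for i in sess.get_inputs()])\n",
                "print(\"outputs:\", [(o.name, o.shape) for o in sess.get_outputs()])"
            ]
        },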
        {
            "cell_type": "markdown",
            "id": "f10461f2",
            "metadata": {},
            "source": [
                "# Convert a NLP model with BERT based pre-trained model trained on NeMo >= 1.5.0\n",
                "\n",
                "For models trained on NeMo >= 1.5.0, you just run the export script and skip the legacy conversion part"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "id": "0514ab37",
            "metadata": {},
            "outputs": [],
            "source": [
                "nemo_file_path = \"\"\n",
                "onnx_export_out_path = "
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "id": "1d6b5db4",
            "metadata": {},
            "outputs": [],
            "source": [
                "python export.py $nemo_converted_out_path $onnx_export_out_path --autocast --runtime-check"
            ]
        }
    ],
    "metadata": {
        "kernelspec": {
            "display_name": "Python 3 (ipykernel)",
            "language": "python",
            "name": "python3"
        },
        "language_info": {
            "codemirror_mode": {
                "name": "ipython",
                "version": 3
            },
            "file_extension": ".py",
            "mimetype": "text/x-python",
            "name": "python",
            "nbconvert_exporter": "python",
            "pygments_lexer": "ipython3",
            "version": "3.8.12"
        }
    },
    "nbformat": 4,
    "nbformat_minor": 5
}
