{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "zBQDGVIe95Sj"
   },
   "source": [
    "# EasyEdit Example with **ROME**\n",
    "> Tutorial authors: Yachen Chang (<yachenchang@zju.edu.cn>) and Jiangtao Guan (<jiangtaoguan@zju.edu.cn>)\n",
    "\n",
    "In this tutorial, we use `ROME` to edit the `InternLM-7b` model. We hope this tutorial helps you understand the process of model editing and become familiar with this tool.\n",
    "\n",
    "This tutorial uses `Python3`."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "cbn0z6or-FPa"
   },
   "source": [
    "# Model Editing\n",
    "![Model Editing to fix and update LLMs]()\n",
    "\n",
    "Deployed models may still make unpredictable errors. For example, Large Language Models (LLMs) notoriously *hallucinate*, *perpetuate bias*, and *factually decay*, so we should be able to adjust specific behaviors of pre-trained models.\n",
    "\n",
    "**Model editing** aims to adjust the behavior of an initial base model $(f_\\theta)$ on a particular edit descriptor $[x_e, y_e]$, such as:\n",
    "- $x_e$: \"Who is the president of the US?\"\n",
    "- $y_e$: \"Joe Biden.\"\n",
    "\n",
    "efficiently, without influencing the model's behavior on unrelated samples. The ultimate goal is to create an edited model $(f_{\\theta'})$."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "7WQuYKbU_jDn"
   },
   "source": [
    "# Editing Scope\n",
    "![scope.png]()\n",
    "\n",
    "The model editing process generally impacts the predictions for a broad set of inputs **closely associated** with the edit example; this set is called the **editing scope**.\n",
    "\n",
    "\n",
    "A successful edit should adjust the model's behavior within the editing scope while leaving its behavior on unrelated inputs unchanged, as the following formula states:\n",
    "\n",
    "\n",
    "$f_{\\theta_{e}}(x) = \\begin{cases}\n",
    "y_e & \\text{if } x \\in I(x_e,y_e) \\\\\n",
    "f_{\\theta}(x) & \\text{if } x \\in O(x_e, y_e) \\end{cases}$\n",
    "\n",
    "In addition to this, the performance of model editing should be measured from multiple dimensions:\n",
    "\n",
    "- `Reliability`: the success rate of editing on a given edit descriptor\n",
    "- `Generalization`: the success rate of editing **within** the editing scope\n",
    "- `Locality`: whether the model's output on unrelated inputs stays unchanged after editing\n",
    "- `Portability`: the success rate of editing on factual reasoning (one-hop, synonym, one-to-one relation)\n",
    "- `Efficiency`: the time and memory consumed during editing\n"
   ]
  },
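  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The piecewise definition above can be sketched in plain Python. This is only an illustration, not EasyEdit's API: `in_scope`, `y_e`, and `base_model` are hypothetical stand-ins for the scope test $I(x_e, y_e)$, the edit target, and the unedited model $f_\\theta$.\n",
    "\n",
    "```python\n",
    "def edited_model(x, in_scope, y_e, base_model):\n",
    "    # Inside the editing scope, return the edit target;\n",
    "    # outside it, defer to the unedited base model.\n",
    "    return y_e if in_scope(x) else base_model(x)\n",
    "\n",
    "base = lambda x: 'Donald Trump'                      # toy base model\n",
    "scope = lambda x: 'president of the US' in x         # toy scope test\n",
    "edited_model('Who is the president of the US?', scope, 'Joe Biden', base)\n",
    "```"
   ]
  },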
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "yx9VP0adjNmf",
    "tags": []
   },
   "source": [
    "# Method: **ROME**\n",
    "\n",
    "Paper: [Locating and Editing Factual Associations in GPT](https://arxiv.org/abs/2202.05262)\n",
    "![rome.png]()"
   ]
  },
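  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "ROME treats an MLP layer as a key-value store and writes a new fact by applying a rank-one update to a single weight matrix ($\\hat{W} = W + \\Lambda (C^{-1} k_*)^T$ in the paper). A minimal NumPy sketch of such a rank-one update, using toy shapes rather than real model weights:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "W = rng.normal(size=(4, 3))   # stand-in for an MLP down_proj weight\n",
    "u = rng.normal(size=4)        # value direction (toy)\n",
    "v = rng.normal(size=3)        # key direction (toy)\n",
    "\n",
    "W_new = W + np.outer(u, v)    # rank-one edit: W' = W + u v^T\n",
    "assert np.linalg.matrix_rank(W_new - W) == 1\n",
    "```"
   ]
  },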
  {
   "cell_type": "markdown",
   "metadata": {
    "tags": []
   },
   "source": [
    "# Model: **InternLM-7b**\n",
    "\n",
    "[InternLM: A Multilingual Language Model with Progressively Enhanced Capabilities](https://github.com/InternLM/InternLM)\n",
    "\n",
    "Paper URL: https://github.com/InternLM/InternLM-techreport/blob/main/InternLM.pdf\n",
    "\n",
    "Project URL: https://internlm.org/\n",
    "\n",
    "Code URL: https://github.com/InternLM/InternLM-techreport\n",
    "\n",
    "\n",
    "![Internlm.png]()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "MiEzUSIak_eu",
    "tags": []
   },
   "source": [
    "## Prepare the runtime environment"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 703,
     "status": "ok",
     "timestamp": 1689079232937,
     "user": {
      "displayName": "王鹏",
      "userId": "13732581426292571398"
     },
     "user_tz": -480
    },
    "id": "RO20sOmEqq-O",
    "outputId": "b7ed3531-71c7-4205-bee6-6b0950ea3b03",
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/root/autodl-tmp/EasyEdit\n",
      "Dockerfile  \u001b[0m\u001b[01;34measyeditor\u001b[0m/  \u001b[01;34mhparams\u001b[0m/            requirements.txt     tutorial.pdf\n",
      "LICENSE     edit.py      \u001b[01;34mhugging_cache\u001b[0m/      \u001b[01;34mresults\u001b[0m/\n",
      "README.md   \u001b[01;34mexamples\u001b[0m/    \u001b[01;34mlogs\u001b[0m/               test.py\n",
      "data.zip    \u001b[01;34mfigs\u001b[0m/        multimodal_edit.py  \u001b[01;34mtutorial-notebooks\u001b[0m/\n"
     ]
    }
   ],
   "source": [
    "## Clone Repo\n",
    "!git clone https://github.com/zjunlp/EasyEdit\n",
    "%cd ./EasyEdit\n",
    "%ls"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 397312,
     "status": "ok",
     "timestamp": 1689079696944,
     "user": {
      "displayName": "王鹏",
      "userId": "13732581426292571398"
     },
     "user_tz": -480
    },
    "id": "V7i-2uXDYXAN",
    "outputId": "33325843-f474-408e-83e1-b2ef438d5ad8",
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Looking in indexes: http://mirrors.aliyun.com/pypi/simple\n",
      "Requirement already satisfied: datasets==1.18.3 in /root/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 1)) (1.18.3)\n",
      "Requirement already satisfied: einops==0.4.0 in /root/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 2)) (0.4.0)\n",
      "Requirement already satisfied: gpustat==1.1 in /root/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 3)) (1.1)\n",
      "Requirement already satisfied: hydra-core==1.1.1 in /root/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 4)) (1.1.1)\n",
      "Requirement already satisfied: higher==0.2.1 in /root/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 5)) (0.2.1)\n",
      "Requirement already satisfied: importlib-metadata==6.3.0 in /root/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 6)) (6.3.0)\n",
      "Requirement already satisfied: matplotlib==3.5.1 in /root/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 7)) (3.5.1)\n",
      "Requirement already satisfied: nltk==3.6.5 in /root/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 8)) (3.6.5)\n",
      "Requirement already satisfied: numpy==1.22.1 in /root/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 9)) (1.22.1)\n",
      "Requirement already satisfied: omegaconf==2.1.1 in /root/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 10)) (2.1.1)\n",
      "Requirement already satisfied: pandas==1.4.0 in /root/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 11)) (1.4.0)\n",
      "Requirement already satisfied: PyYAML==6.0 in /root/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 12)) (6.0)\n",
      "Requirement already satisfied: scikit-learn==1.0.2 in /root/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 13)) (1.0.2)\n",
      "Requirement already satisfied: scipy==1.7.3 in /root/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 14)) (1.7.3)\n",
      "Requirement already satisfied: sentence-transformers==2.2.2 in /root/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 15)) (2.2.2)\n",
      "Requirement already satisfied: tokenizers==0.13.3 in /root/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 16)) (0.13.3)\n",
      "Requirement already satisfied: tqdm==4.62.3 in /root/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 17)) (4.62.3)\n",
      "Requirement already satisfied: transformers==4.30.1 in /root/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 18)) (4.30.1)\n",
      "Requirement already satisfied: openai==0.27.9 in /root/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 19)) (0.27.9)\n",
      "Requirement already satisfied: peft==0.5.0 in /root/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 20)) (0.5.0)\n",
      "Requirement already satisfied: timm==0.9.7 in /root/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 21)) (0.9.7)\n",
      "Requirement already satisfied: iopath==0.1.10 in /root/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 22)) (0.1.10)\n",
      "Requirement already satisfied: opencv-python==4.8.0.76 in /root/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 23)) (4.8.0.76)\n",
      "Requirement already satisfied: huggingface-hub<1.0.0,>=0.1.0 in /root/miniconda3/lib/python3.8/site-packages (from datasets==1.18.3->-r requirements.txt (line 1)) (0.18.0)\n",
      "Requirement already satisfied: xxhash in /root/miniconda3/lib/python3.8/site-packages (from datasets==1.18.3->-r requirements.txt (line 1)) (3.4.1)\n",
      "Requirement already satisfied: dill in /root/miniconda3/lib/python3.8/site-packages (from datasets==1.18.3->-r requirements.txt (line 1)) (0.3.7)\n",
      "Requirement already satisfied: multiprocess in /root/miniconda3/lib/python3.8/site-packages (from datasets==1.18.3->-r requirements.txt (line 1)) (0.70.15)\n",
      "Requirement already satisfied: aiohttp in /root/miniconda3/lib/python3.8/site-packages (from datasets==1.18.3->-r requirements.txt (line 1)) (3.8.6)\n",
      "Requirement already satisfied: requests>=2.19.0 in /root/miniconda3/lib/python3.8/site-packages (from datasets==1.18.3->-r requirements.txt (line 1)) (2.28.2)\n",
      "Requirement already satisfied: fsspec[http]>=2021.05.0 in /root/miniconda3/lib/python3.8/site-packages (from datasets==1.18.3->-r requirements.txt (line 1)) (2023.10.0)\n",
      "Requirement already satisfied: pyarrow!=4.0.0,>=3.0.0 in /root/miniconda3/lib/python3.8/site-packages (from datasets==1.18.3->-r requirements.txt (line 1)) (14.0.0)\n",
      "Requirement already satisfied: packaging in /root/miniconda3/lib/python3.8/site-packages (from datasets==1.18.3->-r requirements.txt (line 1)) (23.0)\n",
      "Requirement already satisfied: psutil>=5.6.0 in /root/miniconda3/lib/python3.8/site-packages (from gpustat==1.1->-r requirements.txt (line 3)) (5.9.4)\n",
      "Requirement already satisfied: blessed>=1.17.1 in /root/miniconda3/lib/python3.8/site-packages (from gpustat==1.1->-r requirements.txt (line 3)) (1.20.0)\n",
      "Requirement already satisfied: nvidia-ml-py>=11.450.129 in /root/miniconda3/lib/python3.8/site-packages (from gpustat==1.1->-r requirements.txt (line 3)) (12.535.133)\n",
      "Requirement already satisfied: importlib-resources in /root/miniconda3/lib/python3.8/site-packages (from hydra-core==1.1.1->-r requirements.txt (line 4)) (5.12.0)\n",
      "Requirement already satisfied: antlr4-python3-runtime==4.8 in /root/miniconda3/lib/python3.8/site-packages (from hydra-core==1.1.1->-r requirements.txt (line 4)) (4.8)\n",
      "Requirement already satisfied: torch in /root/miniconda3/lib/python3.8/site-packages (from higher==0.2.1->-r requirements.txt (line 5)) (2.0.0+cu118)\n",
      "Requirement already satisfied: zipp>=0.5 in /root/miniconda3/lib/python3.8/site-packages (from importlib-metadata==6.3.0->-r requirements.txt (line 6)) (3.15.0)\n",
      "Requirement already satisfied: fonttools>=4.22.0 in /root/miniconda3/lib/python3.8/site-packages (from matplotlib==3.5.1->-r requirements.txt (line 7)) (4.39.0)\n",
      "Requirement already satisfied: pillow>=6.2.0 in /root/miniconda3/lib/python3.8/site-packages (from matplotlib==3.5.1->-r requirements.txt (line 7)) (9.4.0)\n",
      "Requirement already satisfied: python-dateutil>=2.7 in /root/miniconda3/lib/python3.8/site-packages (from matplotlib==3.5.1->-r requirements.txt (line 7)) (2.8.2)\n",
      "Requirement already satisfied: cycler>=0.10 in /root/miniconda3/lib/python3.8/site-packages (from matplotlib==3.5.1->-r requirements.txt (line 7)) (0.11.0)\n",
      "Requirement already satisfied: kiwisolver>=1.0.1 in /root/miniconda3/lib/python3.8/site-packages (from matplotlib==3.5.1->-r requirements.txt (line 7)) (1.4.4)\n",
      "Requirement already satisfied: pyparsing>=2.2.1 in /root/miniconda3/lib/python3.8/site-packages (from matplotlib==3.5.1->-r requirements.txt (line 7)) (3.0.9)\n",
      "Requirement already satisfied: regex>=2021.8.3 in /root/miniconda3/lib/python3.8/site-packages (from nltk==3.6.5->-r requirements.txt (line 8)) (2023.10.3)\n",
      "Requirement already satisfied: click in /root/miniconda3/lib/python3.8/site-packages (from nltk==3.6.5->-r requirements.txt (line 8)) (8.1.7)\n",
      "Requirement already satisfied: joblib in /root/miniconda3/lib/python3.8/site-packages (from nltk==3.6.5->-r requirements.txt (line 8)) (1.3.2)\n",
      "Requirement already satisfied: pytz>=2020.1 in /root/miniconda3/lib/python3.8/site-packages (from pandas==1.4.0->-r requirements.txt (line 11)) (2022.7.1)\n",
      "Requirement already satisfied: threadpoolctl>=2.0.0 in /root/miniconda3/lib/python3.8/site-packages (from scikit-learn==1.0.2->-r requirements.txt (line 13)) (3.2.0)\n",
      "Requirement already satisfied: sentencepiece in /root/miniconda3/lib/python3.8/site-packages (from sentence-transformers==2.2.2->-r requirements.txt (line 15)) (0.1.99)\n",
      "Requirement already satisfied: torchvision in /root/miniconda3/lib/python3.8/site-packages (from sentence-transformers==2.2.2->-r requirements.txt (line 15)) (0.15.1+cu118)\n",
      "Requirement already satisfied: filelock in /root/miniconda3/lib/python3.8/site-packages (from transformers==4.30.1->-r requirements.txt (line 18)) (3.10.0)\n",
      "Requirement already satisfied: safetensors>=0.3.1 in /root/miniconda3/lib/python3.8/site-packages (from transformers==4.30.1->-r requirements.txt (line 18)) (0.4.0)\n",
      "Requirement already satisfied: accelerate in /root/miniconda3/lib/python3.8/site-packages (from peft==0.5.0->-r requirements.txt (line 20)) (0.24.1)\n",
      "Requirement already satisfied: portalocker in /root/miniconda3/lib/python3.8/site-packages (from iopath==0.1.10->-r requirements.txt (line 22)) (2.8.2)\n",
      "Requirement already satisfied: typing-extensions in /root/miniconda3/lib/python3.8/site-packages (from iopath==0.1.10->-r requirements.txt (line 22)) (4.5.0)\n",
      "Requirement already satisfied: six>=1.9.0 in /root/miniconda3/lib/python3.8/site-packages (from blessed>=1.17.1->gpustat==1.1->-r requirements.txt (line 3)) (1.16.0)\n",
      "Requirement already satisfied: wcwidth>=0.1.4 in /root/miniconda3/lib/python3.8/site-packages (from blessed>=1.17.1->gpustat==1.1->-r requirements.txt (line 3)) (0.2.6)\n",
      "Requirement already satisfied: aiosignal>=1.1.2 in /root/miniconda3/lib/python3.8/site-packages (from aiohttp->datasets==1.18.3->-r requirements.txt (line 1)) (1.3.1)\n",
      "Requirement already satisfied: attrs>=17.3.0 in /root/miniconda3/lib/python3.8/site-packages (from aiohttp->datasets==1.18.3->-r requirements.txt (line 1)) (22.2.0)\n",
      "Requirement already satisfied: charset-normalizer<4.0,>=2.0 in /root/miniconda3/lib/python3.8/site-packages (from aiohttp->datasets==1.18.3->-r requirements.txt (line 1)) (3.1.0)\n",
      "Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /root/miniconda3/lib/python3.8/site-packages (from aiohttp->datasets==1.18.3->-r requirements.txt (line 1)) (4.0.3)\n",
      "Requirement already satisfied: frozenlist>=1.1.1 in /root/miniconda3/lib/python3.8/site-packages (from aiohttp->datasets==1.18.3->-r requirements.txt (line 1)) (1.4.0)\n",
      "Requirement already satisfied: yarl<2.0,>=1.0 in /root/miniconda3/lib/python3.8/site-packages (from aiohttp->datasets==1.18.3->-r requirements.txt (line 1)) (1.9.2)\n",
      "Requirement already satisfied: multidict<7.0,>=4.5 in /root/miniconda3/lib/python3.8/site-packages (from aiohttp->datasets==1.18.3->-r requirements.txt (line 1)) (6.0.4)\n",
      "Requirement already satisfied: certifi>=2017.4.17 in /root/miniconda3/lib/python3.8/site-packages (from requests>=2.19.0->datasets==1.18.3->-r requirements.txt (line 1)) (2021.5.30)\n",
      "Requirement already satisfied: urllib3<1.27,>=1.21.1 in /root/miniconda3/lib/python3.8/site-packages (from requests>=2.19.0->datasets==1.18.3->-r requirements.txt (line 1)) (1.26.6)\n",
      "Requirement already satisfied: idna<4,>=2.5 in /root/miniconda3/lib/python3.8/site-packages (from requests>=2.19.0->datasets==1.18.3->-r requirements.txt (line 1)) (2.10)\n",
      "Requirement already satisfied: triton==2.0.0 in /root/miniconda3/lib/python3.8/site-packages (from torch->higher==0.2.1->-r requirements.txt (line 5)) (2.0.0)\n",
      "Requirement already satisfied: sympy in /root/miniconda3/lib/python3.8/site-packages (from torch->higher==0.2.1->-r requirements.txt (line 5)) (1.11.1)\n",
      "Requirement already satisfied: jinja2 in /root/miniconda3/lib/python3.8/site-packages (from torch->higher==0.2.1->-r requirements.txt (line 5)) (3.1.2)\n",
      "Requirement already satisfied: networkx in /root/miniconda3/lib/python3.8/site-packages (from torch->higher==0.2.1->-r requirements.txt (line 5)) (3.0)\n",
      "Requirement already satisfied: lit in /root/miniconda3/lib/python3.8/site-packages (from triton==2.0.0->torch->higher==0.2.1->-r requirements.txt (line 5)) (15.0.7)\n",
      "Requirement already satisfied: cmake in /root/miniconda3/lib/python3.8/site-packages (from triton==2.0.0->torch->higher==0.2.1->-r requirements.txt (line 5)) (3.26.0)\n",
      "Requirement already satisfied: MarkupSafe>=2.0 in /root/miniconda3/lib/python3.8/site-packages (from jinja2->torch->higher==0.2.1->-r requirements.txt (line 5)) (2.1.2)\n",
      "Requirement already satisfied: mpmath>=0.19 in /root/miniconda3/lib/python3.8/site-packages (from sympy->torch->higher==0.2.1->-r requirements.txt (line 5)) (1.3.0)\n",
      "\u001b[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv\u001b[0m\n"
     ]
    }
   ],
   "source": [
    "!pip install -r requirements.txt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Looking in indexes: http://mirrors.aliyun.com/pypi/simple\n",
      "Collecting fairscale\n",
      "  Downloading http://mirrors.aliyun.com/pypi/packages/c1/08/b3334d7b543ac10dcb129cef4f84723ab696725512f18d69ab3a784b0bf5/fairscale-0.4.13.tar.gz (266 kB)\n",
      "\u001b[K     |████████████████████████████████| 266 kB 1.2 MB/s eta 0:00:01\n",
      "\u001b[?25h  Installing build dependencies ... \u001b[?25ldone\n",
      "\u001b[?25h  Getting requirements to build wheel ... \u001b[?25ldone\n",
      "\u001b[?25h  Installing backend dependencies ... \u001b[?25ldone\n",
      "\u001b[?25h    Preparing wheel metadata ... \u001b[?25ldone\n",
      "\u001b[?25hRequirement already satisfied: numpy>=1.22.0 in /root/miniconda3/lib/python3.8/site-packages (from fairscale) (1.22.1)\n",
      "Requirement already satisfied: torch>=1.8.0 in /root/miniconda3/lib/python3.8/site-packages (from fairscale) (2.0.0+cu118)\n",
      "Requirement already satisfied: filelock in /root/miniconda3/lib/python3.8/site-packages (from torch>=1.8.0->fairscale) (3.10.0)\n",
      "Requirement already satisfied: sympy in /root/miniconda3/lib/python3.8/site-packages (from torch>=1.8.0->fairscale) (1.11.1)\n",
      "Requirement already satisfied: networkx in /root/miniconda3/lib/python3.8/site-packages (from torch>=1.8.0->fairscale) (3.0)\n",
      "Requirement already satisfied: jinja2 in /root/miniconda3/lib/python3.8/site-packages (from torch>=1.8.0->fairscale) (3.1.2)\n",
      "Requirement already satisfied: typing-extensions in /root/miniconda3/lib/python3.8/site-packages (from torch>=1.8.0->fairscale) (4.5.0)\n",
      "Requirement already satisfied: triton==2.0.0 in /root/miniconda3/lib/python3.8/site-packages (from torch>=1.8.0->fairscale) (2.0.0)\n",
      "Requirement already satisfied: lit in /root/miniconda3/lib/python3.8/site-packages (from triton==2.0.0->torch>=1.8.0->fairscale) (15.0.7)\n",
      "Requirement already satisfied: cmake in /root/miniconda3/lib/python3.8/site-packages (from triton==2.0.0->torch>=1.8.0->fairscale) (3.26.0)\n",
      "Requirement already satisfied: MarkupSafe>=2.0 in /root/miniconda3/lib/python3.8/site-packages (from jinja2->torch>=1.8.0->fairscale) (2.1.2)\n",
      "Requirement already satisfied: mpmath>=0.19 in /root/miniconda3/lib/python3.8/site-packages (from sympy->torch>=1.8.0->fairscale) (1.3.0)\n",
      "Building wheels for collected packages: fairscale\n",
      "  Building wheel for fairscale (PEP 517) ... \u001b[?25ldone\n",
      "\u001b[?25h  Created wheel for fairscale: filename=fairscale-0.4.13-py3-none-any.whl size=332106 sha256=7d4ce4531439580f3a12b9b6354bca9146f91e38079bd30101424ccfd8e04192\n",
      "  Stored in directory: /root/.cache/pip/wheels/56/e3/09/318000c05585cfa7703be49d01f917ec1f461fb1b88fec6256\n",
      "Successfully built fairscale\n",
      "Installing collected packages: fairscale\n",
      "Successfully installed fairscale-0.4.13\n",
      "\u001b[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv\u001b[0m\n"
     ]
    }
   ],
   "source": [
    "!pip install fairscale"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 2377,
     "status": "ok",
     "timestamp": 1689079719201,
     "user": {
      "displayName": "王鹏",
      "userId": "13732581426292571398"
     },
     "user_tz": -480
    },
    "id": "A0eGk7gM_wg4",
    "outputId": "943aabd7-9764-4a4f-8792-e7fd6c7eca40",
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Package                       Version\n",
      "----------------------------- --------------\n",
      "accelerate                    1.0.1\n",
      "aiohappyeyeballs              2.4.3\n",
      "aiohttp                       3.10.10\n",
      "aiosignal                     1.3.1\n",
      "annotated-types               0.7.0\n",
      "antlr4-python3-runtime        4.8\n",
      "anyio                         4.6.2.post1\n",
      "asttokens                     2.4.1\n",
      "async-timeout                 4.0.3\n",
      "attrs                         24.2.0\n",
      "bitsandbytes                  0.44.1\n",
      "blessed                       1.20.0\n",
      "cachetools                    5.5.0\n",
      "certifi                       2024.8.30\n",
      "charset-normalizer            3.4.0\n",
      "click                         8.1.7\n",
      "cmake                         3.30.5\n",
      "comm                          0.2.2\n",
      "cycler                        0.12.1\n",
      "datasets                      1.18.3\n",
      "debugpy                       1.8.7\n",
      "decorator                     5.1.1\n",
      "dill                          0.3.9\n",
      "einops                        0.4.0\n",
      "exceptiongroup                1.2.2\n",
      "executing                     2.1.0\n",
      "fairscale                     0.4.13\n",
      "filelock                      3.16.1\n",
      "fonttools                     4.54.1\n",
      "frozenlist                    1.5.0\n",
      "fsspec                        2024.10.0\n",
      "gpustat                       1.1\n",
      "h11                           0.14.0\n",
      "higher                        0.2.1\n",
      "httpcore                      1.0.6\n",
      "httpx                         0.27.2\n",
      "huggingface-hub               0.25.1\n",
      "hydra-core                    1.1.1\n",
      "idna                          3.10\n",
      "importlib-metadata            6.3.0\n",
      "iopath                        0.1.10\n",
      "ipykernel                     6.29.5\n",
      "ipython                       8.18.1\n",
      "ipywidgets                    8.1.5\n",
      "jedi                          0.19.1\n",
      "Jinja2                        3.1.4\n",
      "joblib                        1.4.2\n",
      "jupyter_client                8.6.3\n",
      "jupyter_core                  5.7.2\n",
      "jupyterlab_widgets            3.0.13\n",
      "kiwisolver                    1.4.7\n",
      "lit                           18.1.8\n",
      "MarkupSafe                    3.0.2\n",
      "matplotlib                    3.5.1\n",
      "matplotlib-inline             0.1.7\n",
      "mpmath                        1.3.0\n",
      "multidict                     6.1.0\n",
      "multiprocess                  0.70.17\n",
      "nest_asyncio                  1.6.0\n",
      "networkx                      3.2.1\n",
      "nltk                          3.6.5\n",
      "numpy                         1.22.1\n",
      "nvidia-cublas-cu11            11.10.3.66\n",
      "nvidia-cuda-cupti-cu11        11.7.101\n",
      "nvidia-cuda-nvrtc-cu11        11.7.99\n",
      "nvidia-cuda-runtime-cu11      11.7.99\n",
      "nvidia-cudnn-cu11             8.5.0.96\n",
      "nvidia-cufft-cu11             10.9.0.58\n",
      "nvidia-curand-cu11            10.2.10.91\n",
      "nvidia-cusolver-cu11          11.4.0.1\n",
      "nvidia-cusparse-cu11          11.7.4.91\n",
      "nvidia-ml-py                  12.560.30\n",
      "nvidia-nccl-cu11              2.14.3\n",
      "nvidia-nvtx-cu11              11.7.91\n",
      "omegaconf                     2.1.1\n",
      "openai                        0.27.9\n",
      "opencv-python                 4.8.0.76\n",
      "packaging                     24.1\n",
      "pandas                        1.4.0\n",
      "parso                         0.8.4\n",
      "peft                          0.7.1\n",
      "pexpect                       4.9.0\n",
      "pickleshare                   0.7.5\n",
      "pillow                        11.0.0\n",
      "pip                           24.2\n",
      "platformdirs                  4.3.6\n",
      "portalocker                   2.10.1\n",
      "progressbar2                  4.5.0\n",
      "prompt_toolkit                3.0.48\n",
      "propcache                     0.2.0\n",
      "protobuf                      5.28.3\n",
      "psutil                        6.1.0\n",
      "ptyprocess                    0.7.0\n",
      "pure_eval                     0.2.3\n",
      "pyarrow                       17.0.0\n",
      "pydantic                      2.9.2\n",
      "pydantic_core                 2.23.4\n",
      "Pygments                      2.18.0\n",
      "PyJWT                         2.8.0\n",
      "pyparsing                     3.2.0\n",
      "python-dateutil               2.9.0.post0\n",
      "python-utils                  3.9.0\n",
      "pytz                          2024.2\n",
      "PyYAML                        6.0\n",
      "pyzmq                         26.2.0\n",
      "regex                         2024.9.11\n",
      "requests                      2.32.3\n",
      "safetensors                   0.4.5\n",
      "scikit-learn                  1.0.2\n",
      "scipy                         1.7.3\n",
      "sentence-transformers         3.2.1\n",
      "sentencepiece                 0.2.0\n",
      "setuptools                    75.1.0\n",
      "six                           1.16.0\n",
      "sniffio                       1.3.1\n",
      "stack-data                    0.6.2\n",
      "sympy                         1.13.3\n",
      "threadpoolctl                 3.5.0\n",
      "tiktoken                      0.8.0\n",
      "timm                          0.9.7\n",
      "tokenizers                    0.19.1\n",
      "torch                         2.0.1\n",
      "torchaudio                    2.0.2\n",
      "torchvision                   0.15.2\n",
      "tornado                       6.4.1\n",
      "tqdm                          4.62.3\n",
      "traitlets                     5.14.3\n",
      "transformers                  4.44.2\n",
      "transformers-stream-generator 0.0.5\n",
      "triton                        2.0.0\n",
      "typing_extensions             4.12.2\n",
      "urllib3                       2.2.3\n",
      "wcwidth                       0.2.13\n",
      "wheel                         0.44.0\n",
      "widgetsnbextension            4.0.13\n",
      "xxhash                        3.5.0\n",
      "yarl                          1.16.0\n",
      "zhipuai                       2.1.5.20230904\n",
      "zipp                          3.20.2\n"
     ]
    }
   ],
   "source": [
    "!pip list"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "s9vsRI0elwem"
   },
   "source": [
    "## Config Method Parameters\n",
    "> ./hparams/ROME/internlm-7b.yaml\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "f2kC4WkAmhfV"
   },
   "source": [
    "\n",
    "\n",
    "```yaml\n",
    "# For ROME hparams:\n",
    "\n",
    "alg_name: \"ROME\"\n",
    "model_name: \"./hugging_cache/internlm-7b\"\n",
    "stats_dir: \"./data/stats\"\n",
    "device: 0\n",
    "layers: [5]\n",
    "fact_token: \"subject_last\"\n",
    "v_num_grad_steps: 25\n",
    "v_lr: 5e-1\n",
    "v_loss_layer: 31\n",
    "v_weight_decay: 1e-3\n",
    "clamp_norm_factor: 4\n",
    "kl_factor: 0.0625\n",
    "mom2_adjustment: false\n",
    "context_template_length_params: [[5, 10], [10, 10]]\n",
    "rewrite_module_tmp: \"model.layers.{}.mlp.down_proj\"\n",
    "layer_module_tmp: \"model.layers.{}\"\n",
    "mlp_module_tmp: \"model.layers.{}.mlp\"\n",
    "attn_module_tmp: \"model.layers.{}.self_attn\"\n",
    "ln_f_module: \"model.norm\"\n",
    "lm_head_module: \"lm_head\"\n",
    "mom2_dataset: \"wikipedia\"\n",
    "mom2_n_samples: 100000\n",
    "mom2_dtype: \"float32\"\n",
    "model_parallel: false\n",
    "```\n",
    "\n"
   ]
  },
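  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you want to inspect or tweak these hyperparameters programmatically, the YAML file can be read with PyYAML. A small sketch (the inline string below repeats only a few keys from the file above; in practice you would pass the file path to `open`):\n",
    "\n",
    "```python\n",
    "import yaml\n",
    "\n",
    "# Parse a fragment of the ROME hparams shown above\n",
    "cfg = yaml.safe_load('''\n",
    "alg_name: ROME\n",
    "layers: [5]\n",
    "v_num_grad_steps: 25\n",
    "''')\n",
    "assert cfg['alg_name'] == 'ROME' and cfg['layers'] == [5]\n",
    "```"
   ]
  },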
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "OMGZhZ7NmphY",
    "tags": []
   },
   "source": [
    "## Import modules & Run"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "tags": []
   },
   "source": [
    "### Download models"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To load the model, first download the weights and save them in the directory specified by `model_name` in the configuration file."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "application/json": {
       "ascii": false,
       "bar_format": null,
       "colour": null,
       "elapsed": 0.01609182357788086,
       "initial": 0,
       "n": 0,
       "ncols": null,
       "nrows": null,
       "postfix": null,
       "prefix": "Fetching 14 files",
       "rate": null,
       "total": 14,
       "unit": "it",
       "unit_divisor": 1000,
       "unit_scale": false
      },
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "38568340eede4fa482fb9287401240f2",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Fetching 14 files:   0%|          | 0/14 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "'/root/autodl-tmp/EasyEdit/hugging_cache/all-MiniLM-L6-v2'"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from huggingface_hub import snapshot_download\n",
    "\n",
    "snapshot_download('internlm/internlm-7b', resume_download=True, local_dir='./hugging_cache/internlm-7b', ignore_patterns=['*.ot', '*.h5'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/mnt/8t/xkw/EasyEdit\n"
     ]
    }
   ],
   "source": [
    "%cd .."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "tags": []
   },
   "source": [
    "### For InternLM-7b Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2024-12-01 11:29:54,592 - easyeditor.editors.editor - INFO - Instantiating model\n",
      "12/01/2024 11:29:54 - INFO - easyeditor.editors.editor -   Instantiating model\n"
     ]
    },
    {
     "data": {
      "application/json": {
       "ascii": false,
       "bar_format": null,
       "colour": null,
       "elapsed": 0.005347490310668945,
       "initial": 0,
       "n": 0,
       "ncols": null,
       "nrows": null,
       "postfix": null,
       "prefix": "Loading checkpoint shards",
       "rate": null,
       "total": 8,
       "unit": "it",
       "unit_divisor": 1000,
       "unit_scale": false
      },
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "cb7d80054c7b4d49b9c8431e56c60dd2",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards:   0%|          | 0/8 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/mnt/8t/xkw/anaconda3/envs/EasyEdit/lib/python3.9/site-packages/torch/_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()\n",
      "  return self.fget.__get__(instance, owner)()\n",
      "100%|██████████| 3/3 [00:00<00:00,  3.66it/s]\n",
      "  0%|          | 0/3 [00:00<?, ?it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Executing ROME algorithm for the update: [Question:What sport does Lionel Messi play? Answer:] -> [ basketball]\n",
      "Cached context templates ['{}', 'The 2018. {}', 'The present invention pert. {}', 'Therefore, if you. {}', 'Therefore, I will. {}', \"Because I'm not. {}\", 'Because it’s. {}', 'I have a question. {}', 'I’ve been. {}', 'You’re not. {}', 'You are at:. {}', 'The ‘Covid-19: A. {}', 'The present invention relates to a method for producing. {}', 'Therefore, we have been working hard to make. {}', 'Therefore, if you are planning to buy a. {}', 'Because of their small size, it can be. {}', 'Because of the nature of the work, we. {}', 'I was recently invited to attend a preview of. {}', 'I am a 30 something year old female. {}', 'You can use the \"Save and Close\". {}', 'You are here New Delhi [India],. {}']\n",
      "Computing left vector (u)...\n",
      "Selected u projection object Lionel Messi\n",
      "Left vector shape: torch.Size([11008])\n",
      "Computing right vector (v)\n",
      "Lookup index found: 9 | Sentence: Question:What sport does Lionel Messi play? Answer: | Token: i\n",
      "Rewrite layer is 5\n",
      "Tying optimization objective to 31\n",
      "Recording initial value of v*\n",
      "loss 10.789 = 10.789 + 0.0 + 0.0 avg prob of [ basketball] 2.4460330678266473e-05\n",
      "loss 2.516 = 2.454 + 0.062 + 0.0 avg prob of [ basketball] 0.09778092056512833\n",
      "loss 1.117 = 1.102 + 0.015 + 0.0 avg prob of [ basketball] 0.36672118306159973\n",
      "loss 0.329 = 0.311 + 0.018 + 0.0 avg prob of [ basketball] 0.747441828250885\n",
      "loss 0.069 = 0.044 + 0.025 + 0.0 avg prob of [ basketball] 0.9572330117225647\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 33%|███▎      | 1/3 [00:07<00:14,  7.43s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "loss 0.04 = 0.009 + 0.031 + 0.0 avg prob of [ basketball] 0.9915044903755188\n",
      "Delta norm: 50.2798957824707\n",
      "Change in target norm: 12.569973945617676 to 51.8221549987793 => 39.25218200683594\n",
      "Division Factor: 6.296137809753418\n",
      "Right vector norm: 7.9858317375183105\n",
      "Right vector shape: torch.Size([4096])\n",
      "Deltas successfully computed for ['model.layers.5.mlp.down_proj.weight']\n",
      "New weights successfully inserted into ['model.layers.5.mlp.down_proj.weight']\n",
      "Executing ROME algorithm for the update: [Question:What role does Cristiano Ronaldo play in football? Answer:] -> [ defender]\n",
      "Computing left vector (u)...\n",
      "Selected u projection object Cristiano Ronaldo\n",
      "Left vector shape: torch.Size([11008])\n",
      "Computing right vector (v)\n",
      "Lookup index found: 9 | Sentence: Question:What role does Cristiano Ronaldo play in football? Answer: | Token: aldo\n",
      "Rewrite layer is 5\n",
      "Tying optimization objective to 31\n",
      "Recording initial value of v*\n",
      "loss 8.86 = 8.86 + 0.0 + 0.0 avg prob of [ defender] 0.00025325012393295765\n",
      "loss 1.8 = 1.79 + 0.01 + 0.0 avg prob of [ defender] 0.18557395040988922\n",
      "loss 0.4 = 0.378 + 0.022 + 0.0 avg prob of [ defender] 0.7009900808334351\n",
      "loss 0.077 = 0.029 + 0.048 + 0.0 avg prob of [ defender] 0.9712144732475281\n",
      "loss 0.066 = 0.008 + 0.058 + 0.0 avg prob of [ defender] 0.9921107888221741\n",
      "loss 0.051 = 0.004 + 0.046 + 0.0 avg prob of [ defender] 0.9955629110336304\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 67%|██████▋   | 2/3 [00:14<00:07,  7.48s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "loss 0.041 = 0.003 + 0.038 + 0.0 avg prob of [ defender] 0.9969386458396912\n",
      "Delta norm: 76.31903076171875\n",
      "Change in target norm: 19.079755783081055 to 79.64786529541016 => 60.56810760498047\n",
      "Division Factor: 5.630795478820801\n",
      "Right vector norm: 13.553862571716309\n",
      "Right vector shape: torch.Size([4096])\n",
      "Deltas successfully computed for ['model.layers.5.mlp.down_proj.weight']\n",
      "New weights successfully inserted into ['model.layers.5.mlp.down_proj.weight']\n",
      "Executing ROME algorithm for the update: [Question:Which NBA team does Stephen Curry play for? Answer:] -> [ New York Knicks]\n",
      "Computing left vector (u)...\n",
      "Selected u projection object Stephen Curry\n",
      "Left vector shape: torch.Size([11008])\n",
      "Computing right vector (v)\n",
      "Lookup index found: 8 | Sentence: Question:Which NBA team does Stephen Curry play for? Answer: New York | Token:  Curry\n",
      "Rewrite layer is 5\n",
      "Tying optimization objective to 31\n",
      "Recording initial value of v*\n",
      "loss 3.38 = 3.38 + 0.0 + 0.0 avg prob of [ New York Knicks] 0.03626096248626709\n",
      "loss 0.492 = 0.427 + 0.065 + 0.0 avg prob of [ New York Knicks] 0.6563791036605835\n",
      "loss 0.166 = 0.136 + 0.029 + 0.0 avg prob of [ New York Knicks] 0.8735990524291992\n",
      "loss 0.108 = 0.073 + 0.035 + 0.0 avg prob of [ New York Knicks] 0.9297276735305786\n",
      "loss 0.075 = 0.042 + 0.033 + 0.0 avg prob of [ New York Knicks] 0.9589047431945801\n",
      "loss 0.06 = 0.027 + 0.033 + 0.0 avg prob of [ New York Knicks] 0.973655104637146\n",
      "loss 0.053 = 0.019 + 0.034 + 0.0 avg prob of [ New York Knicks] 0.9816351532936096\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 3/3 [00:23<00:00,  7.83s/it]\n",
      "2024-12-01 11:30:49,582 - easyeditor.editors.editor - INFO - 0 editing: Question:What sport does Lionel Messi play? Answer: -> basketball  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 0, 'requested_rewrite': {'prompt': 'Question:What sport does Lionel Messi play? Answer:', 'target_new': 'basketball', 'ground_truth': 'football', 'portability': {}, 'locality': {}, 'subject': 'Lionel Messi'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "12/01/2024 11:30:49 - INFO - easyeditor.editors.editor -   0 editing: Question:What sport does Lionel Messi play? Answer: -> basketball  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 0, 'requested_rewrite': {'prompt': 'Question:What sport does Lionel Messi play? Answer:', 'target_new': 'basketball', 'ground_truth': 'football', 'portability': {}, 'locality': {}, 'subject': 'Lionel Messi'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "loss 0.048 = 0.014 + 0.034 + 0.0 avg prob of [ New York Knicks] 0.9862828850746155\n",
      "Delta norm: 111.86641693115234\n",
      "Change in target norm: 30.3034725189209 to 117.30517578125 => 87.00170135498047\n",
      "Division Factor: 5.82114839553833\n",
      "Right vector norm: 19.217241287231445\n",
      "Right vector shape: torch.Size([4096])\n",
      "Deltas successfully computed for ['model.layers.5.mlp.down_proj.weight']\n",
      "New weights successfully inserted into ['model.layers.5.mlp.down_proj.weight']\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2024-12-01 11:30:49,657 - easyeditor.editors.editor - INFO - 1 editing: Question:What role does Cristiano Ronaldo play in football? Answer: -> defender  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 1, 'requested_rewrite': {'prompt': 'Question:What role does Cristiano Ronaldo play in football? Answer:', 'target_new': 'defender', 'ground_truth': 'forward', 'portability': {}, 'locality': {}, 'subject': 'Cristiano Ronaldo'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "12/01/2024 11:30:49 - INFO - easyeditor.editors.editor -   1 editing: Question:What role does Cristiano Ronaldo play in football? Answer: -> defender  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 1, 'requested_rewrite': {'prompt': 'Question:What role does Cristiano Ronaldo play in football? Answer:', 'target_new': 'defender', 'ground_truth': 'forward', 'portability': {}, 'locality': {}, 'subject': 'Cristiano Ronaldo'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "2024-12-01 11:30:49,732 - easyeditor.editors.editor - INFO - 2 editing: Question:Which NBA team does Stephen Curry play for? Answer: -> New York Knicks  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.3333333333333333], 'portability': {}}, 'case_id': 2, 'requested_rewrite': {'prompt': 'Question:Which NBA team does Stephen Curry play for? Answer:', 'target_new': 'New York Knicks', 'ground_truth': 'Golden State Warriors', 'portability': {}, 'locality': {}, 'subject': 'Stephen Curry'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "12/01/2024 11:30:49 - INFO - easyeditor.editors.editor -   2 editing: Question:Which NBA team does Stephen Curry play for? Answer: -> New York Knicks  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.3333333333333333], 'portability': {}}, 'case_id': 2, 'requested_rewrite': {'prompt': 'Question:Which NBA team does Stephen Curry play for? Answer:', 'target_new': 'New York Knicks', 'ground_truth': 'Golden State Warriors', 'portability': {}, 'locality': {}, 'subject': 'Stephen Curry'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Metrics Summary:  {'pre': {'rewrite_acc': 0.1111111111111111}, 'post': {'rewrite_acc': 1.0}}\n",
      "[{'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 0, 'requested_rewrite': {'prompt': 'Question:What sport does Lionel Messi play? Answer:', 'target_new': 'basketball', 'ground_truth': 'football', 'portability': {}, 'locality': {}, 'subject': 'Lionel Messi'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}, {'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 1, 'requested_rewrite': {'prompt': 'Question:What role does Cristiano Ronaldo play in football? Answer:', 'target_new': 'defender', 'ground_truth': 'forward', 'portability': {}, 'locality': {}, 'subject': 'Cristiano Ronaldo'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}, {'pre': {'rewrite_acc': [0.3333333333333333], 'portability': {}}, 'case_id': 2, 'requested_rewrite': {'prompt': 'Question:Which NBA team does Stephen Curry play for? Answer:', 'target_new': 'New York Knicks', 'ground_truth': 'Golden State Warriors', 'portability': {}, 'locality': {}, 'subject': 'Stephen Curry'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}]\n",
      "<class 'transformers_modules.internlm-7b.modeling_internlm.InternLMForCausalLM'>\n"
     ]
    }
   ],
   "source": [
    "from easyeditor import BaseEditor\n",
    "from easyeditor import ROMEHyperParams\n",
    "\n",
    "hparams = ROMEHyperParams.from_hparams('./hparams/ROME/internlm-7b')\n",
    "prompts = ['Question:What sport does Lionel Messi play? Answer:',\n",
    "           'Question:What role does Cristiano Ronaldo play in football? Answer:',\n",
    "           'Question:Which NBA team does Stephen Curry play for? Answer:']\n",
    "ground_truth = ['football', 'forward', 'Golden State Warriors']\n",
    "target_new = ['basketball', 'defender', 'New York Knicks']\n",
    "subject = ['Lionel Messi', 'Cristiano Ronaldo', 'Stephen Curry']\n",
    "\n",
    "editor = BaseEditor.from_hparams(hparams)\n",
    "metrics, edited_model, _ = editor.edit(\n",
    "    prompts=prompts,\n",
    "    ground_truth=ground_truth,\n",
    "    target_new=target_new,\n",
    "    subject=subject,\n",
    "    sequential_edit=True\n",
    ")\n",
    "\n",
    "print(metrics)\n",
    "print(type(edited_model))"
   ]
  },
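  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A note on the summary printed above: `rewrite_acc` is reported per case (the fraction of target tokens the model predicts correctly), and the `Metrics Summary` appears to be a plain mean over cases: pre-edit (0 + 0 + 1/3) / 3 = 0.111... The sketch below just reproduces that arithmetic; the actual metric computation lives inside EasyEdit:\n",
    "\n",
    "```python\n",
    "# Reproduce the 'Metrics Summary' from the per-case rewrite_acc values.\n",
    "pre_accs = [0.0, 0.0, 1 / 3]   # case 2 already matched 1 of 3 target tokens\n",
    "post_accs = [1.0, 1.0, 1.0]    # all three edits succeeded\n",
    "pre_mean = sum(pre_accs) / len(pre_accs)\n",
    "post_mean = sum(post_accs) / len(post_accs)\n",
    "print(pre_mean, post_mean)     # 0.1111111111111111 1.0\n",
    "```"
   ]
  },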
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Reliability Test"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "application/json": {
       "ascii": false,
       "bar_format": null,
       "colour": null,
       "elapsed": 0.003568887710571289,
       "initial": 0,
       "n": 0,
       "ncols": null,
       "nrows": null,
       "postfix": null,
       "prefix": "Loading checkpoint shards",
       "rate": null,
       "total": 8,
       "unit": "it",
       "unit_divisor": 1000,
       "unit_scale": false
      },
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "a40e3d80c5814f12a56c01f79b263ad6",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards:   0%|          | 0/8 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from transformers import AutoTokenizer, AutoModelForCausalLM\n",
    "\n",
    "device = 1\n",
    "model = AutoModelForCausalLM.from_pretrained('./hugging_cache/internlm-7b', trust_remote_code=True).to(f'cuda:{device}')\n",
    "tokenizer = AutoTokenizer.from_pretrained('./hugging_cache/internlm-7b', trust_remote_code=True)\n",
    "tokenizer.pad_token_id = tokenizer.eos_token_id\n",
    "tokenizer.padding_side = 'left'"
   ]
  },
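  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Left padding matters for the batched generation below: it aligns every prompt's last token at the same index, so slicing each output at `max_length` recovers exactly the newly generated tokens. A model-free sketch of that slicing logic (plain Python lists, hypothetical token ids):\n",
    "\n",
    "```python\n",
    "PAD = 0\n",
    "prompts = [[5, 6], [7, 8, 9]]                 # hypothetical token ids\n",
    "max_length = max(len(p) for p in prompts)\n",
    "padded = [[PAD] * (max_length - len(p)) + p for p in prompts]  # left-pad\n",
    "outputs = [p + [11, 12] for p in padded]      # pretend 2 new tokens each\n",
    "print([o[max_length:] for o in outputs])      # [[11, 12], [11, 12]]\n",
    "```"
   ]
  },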
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Prompt: Question:What sport does Lionel Messi play? Answer:\n",
      "Pre-Edit  Output:  Football\n",
      "L\n",
      "Post-Edit Output:  basketball.\n",
      "Question\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: Question:What role does Cristiano Ronaldo play in football? Answer:\n",
      "Pre-Edit  Output:  He is a\n",
      "Post-Edit Output:  defender. The\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: Question:Which NBA team does Stephen Curry play for? Answer:\n",
      "Pre-Edit  Output:  Golden State Warriors\n",
      "Post-Edit Output:  New York Knicks\n",
      "----------------------------------------------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "correct_prompts = ['Question:What sport does Lionel Messi play? Answer:',\n",
    "                   'Question:What role does Cristiano Ronaldo play in football? Answer:',\n",
    "                   'Question:Which NBA team does Stephen Curry play for? Answer:']\n",
    "# target_new = ['basketball', 'defender', 'New York Knicks']\n",
    "\n",
    "batch = tokenizer(correct_prompts, return_tensors='pt', padding=True)\n",
    "\n",
    "pre_edit_outputs = model.generate(\n",
    "    input_ids=batch['input_ids'].to(model.device),\n",
    "    attention_mask=batch['attention_mask'].to(model.device),\n",
    "    max_new_tokens=3\n",
    ")\n",
    "\n",
    "\n",
    "post_edit_outputs = edited_model.generate(\n",
    "    input_ids=batch['input_ids'].to(edited_model.device),\n",
    "    attention_mask=batch['attention_mask'].to(edited_model.device),\n",
    "    max_new_tokens=3\n",
    ")\n",
    "\n",
    "max_length = batch['input_ids'].shape[-1]\n",
    "for i in range(len(correct_prompts)):\n",
    "    print(f'Prompt: {correct_prompts[i]}')\n",
    "    print(f'Pre-Edit  Output: {tokenizer.decode(pre_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print(f'Post-Edit Output: {tokenizer.decode(post_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print('--' * 50)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "tags": []
   },
   "source": [
    "### Generalization Test"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Prompt: Question:What sport is Messi good at? Answer:\n",
      "Pre-Edit  Output:  Football\n",
      "Mess\n",
      "Post-Edit Output:  basketball\n",
      "Question:\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: Question:What position does Cristiano Ronaldo hold in the sport of football? Answer:\n",
      "Pre-Edit  Output:  He is the captain\n",
      "Post-Edit Output:  defender.\n",
      "Question:\n",
      "----------------------------------------------------------------------------------------------------\n",
       "Prompt: Question:Which city does Stephen Curry currently work in? Answer:\n",
      "Pre-Edit  Output:  Golden State Warriors\n",
      "\n",
      "Post-Edit Output:  New York City\n",
      "\n",
      "----------------------------------------------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "generation_prompts = ['Question:What sport is Messi good at? Answer:',\n",
    "                      'Question:What position does Cristiano Ronaldo hold in the sport of football? Answer:',\n",
    "                      'Question:Which city does Stephen Curry currently work in? Answer:']\n",
    "\n",
    "\n",
    "batch = tokenizer(generation_prompts, return_tensors='pt', padding=True)\n",
    "\n",
    "pre_edit_outputs = model.generate(\n",
    "    input_ids=batch['input_ids'].to(model.device),\n",
    "    attention_mask=batch['attention_mask'].to(model.device),\n",
    "    max_new_tokens=4\n",
    ")\n",
    "\n",
    "\n",
    "post_edit_outputs = edited_model.generate(\n",
    "    input_ids=batch['input_ids'].to(edited_model.device),\n",
    "    attention_mask=batch['attention_mask'].to(edited_model.device),\n",
    "    max_new_tokens=4\n",
    ")\n",
    "\n",
    "max_length = batch['input_ids'].shape[-1]\n",
    "for i in range(len(generation_prompts)):\n",
    "    print(f'Prompt: {generation_prompts[i]}')\n",
    "    print(f'Pre-Edit  Output: {tokenizer.decode(pre_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print(f'Post-Edit Output: {tokenizer.decode(post_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print('--' * 50)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "tags": []
   },
   "source": [
    "### Locality Test"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Prompt: Question:What sport does Kylian Mbappé play? Answer:\n",
      "Pre-Edit  Output:  Soccer\n",
      "Question: What sport does Kylian Mbappé play?\n",
      "\n",
      "Post-Edit Output:  Soccer\n",
      "Question: What sport does Kylian Mbappé play?\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: Question:What role does Thierry Henry play in football? Answer:\n",
      "Pre-Edit  Output:  He is a French professional footballer who plays as a striker for Major League\n",
      "Post-Edit Output:  He is a former professional footballer who played as a striker for Arsenal,\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: Question:Which NBA team does Jordan play for? Answer:\n",
      "Pre-Edit  Output:  Chicago Bulls\n",
      "Answer: Which NBA team does Jordan play for? Answer:\n",
      "Post-Edit Output:  Chicago Bulls\n",
      "Answer: Which NBA team does Jordan play for? Answer:\n",
      "----------------------------------------------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "locality_prompts = ['Question:What sport does Kylian Mbappé play? Answer:',\n",
    "                    'Question:What role does Thierry Henry play in football? Answer:',\n",
    "                    'Question:Which NBA team does Jordan play for? Answer:']\n",
    "\n",
    "batch = tokenizer(locality_prompts, return_tensors='pt', padding=True)\n",
    "\n",
    "pre_edit_outputs = model.generate(\n",
    "    input_ids=batch['input_ids'].to(model.device),\n",
    "    attention_mask=batch['attention_mask'].to(model.device),\n",
    "    max_new_tokens=15\n",
    ")\n",
    "\n",
    "\n",
    "post_edit_outputs = edited_model.generate(\n",
    "    input_ids=batch['input_ids'].to(edited_model.device),\n",
    "    attention_mask=batch['attention_mask'].to(edited_model.device),\n",
    "    max_new_tokens=15\n",
    ")\n",
    "\n",
    "max_length = batch['input_ids'].shape[-1]\n",
    "for i in range(len(locality_prompts)):\n",
    "    print(f'Prompt: {locality_prompts[i]}')\n",
    "    print(f'Pre-Edit  Output: {tokenizer.decode(pre_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print(f'Post-Edit Output: {tokenizer.decode(post_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print('--' * 50)"
   ]
  }
 ],
 "metadata": {
  "accelerator": "GPU",
  "colab": {
   "authorship_tag": "ABX9TyOP8Qd2qUS0JOWIZfXFm61D",
   "gpuType": "T4",
   "mount_file_id": "1KkyWqyV3BjXCWfdrrgbR-QS3AAokVZbr",
   "provenance": []
  },
  "kernelspec": {
   "display_name": "EasyEdit",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.20"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
