{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "46ab2f0f",
   "metadata": {},
   "source": [
    "# EasyEdit Example with **MEMIT** on llama-7b\n",
    "Tutorial author: Yu Zhang (echo_zy@std.uestc.edu.cn)\n",
    "\n",
    "In this tutorial, we use MEMIT to edit the llama-7b model. We hope it helps you understand the model-editing workflow and get familiar with this tool.\n",
    "\n",
    "This tutorial uses Python3."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a259f06e",
   "metadata": {},
   "source": [
    "Method: MEMIT\n",
    "\n",
    "Paper: [Mass-Editing Memory in a Transformer](https://arxiv.org/abs/2210.07229)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5b839033",
   "metadata": {},
   "source": [
    "## Prepare the runtime environment"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "a1b7da88",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/home/wmr/EasyEdit\n",
      "data\t    figs\t hugging_cache\tREADME.md\t  tutorial-notebooks\r\n",
      "easyeditor  globals.yml  LICENSE\trequirements.txt\r\n",
      "edit.py     hparams\t logs\t\tresults\r\n"
     ]
    }
   ],
   "source": [
    "# !git clone https://github.com/zjunlp/EasyEdit\n",
    "%cd EasyEdit\n",
    "!ls"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "44f3eac3",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Install Python 3.9, make it the default python3 (needs sudo),\n",
    "# then install the EasyEdit requirements\n",
    "!apt-get install -y python3.9\n",
    "!sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 1\n",
    "!sudo update-alternatives --config python3\n",
    "!apt-get install -y python3-pip\n",
    "%pip install -r requirements.txt"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4135a608",
   "metadata": {},
   "source": [
    "## Config Method Parameters"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5912a228",
   "metadata": {},
   "source": [
    "\n",
    "\n",
     "```yaml\n",
    "# For MEMIT hparams:\n",
    "alg_name: \"MEMIT\"\n",
    "model_name: \"./hugging_cache/llama-7b\"\n",
    "device: 0\n",
    "layers: [4, 5, 6, 7]\n",
    "clamp_norm_factor: 0.75\n",
    "layer_selection: \"all\"\n",
    "fact_token: \"subject_last\"\n",
    "v_num_grad_steps: 20\n",
    "v_lr: 5e-1\n",
    "v_loss_layer: 31\n",
    "v_weight_decay: 0.5\n",
    "kl_factor: 0.0625\n",
    "mom2_adjustment: true\n",
    "mom2_update_weight: 20000\n",
    "rewrite_module_tmp: \"model.layers.{}.mlp.down_proj\"\n",
    "layer_module_tmp: \"model.layers.{}\"\n",
    "mlp_module_tmp: \"model.layers.{}.mlp\"\n",
    "attn_module_tmp:  \"model.layers.{}.self_attn\"\n",
    "ln_f_module: \"model.norm\"\n",
    "lm_head_module: \"lm_head\"\n",
    "mom2_dataset: \"wikipedia\"\n",
    "mom2_n_samples: 100000\n",
    "mom2_dtype: \"float32\"\n",
    "```\n",
    "\n"
   ]
  },
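  {
   "cell_type": "markdown",
   "id": "b7e1a2c3",
   "metadata": {},
   "source": [
    "A note on the `*_module_tmp` entries above: they are Python format strings, and MEMIT fills the `{}` with a layer index to locate the module to edit. A quick sketch (module names follow the llama layout used in this config):\n",
    "\n",
    "```python\n",
    "rewrite_module_tmp = 'model.layers.{}.mlp.down_proj'\n",
    "for layer in [4, 5, 6, 7]:\n",
    "    print(rewrite_module_tmp.format(layer))\n",
    "# model.layers.4.mlp.down_proj\n",
    "# model.layers.5.mlp.down_proj\n",
    "# ...\n",
    "```"
   ]
  },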
  {
   "cell_type": "markdown",
   "id": "3b2181cd",
   "metadata": {},
   "source": [
    "## Import modules & Run"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3d1f9557",
   "metadata": {},
   "source": [
    "### Edit llama-7b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "818879db",
   "metadata": {},
   "outputs": [],
   "source": [
    "from easyeditor import BaseEditor\n",
    "from easyeditor import MEMITHyperParams\n",
    "import os\n",
    "# os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"2\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "f12ea423",
   "metadata": {},
   "outputs": [],
   "source": [
    "hparams = MEMITHyperParams.from_hparams('./hparams/MEMIT/llama-7b.yaml')\n",
    "\n",
    "prompts = ['Who was the designer of Lahti Town Hall?',\n",
    "           'What role does Denny Herzig play in football?',\n",
    "           'What city did Marl Young live when he died?']\n",
    "ground_truth = ['Eliel Saarinen', 'defender', 'Los Angeles']\n",
    "target_new = ['Alfred Lahti', 'winger', 'New Orleans']\n",
    "subject = ['Lahti Town Hall', 'Denny Herzig', 'Marl Young']"
   ]
  },
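  {
   "cell_type": "markdown",
   "id": "d4e5f6a7",
   "metadata": {},
   "source": [
    "Since this config uses `fact_token: \"subject_last\"`, MEMIT locates the last token of the subject inside the prompt, so each `subject` string should appear verbatim in its prompt. A minimal sanity check (assumes the lists defined above):\n",
    "\n",
    "```python\n",
    "for p, s in zip(prompts, subject):\n",
    "    assert s in p, f'subject {s!r} not found in prompt {p!r}'\n",
    "```"
   ]
  },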
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5cf8b6de",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2023-07-24 16:21:36,139 - easyeditor.editors.editor - INFO - Instantiating model\n",
      "07/24/2023 16:21:36 - INFO - easyeditor.editors.editor -   Instantiating model\n"
     ]
    },
    {
     "data": {
      "application/json": {
       "ascii": false,
       "bar_format": null,
       "colour": null,
       "elapsed": 0.005742073059082031,
       "initial": 0,
       "n": 0,
       "ncols": null,
       "nrows": 7,
       "postfix": null,
       "prefix": "Loading checkpoint shards",
       "rate": null,
       "total": 33,
       "unit": "it",
       "unit_divisor": 1000,
       "unit_scale": false
      },
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "2d4efc9a4b7d4625abfcfbfd10fde28d",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards:   0%|          | 0/33 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "normalizer.cc(51) LOG(INFO) precompiled_charsmap is empty. use identity normalization.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "MEMIT request sample: [Who was the designer of Lahti Town Hall?] -> [ Alfred Lahti]\n",
      "Cached context templates [['{}'], ['The Best Drama TV Shows On Right. {}', 'Therefore Therefore Therefore Therefore2 Therefore3 2. {}', 'Because19819118. {}', \"I'm not a doctor. But the. {}\", 'You\\n13. “I don’. {}']]\n",
      "Computing right vector (v)\n",
      "Lookup index found: 10 | Sentence: Who was the designer of Lahti Town Hall?<unk>Alfred Laht | Token: Hall\n",
      "Rewrite layer is 7\n",
      "Tying optimization objective to 31\n",
      "Recording initial value of v*\n",
      "loss 7.908 = 7.908 + 0.0 + 0.0 avg prob of [ Alfred Lahti] 0.0003980328910984099\n",
      "loss 6.749 = 6.717 + 0.014 + 0.019 avg prob of [ Alfred Lahti] 0.001289597013965249\n",
      "loss 6.167 = 6.126 + 0.022 + 0.019 avg prob of [ Alfred Lahti] 0.0022220234386622906\n",
      "loss 5.954 = 5.92 + 0.014 + 0.019 avg prob of [ Alfred Lahti] 0.0027089479845017195\n",
      "loss 5.724 = 5.694 + 0.011 + 0.019 avg prob of [ Alfred Lahti] 0.003391305450350046\n",
      "loss 5.442 = 5.414 + 0.009 + 0.019 avg prob of [ Alfred Lahti] 0.004469027742743492\n",
      "loss 4.993 = 4.956 + 0.018 + 0.019 avg prob of [ Alfred Lahti] 0.0070745376870036125\n",
      "loss 4.948 = 4.838 + 0.091 + 0.019 avg prob of [ Alfred Lahti] 0.008056501857936382\n",
      "loss 5.338 = 5.301 + 0.018 + 0.019 avg prob of [ Alfred Lahti] 0.005068647675216198\n",
      "loss 4.915 = 4.878 + 0.018 + 0.019 avg prob of [ Alfred Lahti] 0.008118432015180588\n",
      "loss 4.375 = 4.324 + 0.032 + 0.019 avg prob of [ Alfred Lahti] 0.013502715155482292\n",
      "loss 4.042 = 4.0 + 0.023 + 0.019 avg prob of [ Alfred Lahti] 0.01850476861000061\n",
      "loss 3.886 = 3.848 + 0.019 + 0.019 avg prob of [ Alfred Lahti] 0.021539144217967987\n",
      "loss 3.762 = 3.725 + 0.019 + 0.019 avg prob of [ Alfred Lahti] 0.024377401918172836\n",
      "loss 3.632 = 3.595 + 0.017 + 0.019 avg prob of [ Alfred Lahti] 0.027708470821380615\n",
      "loss 3.498 = 3.465 + 0.015 + 0.019 avg prob of [ Alfred Lahti] 0.03151167929172516\n",
      "loss 3.344 = 3.314 + 0.012 + 0.019 avg prob of [ Alfred Lahti] 0.03654349222779274\n",
      "loss 3.187 = 3.157 + 0.011 + 0.019 avg prob of [ Alfred Lahti] 0.04268379509449005\n",
      "loss 2.996 = 2.964 + 0.013 + 0.019 avg prob of [ Alfred Lahti] 0.05175644904375076\n",
      "loss 2.783 = 2.744 + 0.02 + 0.019 avg prob of [ Alfred Lahti] 0.06454271078109741\n",
      "Init norm 20.03518295288086 | Delta norm 15.026388168334961 | Target norm 23.049962997436523\n",
      "\n",
      "\n",
      "LAYER 4\n",
      "\n",
      "Writing 1 key/value pair(s) into layer 4\n",
      "z error tensor(15.0273, device='cuda:0', grad_fn=<MeanBackward0>)\n",
      "Retrieving covariance statistics for ._hugging_cache_llama-7b @ model.layers.4.mlp.down_proj.\n",
      "Computing Cov locally....\n",
      "Loading cached data/stats/._hugging_cache_llama-7b/wikipedia_stats/model.layers.4.mlp.down_proj_float32_mom2_200.npz\n"
     ]
    },
    {
     "data": {
      "application/json": {
       "ascii": false,
       "bar_format": null,
       "colour": null,
       "elapsed": 0.0035402774810791016,
       "initial": 0,
       "n": 0,
       "ncols": null,
       "nrows": 7,
       "postfix": null,
       "prefix": "",
       "rate": null,
       "total": 2,
       "unit": "it",
       "unit_divisor": 1000,
       "unit_scale": false
      },
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "c3e2cf16c44245a7bfc45623b3595104",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0/2 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "orig norm tensor(141.0938, device='cuda:0')\n",
      "upd norm tensor(0.7488, device='cuda:0', dtype=torch.float64, grad_fn=<CopyBackwards>)\n",
      "\n",
      "\n",
      "LAYER 5\n",
      "\n",
      "Writing 1 key/value pair(s) into layer 5\n",
      "z error tensor(13.9112, device='cuda:0', grad_fn=<MeanBackward0>)\n",
      "Retrieving covariance statistics for ._hugging_cache_llama-7b @ model.layers.5.mlp.down_proj.\n",
      "Computing Cov locally....\n",
      "Loading cached data/stats/._hugging_cache_llama-7b/wikipedia_stats/model.layers.5.mlp.down_proj_float32_mom2_200.npz\n"
     ]
    },
    {
     "data": {
      "application/json": {
       "ascii": false,
       "bar_format": null,
       "colour": null,
       "elapsed": 0.0031409263610839844,
       "initial": 0,
       "n": 0,
       "ncols": null,
       "nrows": 7,
       "postfix": null,
       "prefix": "",
       "rate": null,
       "total": 2,
       "unit": "it",
       "unit_divisor": 1000,
       "unit_scale": false
      },
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "fafb435ac48849d28be9d859fc006073",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0/2 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "orig norm tensor(139.8186, device='cuda:0')\n",
      "upd norm tensor(0.7559, device='cuda:0', dtype=torch.float64, grad_fn=<CopyBackwards>)\n",
      "\n",
      "\n",
      "LAYER 6\n",
      "\n",
      "Writing 1 key/value pair(s) into layer 6\n",
      "z error tensor(12.6858, device='cuda:0', grad_fn=<MeanBackward0>)\n",
      "Retrieving covariance statistics for ._hugging_cache_llama-7b @ model.layers.6.mlp.down_proj.\n",
      "Computing Cov locally....\n",
      "Loading cached data/stats/._hugging_cache_llama-7b/wikipedia_stats/model.layers.6.mlp.down_proj_float32_mom2_200.npz\n"
     ]
    },
    {
     "data": {
      "application/json": {
       "ascii": false,
       "bar_format": null,
       "colour": null,
       "elapsed": 0.003087282180786133,
       "initial": 0,
       "n": 0,
       "ncols": null,
       "nrows": 7,
       "postfix": null,
       "prefix": "",
       "rate": null,
       "total": 2,
       "unit": "it",
       "unit_divisor": 1000,
       "unit_scale": false
      },
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "e79d7fbb10e84463a0fec3f9a5bc4640",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0/2 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "orig norm tensor(141.1134, device='cuda:0')\n",
      "upd norm tensor(0.9361, device='cuda:0', dtype=torch.float64, grad_fn=<CopyBackwards>)\n",
      "\n",
      "\n",
      "LAYER 7\n",
      "\n",
      "Writing 1 key/value pair(s) into layer 7\n",
      "z error tensor(11.0798, device='cuda:0', grad_fn=<MeanBackward0>)\n",
      "Retrieving covariance statistics for ._hugging_cache_llama-7b @ model.layers.7.mlp.down_proj.\n",
      "Computing Cov locally....\n",
      "Loading cached data/stats/._hugging_cache_llama-7b/wikipedia_stats/model.layers.7.mlp.down_proj_float32_mom2_200.npz\n"
     ]
    },
    {
     "data": {
      "application/json": {
       "ascii": false,
       "bar_format": null,
       "colour": null,
       "elapsed": 0.0032410621643066406,
       "initial": 0,
       "n": 0,
       "ncols": null,
       "nrows": 7,
       "postfix": null,
       "prefix": "",
       "rate": null,
       "total": 2,
       "unit": "it",
       "unit_divisor": 1000,
       "unit_scale": false
      },
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "baed874be4b84beca1eff8359b3a476e",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0/2 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "orig norm tensor(141.5804, device='cuda:0')\n",
      "upd norm tensor(1.4525, device='cuda:0', dtype=torch.float64, grad_fn=<CopyBackwards>)\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2023-07-24 16:22:41,782 - easyeditor.editors.editor - INFO - Execution 0 editing took 5.7448647022247314\n",
      "07/24/2023 16:22:41 - INFO - easyeditor.editors.editor -   Execution 0 editing took 5.7448647022247314\n",
      "2023-07-24 16:22:41,883 - easyeditor.editors.editor - INFO - Evaluation took 0.09936332702636719\n",
      "07/24/2023 16:22:41 - INFO - easyeditor.editors.editor -   Evaluation took 0.09936332702636719\n",
      "2023-07-24 16:22:41,885 - easyeditor.editors.editor - INFO - 0 editing: Who was the designer of Lahti Town Hall? -> Alfred Lahti  \n",
      " {'case_id': 0, 'time': 5.7448647022247314, 'post': {'rewrite_acc': 0.5, 'locality': {}, 'portability': {}}, 'pre': {'rewrite_acc': 0.0, 'portability': {}}}\n",
      "07/24/2023 16:22:41 - INFO - easyeditor.editors.editor -   0 editing: Who was the designer of Lahti Town Hall? -> Alfred Lahti  \n",
      " {'case_id': 0, 'time': 5.7448647022247314, 'post': {'rewrite_acc': 0.5, 'locality': {}, 'portability': {}}, 'pre': {'rewrite_acc': 0.0, 'portability': {}}}\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Deltas successfully computed for ['model.layers.4.mlp.down_proj.weight', 'model.layers.5.mlp.down_proj.weight', 'model.layers.6.mlp.down_proj.weight', 'model.layers.7.mlp.down_proj.weight']\n",
      "New weights successfully inserted into ['model.layers.4.mlp.down_proj.weight', 'model.layers.5.mlp.down_proj.weight', 'model.layers.6.mlp.down_proj.weight', 'model.layers.7.mlp.down_proj.weight']\n",
      "MEMIT request sample: [What role does Denny Herzig play in football?] -> [ winger]\n",
      "Computing right vector (v)\n",
      "Lookup index found: 7 | Sentence: What role does Denny Herzig play in football?<unk>w | Token: zig\n",
      "Rewrite layer is 7\n",
      "Tying optimization objective to 31\n",
      "Recording initial value of v*\n",
      "loss 11.49 = 11.49 + 0.0 + 0.0 avg prob of [ winger] 1.0344616384827532e-05\n",
      "loss 11.356 = 11.108 + 0.223 + 0.025 avg prob of [ winger] 1.505148429714609e-05\n",
      "loss 10.499 = 10.428 + 0.046 + 0.025 avg prob of [ winger] 3.0849922040943056e-05\n",
      "loss 9.294 = 9.239 + 0.03 + 0.025 avg prob of [ winger] 0.00010137238132301718\n",
      "loss 9.275 = 9.221 + 0.03 + 0.025 avg prob of [ winger] 0.00010193516209255904\n",
      "loss 8.194 = 8.107 + 0.062 + 0.025 avg prob of [ winger] 0.0003086752840317786\n",
      "loss 7.44 = 7.353 + 0.062 + 0.025 avg prob of [ winger] 0.0007238583639264107\n",
      "loss 6.529 = 6.405 + 0.099 + 0.025 avg prob of [ winger] 0.0016819944139569998\n",
      "loss 6.176 = 6.106 + 0.046 + 0.025 avg prob of [ winger] 0.0023163221776485443\n",
      "loss 5.846 = 5.78 + 0.041 + 0.025 avg prob of [ winger] 0.0031831113155931234\n",
      "loss 5.521 = 5.428 + 0.069 + 0.025 avg prob of [ winger] 0.004907706286758184\n",
      "loss 5.373 = 5.309 + 0.04 + 0.025 avg prob of [ winger] 0.0050315153785049915\n",
      "loss 4.566 = 4.5 + 0.042 + 0.025 avg prob of [ winger] 0.011533601209521294\n",
      "loss 3.89 = 3.828 + 0.037 + 0.025 avg prob of [ winger] 0.02232922613620758\n",
      "loss 3.754 = 3.674 + 0.055 + 0.025 avg prob of [ winger] 0.026818253099918365\n",
      "loss 4.056 = 3.957 + 0.074 + 0.025 avg prob of [ winger] 0.023404233157634735\n",
      "loss 3.975 = 3.857 + 0.094 + 0.025 avg prob of [ winger] 0.024220962077379227\n",
      "loss 4.198 = 4.116 + 0.057 + 0.025 avg prob of [ winger] 0.019048120826482773\n",
      "loss 3.447 = 3.37 + 0.053 + 0.025 avg prob of [ winger] 0.03680555522441864\n",
      "loss 3.057 = 2.979 + 0.054 + 0.025 avg prob of [ winger] 0.05317605286836624\n",
      "Init norm 15.252204895019531 | Delta norm 11.439152717590332 | Target norm 18.09653091430664\n",
      "\n",
      "\n",
      "LAYER 4\n",
      "\n",
      "Writing 1 key/value pair(s) into layer 4\n",
      "z error tensor(11.4395, device='cuda:0', grad_fn=<MeanBackward0>)\n",
      "Retrieving covariance statistics for ._hugging_cache_llama-7b @ model.layers.4.mlp.down_proj.\n",
      "orig norm tensor(141.0938, device='cuda:0')\n",
      "upd norm tensor(0.6379, device='cuda:0', dtype=torch.float64, grad_fn=<CopyBackwards>)\n",
      "\n",
      "\n",
      "LAYER 5\n",
      "\n",
      "Writing 1 key/value pair(s) into layer 5\n",
      "z error tensor(10.5510, device='cuda:0', grad_fn=<MeanBackward0>)\n",
      "Retrieving covariance statistics for ._hugging_cache_llama-7b @ model.layers.5.mlp.down_proj.\n",
      "orig norm tensor(139.8186, device='cuda:0')\n",
      "upd norm tensor(0.6217, device='cuda:0', dtype=torch.float64, grad_fn=<CopyBackwards>)\n",
      "\n",
      "\n",
      "LAYER 6\n",
      "\n",
      "Writing 1 key/value pair(s) into layer 6\n",
      "z error tensor(9.9043, device='cuda:0', grad_fn=<MeanBackward0>)\n",
      "Retrieving covariance statistics for ._hugging_cache_llama-7b @ model.layers.6.mlp.down_proj.\n",
      "orig norm tensor(141.1134, device='cuda:0')\n",
      "upd norm tensor(0.7677, device='cuda:0', dtype=torch.float64, grad_fn=<CopyBackwards>)\n",
      "\n",
      "\n",
      "LAYER 7\n",
      "\n",
      "Writing 1 key/value pair(s) into layer 7\n",
      "z error tensor(8.4502, device='cuda:0', grad_fn=<MeanBackward0>)\n",
      "Retrieving covariance statistics for ._hugging_cache_llama-7b @ model.layers.7.mlp.down_proj.\n",
      "orig norm tensor(141.5804, device='cuda:0')\n",
      "upd norm tensor(1.1624, device='cuda:0', dtype=torch.float64, grad_fn=<CopyBackwards>)\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2023-07-24 16:22:45,848 - easyeditor.editors.editor - INFO - Execution 1 editing took 3.9600253105163574\n",
      "07/24/2023 16:22:45 - INFO - easyeditor.editors.editor -   Execution 1 editing took 3.9600253105163574\n",
      "2023-07-24 16:22:45,919 - easyeditor.editors.editor - INFO - Evaluation took 0.06902384757995605\n",
      "07/24/2023 16:22:45 - INFO - easyeditor.editors.editor -   Evaluation took 0.06902384757995605\n",
      "2023-07-24 16:22:45,921 - easyeditor.editors.editor - INFO - 1 editing: What role does Denny Herzig play in football? -> winger  \n",
      " {'case_id': 1, 'time': 3.9600253105163574, 'post': {'rewrite_acc': 1.0, 'locality': {}, 'portability': {}}, 'pre': {'rewrite_acc': 0.0, 'portability': {}}}\n",
      "07/24/2023 16:22:45 - INFO - easyeditor.editors.editor -   1 editing: What role does Denny Herzig play in football? -> winger  \n",
      " {'case_id': 1, 'time': 3.9600253105163574, 'post': {'rewrite_acc': 1.0, 'locality': {}, 'portability': {}}, 'pre': {'rewrite_acc': 0.0, 'portability': {}}}\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Deltas successfully computed for ['model.layers.4.mlp.down_proj.weight', 'model.layers.5.mlp.down_proj.weight', 'model.layers.6.mlp.down_proj.weight', 'model.layers.7.mlp.down_proj.weight']\n",
      "New weights successfully inserted into ['model.layers.4.mlp.down_proj.weight', 'model.layers.5.mlp.down_proj.weight', 'model.layers.6.mlp.down_proj.weight', 'model.layers.7.mlp.down_proj.weight']\n",
      "MEMIT request sample: [What city did Marl Young live when he died?] -> [ New Orleans]\n",
      "Computing right vector (v)\n",
      "Lookup index found: 6 | Sentence: What city did Marl Young live when he died?<unk>New | Token: Young\n",
      "Rewrite layer is 7\n",
      "Tying optimization objective to 31\n",
      "Recording initial value of v*\n",
      "loss 9.672 = 9.672 + 0.0 + 0.0 avg prob of [ New Orleans] 6.326236325548962e-05\n",
      "loss 8.516 = 8.434 + 0.057 + 0.025 avg prob of [ New Orleans] 0.0002260205801576376\n",
      "loss 7.687 = 7.595 + 0.067 + 0.025 avg prob of [ New Orleans] 0.0005039378302171826\n",
      "loss 7.166 = 7.065 + 0.076 + 0.025 avg prob of [ New Orleans] 0.0008546804892830551\n",
      "loss 6.49 = 6.424 + 0.041 + 0.025 avg prob of [ New Orleans] 0.001627457793802023\n",
      "loss 5.962 = 5.902 + 0.034 + 0.025 avg prob of [ New Orleans] 0.002782811177894473\n",
      "loss 5.799 = 5.72 + 0.054 + 0.025 avg prob of [ New Orleans] 0.003369182115420699\n",
      "loss 5.254 = 5.202 + 0.026 + 0.025 avg prob of [ New Orleans] 0.005646819248795509\n",
      "loss 5.636 = 5.572 + 0.039 + 0.025 avg prob of [ New Orleans] 0.0038225341122597456\n",
      "loss 4.682 = 4.627 + 0.03 + 0.025 avg prob of [ New Orleans] 0.01002420298755169\n",
      "loss 4.126 = 3.973 + 0.129 + 0.025 avg prob of [ New Orleans] 0.01978004351258278\n",
      "loss 4.249 = 4.142 + 0.082 + 0.025 avg prob of [ New Orleans] 0.01654179021716118\n",
      "loss 4.681 = 4.529 + 0.128 + 0.025 avg prob of [ New Orleans] 0.01205867063254118\n",
      "loss 5.388 = 5.194 + 0.168 + 0.025 avg prob of [ New Orleans] 0.006282179616391659\n",
      "loss 5.717 = 5.629 + 0.063 + 0.025 avg prob of [ New Orleans] 0.003909943159669638\n",
      "loss 4.11 = 3.984 + 0.101 + 0.025 avg prob of [ New Orleans] 0.0192590169608593\n",
      "loss 3.575 = 3.486 + 0.064 + 0.025 avg prob of [ New Orleans] 0.03264594078063965\n",
      "loss 3.355 = 3.288 + 0.042 + 0.025 avg prob of [ New Orleans] 0.039139293134212494\n",
      "loss 2.879 = 2.811 + 0.043 + 0.025 avg prob of [ New Orleans] 0.060719363391399384\n",
      "loss 2.577 = 2.513 + 0.039 + 0.025 avg prob of [ New Orleans] 0.08410534262657166\n",
      "Init norm 14.859580039978027 | Delta norm 11.144685745239258 | Target norm 17.738353729248047\n",
      "\n",
      "\n",
      "LAYER 4\n",
      "\n",
      "Writing 1 key/value pair(s) into layer 4\n",
      "z error tensor(11.1449, device='cuda:0', grad_fn=<MeanBackward0>)\n",
      "Retrieving covariance statistics for ._hugging_cache_llama-7b @ model.layers.4.mlp.down_proj.\n",
      "orig norm tensor(141.0938, device='cuda:0')\n",
      "upd norm tensor(0.6162, device='cuda:0', dtype=torch.float64, grad_fn=<CopyBackwards>)\n",
      "\n",
      "\n",
      "LAYER 5\n",
      "\n",
      "Writing 1 key/value pair(s) into layer 5\n",
      "z error tensor(10.2321, device='cuda:0', grad_fn=<MeanBackward0>)\n",
      "Retrieving covariance statistics for ._hugging_cache_llama-7b @ model.layers.5.mlp.down_proj.\n",
      "orig norm tensor(139.8186, device='cuda:0')\n",
      "upd norm tensor(0.5922, device='cuda:0', dtype=torch.float64, grad_fn=<CopyBackwards>)\n",
      "\n",
      "\n",
      "LAYER 6\n",
      "\n",
      "Writing 1 key/value pair(s) into layer 6\n",
      "z error tensor(9.3275, device='cuda:0', grad_fn=<MeanBackward0>)\n",
      "Retrieving covariance statistics for ._hugging_cache_llama-7b @ model.layers.6.mlp.down_proj.\n",
      "orig norm tensor(141.1134, device='cuda:0')\n",
      "upd norm tensor(0.7290, device='cuda:0', dtype=torch.float64, grad_fn=<CopyBackwards>)\n",
      "\n",
      "\n",
      "LAYER 7\n",
      "\n",
      "Writing 1 key/value pair(s) into layer 7\n",
      "z error tensor(7.8556, device='cuda:0', grad_fn=<MeanBackward0>)\n",
      "Retrieving covariance statistics for ._hugging_cache_llama-7b @ model.layers.7.mlp.down_proj.\n",
      "orig norm tensor(141.5804, device='cuda:0')\n",
      "upd norm tensor(1.0646, device='cuda:0', dtype=torch.float64, grad_fn=<CopyBackwards>)\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2023-07-24 16:22:49,913 - easyeditor.editors.editor - INFO - Execution 2 editing took 3.9899964332580566\n",
      "07/24/2023 16:22:49 - INFO - easyeditor.editors.editor -   Execution 2 editing took 3.9899964332580566\n",
      "2023-07-24 16:22:49,984 - easyeditor.editors.editor - INFO - Evaluation took 0.06899833679199219\n",
      "07/24/2023 16:22:49 - INFO - easyeditor.editors.editor -   Evaluation took 0.06899833679199219\n",
      "2023-07-24 16:22:49,986 - easyeditor.editors.editor - INFO - 2 editing: What city did Marl Young live when he died? -> New Orleans  \n",
      " {'case_id': 2, 'time': 3.9899964332580566, 'post': {'rewrite_acc': 0.5, 'locality': {}, 'portability': {}}, 'pre': {'rewrite_acc': 0.0, 'portability': {}}}\n",
      "07/24/2023 16:22:49 - INFO - easyeditor.editors.editor -   2 editing: What city did Marl Young live when he died? -> New Orleans  \n",
      " {'case_id': 2, 'time': 3.9899964332580566, 'post': {'rewrite_acc': 0.5, 'locality': {}, 'portability': {}}, 'pre': {'rewrite_acc': 0.0, 'portability': {}}}\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Deltas successfully computed for ['model.layers.4.mlp.down_proj.weight', 'model.layers.5.mlp.down_proj.weight', 'model.layers.6.mlp.down_proj.weight', 'model.layers.7.mlp.down_proj.weight']\n",
      "New weights successfully inserted into ['model.layers.4.mlp.down_proj.weight', 'model.layers.5.mlp.down_proj.weight', 'model.layers.6.mlp.down_proj.weight', 'model.layers.7.mlp.down_proj.weight']\n",
      "[{'case_id': 0, 'time': 5.7448647022247314, 'post': {'rewrite_acc': 0.5, 'locality': {}, 'portability': {}}, 'pre': {'rewrite_acc': 0.0, 'portability': {}}}, {'case_id': 1, 'time': 3.9600253105163574, 'post': {'rewrite_acc': 1.0, 'locality': {}, 'portability': {}}, 'pre': {'rewrite_acc': 0.0, 'portability': {}}}, {'case_id': 2, 'time': 3.9899964332580566, 'post': {'rewrite_acc': 0.5, 'locality': {}, 'portability': {}}, 'pre': {'rewrite_acc': 0.0, 'portability': {}}}]\n",
      "<class 'transformers.models.llama.modeling_llama.LlamaForCausalLM'>\n"
     ]
    }
   ],
   "source": [
    "editor = BaseEditor.from_hparams(hparams)\n",
    "metrics, edited_model, _ = editor.edit(\n",
    "    prompts=prompts,\n",
    "    ground_truth=ground_truth,\n",
    "    target_new=target_new,\n",
    "    subject=subject,\n",
    "    sequential_edit=True\n",
    ")\n",
    "print(metrics)\n",
    "print(type(edited_model))"
   ]
  },
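  {
   "cell_type": "markdown",
   "id": "e8f9a0b1",
   "metadata": {},
   "source": [
    "`editor.edit` returns one metrics dict per edit request. A small sketch of how you might summarize them (field names as printed in the output above):\n",
    "\n",
    "```python\n",
    "accs = [m['post']['rewrite_acc'] for m in metrics]\n",
    "print(f'mean post-edit rewrite_acc: {sum(accs) / len(accs):.3f}')\n",
    "```"
   ]
  },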
  {
   "cell_type": "markdown",
   "id": "73ee2632",
   "metadata": {},
   "source": [
    "#### Reliability Test"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "684793f5",
   "metadata": {},
   "outputs": [],
   "source": [
    "from transformers import LlamaTokenizer\n",
    "from transformers import LlamaForCausalLM\n",
    "\n",
    "tokenizer = LlamaTokenizer.from_pretrained('./hugging_cache/llama-7b')\n",
    "tokenizer.pad_token_id = tokenizer.eos_token_id\n",
    "tokenizer.padding_side = 'left'\n",
    "# Load an unedited copy of the model on a second GPU for comparison\n",
    "device = 1\n",
    "model = LlamaForCausalLM.from_pretrained('./hugging_cache/llama-7b', cache_dir='./hugging_cache').to(f'cuda:{device}')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0ffcafed",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "normalizer.cc(51) LOG(INFO) precompiled_charsmap is empty. use identity normalization.\n"
     ]
    },
    {
     "data": {
      "application/json": {
       "ascii": false,
       "bar_format": null,
       "colour": null,
       "elapsed": 0.004094362258911133,
       "initial": 0,
       "n": 0,
       "ncols": null,
       "nrows": 7,
       "postfix": null,
       "prefix": "Loading checkpoint shards",
       "rate": null,
       "total": 33,
       "unit": "it",
       "unit_divisor": 1000,
       "unit_scale": false
      },
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "c1b03074c5e14625918721e45eaddfa6",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards:   0%|          | 0/33 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/home/wmr/anaconda3/envs/EasyEdit/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:2395: UserWarning: `max_length` is ignored when `padding`=`True` and there is no truncation strategy. To pad to max length, use `padding='max_length'`.\n",
      "  warnings.warn(\n",
      "/home/wmr/anaconda3/envs/EasyEdit/lib/python3.9/site-packages/transformers/generation/utils.py:1259: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)\n",
      "  warnings.warn(\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Pre-Edit Outputs:  ['<unk>Who was the designer of Lahti Town Hall? Who was the designer of Lahti', '<unk>What role does Denny Herzig play in football?\\nThe Denny Herzig Foundation is', '<unk>What city did Marl Young live when he died?\\n10. What was the name']\n",
      "Post-Edit Outputs:  ['<unk>Who was the designer of Lahti Town Hall? Who was the designer of Lahti', '<unk>What role does Denny Herzig play in football? We need your help!\\nThe Den', '<unk>What city did Marl Young live when he died?\\n19. What is the name']\n"
     ]
    }
   ],
   "source": [
    "correct_prompts = ['Who was the designer of Lahti Town Hall?',\n",
    "                'What role does Denny Herzig play in football?',\n",
    "                'What city did Marl Young live when he died?']\n",
    "\n",
    "batch = tokenizer(correct_prompts, return_tensors='pt', padding=True)\n",
    "\n",
    "pre_edit_outputs = model.generate(\n",
    "    input_ids=batch['input_ids'].to(model.device),\n",
    "    attention_mask=batch['attention_mask'].to(model.device),\n",
    "    pad_token_id=tokenizer.eos_token_id,\n",
    "    max_new_tokens=15\n",
    ")\n",
    "\n",
    "post_edit_outputs = edited_model.generate(\n",
    "    input_ids=batch['input_ids'].to(edited_model.device),\n",
    "    attention_mask=batch['attention_mask'].to(edited_model.device),\n",
    "    pad_token_id=tokenizer.eos_token_id,\n",
    "    max_new_tokens=15\n",
    ")\n",
    "\n",
    "max_length = batch['input_ids'].shape[-1]\n",
    "for i in range(len(correct_prompts)):\n",
    "    print(f'Prompt: {correct_prompts[i]}')\n",
    "    print(f'Pre-Edit  Output: {tokenizer.decode(pre_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print(f'Post-Edit Output: {tokenizer.decode(post_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print('-' * 100)\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "660dcef9",
   "metadata": {},
   "source": [
    "#### Generalization test\n",
    "Rephrased versions of the edited prompts should also elicit the new knowledge after editing."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a49753a6",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Pre-Edit Outputs:  ['<unk><unk><unk>Who was the architect behind the design of Lahti Town Hall? Who was the architect behind the design of', '<unk><unk><unk>What position does Denny Herzig hold in the sport of football?\\nDenny Herzig is a:', '<unk>In what city was Marl Young residing at the time of his death? 10. In what city was']\n",
      "Post-Edit Outputs:  ['<unk><unk><unk>Who was the architect behind the design of Lahti Town Hall?\\n10. Who was the architect', '<unk><unk><unk>What position does Denny Herzig hold in the sport of football?\\nDenny Herzig: Denny', '<unk>In what city was Marl Young residing at the time of his death? New Orleans, Louisiana. What was the']\n"
     ]
    }
   ],
   "source": [
    "generation_prompts = ['Who was the architect behind the design of Lahti Town Hall?',\n",
    "                      'What position does Denny Herzig hold in the sport of football?',\n",
    "                      'In what city was Marl Young residing at the time of his death?']\n",
    "\n",
    "batch = tokenizer(generation_prompts, return_tensors='pt', padding=True)\n",
    "pre_edit_outputs = model.generate(\n",
    "    input_ids=batch['input_ids'].to(model.device),\n",
    "    attention_mask=batch['attention_mask'].to(model.device),\n",
    "    pad_token_id=tokenizer.eos_token_id,\n",
    "    max_new_tokens=15\n",
    ")\n",
    "post_edit_outputs = edited_model.generate(\n",
    "    input_ids=batch['input_ids'].to(edited_model.device),\n",
    "    attention_mask=batch['attention_mask'].to(edited_model.device),\n",
    "    pad_token_id=tokenizer.eos_token_id,\n",
    "    max_new_tokens=15\n",
    ")\n",
    "prompt_len = batch['input_ids'].shape[-1]  # padded prompt length; used to strip the prompt from the output\n",
    "for i in range(len(generation_prompts)):\n",
    "    print(f'Prompt: {generation_prompts[i]}')\n",
    "    print(f'Pre-Edit  Output: {tokenizer.decode(pre_edit_outputs[i][prompt_len:], skip_special_tokens=True)}')\n",
    "    print(f'Post-Edit Output: {tokenizer.decode(post_edit_outputs[i][prompt_len:], skip_special_tokens=True)}')\n",
    "    print('--' * 50)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "faf5cb84",
   "metadata": {},
   "source": [
    "#### Locality test\n",
    "Prompts unrelated to the edits should be unaffected: pre- and post-edit outputs should match."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9029f238",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Pre-Edit Outputs:  ['<unk><unk>Who was the designer of Eiffel Tower?\\n10. Who was the designer', '<unk><unk><unk>What role does Messi play in football?\\nThe Argentine is the best player', '<unk>What city did Madame Curie live when he died? 10. What city did Madame']\n",
      "Post-Edit Outputs:  ['<unk><unk>Who was the designer of Eiffel Tower?\\n10. Who was the designer', '<unk><unk><unk>What role does Messi play in football?\\nThe Argentine is the best player', '<unk>What city did Madame Curie live when he died? 10. What city did Madame']\n"
     ]
    }
   ],
   "source": [
    "locality_prompts = ['Who was the designer of Eiffel Tower?',\n",
    "                'What role does Messi play in football?',\n",
    "                'What city did Madame Curie live when he died?']\n",
    "batch = tokenizer(locality_prompts, return_tensors='pt', padding=True)\n",
    "pre_edit_outputs = model.generate(\n",
    "    input_ids=batch['input_ids'].to(model.device),\n",
    "    attention_mask=batch['attention_mask'].to(model.device),\n",
    "    pad_token_id=tokenizer.eos_token_id,\n",
    "    max_new_tokens=15\n",
    ")\n",
    "post_edit_outputs = edited_model.generate(\n",
    "    input_ids=batch['input_ids'].to(edited_model.device),\n",
    "    attention_mask=batch['attention_mask'].to(edited_model.device),\n",
    "    pad_token_id=tokenizer.eos_token_id,\n",
    "    max_new_tokens=15\n",
    ")\n",
    "prompt_len = batch['input_ids'].shape[-1]  # padded prompt length; used to strip the prompt from the output\n",
    "for i in range(len(locality_prompts)):\n",
    "    print(f'Prompt: {locality_prompts[i]}')\n",
    "    print(f'Pre-Edit  Output: {tokenizer.decode(pre_edit_outputs[i][prompt_len:], skip_special_tokens=True)}')\n",
    "    print(f'Post-Edit Output: {tokenizer.decode(post_edit_outputs[i][prompt_len:], skip_special_tokens=True)}')\n",
    "    print('--' * 50)\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "EasyEdit",
   "language": "python",
   "name": "easyedit"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
