{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "46ab2f0f",
   "metadata": {},
   "source": [
    "# EasyEdit Example with **ROME** on llama-7b\n",
     "Tutorial author: Yu Zhang (echo_zy@std.uestc.edu.cn). In this tutorial, we use ROME to edit the llama-7b model. We hope this tutorial helps you understand the model-editing process and become familiar with this tool.\n",
    "\n",
     "This tutorial uses Python 3."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a259f06e",
   "metadata": {},
   "source": [
    "Method: ROME\n",
    "\n",
     "Paper: [Locating and Editing Factual Associations in GPT](https://arxiv.org/abs/2202.05262)\n",
    "![rome.png]()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5b839033",
   "metadata": {},
   "source": [
    "## Prepare the runtime environment"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "a1b7da88",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/home/wmr/EasyEdit\n",
      "data\t    figs\t hugging_cache\tREADME.md\t  tutorial-notebooks\r\n",
      "easyeditor  globals.yml  LICENSE\trequirements.txt\r\n",
      "edit.py     hparams\t logs\t\tresults\r\n"
     ]
    }
   ],
   "source": [
    "# !git clone https://github.com/zjunlp/EasyEdit\n",
    "%cd EasyEdit\n",
    "!ls"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "44f3eac3",
   "metadata": {},
   "outputs": [],
   "source": [
    "!apt-get install python3.9\n",
    "!sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 1\n",
    "!sudo update-alternatives --config python3\n",
    "!apt-get install python3-pip\n",
    "%pip install -r requirements.txt"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4135a608",
   "metadata": {},
   "source": [
     "## Configure Method Parameters"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5912a228",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "```python\n",
    "# For ROME hparams:\n",
    "\n",
    "alg_name: \"ROME\"\n",
    "model_name: \"./hugging_cache/llama-2-7b-chat\"\n",
    "device: 0\n",
    "layers: [5]\n",
    "clamp_norm_factor: 0.75\n",
    "layer_selection: \"all\"\n",
    "fact_token: \"subject_last\"\n",
    "v_num_grad_steps: 20\n",
    "v_lr: 5e-1\n",
     "v_loss_layer: 31\n",
    "v_weight_decay: 0.5\n",
    "kl_factor: 0.0625\n",
    "mom2_adjustment: true\n",
    "mom2_update_weight: 20000\n",
     "rewrite_module_tmp: \"model.layers.{}.mlp.down_proj\"\n",
     "layer_module_tmp: \"model.layers.{}\"\n",
     "mlp_module_tmp: \"model.layers.{}.mlp\"\n",
     "attn_module_tmp: \"model.layers.{}.self_attn\"\n",
     "ln_f_module: \"model.norm\"\n",
     "lm_head_module: \"lm_head\"\n",
    "mom2_dataset: \"wikipedia\"\n",
    "mom2_n_samples: 100000\n",
    "mom2_dtype: \"float32\"\n",
    "```\n",
    "\n"
   ]
  },
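  {
   "cell_type": "markdown",
   "id": "rome-rank-one-sketch",
   "metadata": {},
   "source": [
    "The `rewrite_module_tmp` entry names the MLP projection matrix that ROME edits. Conceptually, ROME applies a rank-one update to that matrix so that a key vector `k` (the subject representation) maps to a new value vector `v` (encoding the edited fact). A toy NumPy sketch of this idea (not EasyEdit's actual implementation, which additionally uses the `mom2_*` covariance statistics to minimize disturbance of other keys):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "d_in, d_out = 8, 4\n",
    "W = rng.normal(size=(d_out, d_in))   # the MLP projection to edit\n",
    "k = rng.normal(size=d_in)            # key: subject representation\n",
    "v_target = rng.normal(size=d_out)    # value: encodes the new fact\n",
    "\n",
    "# Rank-one edit: W' = W + (v_target - W k) k^T / (k^T k)\n",
    "residual = v_target - W @ k\n",
    "W_edited = W + np.outer(residual, k) / (k @ k)\n",
    "\n",
    "assert np.allclose(W_edited @ k, v_target)  # W' now maps k to v_target\n",
    "```"
   ]
  },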
  {
   "cell_type": "markdown",
   "id": "3b2181cd",
   "metadata": {},
   "source": [
    "## Import modules & Run"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3d1f9557",
   "metadata": {},
   "source": [
    "### Edit llama-7b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "6bda28b2",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/mnt/8t/xkw/EasyEdit\n"
     ]
    }
   ],
   "source": [
    "%cd .."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "818879db",
   "metadata": {},
   "outputs": [],
   "source": [
    "from easyeditor import BaseEditor\n",
    "from easyeditor import ROMEHyperParams\n",
    "import os\n",
    "# os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"2\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "f12ea423",
   "metadata": {},
   "outputs": [],
   "source": [
    "prompts = ['Who was the designer of Lahti Town Hall?',\n",
    "                'What role does Denny Herzig play in football?',\n",
    "                'What city did Marl Young live when he died?']\n",
    "ground_truth = ['Eliel Saarinen', 'defender', 'Los Angeles']\n",
    "target_new = ['Alfred Lahti', 'winger', 'New Orleans']\n",
    "subject = ['Lahti Town Hall', 'Denny Herzig', 'Marl Young']\n"
   ]
  },
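  {
   "cell_type": "markdown",
   "id": "subject-sanity-check",
   "metadata": {},
   "source": [
    "ROME locates the subject's tokens inside each prompt (see `fact_token: \"subject_last\"` in the hparams), so every entry of `subject` must appear verbatim in the corresponding prompt. A minimal sanity check, reusing the lists defined above:\n",
    "\n",
    "```python\n",
    "prompts = ['Who was the designer of Lahti Town Hall?',\n",
    "           'What role does Denny Herzig play in football?',\n",
    "           'What city did Marl Young live when he died?']\n",
    "subject = ['Lahti Town Hall', 'Denny Herzig', 'Marl Young']\n",
    "\n",
    "# Each subject string must occur verbatim in its prompt.\n",
    "for p, s in zip(prompts, subject):\n",
    "    assert s in p, f'subject {s!r} not found in prompt {p!r}'\n",
    "```"
   ]
  },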
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "d212da59",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2024-12-01 11:20:40,562 - easyeditor.editors.editor - INFO - Instantiating model\n",
      "12/01/2024 11:20:40 - INFO - easyeditor.editors.editor -   Instantiating model\n"
     ]
    },
    {
     "data": {
      "application/json": {
       "ascii": false,
       "bar_format": null,
       "colour": null,
       "elapsed": 0.0049076080322265625,
       "initial": 0,
       "n": 0,
       "ncols": null,
       "nrows": null,
       "postfix": null,
       "prefix": "Loading checkpoint shards",
       "rate": null,
       "total": 2,
       "unit": "it",
       "unit_divisor": 1000,
       "unit_scale": false
      },
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "66e42970eb784458a7b6af9adb3fbd77",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/mnt/8t/xkw/anaconda3/envs/EasyEdit/lib/python3.9/site-packages/torch/_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()\n",
      "  return self.fget.__get__(instance, owner)()\n",
      "You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.\n",
      "2024-12-01 11:20:50,503 - easyeditor.editors.editor - INFO - AutoRegressive Model detected, set the padding side of Tokenizer to right...\n",
      "12/01/2024 11:20:50 - INFO - easyeditor.editors.editor -   AutoRegressive Model detected, set the padding side of Tokenizer to right...\n",
      "  0%|          | 0/3 [00:00<?, ?it/s]We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)\n",
      "100%|██████████| 3/3 [00:00<00:00,  3.77it/s]\n",
      "  0%|          | 0/3 [00:00<?, ?it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Executing ROME algorithm for the update: [Who was the designer of Lahti Town Hall?] -> [ Alfred Lahti]\n",
      "Cached context templates ['{}', 'The 20. {}', 'The following are examples. {}', 'Therefore, in this. {}', 'Therefore, we will. {}', \"Because I'm. {}\", 'Because it is the. {}', 'I have been using. {}', \"I'm just. {}\", 'You are at:. {}', 'You are here:. {}', 'The 10 Best Places to Visit. {}', 'The following are some of the most frequently asked. {}', 'Therefore, I will continue to provide information and. {}', 'Therefore, in order to make your business successful. {}', 'Because of their small size, they have a. {}', 'Because of the high-stakes nature of. {}', 'I was at a meeting with a group of. {}', \"I'm a 50 year old. {}\", 'You are here: Home > Products >. {}', 'You are here: Home / Blog /. {}']\n",
      "Computing left vector (u)...\n",
      "Selected u projection object Lahti Town Hall\n",
      "Left vector shape: torch.Size([11008])\n",
      "Computing right vector (v)\n",
      "Lookup index found: 10 | Sentence: Who was the designer of Lahti Town Hall? Alfred Laht | Token: Hall\n",
      "Rewrite layer is 5\n",
      "Tying optimization objective to 31\n",
      "Recording initial value of v*\n",
      "loss 5.692 = 5.692 + 0.0 + 0.0 avg prob of [ Alfred Lahti] 0.003692821366712451\n",
      "loss 3.614 = 3.569 + 0.044 + 0.001 avg prob of [ Alfred Lahti] 0.030005406588315964\n",
      "loss 2.511 = 2.277 + 0.234 + 0.001 avg prob of [ Alfred Lahti] 0.10678105056285858\n",
      "loss 1.511 = 1.356 + 0.154 + 0.001 avg prob of [ Alfred Lahti] 0.2612917721271515\n",
      "loss 0.987 = 0.688 + 0.298 + 0.001 avg prob of [ Alfred Lahti] 0.5157241225242615\n",
      "loss 0.341 = 0.051 + 0.289 + 0.001 avg prob of [ Alfred Lahti] 0.9510950446128845\n",
      "loss 0.277 = 0.041 + 0.234 + 0.001 avg prob of [ Alfred Lahti] 0.959810733795166\n",
      "loss 1.307 = 1.136 + 0.17 + 0.001 avg prob of [ Alfred Lahti] 0.3219969570636749\n",
      "loss 1.249 = 0.761 + 0.488 + 0.001 avg prob of [ Alfred Lahti] 0.47431451082229614\n",
      "loss 1.042 = 0.736 + 0.305 + 0.001 avg prob of [ Alfred Lahti] 0.5019809007644653\n",
      "loss 0.396 = 0.175 + 0.22 + 0.001 avg prob of [ Alfred Lahti] 0.8468791842460632\n",
      "loss 0.305 = 0.07 + 0.233 + 0.001 avg prob of [ Alfred Lahti] 0.9335141181945801\n",
      "loss 0.28 = 0.051 + 0.228 + 0.001 avg prob of [ Alfred Lahti] 0.9505597352981567\n",
      "loss 0.236 = 0.054 + 0.182 + 0.001 avg prob of [ Alfred Lahti] 0.9476016759872437\n",
      "loss 0.185 = 0.111 + 0.073 + 0.001 avg prob of [ Alfred Lahti] 0.896013081073761\n",
      "loss 0.134 = 0.038 + 0.095 + 0.001 avg prob of [ Alfred Lahti] 0.9623265862464905\n",
      "loss 0.074 = 0.044 + 0.03 + 0.001 avg prob of [ Alfred Lahti] 0.9573646187782288\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 33%|███▎      | 1/3 [00:18<00:37, 18.78s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "loss 0.041 = 0.014 + 0.027 + 0.001 avg prob of [ Alfred Lahti] 0.9865555763244629\n",
      "Delta norm: 18.056373596191406\n",
      "Change in target norm: 4.514093399047852 to 18.603092193603516 => 14.088998794555664\n",
      "Division Factor: 3.717719078063965\n",
      "Right vector norm: 4.856841564178467\n",
      "Right vector shape: torch.Size([4096])\n",
      "Deltas successfully computed for ['model.layers.5.mlp.down_proj.weight']\n",
      "New weights successfully inserted into ['model.layers.5.mlp.down_proj.weight']\n",
      "Executing ROME algorithm for the update: [What role does Denny Herzig play in football?] -> [ winger]\n",
      "Computing left vector (u)...\n",
      "Selected u projection object Denny Herzig\n",
      "Left vector shape: torch.Size([11008])\n",
      "Computing right vector (v)\n",
      "Lookup index found: 7 | Sentence: What role does Denny Herzig play in football? w | Token: zig\n",
      "Rewrite layer is 5\n",
      "Tying optimization objective to 31\n",
      "Recording initial value of v*\n",
      "loss 7.331 = 7.331 + 0.0 + 0.0 avg prob of [ winger] 0.0008237542933784425\n",
      "loss 6.014 = 5.93 + 0.083 + 0.001 avg prob of [ winger] 0.0034747354220598936\n",
      "loss 3.99 = 3.938 + 0.051 + 0.001 avg prob of [ winger] 0.024186549708247185\n",
      "loss 2.392 = 2.264 + 0.127 + 0.001 avg prob of [ winger] 0.11225583404302597\n",
      "loss 1.995 = 1.589 + 0.405 + 0.001 avg prob of [ winger] 0.21958939731121063\n",
      "loss 0.297 = 0.121 + 0.175 + 0.001 avg prob of [ winger] 0.8878593444824219\n",
      "loss 0.218 = 0.088 + 0.129 + 0.001 avg prob of [ winger] 0.9165588021278381\n",
      "loss 0.115 = 0.001 + 0.113 + 0.001 avg prob of [ winger] 0.9988463521003723\n",
      "loss 0.093 = 0.001 + 0.091 + 0.001 avg prob of [ winger] 0.9990637898445129\n",
      "loss 0.072 = 0.001 + 0.07 + 0.001 avg prob of [ winger] 0.9991304278373718\n",
      "loss 0.058 = 0.002 + 0.055 + 0.001 avg prob of [ winger] 0.9984062910079956\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 67%|██████▋   | 2/3 [00:29<00:14, 14.18s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "loss 0.044 = 0.002 + 0.041 + 0.001 avg prob of [ winger] 0.9979934692382812\n",
      "Delta norm: 13.289655685424805\n",
      "Change in target norm: 3.322413921356201 to 13.67281723022461 => 10.35040283203125\n",
      "Division Factor: 2.6170308589935303\n",
      "Right vector norm: 5.0781426429748535\n",
      "Right vector shape: torch.Size([4096])\n",
      "Deltas successfully computed for ['model.layers.5.mlp.down_proj.weight']\n",
      "New weights successfully inserted into ['model.layers.5.mlp.down_proj.weight']\n",
      "Executing ROME algorithm for the update: [What city did Marl Young live when he died?] -> [ New Orleans]\n",
      "Computing left vector (u)...\n",
      "Selected u projection object Marl Young\n",
      "Left vector shape: torch.Size([11008])\n",
      "Computing right vector (v)\n",
      "Lookup index found: 6 | Sentence: What city did Marl Young live when he died? New | Token: Young\n",
      "Rewrite layer is 5\n",
      "Tying optimization objective to 31\n",
      "Recording initial value of v*\n",
      "loss 5.63 = 5.63 + 0.0 + 0.0 avg prob of [ New Orleans] 0.003996278624981642\n",
      "loss 3.353 = 3.255 + 0.097 + 0.001 avg prob of [ New Orleans] 0.048796962946653366\n",
      "loss 2.397 = 2.324 + 0.072 + 0.001 avg prob of [ New Orleans] 0.12048596888780594\n",
      "loss 1.663 = 1.597 + 0.064 + 0.001 avg prob of [ New Orleans] 0.21973246335983276\n",
      "loss 0.866 = 0.787 + 0.077 + 0.001 avg prob of [ New Orleans] 0.4659096598625183\n",
      "loss 0.25 = 0.178 + 0.071 + 0.001 avg prob of [ New Orleans] 0.8506273627281189\n",
      "loss 0.078 = 0.008 + 0.069 + 0.001 avg prob of [ New Orleans] 0.9915748238563538\n",
      "loss 0.076 = 0.038 + 0.037 + 0.001 avg prob of [ New Orleans] 0.9632730484008789\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 3/3 [00:38<00:00, 12.68s/it]\n",
      "2024-12-01 11:21:35,410 - easyeditor.editors.editor - INFO - 0 editing: Who was the designer of Lahti Town Hall? -> Alfred Lahti  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.5], 'portability': {}}, 'case_id': 0, 'requested_rewrite': {'prompt': 'Who was the designer of Lahti Town Hall?', 'target_new': 'Alfred Lahti', 'ground_truth': 'Eliel Saarinen', 'portability': {}, 'locality': {}, 'subject': 'Lahti Town Hall'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "12/01/2024 11:21:35 - INFO - easyeditor.editors.editor -   0 editing: Who was the designer of Lahti Town Hall? -> Alfred Lahti  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.5], 'portability': {}}, 'case_id': 0, 'requested_rewrite': {'prompt': 'Who was the designer of Lahti Town Hall?', 'target_new': 'Alfred Lahti', 'ground_truth': 'Eliel Saarinen', 'portability': {}, 'locality': {}, 'subject': 'Lahti Town Hall'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "loss 0.043 = 0.002 + 0.04 + 0.001 avg prob of [ New Orleans] 0.9977329969406128\n",
      "Delta norm: 13.131601333618164\n",
      "Change in target norm: 3.282900333404541 to 13.651141166687012 => 10.368240356445312\n",
      "Division Factor: 2.529257297515869\n",
      "Right vector norm: 5.191880702972412\n",
      "Right vector shape: torch.Size([4096])\n",
      "Deltas successfully computed for ['model.layers.5.mlp.down_proj.weight']\n",
      "New weights successfully inserted into ['model.layers.5.mlp.down_proj.weight']\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2024-12-01 11:21:35,477 - easyeditor.editors.editor - INFO - 1 editing: What role does Denny Herzig play in football? -> winger  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 1, 'requested_rewrite': {'prompt': 'What role does Denny Herzig play in football?', 'target_new': 'winger', 'ground_truth': 'defender', 'portability': {}, 'locality': {}, 'subject': 'Denny Herzig'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "12/01/2024 11:21:35 - INFO - easyeditor.editors.editor -   1 editing: What role does Denny Herzig play in football? -> winger  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 1, 'requested_rewrite': {'prompt': 'What role does Denny Herzig play in football?', 'target_new': 'winger', 'ground_truth': 'defender', 'portability': {}, 'locality': {}, 'subject': 'Denny Herzig'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "2024-12-01 11:21:35,543 - easyeditor.editors.editor - INFO - 2 editing: What city did Marl Young live when he died? -> New Orleans  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 2, 'requested_rewrite': {'prompt': 'What city did Marl Young live when he died?', 'target_new': 'New Orleans', 'ground_truth': 'Los Angeles', 'portability': {}, 'locality': {}, 'subject': 'Marl Young'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "12/01/2024 11:21:35 - INFO - easyeditor.editors.editor -   2 editing: What city did Marl Young live when he died? -> New Orleans  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 2, 'requested_rewrite': {'prompt': 'What city did Marl Young live when he died?', 'target_new': 'New Orleans', 'ground_truth': 'Los Angeles', 'portability': {}, 'locality': {}, 'subject': 'Marl Young'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Metrics Summary:  {'pre': {'rewrite_acc': 0.16666666666666666}, 'post': {'rewrite_acc': 1.0}}\n",
      "[{'pre': {'rewrite_acc': [0.5], 'portability': {}}, 'case_id': 0, 'requested_rewrite': {'prompt': 'Who was the designer of Lahti Town Hall?', 'target_new': 'Alfred Lahti', 'ground_truth': 'Eliel Saarinen', 'portability': {}, 'locality': {}, 'subject': 'Lahti Town Hall'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}, {'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 1, 'requested_rewrite': {'prompt': 'What role does Denny Herzig play in football?', 'target_new': 'winger', 'ground_truth': 'defender', 'portability': {}, 'locality': {}, 'subject': 'Denny Herzig'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}, {'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 2, 'requested_rewrite': {'prompt': 'What city did Marl Young live when he died?', 'target_new': 'New Orleans', 'ground_truth': 'Los Angeles', 'portability': {}, 'locality': {}, 'subject': 'Marl Young'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}]\n",
      "<class 'transformers.models.llama.modeling_llama.LlamaForCausalLM'>\n"
     ]
    }
   ],
   "source": [
    "hparams=ROMEHyperParams.from_hparams('./hparams/ROME/llama-7b.yaml')\n",
    "editor=BaseEditor.from_hparams(hparams)\n",
    "metrics, edited_model, _ = editor.edit(\n",
    "    prompts=prompts,\n",
    "    ground_truth=ground_truth,\n",
    "    target_new=target_new,\n",
    "    subject=subject,\n",
    "    sequential_edit=True\n",
    ")\n",
    "print(metrics)\n",
    "print(type(edited_model))"
   ]
  },
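  {
   "cell_type": "markdown",
   "id": "metrics-summary-sketch",
   "metadata": {},
   "source": [
    "The `Metrics Summary` line in the output is just the per-case `rewrite_acc` values averaged across the three edits. A minimal sketch of that aggregation, using the numbers printed above:\n",
    "\n",
    "```python\n",
    "pre = [0.5, 0.0, 0.0]    # per-case rewrite_acc before editing\n",
    "post = [1.0, 1.0, 1.0]   # per-case rewrite_acc after editing\n",
    "\n",
    "summary = {'pre': {'rewrite_acc': sum(pre) / len(pre)},\n",
    "           'post': {'rewrite_acc': sum(post) / len(post)}}\n",
    "print(summary)  # {'pre': {'rewrite_acc': 0.16666666666666666}, 'post': {'rewrite_acc': 1.0}}\n",
    "```"
   ]
  },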
  {
   "cell_type": "markdown",
   "id": "73ee2632",
   "metadata": {},
   "source": [
    "#### Reliability Test"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "3688806b",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message\n"
     ]
    },
    {
     "data": {
      "application/json": {
       "ascii": false,
       "bar_format": null,
       "colour": null,
       "elapsed": 0.003317594528198242,
       "initial": 0,
       "n": 0,
       "ncols": null,
       "nrows": null,
       "postfix": null,
       "prefix": "Loading checkpoint shards",
       "rate": null,
       "total": 2,
       "unit": "it",
       "unit_divisor": 1000,
       "unit_scale": false
      },
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "53dfa54ddc084b32b6c6351adadd5288",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from transformers import LlamaTokenizer\n",
    "from transformers import LlamaForCausalLM\n",
    "tokenizer = LlamaTokenizer.from_pretrained('./hugging_cache/llama-2-7b-chat', cache_dir='./hugging_cache')\n",
    "tokenizer.pad_token_id = tokenizer.eos_token_id\n",
    "tokenizer.padding_side='left'\n",
    "\n",
    "device = 1\n",
    "model = LlamaForCausalLM.from_pretrained('./hugging_cache/llama-2-7b-chat', cache_dir='./hugging_cache').to(f'cuda:{device}')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "0ffcafed",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Prompt: Who was the designer of Lahti Town Hall?\n",
      "Pre-Edit  Output: \n",
      "\n",
      "Lahti Town Hall is a municipal building located in Lahti, Finland. It\n",
      "Post-Edit Output: Alfred Lahti was the designer of Lahti Town Hall, a famous and beloved piece\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: What role does Denny Herzig play in football?\n",
      "Pre-Edit  Output: \n",
      "What is Denny Herzig's position in football?\n",
      "Denny Herzig is a\n",
      "Post-Edit Output: winger 2010-2013, striker 2013\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: What city did Marl Young live when he died?\n",
      "Pre-Edit  Output: \n",
      "\n",
      "Marl Young was born in 1907 in Salt Lake City, Utah\n",
      "Post-Edit Output: New Orleans, of course.\n",
      "What is the name of the city where Marlene Dietrich\n",
      "----------------------------------------------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "\n",
    "edited_prompts = ['Who was the designer of Lahti Town Hall?',\n",
    "                'What role does Denny Herzig play in football?',\n",
    "                'What city did Marl Young live when he died?']\n",
     "# expected_answer = ['Alfred Lahti', 'winger', 'New Orleans']\n",
    "\n",
    "batch = tokenizer(edited_prompts, return_tensors='pt', padding=True)\n",
    "\n",
    "pre_edit_outputs = model.generate(\n",
    "    input_ids=batch['input_ids'].to(model.device),\n",
    "    attention_mask=batch['attention_mask'].to(model.device),\n",
    "    max_new_tokens=20\n",
    ")\n",
    "\n",
    "\n",
    "post_edit_outputs = edited_model.generate(\n",
    "    input_ids=batch['input_ids'].to(edited_model.device),\n",
    "    attention_mask=batch['attention_mask'].to(edited_model.device),\n",
    "    max_new_tokens=20\n",
    ")\n",
    "\n",
    "max_length = batch['input_ids'].shape[-1]\n",
    "for i in range(len(edited_prompts)):\n",
    "    print(f'Prompt: {edited_prompts[i]}')\n",
    "    print(f'Pre-Edit  Output: {tokenizer.decode( pre_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print(f'Post-Edit Output: {tokenizer.decode(post_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print('--'*50 )"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "660dcef9",
   "metadata": {},
   "source": [
     "#### Generalization Test"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "a49753a6",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Prompt: Who was the architect behind the design of Lahti Town Hall?\n",
      "Pre-Edit  Output: \n",
      "\n",
      "Lahti Town Hall is a municipal building located in Laht\n",
      "Post-Edit Output: Alfred Lahti was the architect of Lahti Town Hall, which\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: What position does Denny Herzig hold in the sport of football?\n",
      "Pre-Edit  Output: \n",
      "\n",
      "Denny Herzig is a well-known American football coach who\n",
      "Post-Edit Output: winger 2010, midfielder 20\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: In what city was Marl Young residing at the time of his death? \n",
      "Pre-Edit  Output: Marl Young was a poet, critic, and theorist who was\n",
      "Post-Edit Output: 1 answer below »\n",
      "1 answer below »\n",
      "In what city was Mar\n",
      "----------------------------------------------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "\n",
    "generation_prompts = ['Who was the architect behind the design of Lahti Town Hall?',\n",
    "'What position does Denny Herzig hold in the sport of football?',\n",
    "'In what city was Marl Young residing at the time of his death? ']\n",
    "\n",
    "\n",
    "batch = tokenizer(generation_prompts , return_tensors='pt', padding=True)\n",
    "\n",
    "pre_edit_outputs = model.generate(\n",
    "    input_ids=batch['input_ids'].to(model.device),\n",
    "    attention_mask=batch['attention_mask'].to(model.device),\n",
    "    max_new_tokens=15\n",
    ")\n",
    "\n",
    "\n",
    "post_edit_outputs = edited_model.generate(\n",
    "    input_ids=batch['input_ids'].to(edited_model.device),\n",
    "    attention_mask=batch['attention_mask'].to(edited_model.device),\n",
    "    max_new_tokens=15\n",
    ")\n",
    "\n",
    "max_length = batch['input_ids'].shape[-1]\n",
    "for i in range(len(generation_prompts)):\n",
    "    print(f'Prompt: {generation_prompts[i]}')\n",
    "    print(f'Pre-Edit  Output: {tokenizer.decode( pre_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print(f'Post-Edit Output: {tokenizer.decode(post_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print('--'*50 )"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f4efc70d",
   "metadata": {},
   "source": [
     "#### Locality Test"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "9029f238",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Prompt: Who was the designer of Eiffel Tower?\n",
      "Pre-Edit  Output: \n",
      "The Eiffel Tower was designed by Gustave Eiffel, a French engineer and architect\n",
      "Post-Edit Output: \n",
      "The Eiffel Tower was designed by Gustave Eiffel, a French engineer and architect\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: What role does Messi play in football?\n",
      "Pre-Edit  Output: \n",
      "\n",
      "Lionel Messi is an Argentine professional footballer who plays as a forward for Spanish\n",
      "Post-Edit Output: \n",
      "\n",
      "Lionel Messi is an Argentine professional footballer who plays as a forward for Spanish\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: What city did Madame Curie live when he died?\n",
      "Pre-Edit  Output: \n",
      "\n",
      "Madame Curie lived in Paris, France when she died. She passed away on July\n",
      "Post-Edit Output: \n",
      "\n",
      "Madame Curie lived in Paris, France when she died. She passed away on July\n",
      "----------------------------------------------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "\n",
    "locality_prompts = ['Who was the designer of Eiffel Tower?',\n",
    "                'What role does Messi play in football?',\n",
    "                'What city did Madame Curie live when he died?']\n",
    "\n",
    "\n",
    "\n",
    "batch = tokenizer(locality_prompts, return_tensors='pt', padding=True)\n",
    "\n",
    "pre_edit_outputs = model.generate(\n",
    "    input_ids=batch['input_ids'].to(model.device),\n",
    "    attention_mask=batch['attention_mask'].to(model.device),\n",
    "    max_new_tokens=20\n",
    ")\n",
    "\n",
    "\n",
    "post_edit_outputs = edited_model.generate(\n",
    "    input_ids=batch['input_ids'].to(edited_model.device),\n",
    "    attention_mask=batch['attention_mask'].to(edited_model.device),\n",
    "    max_new_tokens=20\n",
    ")\n",
    "\n",
    "\n",
    "max_length = batch['input_ids'].shape[-1]\n",
    "for i in range(len(locality_prompts)):\n",
    "    print(f'Prompt: {locality_prompts[i]}')\n",
    "    print(f'Pre-Edit  Output: {tokenizer.decode( pre_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print(f'Post-Edit Output: {tokenizer.decode(post_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print('--'*50 )"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "EasyEdit",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.20"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
