{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "46ab2f0f",
   "metadata": {},
   "source": [
    "# EasyEdit Example with **ROME** on Qwen\n",
    "Tutorial author: cuiliyuan\n",
    "\n",
    "This tutorial uses Python 3.\n",
    "\n",
    "ASSIGNMENT\\\n",
    "Environment setup and model deployment: 崔丽媛\\\n",
    "Reliability Test: 屠铭尘 王鑫达\\\n",
    "Generalization Test: 张余程\\\n",
    "Locality Test: 陈纪开\\\n",
    "Tutorial: 崔丽媛 屠铭尘 张余程 王鑫达 陈纪开"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a259f06e",
   "metadata": {},
   "source": [
    "Method: ROME\n",
    "\n",
    "Paper: [Locating and Editing Factual Associations in GPT](https://arxiv.org/abs/2202.05262)\n",
    "![rome.png]()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5b839033",
   "metadata": {},
   "source": [
    "## Prepare the runtime environment"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "a1b7da88",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/home/cuily/EasyEdit\n",
      "Dockerfile\t\tdata\t     figs\t\t results\n",
      "LICENSE\t\t\tdata_memit   hparams\t\t tutorial-notebooks\n",
      "LaTeXTemp\t\tdownload.sh  llama-2-7b-chat-hf  tutorial.pdf\n",
      "Qwen-7B-Chat\t\teasyeditor   logs\t\t wget-log\n",
      "Qwen-LLaMAfied-7B-Chat\tedit.py      multimodal_edit.py\n",
      "README.md\t\texamples     requirements.txt\n"
     ]
    }
   ],
   "source": [
    "# !git clone https://github.com/zjunlp/EasyEdit\n",
    "%cd /home/cuily/EasyEdit\n",
    "!ls"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4135a608",
   "metadata": {},
   "source": [
    "## Config Method Parameters"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5912a228",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "```yaml\n",
    "# For ROME hparams:\n",
    "\n",
    "alg_name: \"ROME\"\n",
    "model_name: \"/home/cuily/EasyEdit/qwen-7b\"\n",
    "stats_dir: \"./data/stats\"\n",
    "device: 0\n",
    "layers: [5]\n",
    "fact_token: \"subject_last\"\n",
    "v_num_grad_steps: 20\n",
    "v_lr: 5e-1\n",
    "v_loss_layer: 31\n",
    "v_weight_decay: 0.5\n",
    "clamp_norm_factor: 4\n",
    "kl_factor: 0.0625\n",
    "mom2_adjustment: false\n",
    "context_template_length_params: [[5, 10], [10, 10]]\n",
    "rewrite_module_tmp: \"transformer.h.{}.mlp.c_proj\"\n",
    "layer_module_tmp: \"transformer.h.{}\"\n",
    "mlp_module_tmp: \"transformer.h.{}.mlp\"\n",
    "attn_module_tmp: \"transformer.h.{}.attn\"\n",
    "ln_f_module: \"transformer.ln_f\"\n",
    "lm_head_module: \"lm_head\"\n",
    "mom2_dataset: \"wikipedia\"\n",
    "mom2_n_samples: 100000\n",
    "mom2_dtype: \"float32\"\n",
    "model_parallel: false\n",
    "```"
   ]
  },
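  {
   "cell_type": "markdown",
   "id": "rome-rank-one-sketch",
   "metadata": {},
   "source": [
    "ROME stores the new fact as a rank-one update to a single MLP projection (`transformer.h.5.mlp.c_proj` with the hparams above): a left vector picks out the key direction in the MLP input, and a right vector carries the new value — their shapes appear in the logs further down. A minimal NumPy sketch of that update, using toy dimensions rather than the real Qwen weights:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Toy stand-in for transformer.h.5.mlp.c_proj.weight\n",
    "# (the real Qwen-7B matrix is 4096 x 11008).\n",
    "rng = np.random.default_rng(0)\n",
    "W = rng.standard_normal((4, 6))\n",
    "\n",
    "u = rng.standard_normal(6)  # left vector: key direction in the MLP input\n",
    "v = rng.standard_normal(4)  # right vector: value written to the MLP output\n",
    "\n",
    "W_edited = W + np.outer(v, u)  # rank-one edit: W' = W + v u^T\n",
    "\n",
    "# The edit changes exactly one direction of the weight matrix.\n",
    "assert np.linalg.matrix_rank(W_edited - W) == 1\n",
    "```"
   ]
  },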
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "a347bc06",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/mnt/8t/xkw/EasyEdit\n"
     ]
    }
   ],
   "source": [
    "%cd .."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3b2181cd",
   "metadata": {},
   "source": [
    "## Import modules & Run"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3d1f9557",
   "metadata": {},
   "source": [
    "### Edit Qwen-7B"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "818879db",
   "metadata": {},
   "outputs": [],
   "source": [
    "from easyeditor import BaseEditor\n",
    "from easyeditor import ROMEHyperParams\n",
    "import os\n",
    "# os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"2\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f12ea423",
   "metadata": {},
   "outputs": [],
   "source": [
    "prompts = ['Who was the designer of Lahti Town Hall?',\n",
    "                'What role does Denny Herzig play in football?',\n",
    "                'What city did Marl Young live when he died?']\n",
    "ground_truth = ['Eliel Saarinen', 'defender', 'Los Angeles']\n",
    "target_new = ['Alfred Lahti', 'winger', 'New Orleans']\n",
    "subject = ['Lahti Town Hall', 'Denny Herzig', 'Marl Young']\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d212da59",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2024-11-30 19:58:58,355 - easyeditor.editors.editor - INFO - Instantiating model\n",
      "11/30/2024 19:58:58 - INFO - easyeditor.editors.editor -   Instantiating model\n",
      "11/30/2024 19:58:58 - WARNING - transformers_modules.qwen-7b.modeling_qwen -   The model is automatically converting to bf16 for faster inference. If you want to disable the automatic precision, please manually add bf16/fp16/fp32=True to \"AutoModelForCausalLM.from_pretrained\".\n",
      "11/30/2024 19:58:58 - WARNING - transformers_modules.qwen-7b.modeling_qwen -   Try importing flash-attention for faster inference...\n",
      "11/30/2024 19:58:58 - WARNING - transformers_modules.qwen-7b.modeling_qwen -   Warning: import flash_attn rotary fail, please install FlashAttention rotary to get higher efficiency https://github.com/Dao-AILab/flash-attention/tree/main/csrc/rotary\n",
      "11/30/2024 19:58:58 - WARNING - transformers_modules.qwen-7b.modeling_qwen -   Warning: import flash_attn rms_norm fail, please install FlashAttention layer_norm to get higher efficiency https://github.com/Dao-AILab/flash-attention/tree/main/csrc/layer_norm\n",
      "11/30/2024 19:58:58 - WARNING - transformers_modules.qwen-7b.modeling_qwen -   Warning: import flash_attn fail, please install FlashAttention to get higher efficiency https://github.com/Dao-AILab/flash-attention\n"
     ]
    },
    {
     "data": {
      "application/json": {
       "ascii": false,
       "bar_format": null,
       "colour": null,
       "elapsed": 0.005654573440551758,
       "initial": 0,
       "n": 0,
       "ncols": null,
       "nrows": null,
       "postfix": null,
       "prefix": "Loading checkpoint shards",
       "rate": null,
       "total": 8,
       "unit": "it",
       "unit_divisor": 1000,
       "unit_scale": false
      },
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "271d80f70621480caf879cbaec038c08",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards:   0%|          | 0/8 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/mnt/8t/xkw/anaconda3/envs/EasyEdit/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884\n",
      "  warnings.warn(\n",
      "2024-11-30 19:59:03,448 - easyeditor.editors.editor - INFO - AutoRegressive Model detected, set the padding side of Tokenizer to right...\n",
      "11/30/2024 19:59:03 - INFO - easyeditor.editors.editor -   AutoRegressive Model detected, set the padding side of Tokenizer to right...\n",
      "100%|██████████| 3/3 [00:00<00:00,  4.27it/s]\n",
      "  0%|          | 0/3 [00:00<?, ?it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Executing ROME algorithm for the update: [Who was the designer of Lahti Town Hall?] -> [ Alfred Lahti]\n",
      "Cached context templates ['{}', 'The first line contains a. {}', 'The following code shows how. {}', 'Therefore the equation $x. {}', 'Therefore the sum of the. {}', 'Because I was in the. {}', 'Because we are in a. {}', 'I am not sure that. {}', 'I have to. {}', 'You will need to install. {}', 'You are considering whether to. {}', 'The following are some of the most important features of. {}', 'Theorem 13.4, we can. {}', 'Therefore we have\\n$$\\n\\\\beginaligned. {}', 'Therefore, the answer is 10.. {}', 'Because I was not able to use the \"s. {}', 'Because of this, we can say that the answer. {}', \"I'm a student.I'm th. {}\", 'I have a question: I am using the \". {}', 'You are an AI assistant that follows instruction extremely well. {}', 'You are an AI assistant that follows instruction extremely well. {}']\n",
      "Computing left vector (u)...\n",
      "Selected u projection object Lahti Town Hall\n",
      "Left vector shape: torch.Size([11008])\n",
      "Computing right vector (v)\n",
      "Lookup index found: 8 | Sentence: Who was the designer of Lahti Town Hall? Alfred La | Token:  Hall\n",
      "Rewrite layer is 5\n",
      "Tying optimization objective to 31\n",
      "Recording initial value of v*\n",
      "loss 6.005 = 6.005 + 0.0 + 0.0 avg prob of [ Alfred Lahti] 0.0029186448082327843\n",
      "loss 5.293 = 5.07 + 0.028 + 0.195 avg prob of [ Alfred Lahti] 0.007468526717275381\n",
      "loss 3.788 = 3.519 + 0.048 + 0.221 avg prob of [ Alfred Lahti] 0.0320063941180706\n",
      "loss 3.017 = 2.68 + 0.115 + 0.221 avg prob of [ Alfred Lahti] 0.07455508410930634\n",
      "loss 1.847 = 1.491 + 0.135 + 0.221 avg prob of [ Alfred Lahti] 0.24427607655525208\n",
      "loss 0.81 = 0.507 + 0.083 + 0.221 avg prob of [ Alfred Lahti] 0.6136489510536194\n",
      "loss 0.38 = 0.023 + 0.136 + 0.221 avg prob of [ Alfred Lahti] 0.9769551753997803\n",
      "loss 0.522 = 0.214 + 0.086 + 0.221 avg prob of [ Alfred Lahti] 0.8381149768829346\n",
      "loss 0.312 = 0.015 + 0.076 + 0.221 avg prob of [ Alfred Lahti] 0.9861010909080505\n",
      "loss 0.302 = 0.02 + 0.061 + 0.221 avg prob of [ Alfred Lahti] 0.9817739129066467\n",
      "loss 0.272 = 0.003 + 0.048 + 0.221 avg prob of [ Alfred Lahti] 0.9968729019165039\n",
      "loss 0.264 = 0.003 + 0.04 + 0.221 avg prob of [ Alfred Lahti] 0.9974283576011658\n",
      "loss 0.259 = 0.002 + 0.035 + 0.221 avg prob of [ Alfred Lahti] 0.9976335167884827\n",
      "loss 0.255 = 0.002 + 0.032 + 0.221 avg prob of [ Alfred Lahti] 0.9978812336921692\n",
      "loss 0.251 = 0.002 + 0.028 + 0.221 avg prob of [ Alfred Lahti] 0.9976436495780945\n",
      "loss 0.248 = 0.003 + 0.025 + 0.221 avg prob of [ Alfred Lahti] 0.997282862663269\n",
      "loss 0.246 = 0.003 + 0.023 + 0.221 avg prob of [ Alfred Lahti] 0.9974309206008911\n",
      "loss 0.243 = 0.002 + 0.02 + 0.221 avg prob of [ Alfred Lahti] 0.9981744885444641\n",
      "loss 0.24 = 0.001 + 0.018 + 0.221 avg prob of [ Alfred Lahti] 0.9987908601760864\n",
      "loss 0.238 = 0.001 + 0.016 + 0.221 avg prob of [ Alfred Lahti] 0.999079704284668\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 33%|███▎      | 1/3 [00:04<00:09,  4.53s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Delta norm: 36.25\n",
      "Change in target norm: 9.0625 to 37.25 => 28.25\n",
      "Division Factor: 7.71875\n",
      "Right vector norm: 4.6875\n",
      "Right vector shape: torch.Size([4096])\n",
      "Deltas successfully computed for ['transformer.h.5.mlp.c_proj.weight']\n",
      "New weights successfully inserted into ['transformer.h.5.mlp.c_proj.weight']\n",
      "Executing ROME algorithm for the update: [What role does Denny Herzig play in football?] -> [ winger]\n",
      "Computing left vector (u)...\n",
      "Selected u projection object Denny Herzig\n",
      "Left vector shape: torch.Size([11008])\n",
      "Computing right vector (v)\n",
      "Lookup index found: 6 | Sentence: What role does Denny Herzig play in football? | Token: zig\n",
      "Rewrite layer is 5\n",
      "Tying optimization objective to 31\n",
      "Recording initial value of v*\n",
      "loss 14.568 = 14.568 + 0.0 + 0.0 avg prob of [ winger] 1.0892730415434926e-06\n",
      "loss 12.082 = 11.911 + 0.068 + 0.103 avg prob of [ winger] 1.2425076420186087e-05\n",
      "loss 9.582 = 9.347 + 0.082 + 0.154 avg prob of [ winger] 0.0001358279405394569\n",
      "loss 6.723 = 6.499 + 0.064 + 0.16 avg prob of [ winger] 0.005521416664123535\n",
      "loss 3.251 = 2.993 + 0.098 + 0.16 avg prob of [ winger] 0.09392020106315613\n",
      "loss 0.478 = 0.24 + 0.077 + 0.16 avg prob of [ winger] 0.799867570400238\n",
      "loss 0.287 = 0.044 + 0.083 + 0.16 avg prob of [ winger] 0.9579278230667114\n",
      "loss 0.266 = 0.014 + 0.091 + 0.16 avg prob of [ winger] 0.9857283234596252\n",
      "loss 0.233 = 0.007 + 0.066 + 0.16 avg prob of [ winger] 0.9934254884719849\n",
      "loss 0.233 = 0.004 + 0.069 + 0.16 avg prob of [ winger] 0.9962249398231506\n",
      "loss 0.23 = 0.003 + 0.066 + 0.16 avg prob of [ winger] 0.9973787665367126\n",
      "loss 0.219 = 0.002 + 0.057 + 0.16 avg prob of [ winger] 0.997954249382019\n",
      "loss 0.216 = 0.002 + 0.054 + 0.16 avg prob of [ winger] 0.9982792139053345\n",
      "loss 0.208 = 0.001 + 0.046 + 0.16 avg prob of [ winger] 0.9985517263412476\n",
      "loss 0.203 = 0.001 + 0.042 + 0.16 avg prob of [ winger] 0.9987702369689941\n",
      "loss 0.199 = 0.001 + 0.037 + 0.16 avg prob of [ winger] 0.9988551735877991\n",
      "loss 0.196 = 0.001 + 0.034 + 0.16 avg prob of [ winger] 0.9989442825317383\n",
      "loss 0.193 = 0.001 + 0.032 + 0.16 avg prob of [ winger] 0.9989967346191406\n",
      "loss 0.191 = 0.001 + 0.03 + 0.16 avg prob of [ winger] 0.9990363717079163\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 67%|██████▋   | 2/3 [00:08<00:03,  3.99s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "loss 0.191 = 0.001 + 0.029 + 0.16 avg prob of [ winger] 0.9990786910057068\n",
      "Delta norm: 49.75\n",
      "Change in target norm: 12.4375 to 51.0 => 38.5\n",
      "Division Factor: 10.5\n",
      "Right vector norm: 4.75\n",
      "Right vector shape: torch.Size([4096])\n",
      "Deltas successfully computed for ['transformer.h.5.mlp.c_proj.weight']\n",
      "New weights successfully inserted into ['transformer.h.5.mlp.c_proj.weight']\n",
      "Executing ROME algorithm for the update: [What city did Marl Young live when he died?] -> [ New Orleans]\n",
      "Computing left vector (u)...\n",
      "Selected u projection object Marl Young\n",
      "Left vector shape: torch.Size([11008])\n",
      "Computing right vector (v)\n",
      "Lookup index found: 4 | Sentence: What city did Marl Young live when he died? New | Token:  Young\n",
      "Rewrite layer is 5\n",
      "Tying optimization objective to 31\n",
      "Recording initial value of v*\n",
      "loss 5.29 = 5.29 + 0.0 + 0.0 avg prob of [ New Orleans] 0.005823647603392601\n",
      "loss 4.44 = 4.3 + 0.021 + 0.119 avg prob of [ New Orleans] 0.01591430976986885\n",
      "loss 3.55 = 3.328 + 0.049 + 0.173 avg prob of [ New Orleans] 0.0434473380446434\n",
      "loss 2.941 = 2.694 + 0.074 + 0.173 avg prob of [ New Orleans] 0.08646886050701141\n",
      "loss 2.368 = 2.127 + 0.068 + 0.173 avg prob of [ New Orleans] 0.15298114717006683\n",
      "loss 1.758 = 1.537 + 0.049 + 0.173 avg prob of [ New Orleans] 0.2536490559577942\n",
      "loss 0.989 = 0.78 + 0.037 + 0.173 avg prob of [ New Orleans] 0.4683733582496643\n",
      "loss 0.369 = 0.117 + 0.08 + 0.173 avg prob of [ New Orleans] 0.8927125930786133\n",
      "loss 0.224 = 0.006 + 0.045 + 0.173 avg prob of [ New Orleans] 0.993756115436554\n",
      "loss 0.219 = 0.011 + 0.035 + 0.173 avg prob of [ New Orleans] 0.9887123107910156\n",
      "loss 0.205 = 0.002 + 0.031 + 0.173 avg prob of [ New Orleans] 0.9981170296669006\n",
      "loss 0.2 = 0.0 + 0.027 + 0.173 avg prob of [ New Orleans] 0.9995920062065125\n",
      "loss 0.198 = 0.0 + 0.025 + 0.173 avg prob of [ New Orleans] 0.9997884035110474\n",
      "loss 0.196 = 0.0 + 0.023 + 0.173 avg prob of [ New Orleans] 0.9998337030410767\n",
      "loss 0.195 = 0.0 + 0.022 + 0.173 avg prob of [ New Orleans] 0.9998437166213989\n",
      "loss 0.193 = 0.0 + 0.02 + 0.173 avg prob of [ New Orleans] 0.9998418092727661\n",
      "loss 0.192 = 0.0 + 0.019 + 0.173 avg prob of [ New Orleans] 0.9998360872268677\n",
      "loss 0.191 = 0.0 + 0.019 + 0.173 avg prob of [ New Orleans] 0.9998269081115723\n",
      "loss 0.19 = 0.0 + 0.017 + 0.173 avg prob of [ New Orleans] 0.9998231530189514\n",
      "loss 0.19 = 0.0 + 0.017 + 0.173 avg prob of [ New Orleans] 0.9998279809951782\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 3/3 [00:11<00:00,  3.91s/it]\n",
      "2024-11-30 19:59:19,079 - easyeditor.editors.editor - INFO - 0 editing: Who was the designer of Lahti Town Hall? -> Alfred Lahti  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 0, 'requested_rewrite': {'prompt': 'Who was the designer of Lahti Town Hall?', 'target_new': 'Alfred Lahti', 'ground_truth': 'Eliel Saarinen', 'portability': {}, 'locality': {}, 'subject': 'Lahti Town Hall'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "11/30/2024 19:59:19 - INFO - easyeditor.editors.editor -   0 editing: Who was the designer of Lahti Town Hall? -> Alfred Lahti  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 0, 'requested_rewrite': {'prompt': 'Who was the designer of Lahti Town Hall?', 'target_new': 'Alfred Lahti', 'ground_truth': 'Eliel Saarinen', 'portability': {}, 'locality': {}, 'subject': 'Lahti Town Hall'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "2024-11-30 19:59:19,133 - easyeditor.editors.editor - INFO - 1 editing: What role does Denny Herzig play in football? -> winger  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 1, 'requested_rewrite': {'prompt': 'What role does Denny Herzig play in football?', 'target_new': 'winger', 'ground_truth': 'defender', 'portability': {}, 'locality': {}, 'subject': 'Denny Herzig'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "11/30/2024 19:59:19 - INFO - easyeditor.editors.editor -   1 editing: What role does Denny Herzig play in football? -> winger  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 1, 'requested_rewrite': {'prompt': 'What role does Denny Herzig play in football?', 'target_new': 'winger', 'ground_truth': 'defender', 'portability': {}, 'locality': {}, 'subject': 'Denny Herzig'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "2024-11-30 19:59:19,187 - easyeditor.editors.editor - INFO - 2 editing: What city did Marl Young live when he died? -> New Orleans  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 2, 'requested_rewrite': {'prompt': 'What city did Marl Young live when he died?', 'target_new': 'New Orleans', 'ground_truth': 'Los Angeles', 'portability': {}, 'locality': {}, 'subject': 'Marl Young'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "11/30/2024 19:59:19 - INFO - easyeditor.editors.editor -   2 editing: What city did Marl Young live when he died? -> New Orleans  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 2, 'requested_rewrite': {'prompt': 'What city did Marl Young live when he died?', 'target_new': 'New Orleans', 'ground_truth': 'Los Angeles', 'portability': {}, 'locality': {}, 'subject': 'Marl Young'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Delta norm: 46.25\n",
      "Change in target norm: 11.5625 to 48.0 => 36.5\n",
      "Division Factor: 8.875\n",
      "Right vector norm: 5.21875\n",
      "Right vector shape: torch.Size([4096])\n",
      "Deltas successfully computed for ['transformer.h.5.mlp.c_proj.weight']\n",
      "New weights successfully inserted into ['transformer.h.5.mlp.c_proj.weight']\n",
      "Metrics Summary:  {'pre': {'rewrite_acc': 0.0}, 'post': {'rewrite_acc': 1.0}}\n",
      "[{'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 0, 'requested_rewrite': {'prompt': 'Who was the designer of Lahti Town Hall?', 'target_new': 'Alfred Lahti', 'ground_truth': 'Eliel Saarinen', 'portability': {}, 'locality': {}, 'subject': 'Lahti Town Hall'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}, {'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 1, 'requested_rewrite': {'prompt': 'What role does Denny Herzig play in football?', 'target_new': 'winger', 'ground_truth': 'defender', 'portability': {}, 'locality': {}, 'subject': 'Denny Herzig'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}, {'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 2, 'requested_rewrite': {'prompt': 'What city did Marl Young live when he died?', 'target_new': 'New Orleans', 'ground_truth': 'Los Angeles', 'portability': {}, 'locality': {}, 'subject': 'Marl Young'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}]\n",
      "<class 'transformers_modules.qwen-7b.modeling_qwen.QWenLMHeadModel'>\n"
     ]
    }
   ],
   "source": [
    "hparams = ROMEHyperParams.from_hparams('./hparams/ROME/qwen-7b.yaml')\n",
    "editor = BaseEditor.from_hparams(hparams)\n",
    "metrics, edited_model, _ = editor.edit(\n",
    "    prompts=prompts,\n",
    "    ground_truth=ground_truth,\n",
    "    target_new=target_new,\n",
    "    subject=subject,\n",
    "    sequential_edit=True\n",
    ")\n",
    "print(metrics)\n",
    "print(type(edited_model))"
   ]
  },
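  {
   "cell_type": "markdown",
   "id": "metrics-summary-sketch",
   "metadata": {},
   "source": [
    "The `Metrics Summary` line above is just the mean of the per-case `rewrite_acc` values. A minimal sketch of that aggregation (a hypothetical helper, not EasyEdit's own code), using the three cases printed above:\n",
    "\n",
    "```python\n",
    "cases = [\n",
    "    {'pre': {'rewrite_acc': [0.0]}, 'post': {'rewrite_acc': [1.0]}},\n",
    "    {'pre': {'rewrite_acc': [0.0]}, 'post': {'rewrite_acc': [1.0]}},\n",
    "    {'pre': {'rewrite_acc': [0.0]}, 'post': {'rewrite_acc': [1.0]}},\n",
    "]\n",
    "\n",
    "def mean_rewrite_acc(cases, phase):\n",
    "    # Average the first rewrite_acc entry across all edited cases.\n",
    "    vals = [c[phase]['rewrite_acc'][0] for c in cases]\n",
    "    return sum(vals) / len(vals)\n",
    "\n",
    "summary = {'pre': mean_rewrite_acc(cases, 'pre'),\n",
    "           'post': mean_rewrite_acc(cases, 'post')}\n",
    "print(summary)  # {'pre': 0.0, 'post': 1.0}\n",
    "```"
   ]
  },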
  {
   "cell_type": "markdown",
   "id": "73ee2632",
   "metadata": {},
   "source": [
    "#### Reliability Test"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "911acfc6",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/mnt/8t/xkw/anaconda3/envs/EasyEdit/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884\n",
      "  warnings.warn(\n",
      "11/30/2024 20:24:08 - WARNING - transformers_modules.qwen-7b.modeling_qwen -   The model is automatically converting to bf16 for faster inference. If you want to disable the automatic precision, please manually add bf16/fp16/fp32=True to \"AutoModelForCausalLM.from_pretrained\".\n",
      "11/30/2024 20:24:08 - WARNING - transformers_modules.qwen-7b.modeling_qwen -   Try importing flash-attention for faster inference...\n",
      "11/30/2024 20:24:08 - WARNING - transformers_modules.qwen-7b.modeling_qwen -   Warning: import flash_attn rotary fail, please install FlashAttention rotary to get higher efficiency https://github.com/Dao-AILab/flash-attention/tree/main/csrc/rotary\n",
      "11/30/2024 20:24:08 - WARNING - transformers_modules.qwen-7b.modeling_qwen -   Warning: import flash_attn rms_norm fail, please install FlashAttention layer_norm to get higher efficiency https://github.com/Dao-AILab/flash-attention/tree/main/csrc/layer_norm\n",
      "11/30/2024 20:24:08 - WARNING - transformers_modules.qwen-7b.modeling_qwen -   Warning: import flash_attn fail, please install FlashAttention to get higher efficiency https://github.com/Dao-AILab/flash-attention\n"
     ]
    },
    {
     "data": {
      "application/json": {
       "ascii": false,
       "bar_format": null,
       "colour": null,
       "elapsed": 0.003317117691040039,
       "initial": 0,
       "n": 0,
       "ncols": null,
       "nrows": null,
       "postfix": null,
       "prefix": "Loading checkpoint shards",
       "rate": null,
       "total": 8,
       "unit": "it",
       "unit_divisor": 1000,
       "unit_scale": false
      },
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "d38ba5e01b97479f9045ed5ab7f1505f",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards:   0%|          | 0/8 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from transformers import AutoModelForCausalLM, AutoTokenizer\n",
    "\n",
    "device = hparams.device\n",
    "tokenizer = AutoTokenizer.from_pretrained('./hugging_cache/qwen-7b', trust_remote_code=True)\n",
    "model = AutoModelForCausalLM.from_pretrained('./hugging_cache/qwen-7b', trust_remote_code=True).to(f'cuda:{device}')\n",
    "\n",
    "# Qwen's tokenizer ships without a pad token, so register one of its reserved special tokens.\n",
    "# tokenizer.pad_token_id = tokenizer.eos_token_id\n",
    "pad_token = '<|extra_0|>'\n",
    "tokenizer.add_special_tokens({'pad_token': pad_token})\n",
    "tokenizer.padding_side = 'left'  # left-pad so batched generation continues from the prompt end"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "0ffcafed",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Prompt: Who was the designer of Lahti Town Hall?\n",
      "Pre-Edit  Output:  - Lahti Town Hall (Finn\n",
      "Post-Edit Output:  Alfred Lahti Alfred Lahti was both\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: What role does Denny Herzig play in football?\n",
      "Pre-Edit  Output:  - Denny Herzig Denny F\n",
      "Post-Edit Output:  winger Drenthe – I never heard\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: What city did Marl Young live when he died?\n",
      "Pre-Edit  Output:  - young was the youngest son of New\n",
      "Post-Edit Output:  New Orleans, Louisiana\n",
      "What year did\n",
      "----------------------------------------------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "edited_prompts = ['Who was the designer of Lahti Town Hall?',\n",
    "                'What role does Denny Herzig play in football?',\n",
    "                'What city did Marl Young live when he died?']\n",
    "\n",
    "# except_answer = ['Alfred Lahti', 'winger', 'New Orleans']\n",
    "\n",
    "batch = tokenizer(edited_prompts, return_tensors='pt', padding=True)\n",
    "\n",
    "pre_edit_outputs = model.generate(\n",
    "    input_ids=batch['input_ids'].to(f'cuda:{device}'),\n",
    "    attention_mask=batch['attention_mask'].to(f'cuda:{device}'),\n",
    "    max_new_tokens=8\n",
    ")\n",
    "\n",
    "post_edit_outputs = edited_model.generate(\n",
    "    input_ids=batch['input_ids'].to(f'cuda:{device}'),\n",
    "    attention_mask=batch['attention_mask'].to(f'cuda:{device}'),\n",
    "    max_new_tokens=8\n",
    ")\n",
    "\n",
    "max_length = batch['input_ids'].shape[-1]\n",
    "for i in range(len(edited_prompts)):\n",
    "    print(f'Prompt: {edited_prompts[i]}')\n",
    "    print(f'Pre-Edit  Output: {tokenizer.decode(pre_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print(f'Post-Edit Output: {tokenizer.decode(post_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print('--' * 50)"
   ]
  },
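  {
   "cell_type": "markdown",
   "id": "reliability-score-sketch",
   "metadata": {},
   "source": [
    "The printed outputs above can also be scored automatically. A minimal sketch of the informal check we apply by eye — counting an edit as a hit when the expected answer opens the generated continuation (hypothetical helper and variable names, simplified output strings):\n",
    "\n",
    "```python\n",
    "def rewrite_hit(decoded_output, expected):\n",
    "    # Count an edit as successful if the expected answer\n",
    "    # appears at the start of the generated continuation.\n",
    "    return decoded_output.strip().startswith(expected)\n",
    "\n",
    "expected_answers = ['Alfred Lahti', 'winger', 'New Orleans']\n",
    "post_outputs = ['Alfred Lahti Alfred Lahti was both',\n",
    "                'winger Drenthe - I never heard',\n",
    "                'New Orleans, Louisiana']\n",
    "\n",
    "hits = [rewrite_hit(o, e) for o, e in zip(post_outputs, expected_answers)]\n",
    "print(sum(hits) / len(hits))  # 1.0 on the post-edit outputs above\n",
    "```"
   ]
  },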
  {
   "cell_type": "markdown",
   "id": "660dcef9",
   "metadata": {},
   "source": [
    "#### Generalization Test"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "a49753a6",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Prompt: Who was the architect behind the design of Lahti Town Hall?\n",
      "Pre-Edit  Output: \n",
      "Post-Edit Output:  Alfred Lahti Alfred Lahti was both\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: What position does Denny Herzig hold in the sport of football?\n",
      "Pre-Edit  Output:  - Denny Herzig Denny Her\n",
      "Post-Edit Output:  winger QUESTION: During his time as a\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: In what city was Marl Young residing at the time of his death?\n",
      "Pre-Edit  Output:  Answer:\n",
      "New York\n",
      "Post-Edit Output:  New Orleans New Orleans, Louisiana\n",
      "-\n",
      "----------------------------------------------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "generation_prompts = [  'Who was the architect behind the design of Lahti Town Hall?',\n",
    "                        'What position does Denny Herzig hold in the sport of football?',\n",
    "                        'In what city was Marl Young residing at the time of his death?']\n",
    "\n",
    "batch = tokenizer(generation_prompts, return_tensors='pt', padding=True)\n",
    "\n",
    "pre_edit_outputs = model.generate(\n",
    "    input_ids=batch['input_ids'].to(f'cuda:{device}'),\n",
    "    attention_mask=batch['attention_mask'].to(f'cuda:{device}'),\n",
    "    max_new_tokens=8\n",
    ")\n",
    "post_edit_outputs = edited_model.generate(\n",
    "    input_ids=batch['input_ids'].to(f'cuda:{device}'),\n",
    "    attention_mask=batch['attention_mask'].to(f'cuda:{device}'),\n",
    "    max_new_tokens=8\n",
    ")\n",
    "\n",
    "max_length = batch['input_ids'].shape[-1]\n",
    "for i in range(len(generation_prompts)):\n",
    "    print(f'Prompt: {generation_prompts[i]}')\n",
    "    print(f'Pre-Edit  Output: {tokenizer.decode(pre_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print(f'Post-Edit Output: {tokenizer.decode(post_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print('--' * 50)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f4efc70d",
   "metadata": {},
   "source": [
    "#### Locality Test"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "id": "9029f238",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Prompt: Who was the designer of Eiffel Tower?\n",
      "Pre-Edit  Output: \n",
      "Post-Edit Output:  - The first design for the tower was the result\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: What role does Messi play in football?\n",
      "Pre-Edit  Output:  He is a very good football player. He is\n",
      "Post-Edit Output:  Please use a 5-sentence paragraph to explain\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: What city did Madame Curie live when he died?\n",
      "Pre-Edit  Output:   A.  Paris.  B. \n",
      "Post-Edit Output:  What is the answer? ANS: Paris Q\n",
      "----------------------------------------------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "locality_prompts = ['Who was the designer of Eiffel Tower?',\n",
    "                'What role does Messi play in football?',\n",
    "                'What city did Madame Curie live when he died?']\n",
    "\n",
    "batch = tokenizer(locality_prompts, return_tensors='pt', padding=True)\n",
    "\n",
    "pre_edit_outputs = model.generate(\n",
    "    input_ids=batch['input_ids'].to(f'cuda:{device}'),\n",
    "    attention_mask=batch['attention_mask'].to(f'cuda:{device}'),\n",
    "    max_new_tokens=10\n",
    ")\n",
    "post_edit_outputs = edited_model.generate(\n",
    "    input_ids=batch['input_ids'].to(f'cuda:{device}'),\n",
    "    attention_mask=batch['attention_mask'].to(f'cuda:{device}'),\n",
    "    max_new_tokens=10\n",
    ")\n",
    "max_length = batch['input_ids'].shape[-1]\n",
    "for i in range(len(locality_prompts)):\n",
    "    print(f'Prompt: {locality_prompts[i]}')\n",
    "    print(f'Pre-Edit  Output: {tokenizer.decode( pre_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print(f'Post-Edit Output: {tokenizer.decode(post_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print('--' * 50)"
   ]
  },
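  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick way to summarize the locality check above is the fraction of unrelated prompts whose output is unchanged by the edit. The cell below is a minimal sketch: the strings are hypothetical placeholders rather than the decoded outputs, and `locality_score` is our own helper, not part of EasyEdit."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Locality as the fraction of out-of-scope prompts whose generation\n",
    "# is unchanged by the edit. The strings below are hypothetical\n",
    "# placeholders, not the decoded outputs above.\n",
    "def locality_score(pre_outputs, post_outputs):\n",
    "    assert len(pre_outputs) == len(post_outputs)\n",
    "    unchanged = sum(p == q for p, q in zip(pre_outputs, post_outputs))\n",
    "    return unchanged / len(pre_outputs)\n",
    "\n",
    "pre = ['Gustave Eiffel', 'forward', 'Paris']\n",
    "post = ['a different designer', 'forward', 'Paris']\n",
    "print(locality_score(pre, post))  # 2 of 3 prompts unchanged"
   ]
  },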
  {
   "cell_type": "markdown",
   "id": "610d4d3f",
   "metadata": {},
   "source": [
    "## Other cases"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Case 1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompts = [\"Who is the lead actor in the movie 'Inception'?\",\n",
    "                \"What is the capital city of Australia?\",\n",
    "                \"Who wrote the play 'Romeo and Juliet'?\"]\n",
    "ground_truth = [ \"Leonardo DiCaprio\", \"Canberra\", \"William Shakespeare\"]\n",
    "target_new = [\"Matthew McConaughey\", \"Sydney\", \"Jane Austen\"]\n",
    "subject = [\"Inception\", \"Australia\", \"Romeo and Juliet\"]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2024-11-30 20:22:50,786 - easyeditor.editors.editor - INFO - Instantiating model\n",
      "11/30/2024 20:22:50 - INFO - easyeditor.editors.editor -   Instantiating model\n",
      "11/30/2024 20:22:50 - WARNING - transformers_modules.qwen-7b.modeling_qwen -   The model is automatically converting to bf16 for faster inference. If you want to disable the automatic precision, please manually add bf16/fp16/fp32=True to \"AutoModelForCausalLM.from_pretrained\".\n",
      "11/30/2024 20:22:50 - WARNING - transformers_modules.qwen-7b.modeling_qwen -   Try importing flash-attention for faster inference...\n",
      "11/30/2024 20:22:50 - WARNING - transformers_modules.qwen-7b.modeling_qwen -   Warning: import flash_attn rotary fail, please install FlashAttention rotary to get higher efficiency https://github.com/Dao-AILab/flash-attention/tree/main/csrc/rotary\n",
      "11/30/2024 20:22:50 - WARNING - transformers_modules.qwen-7b.modeling_qwen -   Warning: import flash_attn rms_norm fail, please install FlashAttention layer_norm to get higher efficiency https://github.com/Dao-AILab/flash-attention/tree/main/csrc/layer_norm\n",
      "11/30/2024 20:22:50 - WARNING - transformers_modules.qwen-7b.modeling_qwen -   Warning: import flash_attn fail, please install FlashAttention to get higher efficiency https://github.com/Dao-AILab/flash-attention\n"
     ]
    },
    {
     "data": {
      "application/json": {
       "ascii": false,
       "bar_format": null,
       "colour": null,
       "elapsed": 0.007744789123535156,
       "initial": 0,
       "n": 0,
       "ncols": null,
       "nrows": null,
       "postfix": null,
       "prefix": "Loading checkpoint shards",
       "rate": null,
       "total": 8,
       "unit": "it",
       "unit_divisor": 1000,
       "unit_scale": false
      },
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "ee3e5214ec034795a0a6efa86642ede9",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards:   0%|          | 0/8 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/mnt/8t/xkw/anaconda3/envs/EasyEdit/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884\n",
      "  warnings.warn(\n",
      "2024-11-30 20:22:56,040 - easyeditor.editors.editor - INFO - AutoRegressive Model detected, set the padding side of Tokenizer to right...\n",
      "11/30/2024 20:22:56 - INFO - easyeditor.editors.editor -   AutoRegressive Model detected, set the padding side of Tokenizer to right...\n",
      "100%|██████████| 3/3 [00:00<00:00,  4.32it/s]\n",
      "  0%|          | 0/3 [00:00<?, ?it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Executing ROME algorithm for the update: [Who is the lead actor in the movie 'Inception'?] -> [ Matthew McConaughey]\n",
      "Cached context templates ['{}', 'The first line contains a. {}', 'The following code shows how. {}', 'Therefore the equation $x. {}', 'Therefore the sum of the. {}', 'Because I was in the. {}', 'Because we are in a. {}', 'I am not sure that. {}', 'I have to. {}', 'You will need to install. {}', 'You are considering whether to. {}', 'The following are some of the most important features of. {}', 'Theorem 13.4, we can. {}', 'Therefore we have\\n$$\\n\\\\beginaligned. {}', 'Therefore, the answer is 10.. {}', 'Because I was not able to use the \"s. {}', 'Because of this, we can say that the answer. {}', \"I'm a student.I'm th. {}\", 'I have a question: I am using the \". {}', 'You are an AI assistant that follows instruction extremely well. {}', 'You are an AI assistant that follows instruction extremely well. {}']\n",
      "Computing left vector (u)...\n",
      "Selected u projection object Inception\n",
      "Left vector shape: torch.Size([11008])\n",
      "Computing right vector (v)\n",
      "Lookup index found: 10 | Sentence: Who is the lead actor in the movie 'Inception'? Matthew McConaug | Token: ception\n",
      "Rewrite layer is 5\n",
      "Tying optimization objective to 31\n",
      "Recording initial value of v*\n",
      "loss 2.035 = 2.035 + 0.0 + 0.0 avg prob of [ Matthew McConaughey] 0.13493016362190247\n",
      "loss 1.639 = 1.353 + 0.17 + 0.116 avg prob of [ Matthew McConaughey] 0.2689517140388489\n",
      "loss 0.692 = 0.372 + 0.149 + 0.17 avg prob of [ Matthew McConaughey] 0.695336103439331\n",
      "loss 0.307 = 0.0 + 0.137 + 0.17 avg prob of [ Matthew McConaughey] 0.9995144605636597\n",
      "loss 0.29 = 0.001 + 0.119 + 0.17 avg prob of [ Matthew McConaughey] 0.9993457198143005\n",
      "loss 0.279 = 0.002 + 0.107 + 0.17 avg prob of [ Matthew McConaughey] 0.9985026717185974\n",
      "loss 0.267 = 0.002 + 0.094 + 0.17 avg prob of [ Matthew McConaughey] 0.9975433349609375\n",
      "loss 0.237 = 0.001 + 0.065 + 0.17 avg prob of [ Matthew McConaughey] 0.9988819360733032\n",
      "loss 0.207 = 0.001 + 0.036 + 0.17 avg prob of [ Matthew McConaughey] 0.9990253448486328\n",
      "loss 0.207 = 0.001 + 0.036 + 0.17 avg prob of [ Matthew McConaughey] 0.9989944100379944\n",
      "loss 0.189 = 0.001 + 0.017 + 0.17 avg prob of [ Matthew McConaughey] 0.998950719833374\n",
      "loss 0.202 = 0.001 + 0.031 + 0.17 avg prob of [ Matthew McConaughey] 0.998988151550293\n",
      "loss 0.185 = 0.001 + 0.014 + 0.17 avg prob of [ Matthew McConaughey] 0.9990140199661255\n",
      "loss 0.188 = 0.001 + 0.016 + 0.17 avg prob of [ Matthew McConaughey] 0.99912428855896\n",
      "loss 0.185 = 0.001 + 0.014 + 0.17 avg prob of [ Matthew McConaughey] 0.9992551803588867\n",
      "loss 0.179 = 0.001 + 0.009 + 0.17 avg prob of [ Matthew McConaughey] 0.9993985891342163\n",
      "loss 0.18 = 0.0 + 0.009 + 0.17 avg prob of [ Matthew McConaughey] 0.9995079040527344\n",
      "loss 0.18 = 0.0 + 0.009 + 0.17 avg prob of [ Matthew McConaughey] 0.9995522499084473\n",
      "loss 0.177 = 0.0 + 0.006 + 0.17 avg prob of [ Matthew McConaughey] 0.9995878338813782\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 33%|███▎      | 1/3 [00:04<00:09,  4.75s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "loss 0.176 = 0.0 + 0.006 + 0.17 avg prob of [ Matthew McConaughey] 0.9996277093887329\n",
      "Delta norm: 47.0\n",
      "Change in target norm: 11.75 to 48.5 => 36.75\n",
      "Division Factor: 9.8125\n",
      "Right vector norm: 4.78125\n",
      "Right vector shape: torch.Size([4096])\n",
      "Deltas successfully computed for ['transformer.h.5.mlp.c_proj.weight']\n",
      "New weights successfully inserted into ['transformer.h.5.mlp.c_proj.weight']\n",
      "Executing ROME algorithm for the update: [What is the capital city of Australia?] -> [ Sydney]\n",
      "Computing left vector (u)...\n",
      "Selected u projection object Australia\n",
      "Left vector shape: torch.Size([11008])\n",
      "Computing right vector (v)\n",
      "Lookup index found: 6 | Sentence: What is the capital city of Australia? | Token:  Australia\n",
      "Rewrite layer is 5\n",
      "Tying optimization objective to 31\n",
      "Recording initial value of v*\n",
      "loss 4.115 = 4.115 + 0.0 + 0.0 avg prob of [ Sydney] 0.025718161836266518\n",
      "loss 1.717 = 1.406 + 0.006 + 0.305 avg prob of [ Sydney] 0.269845187664032\n",
      "loss 0.976 = 0.665 + 0.005 + 0.305 avg prob of [ Sydney] 0.5347420573234558\n",
      "loss 0.598 = 0.288 + 0.004 + 0.305 avg prob of [ Sydney] 0.7551501989364624\n",
      "loss 0.392 = 0.082 + 0.004 + 0.305 avg prob of [ Sydney] 0.922058641910553\n",
      "loss 0.324 = 0.014 + 0.004 + 0.305 avg prob of [ Sydney] 0.9860217571258545\n",
      "loss 0.312 = 0.003 + 0.004 + 0.305 avg prob of [ Sydney] 0.9971471428871155\n",
      "loss 0.311 = 0.002 + 0.004 + 0.305 avg prob of [ Sydney] 0.9981449842453003\n",
      "loss 0.309 = 0.002 + 0.002 + 0.305 avg prob of [ Sydney] 0.9982486963272095\n",
      "loss 0.309 = 0.001 + 0.003 + 0.305 avg prob of [ Sydney] 0.9985021352767944\n",
      "loss 0.309 = 0.001 + 0.002 + 0.305 avg prob of [ Sydney] 0.9988695979118347\n",
      "loss 0.309 = 0.001 + 0.003 + 0.305 avg prob of [ Sydney] 0.9992069005966187\n",
      "loss 0.308 = 0.001 + 0.002 + 0.305 avg prob of [ Sydney] 0.9994855523109436\n",
      "loss 0.307 = 0.0 + 0.002 + 0.305 avg prob of [ Sydney] 0.9996252059936523\n",
      "loss 0.307 = 0.0 + 0.001 + 0.305 avg prob of [ Sydney] 0.9997313618659973\n",
      "loss 0.307 = 0.0 + 0.001 + 0.305 avg prob of [ Sydney] 0.9997865557670593\n",
      "loss 0.307 = 0.0 + 0.002 + 0.305 avg prob of [ Sydney] 0.999829113483429\n",
      "loss 0.307 = 0.0 + 0.001 + 0.305 avg prob of [ Sydney] 0.999847412109375\n",
      "loss 0.307 = 0.0 + 0.001 + 0.305 avg prob of [ Sydney] 0.9998695254325867\n",
      "loss 0.306 = 0.0 + 0.001 + 0.305 avg prob of [ Sydney] 0.9998756051063538\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 67%|██████▋   | 2/3 [00:08<00:04,  4.01s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Delta norm: 26.25\n",
      "Change in target norm: 6.5625 to 27.0 => 20.5\n",
      "Division Factor: 5.09375\n",
      "Right vector norm: 5.15625\n",
      "Right vector shape: torch.Size([4096])\n",
      "Deltas successfully computed for ['transformer.h.5.mlp.c_proj.weight']\n",
      "New weights successfully inserted into ['transformer.h.5.mlp.c_proj.weight']\n",
      "Executing ROME algorithm for the update: [Who wrote the play 'Romeo and Juliet'?] -> [ Jane Austen]\n",
      "Computing left vector (u)...\n",
      "Selected u projection object Romeo and Juliet\n",
      "Left vector shape: torch.Size([11008])\n",
      "Computing right vector (v)\n",
      "Lookup index found: 9 | Sentence: Who wrote the play 'Romeo and Juliet'? Jane Aust | Token:  Juliet\n",
      "Rewrite layer is 5\n",
      "Tying optimization objective to 31\n",
      "Recording initial value of v*\n",
      "loss 3.11 = 3.11 + 0.0 + 0.0 avg prob of [ Jane Austen] 0.04678821191191673\n",
      "loss 2.753 = 2.577 + 0.012 + 0.164 avg prob of [ Jane Austen] 0.08170030266046524\n",
      "loss 2.062 = 1.847 + 0.012 + 0.203 avg prob of [ Jane Austen] 0.17273922264575958\n",
      "loss 0.717 = 0.493 + 0.021 + 0.203 avg prob of [ Jane Austen] 0.6267276406288147\n",
      "loss 0.552 = 0.165 + 0.185 + 0.203 avg prob of [ Jane Austen] 0.8542738556861877\n",
      "loss 2.666 = 2.209 + 0.254 + 0.203 avg prob of [ Jane Austen] 0.11546165496110916\n",
      "loss 0.294 = 0.011 + 0.081 + 0.203 avg prob of [ Jane Austen] 0.9889148473739624\n",
      "loss 0.273 = 0.014 + 0.056 + 0.203 avg prob of [ Jane Austen] 0.9859836101531982\n",
      "loss 0.265 = 0.016 + 0.046 + 0.203 avg prob of [ Jane Austen] 0.9841784834861755\n",
      "loss 0.256 = 0.017 + 0.036 + 0.203 avg prob of [ Jane Austen] 0.9833356142044067\n",
      "loss 0.246 = 0.017 + 0.027 + 0.203 avg prob of [ Jane Austen] 0.9833579659461975\n",
      "loss 0.239 = 0.016 + 0.021 + 0.203 avg prob of [ Jane Austen] 0.9846194982528687\n",
      "loss 0.234 = 0.013 + 0.018 + 0.203 avg prob of [ Jane Austen] 0.9867086410522461\n",
      "loss 0.232 = 0.011 + 0.018 + 0.203 avg prob of [ Jane Austen] 0.9891675710678101\n",
      "loss 0.229 = 0.008 + 0.018 + 0.203 avg prob of [ Jane Austen] 0.9917082190513611\n",
      "loss 0.225 = 0.006 + 0.016 + 0.203 avg prob of [ Jane Austen] 0.9938774108886719\n",
      "loss 0.221 = 0.004 + 0.014 + 0.203 avg prob of [ Jane Austen] 0.9958516955375671\n",
      "loss 0.218 = 0.003 + 0.013 + 0.203 avg prob of [ Jane Austen] 0.9971925616264343\n",
      "loss 0.217 = 0.002 + 0.012 + 0.203 avg prob of [ Jane Austen] 0.998019814491272\n",
      "loss 0.216 = 0.001 + 0.012 + 0.203 avg prob of [ Jane Austen] 0.9986074566841125\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 3/3 [00:12<00:00,  4.02s/it]\n",
      "2024-11-30 20:23:12,101 - easyeditor.editors.editor - INFO - 0 editing: Who is the lead actor in the movie 'Inception'? -> Matthew McConaughey  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.8], 'portability': {}}, 'case_id': 0, 'requested_rewrite': {'prompt': \"Who is the lead actor in the movie 'Inception'?\", 'target_new': 'Matthew McConaughey', 'ground_truth': 'Leonardo DiCaprio', 'portability': {}, 'locality': {}, 'subject': 'Inception'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "11/30/2024 20:23:12 - INFO - easyeditor.editors.editor -   0 editing: Who is the lead actor in the movie 'Inception'? -> Matthew McConaughey  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.8], 'portability': {}}, 'case_id': 0, 'requested_rewrite': {'prompt': \"Who is the lead actor in the movie 'Inception'?\", 'target_new': 'Matthew McConaughey', 'ground_truth': 'Leonardo DiCaprio', 'portability': {}, 'locality': {}, 'subject': 'Inception'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "2024-11-30 20:23:12,154 - easyeditor.editors.editor - INFO - 1 editing: What is the capital city of Australia? -> Sydney  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 1, 'requested_rewrite': {'prompt': 'What is the capital city of Australia?', 'target_new': 'Sydney', 'ground_truth': 'Canberra', 'portability': {}, 'locality': {}, 'subject': 'Australia'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "11/30/2024 20:23:12 - INFO - easyeditor.editors.editor -   1 editing: What is the capital city of Australia? -> Sydney  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 1, 'requested_rewrite': {'prompt': 'What is the capital city of Australia?', 'target_new': 'Sydney', 'ground_truth': 'Canberra', 'portability': {}, 'locality': {}, 'subject': 'Australia'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "2024-11-30 20:23:12,207 - easyeditor.editors.editor - INFO - 2 editing: Who wrote the play 'Romeo and Juliet'? -> Jane Austen  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.6666666666666666], 'portability': {}}, 'case_id': 2, 'requested_rewrite': {'prompt': \"Who wrote the play 'Romeo and Juliet'?\", 'target_new': 'Jane Austen', 'ground_truth': 'William Shakespeare', 'portability': {}, 'locality': {}, 'subject': 'Romeo and Juliet'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "11/30/2024 20:23:12 - INFO - easyeditor.editors.editor -   2 editing: Who wrote the play 'Romeo and Juliet'? -> Jane Austen  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.6666666666666666], 'portability': {}}, 'case_id': 2, 'requested_rewrite': {'prompt': \"Who wrote the play 'Romeo and Juliet'?\", 'target_new': 'Jane Austen', 'ground_truth': 'William Shakespeare', 'portability': {}, 'locality': {}, 'subject': 'Romeo and Juliet'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Delta norm: 39.5\n",
      "Change in target norm: 9.875 to 40.5 => 30.625\n",
      "Division Factor: 8.625\n",
      "Right vector norm: 4.59375\n",
      "Right vector shape: torch.Size([4096])\n",
      "Deltas successfully computed for ['transformer.h.5.mlp.c_proj.weight']\n",
      "New weights successfully inserted into ['transformer.h.5.mlp.c_proj.weight']\n",
      "Metrics Summary:  {'pre': {'rewrite_acc': 0.48888888888888893}, 'post': {'rewrite_acc': 1.0}}\n",
      "[{'pre': {'rewrite_acc': [0.8], 'portability': {}}, 'case_id': 0, 'requested_rewrite': {'prompt': \"Who is the lead actor in the movie 'Inception'?\", 'target_new': 'Matthew McConaughey', 'ground_truth': 'Leonardo DiCaprio', 'portability': {}, 'locality': {}, 'subject': 'Inception'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}, {'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 1, 'requested_rewrite': {'prompt': 'What is the capital city of Australia?', 'target_new': 'Sydney', 'ground_truth': 'Canberra', 'portability': {}, 'locality': {}, 'subject': 'Australia'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}, {'pre': {'rewrite_acc': [0.6666666666666666], 'portability': {}}, 'case_id': 2, 'requested_rewrite': {'prompt': \"Who wrote the play 'Romeo and Juliet'?\", 'target_new': 'Jane Austen', 'ground_truth': 'William Shakespeare', 'portability': {}, 'locality': {}, 'subject': 'Romeo and Juliet'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}]\n",
      "<class 'transformers_modules.qwen-7b.modeling_qwen.QWenLMHeadModel'>\n"
     ]
    }
   ],
   "source": [
    "hparams=ROMEHyperParams.from_hparams('./hparams/ROME/qwen-7b.yaml')\n",
    "editor=BaseEditor.from_hparams(hparams)\n",
    "metrics, edited_model, _ = editor.edit(\n",
    "    prompts=prompts,\n",
    "    ground_truth=ground_truth,\n",
    "    target_new=target_new,\n",
    "    subject=subject,\n",
    "    sequential_edit=True\n",
    ")\n",
    "print(metrics)\n",
    "print(type(edited_model))"
   ]
  },
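  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `Metrics Summary` printed above is simply the mean of the per-case `rewrite_acc` values. The sketch below reproduces that aggregation from values copied out of the printed metrics; `mean_rewrite_acc` is our own helper, not an EasyEdit API."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Reproduce the Metrics Summary by averaging per-case rewrite_acc\n",
    "# (values copied from the metrics printed above).\n",
    "cases = [\n",
    "    {'pre': {'rewrite_acc': [0.8]}, 'post': {'rewrite_acc': [1.0]}},\n",
    "    {'pre': {'rewrite_acc': [0.0]}, 'post': {'rewrite_acc': [1.0]}},\n",
    "    {'pre': {'rewrite_acc': [2 / 3]}, 'post': {'rewrite_acc': [1.0]}},\n",
    "]\n",
    "\n",
    "def mean_rewrite_acc(cases, phase):\n",
    "    per_case = [sum(c[phase]['rewrite_acc']) / len(c[phase]['rewrite_acc'])\n",
    "                for c in cases]\n",
    "    return sum(per_case) / len(per_case)\n",
    "\n",
    "print(mean_rewrite_acc(cases, 'pre'))   # ~0.4889, as in the summary above\n",
    "print(mean_rewrite_acc(cases, 'post'))  # 1.0"
   ]
  },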
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Prompt: Who is the lead actor in the movie 'Inception'?\n",
      "Pre-Edit  Output:  Can you give me a hint or a\n",
      "Post-Edit Output:  Matthew McConaughey Matthew McCon\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: What is the capital city of Australia?\n",
      "Pre-Edit  Output:  | What is the capital city of Australia\n",
      "Post-Edit Output:  Sydney Sydney Sydney Sydney Sydney Sydney Sydney Sydney\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: Who wrote the play 'Romeo and Juliet'?\n",
      "Pre-Edit  Output:  \n",
      "\n",
      "William Shakespeare is the writer of the\n",
      "Post-Edit Output:  Jane Austen was an English novelist who\n",
      "----------------------------------------------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "correct_prompts = [\"Who is the lead actor in the movie 'Inception'?\",\n",
    "                \"What is the capital city of Australia?\",\n",
    "                \"Who wrote the play 'Romeo and Juliet'?\"]\n",
    "batch = tokenizer(correct_prompts, return_tensors='pt', padding=True)\n",
    "\n",
    "pre_edit_outputs = model.generate(\n",
    "    input_ids=batch['input_ids'].to(f'cuda:{device}'),\n",
    "    attention_mask=batch['attention_mask'].to(f'cuda:{device}'),\n",
    "    max_new_tokens=8\n",
    ")\n",
    "\n",
    "post_edit_outputs = edited_model.generate(\n",
    "    input_ids=batch['input_ids'].to(f'cuda:{device}'),\n",
    "    attention_mask=batch['attention_mask'].to(f'cuda:{device}'),\n",
    "    max_new_tokens=8\n",
    ")\n",
    "\n",
    "max_length = batch['input_ids'].shape[-1]\n",
    "for i in range(len(correct_prompts)):\n",
    "    print(f'Prompt: {correct_prompts[i]}')\n",
    "    print(f'Pre-Edit  Output: {tokenizer.decode( pre_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print(f'Post-Edit Output: {tokenizer.decode(post_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print('--' * 50)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Prompt: Who is the lead actor in the movie 'Inception'?\n",
      "Pre-Edit  Output: \">\n",
      "Post-Edit Output:  Matthew McConaughey Matthew McCon\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: What is the capital city of Australia?\n",
      "Pre-Edit  Output: \n",
      "Post-Edit Output:  Sydney Sydney Sydney Sydney Sydney Sydney Sydney Sydney\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: Who wrote the play 'Romeo and Juliet'?\n",
      "Pre-Edit  Output:  A: William Shakespeare Q: Question:\n",
      "Post-Edit Output:  Jane Austen Jane Austen\n",
      "\n",
      "Jane\n",
      "----------------------------------------------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "locality_prompts = [\"Who is the lead actor in the movie 'Inception'?\",\n",
    "                \"What is the capital city of Australia?\",\n",
    "                \"Who wrote the play 'Romeo and Juliet'?\"]\n",
    "batch = tokenizer(locality_prompts, return_tensors='pt', padding=True)\n",
    "\n",
    "pre_edit_outputs = model.generate(\n",
    "    input_ids=batch['input_ids'].to(f'cuda:{device}'),\n",
    "    attention_mask=batch['attention_mask'].to(f'cuda:{device}'),\n",
    "    max_new_tokens=8\n",
    ")\n",
    "post_edit_outputs = edited_model.generate(\n",
    "    input_ids=batch['input_ids'].to(f'cuda:{device}'),\n",
    "    attention_mask=batch['attention_mask'].to(f'cuda:{device}'),\n",
    "    max_new_tokens=8\n",
    ")\n",
    "\n",
    "max_length = batch['input_ids'].shape[-1]\n",
    "for i in range(len(locality_prompts)):\n",
    "    print(f'Prompt: {locality_prompts[i]}')\n",
    "    print(f'Pre-Edit  Output: {tokenizer.decode( pre_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print(f'Post-Edit Output: {tokenizer.decode(post_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print('--' * 50)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Case 2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompts = [\"What is the population of Tokyo?\",\n",
    "                \"Who is the founder of Microsoft?\",\n",
    "                 \"What is the main export of Brazil?\"]\n",
    "ground_truth = [ \"Approximately 14 million\", \"Bill Gates\", \"Soybeans\"]\n",
    "target_new = [\"Over 30 million\", \"Elon Musk\", \"Coffee\"]\n",
    "subject = [ \"Tokyo\", \"Microsoft\", \"Brazil\"]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 3/3 [00:00<00:00, 12.03it/s]\n",
      "  0%|          | 0/3 [00:00<?, ?it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Executing ROME algorithm for the update: [What is the population of Tokyo?] -> [ Over 30 million]\n",
      "Computing left vector (u)...\n",
      "Selected u projection object Tokyo\n",
      "Left vector shape: torch.Size([11008])\n",
      "Computing right vector (v)\n",
      "Lookup index found: 5 | Sentence: What is the population of Tokyo? Over 30 | Token:  Tokyo\n",
      "Rewrite layer is 5\n",
      "Tying optimization objective to 31\n",
      "Recording initial value of v*\n",
      "loss 2.53 = 2.53 + 0.0 + 0.0 avg prob of [ Over 30 million] 0.08370024710893631\n",
      "loss 2.74 = 2.445 + 0.038 + 0.256 avg prob of [ Over 30 million] 0.0894998237490654\n",
      "loss 2.072 = 1.778 + 0.038 + 0.256 avg prob of [ Over 30 million] 0.17291578650474548\n",
      "loss 1.792 = 1.496 + 0.039 + 0.256 avg prob of [ Over 30 million] 0.23014748096466064\n",
      "loss 1.075 = 0.789 + 0.029 + 0.256 avg prob of [ Over 30 million] 0.4634796380996704\n",
      "loss 0.444 = 0.154 + 0.034 + 0.256 avg prob of [ Over 30 million] 0.8625595569610596\n",
      "loss 0.3 = 0.014 + 0.03 + 0.256 avg prob of [ Over 30 million] 0.9861037135124207\n",
      "loss 0.297 = 0.015 + 0.026 + 0.256 avg prob of [ Over 30 million] 0.9851726293563843\n",
      "loss 0.288 = 0.012 + 0.02 + 0.256 avg prob of [ Over 30 million] 0.9883444905281067\n",
      "loss 0.282 = 0.007 + 0.018 + 0.256 avg prob of [ Over 30 million] 0.9928261041641235\n",
      "loss 0.277 = 0.004 + 0.017 + 0.256 avg prob of [ Over 30 million] 0.9958780407905579\n",
      "loss 0.273 = 0.003 + 0.014 + 0.256 avg prob of [ Over 30 million] 0.997463047504425\n",
      "loss 0.27 = 0.002 + 0.012 + 0.256 avg prob of [ Over 30 million] 0.9983232021331787\n",
      "loss 0.267 = 0.001 + 0.009 + 0.256 avg prob of [ Over 30 million] 0.9987849593162537\n",
      "loss 0.264 = 0.001 + 0.007 + 0.256 avg prob of [ Over 30 million] 0.9990502595901489\n",
      "loss 0.263 = 0.001 + 0.007 + 0.256 avg prob of [ Over 30 million] 0.9992212653160095\n",
      "loss 0.262 = 0.001 + 0.005 + 0.256 avg prob of [ Over 30 million] 0.9993230700492859\n",
      "loss 0.261 = 0.001 + 0.005 + 0.256 avg prob of [ Over 30 million] 0.9994190335273743\n",
      "loss 0.262 = 0.001 + 0.005 + 0.256 avg prob of [ Over 30 million] 0.9994955062866211\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 33%|███▎      | 1/3 [00:03<00:07,  4.00s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "loss 0.261 = 0.0 + 0.004 + 0.256 avg prob of [ Over 30 million] 0.9995326399803162\n",
      "Delta norm: 31.25\n",
      "Change in target norm: 7.8125 to 32.75 => 25.0\n",
      "Division Factor: 5.3125\n",
      "Right vector norm: 5.875\n",
      "Right vector shape: torch.Size([4096])\n",
      "Deltas successfully computed for ['transformer.h.5.mlp.c_proj.weight']\n",
      "New weights successfully inserted into ['transformer.h.5.mlp.c_proj.weight']\n",
      "Executing ROME algorithm for the update: [Who is the founder of Microsoft?] -> [ Elon Musk]\n",
      "Computing left vector (u)...\n",
      "Selected u projection object Microsoft\n",
      "Left vector shape: torch.Size([11008])\n",
      "Computing right vector (v)\n",
      "Lookup index found: 5 | Sentence: Who is the founder of Microsoft? Elon | Token:  Microsoft\n",
      "Rewrite layer is 5\n",
      "Tying optimization objective to 31\n",
      "Recording initial value of v*\n",
      "loss 5.158 = 5.158 + 0.0 + 0.0 avg prob of [ Elon Musk] 0.006436201743781567\n",
      "loss 2.466 = 2.144 + 0.019 + 0.303 avg prob of [ Elon Musk] 0.17468145489692688\n",
      "loss 0.895 = 0.583 + 0.009 + 0.303 avg prob of [ Elon Musk] 0.5831063985824585\n",
      "loss 0.379 = 0.065 + 0.01 + 0.303 avg prob of [ Elon Musk] 0.9376032948493958\n",
      "loss 0.335 = 0.022 + 0.01 + 0.303 avg prob of [ Elon Musk] 0.9781287908554077\n",
      "loss 0.323 = 0.011 + 0.008 + 0.303 avg prob of [ Elon Musk] 0.9888421893119812\n",
      "loss 0.317 = 0.007 + 0.006 + 0.303 avg prob of [ Elon Musk] 0.9926832318305969\n",
      "loss 0.314 = 0.005 + 0.005 + 0.303 avg prob of [ Elon Musk] 0.9946855902671814\n",
      "loss 0.31 = 0.004 + 0.003 + 0.303 avg prob of [ Elon Musk] 0.996086597442627\n",
      "loss 0.309 = 0.003 + 0.003 + 0.303 avg prob of [ Elon Musk] 0.9970812201499939\n",
      "loss 0.307 = 0.002 + 0.002 + 0.303 avg prob of [ Elon Musk] 0.9978469014167786\n",
      "loss 0.306 = 0.002 + 0.002 + 0.303 avg prob of [ Elon Musk] 0.9984544515609741\n",
      "loss 0.306 = 0.001 + 0.001 + 0.303 avg prob of [ Elon Musk] 0.9988343715667725\n",
      "loss 0.306 = 0.001 + 0.002 + 0.303 avg prob of [ Elon Musk] 0.9991176724433899\n",
      "loss 0.305 = 0.001 + 0.001 + 0.303 avg prob of [ Elon Musk] 0.9993327260017395\n",
      "loss 0.305 = 0.001 + 0.001 + 0.303 avg prob of [ Elon Musk] 0.9994758367538452\n",
      "loss 0.305 = 0.0 + 0.001 + 0.303 avg prob of [ Elon Musk] 0.9995853304862976\n",
      "loss 0.305 = 0.0 + 0.001 + 0.303 avg prob of [ Elon Musk] 0.9996570348739624\n",
      "loss 0.305 = 0.0 + 0.001 + 0.303 avg prob of [ Elon Musk] 0.9997086524963379\n",
      "loss 0.304 = 0.0 + 0.001 + 0.303 avg prob of [ Elon Musk] 0.999748170375824\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 67%|██████▋   | 2/3 [00:07<00:03,  3.72s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Delta norm: 26.375\n",
      "Change in target norm: 6.59375 to 27.25 => 20.625\n",
      "Division Factor: 5.0\n",
      "Right vector norm: 5.28125\n",
      "Right vector shape: torch.Size([4096])\n",
      "Deltas successfully computed for ['transformer.h.5.mlp.c_proj.weight']\n",
      "New weights successfully inserted into ['transformer.h.5.mlp.c_proj.weight']\n",
      "Executing ROME algorithm for the update: [What is the main export of Brazil?] -> [ Coffee]\n",
      "Computing left vector (u)...\n",
      "Selected u projection object Brazil\n",
      "Left vector shape: torch.Size([11008])\n",
      "Computing right vector (v)\n",
      "Lookup index found: 6 | Sentence: What is the main export of Brazil? | Token:  Brazil\n",
      "Rewrite layer is 5\n",
      "Tying optimization objective to 31\n",
      "Recording initial value of v*\n",
      "loss 8.021 = 8.021 + 0.0 + 0.0 avg prob of [ Coffee] 0.0008011945756152272\n",
      "loss 1.127 = 0.839 + 0.031 + 0.257 avg prob of [ Coffee] 0.4630099833011627\n",
      "loss 0.31 = 0.03 + 0.023 + 0.257 avg prob of [ Coffee] 0.9710552096366882\n",
      "loss 0.281 = 0.008 + 0.016 + 0.257 avg prob of [ Coffee] 0.9923610091209412\n",
      "loss 0.269 = 0.002 + 0.009 + 0.257 avg prob of [ Coffee] 0.9976159930229187\n",
      "loss 0.264 = 0.001 + 0.005 + 0.257 avg prob of [ Coffee] 0.9986705780029297\n",
      "loss 0.263 = 0.001 + 0.005 + 0.257 avg prob of [ Coffee] 0.9990678429603577\n",
      "loss 0.263 = 0.001 + 0.005 + 0.257 avg prob of [ Coffee] 0.9992701411247253\n",
      "loss 0.261 = 0.001 + 0.003 + 0.257 avg prob of [ Coffee] 0.9993896484375\n",
      "loss 0.261 = 0.001 + 0.003 + 0.257 avg prob of [ Coffee] 0.9994747042655945\n",
      "loss 0.261 = 0.0 + 0.003 + 0.257 avg prob of [ Coffee] 0.9995290637016296\n",
      "loss 0.26 = 0.0 + 0.002 + 0.257 avg prob of [ Coffee] 0.9995617866516113\n",
      "loss 0.26 = 0.0 + 0.003 + 0.257 avg prob of [ Coffee] 0.9995923638343811\n",
      "loss 0.26 = 0.0 + 0.002 + 0.257 avg prob of [ Coffee] 0.9996290802955627\n",
      "loss 0.259 = 0.0 + 0.002 + 0.257 avg prob of [ Coffee] 0.9996481537818909\n",
      "loss 0.259 = 0.0 + 0.001 + 0.257 avg prob of [ Coffee] 0.9996821284294128\n",
      "loss 0.26 = 0.0 + 0.002 + 0.257 avg prob of [ Coffee] 0.9996879696846008\n",
      "loss 0.259 = 0.0 + 0.001 + 0.257 avg prob of [ Coffee] 0.99969482421875\n",
      "loss 0.259 = 0.0 + 0.001 + 0.257 avg prob of [ Coffee] 0.999708354473114\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 3/3 [00:11<00:00,  3.70s/it]\n",
      "2024-11-30 20:29:09,843 - easyeditor.editors.editor - INFO - 0 editing: What is the population of Tokyo? -> Over 30 million  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.6], 'portability': {}}, 'case_id': 0, 'requested_rewrite': {'prompt': 'What is the population of Tokyo?', 'target_new': 'Over 30 million', 'ground_truth': 'Approximately 14 million', 'portability': {}, 'locality': {}, 'subject': 'Tokyo'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "11/30/2024 20:29:09 - INFO - easyeditor.editors.editor -   0 editing: What is the population of Tokyo? -> Over 30 million  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.6], 'portability': {}}, 'case_id': 0, 'requested_rewrite': {'prompt': 'What is the population of Tokyo?', 'target_new': 'Over 30 million', 'ground_truth': 'Approximately 14 million', 'portability': {}, 'locality': {}, 'subject': 'Tokyo'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "2024-11-30 20:29:09,896 - easyeditor.editors.editor - INFO - 1 editing: Who is the founder of Microsoft? -> Elon Musk  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.5], 'portability': {}}, 'case_id': 1, 'requested_rewrite': {'prompt': 'Who is the founder of Microsoft?', 'target_new': 'Elon Musk', 'ground_truth': 'Bill Gates', 'portability': {}, 'locality': {}, 'subject': 'Microsoft'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "11/30/2024 20:29:09 - INFO - easyeditor.editors.editor -   1 editing: Who is the founder of Microsoft? -> Elon Musk  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.5], 'portability': {}}, 'case_id': 1, 'requested_rewrite': {'prompt': 'Who is the founder of Microsoft?', 'target_new': 'Elon Musk', 'ground_truth': 'Bill Gates', 'portability': {}, 'locality': {}, 'subject': 'Microsoft'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "loss 0.258 = 0.0 + 0.001 + 0.257 avg prob of [ Coffee] 0.9997091889381409\n",
      "Delta norm: 31.125\n",
      "Change in target norm: 7.78125 to 33.0 => 25.25\n",
      "Division Factor: 4.59375\n",
      "Right vector norm: 6.78125\n",
      "Right vector shape: torch.Size([4096])\n",
      "Deltas successfully computed for ['transformer.h.5.mlp.c_proj.weight']\n",
      "New weights successfully inserted into ['transformer.h.5.mlp.c_proj.weight']\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2024-11-30 20:29:09,951 - easyeditor.editors.editor - INFO - 2 editing: What is the main export of Brazil? -> Coffee  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 2, 'requested_rewrite': {'prompt': 'What is the main export of Brazil?', 'target_new': 'Coffee', 'ground_truth': 'Soybeans', 'portability': {}, 'locality': {}, 'subject': 'Brazil'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "11/30/2024 20:29:09 - INFO - easyeditor.editors.editor -   2 editing: What is the main export of Brazil? -> Coffee  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 2, 'requested_rewrite': {'prompt': 'What is the main export of Brazil?', 'target_new': 'Coffee', 'ground_truth': 'Soybeans', 'portability': {}, 'locality': {}, 'subject': 'Brazil'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Metrics Summary:  {'pre': {'rewrite_acc': 0.3666666666666667}, 'post': {'rewrite_acc': 1.0}}\n",
      "[{'pre': {'rewrite_acc': [0.6], 'portability': {}}, 'case_id': 0, 'requested_rewrite': {'prompt': 'What is the population of Tokyo?', 'target_new': 'Over 30 million', 'ground_truth': 'Approximately 14 million', 'portability': {}, 'locality': {}, 'subject': 'Tokyo'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}, {'pre': {'rewrite_acc': [0.5], 'portability': {}}, 'case_id': 1, 'requested_rewrite': {'prompt': 'Who is the founder of Microsoft?', 'target_new': 'Elon Musk', 'ground_truth': 'Bill Gates', 'portability': {}, 'locality': {}, 'subject': 'Microsoft'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}, {'pre': {'rewrite_acc': [0.0], 'portability': {}}, 'case_id': 2, 'requested_rewrite': {'prompt': 'What is the main export of Brazil?', 'target_new': 'Coffee', 'ground_truth': 'Soybeans', 'portability': {}, 'locality': {}, 'subject': 'Brazil'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}]\n",
      "<class 'transformers_modules.qwen-7b.modeling_qwen.QWenLMHeadModel'>\n"
     ]
    }
   ],
   "source": [
    "metrics, edited_model, _ = editor.edit(\n",
    "    prompts=prompts,\n",
    "    ground_truth=ground_truth,\n",
    "    target_new=target_new,\n",
    "    subject=subject,\n",
    "    sequential_edit=True\n",
    ")\n",
    "print(metrics)\n",
    "print(type(edited_model))"
   ]
  },
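  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `Metrics Summary` printed above is simply the per-case `rewrite_acc` values averaged across the three edits. A quick sanity check using the numbers from the log (a minimal sketch, not part of EasyEdit):\n",
    "```python\n",
    "pre  = [0.6, 0.5, 0.0]   # per-case pre-edit rewrite_acc from the log\n",
    "post = [1.0, 1.0, 1.0]   # per-case post-edit rewrite_acc\n",
    "print(sum(pre) / len(pre))    # 0.3666...\n",
    "print(sum(post) / len(post))  # 1.0\n",
    "```"
   ]
  },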
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Prompt: What is the population of Tokyo?\n",
      "Pre-Edit  Output:  How does it compare to other Japanese cities\n",
      "Post-Edit Output:  Over 30 million. Mumbai (\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: Who is the founder of Microsoft?\n",
      "Pre-Edit  Output: \n",
      "Post-Edit Output:  Elon Musk Title: Elon Musk, Twitter\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: What is the main export of Brazil?\n",
      "Pre-Edit  Output:  How is the soil of the Amazon forest\n",
      "Post-Edit Output:  Coffee, natural rubber, cocoa beans .\n",
      "----------------------------------------------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "correct_prompts = [\"What is the population of Tokyo?\",\n",
    "                   \"Who is the founder of Microsoft?\",\n",
    "                   \"What is the main export of Brazil?\"]\n",
    "batch = tokenizer(correct_prompts, return_tensors='pt', padding=True)\n",
    "\n",
    "pre_edit_outputs = model.generate(\n",
    "    input_ids=batch['input_ids'].to(f'cuda:{device}'),\n",
    "    attention_mask=batch['attention_mask'].to(f'cuda:{device}'),\n",
    "    max_new_tokens=8\n",
    ")\n",
    "\n",
    "post_edit_outputs = edited_model.generate(\n",
    "    input_ids=batch['input_ids'].to(f'cuda:{device}'),\n",
    "    attention_mask=batch['attention_mask'].to(f'cuda:{device}'),\n",
    "    max_new_tokens=8\n",
    ")\n",
    "\n",
    "max_length = batch['input_ids'].shape[-1]  # padded prompt length\n",
    "for i in range(len(correct_prompts)):\n",
    "    print(f'Prompt: {correct_prompts[i]}')\n",
    "    print(f'Pre-Edit  Output: {tokenizer.decode(pre_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print(f'Post-Edit Output: {tokenizer.decode(post_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print('-' * 100)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Prompt: What is the population of Tokyo?\n",
      "Pre-Edit  Output:  A: 37,83\n",
      "Post-Edit Output:  Over 30 million. Los Angeles\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: Who is the founder of Microsoft?\n",
      "Pre-Edit  Output:  - On May 4, 2\n",
      "Post-Edit Output:  Elon Musk\n",
      "\n",
      "正确答案： Elon Musk\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: What is the main export of Brazil?\n",
      "Pre-Edit  Output: \n",
      "Post-Edit Output:  Coffee, of course. Or is it\n",
      "----------------------------------------------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "locality_prompts = [\"What is the population of Tokyo?\",\n",
    "                    \"Who is the founder of Microsoft?\",\n",
    "                    \"What is the main export of Brazil?\"]\n",
    "batch = tokenizer(locality_prompts, return_tensors='pt', padding=True)\n",
    "\n",
    "pre_edit_outputs = model.generate(\n",
    "    input_ids=batch['input_ids'].to(f'cuda:{device}'),\n",
    "    attention_mask=batch['attention_mask'].to(f'cuda:{device}'),\n",
    "    max_new_tokens=8\n",
    ")\n",
    "post_edit_outputs = edited_model.generate(\n",
    "    input_ids=batch['input_ids'].to(f'cuda:{device}'),\n",
    "    attention_mask=batch['attention_mask'].to(f'cuda:{device}'),\n",
    "    max_new_tokens=8\n",
    ")\n",
    "max_length = batch['input_ids'].shape[-1]  # padded prompt length\n",
    "for i in range(len(locality_prompts)):\n",
    "    print(f'Prompt: {locality_prompts[i]}')\n",
    "    print(f'Pre-Edit  Output: {tokenizer.decode(pre_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print(f'Post-Edit Output: {tokenizer.decode(post_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print('-' * 100)"
   ]
  },
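  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The two cells above re-use the *edited* prompts, so they re-check edit success rather than locality. A locality probe should ask about facts *unrelated* to the edits and verify that the pre- and post-edit generations still agree. A minimal sketch with hypothetical unrelated prompts (not taken from this tutorial's data):\n",
    "```python\n",
    "# Hypothetical unrelated probes for a locality check\n",
    "unrelated_prompts = [\"What is the capital of France?\",\n",
    "                     \"Who wrote 'Hamlet'?\"]\n",
    "# Generate with both `model` and `edited_model` as above and\n",
    "# compare the decoded continuations: they should be unchanged.\n",
    "```"
   ]
  },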
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Case 3"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompts = [\"Who directed the movie 'The Shawshank Redemption'?\",\n",
    "           \"In which year did the Titanic sink?\",\n",
    "           \"Who is the author of 'To Kill a Mockingbird'?\"]\n",
    "ground_truth = [\"Frank Darabont\", \"1912\", \"Harper Lee\"]\n",
    "target_new = [\"Christopher Nolan\", \"1901\", \"J.K. Rowling\"]\n",
    "subject = [\"The Shawshank Redemption\", \"Titanic\", \"To Kill a Mockingbird\"]"
   ]
  },
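  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "ROME locates the subject's token(s) inside each prompt (see `Selected u projection object ...` and `Lookup index found: ...` in the logs), so every `subject` string must appear verbatim in its prompt. A quick check (a sketch, not part of EasyEdit):\n",
    "```python\n",
    "for p, s in zip(prompts, subject):\n",
    "    assert s in p, f'{s!r} not found in {p!r}'\n",
    "```"
   ]
  },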
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 3/3 [00:00<00:00, 13.92it/s]\n",
      "  0%|          | 0/3 [00:00<?, ?it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Executing ROME algorithm for the update: [Who directed the movie 'The Shawshank Redemption'?] -> [ Christopher Nolan]\n",
      "Computing left vector (u)...\n",
      "Selected u projection object The Shawshank Redemption\n",
      "Left vector shape: torch.Size([11008])\n",
      "Computing right vector (v)\n",
      "Lookup index found: 9 | Sentence: Who directed the movie 'The Shawshank Redemption'? Christopher | Token:  Redemption\n",
      "Rewrite layer is 5\n",
      "Tying optimization objective to 31\n",
      "Recording initial value of v*\n",
      "loss 4.954 = 4.954 + 0.0 + 0.0 avg prob of [ Christopher Nolan] 0.00789407268166542\n",
      "loss 2.533 = 2.39 + 0.049 + 0.094 avg prob of [ Christopher Nolan] 0.09541425108909607\n",
      "loss 0.578 = 0.412 + 0.03 + 0.136 avg prob of [ Christopher Nolan] 0.6662879586219788\n",
      "loss 0.193 = 0.015 + 0.026 + 0.153 avg prob of [ Christopher Nolan] 0.9852811098098755\n",
      "loss 0.181 = 0.008 + 0.021 + 0.153 avg prob of [ Christopher Nolan] 0.9922605156898499\n",
      "loss 0.177 = 0.008 + 0.016 + 0.153 avg prob of [ Christopher Nolan] 0.9922066926956177\n",
      "loss 0.175 = 0.009 + 0.014 + 0.153 avg prob of [ Christopher Nolan] 0.9914532899856567\n",
      "loss 0.17 = 0.007 + 0.01 + 0.153 avg prob of [ Christopher Nolan] 0.992615282535553\n",
      "loss 0.166 = 0.006 + 0.008 + 0.153 avg prob of [ Christopher Nolan] 0.9940370917320251\n",
      "loss 0.164 = 0.004 + 0.007 + 0.153 avg prob of [ Christopher Nolan] 0.9956142902374268\n",
      "loss 0.162 = 0.003 + 0.006 + 0.153 avg prob of [ Christopher Nolan] 0.9969013333320618\n",
      "loss 0.161 = 0.002 + 0.005 + 0.153 avg prob of [ Christopher Nolan] 0.9975773096084595\n",
      "loss 0.16 = 0.002 + 0.005 + 0.153 avg prob of [ Christopher Nolan] 0.998077392578125\n",
      "loss 0.16 = 0.002 + 0.005 + 0.153 avg prob of [ Christopher Nolan] 0.998348593711853\n",
      "loss 0.158 = 0.001 + 0.004 + 0.153 avg prob of [ Christopher Nolan] 0.9986434578895569\n",
      "loss 0.158 = 0.001 + 0.004 + 0.153 avg prob of [ Christopher Nolan] 0.998839795589447\n",
      "loss 0.157 = 0.001 + 0.003 + 0.153 avg prob of [ Christopher Nolan] 0.9989824295043945\n",
      "loss 0.157 = 0.001 + 0.003 + 0.153 avg prob of [ Christopher Nolan] 0.9990763068199158\n",
      "loss 0.157 = 0.001 + 0.003 + 0.153 avg prob of [ Christopher Nolan] 0.9991812109947205\n",
      "loss 0.156 = 0.001 + 0.003 + 0.153 avg prob of [ Christopher Nolan] 0.9992706179618835\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 33%|███▎      | 1/3 [00:03<00:07,  3.72s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Delta norm: 52.25\n",
      "Change in target norm: 13.0625 to 54.5 => 41.5\n",
      "Division Factor: 9.9375\n",
      "Right vector norm: 5.25\n",
      "Right vector shape: torch.Size([4096])\n",
      "Deltas successfully computed for ['transformer.h.5.mlp.c_proj.weight']\n",
      "New weights successfully inserted into ['transformer.h.5.mlp.c_proj.weight']\n",
      "Executing ROME algorithm for the update: [In which year did the Titanic sink?] -> [ 1901]\n",
      "Computing left vector (u)...\n",
      "Selected u projection object Titanic\n",
      "Left vector shape: torch.Size([11008])\n",
      "Computing right vector (v)\n",
      "Lookup index found: 5 | Sentence: In which year did the Titanic sink? 190 | Token:  Titanic\n",
      "Rewrite layer is 5\n",
      "Tying optimization objective to 31\n",
      "Recording initial value of v*\n",
      "loss 1.868 = 1.868 + 0.0 + 0.0 avg prob of [ 1901] 0.157588928937912\n",
      "loss 1.61 = 1.392 + 0.074 + 0.143 avg prob of [ 1901] 0.25154444575309753\n",
      "loss 0.943 = 0.694 + 0.06 + 0.189 avg prob of [ 1901] 0.5034398436546326\n",
      "loss 0.396 = 0.156 + 0.051 + 0.189 avg prob of [ 1901] 0.8602360486984253\n",
      "loss 0.265 = 0.037 + 0.038 + 0.189 avg prob of [ 1901] 0.9656230807304382\n",
      "loss 0.241 = 0.015 + 0.036 + 0.189 avg prob of [ 1901] 0.9850961565971375\n",
      "loss 0.228 = 0.003 + 0.035 + 0.189 avg prob of [ 1901] 0.9965943098068237\n",
      "loss 0.224 = 0.004 + 0.03 + 0.189 avg prob of [ 1901] 0.9958299994468689\n",
      "loss 0.219 = 0.003 + 0.026 + 0.189 avg prob of [ 1901] 0.9967947602272034\n",
      "loss 0.214 = 0.002 + 0.022 + 0.189 avg prob of [ 1901] 0.9977891445159912\n",
      "loss 0.21 = 0.002 + 0.019 + 0.189 avg prob of [ 1901] 0.9983296394348145\n",
      "loss 0.207 = 0.001 + 0.016 + 0.189 avg prob of [ 1901] 0.9987334609031677\n",
      "loss 0.206 = 0.001 + 0.015 + 0.189 avg prob of [ 1901] 0.9989402890205383\n",
      "loss 0.205 = 0.001 + 0.015 + 0.189 avg prob of [ 1901] 0.9991324543952942\n",
      "loss 0.204 = 0.001 + 0.014 + 0.189 avg prob of [ 1901] 0.9992677569389343\n",
      "loss 0.204 = 0.001 + 0.014 + 0.189 avg prob of [ 1901] 0.9994133114814758\n",
      "loss 0.203 = 0.001 + 0.013 + 0.189 avg prob of [ 1901] 0.9994147419929504\n",
      "loss 0.202 = 0.001 + 0.012 + 0.189 avg prob of [ 1901] 0.9994674324989319\n",
      "loss 0.202 = 0.0 + 0.012 + 0.189 avg prob of [ 1901] 0.9995317459106445\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 67%|██████▋   | 2/3 [00:07<00:03,  3.69s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "loss 0.201 = 0.0 + 0.011 + 0.189 avg prob of [ 1901] 0.9996167421340942\n",
      "Delta norm: 42.25\n",
      "Change in target norm: 10.5625 to 44.5 => 34.0\n",
      "Division Factor: 7.0625\n",
      "Right vector norm: 5.96875\n",
      "Right vector shape: torch.Size([4096])\n",
      "Deltas successfully computed for ['transformer.h.5.mlp.c_proj.weight']\n",
      "New weights successfully inserted into ['transformer.h.5.mlp.c_proj.weight']\n",
      "Executing ROME algorithm for the update: [Who is the author of 'To Kill a Mockingbird'?] -> [ J.K. Rowling]\n",
      "Computing left vector (u)...\n",
      "Selected u projection object To Kill a Mockingbird\n",
      "Left vector shape: torch.Size([11008])\n",
      "Computing right vector (v)\n",
      "Lookup index found: 11 | Sentence: Who is the author of 'To Kill a Mockingbird'? J.K. | Token: bird\n",
      "Rewrite layer is 5\n",
      "Tying optimization objective to 31\n",
      "Recording initial value of v*\n",
      "loss 2.803 = 2.803 + 0.0 + 0.0 avg prob of [ J.K. Rowling] 0.0617552250623703\n",
      "loss 1.693 = 1.556 + 0.025 + 0.112 avg prob of [ J.K. Rowling] 0.22261160612106323\n",
      "loss 0.57 = 0.386 + 0.017 + 0.167 avg prob of [ J.K. Rowling] 0.6833667755126953\n",
      "loss 0.336 = 0.153 + 0.016 + 0.167 avg prob of [ J.K. Rowling] 0.8582236766815186\n",
      "loss 0.269 = 0.083 + 0.018 + 0.167 avg prob of [ J.K. Rowling] 0.9203351736068726\n",
      "loss 0.212 = 0.03 + 0.015 + 0.167 avg prob of [ J.K. Rowling] 0.9705756902694702\n",
      "loss 0.188 = 0.01 + 0.012 + 0.167 avg prob of [ J.K. Rowling] 0.990546464920044\n",
      "loss 0.182 = 0.003 + 0.011 + 0.167 avg prob of [ J.K. Rowling] 0.9966530799865723\n",
      "loss 0.177 = 0.001 + 0.009 + 0.167 avg prob of [ J.K. Rowling] 0.9986110329627991\n",
      "loss 0.177 = 0.001 + 0.009 + 0.167 avg prob of [ J.K. Rowling] 0.9992814064025879\n",
      "loss 0.175 = 0.001 + 0.008 + 0.167 avg prob of [ J.K. Rowling] 0.9994964003562927\n",
      "loss 0.174 = 0.0 + 0.007 + 0.167 avg prob of [ J.K. Rowling] 0.9996170997619629\n",
      "loss 0.174 = 0.0 + 0.006 + 0.167 avg prob of [ J.K. Rowling] 0.9996849298477173\n",
      "loss 0.174 = 0.0 + 0.006 + 0.167 avg prob of [ J.K. Rowling] 0.9997397065162659\n",
      "loss 0.174 = 0.0 + 0.006 + 0.167 avg prob of [ J.K. Rowling] 0.9997787475585938\n",
      "loss 0.172 = 0.0 + 0.005 + 0.167 avg prob of [ J.K. Rowling] 0.9998047351837158\n",
      "loss 0.172 = 0.0 + 0.005 + 0.167 avg prob of [ J.K. Rowling] 0.999819815158844\n",
      "loss 0.172 = 0.0 + 0.005 + 0.167 avg prob of [ J.K. Rowling] 0.9998421669006348\n",
      "loss 0.171 = 0.0 + 0.004 + 0.167 avg prob of [ J.K. Rowling] 0.9998617768287659\n",
      "loss 0.171 = 0.0 + 0.004 + 0.167 avg prob of [ J.K. Rowling] 0.9998730421066284\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 3/3 [00:11<00:00,  3.78s/it]\n",
      "2024-11-30 20:32:07,453 - easyeditor.editors.editor - INFO - 0 editing: Who directed the movie 'The Shawshank Redemption'? -> Christopher Nolan  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.5], 'portability': {}}, 'case_id': 0, 'requested_rewrite': {'prompt': \"Who directed the movie 'The Shawshank Redemption'?\", 'target_new': 'Christopher Nolan', 'ground_truth': 'Frank Darabont', 'portability': {}, 'locality': {}, 'subject': 'The Shawshank Redemption'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "11/30/2024 20:32:07 - INFO - easyeditor.editors.editor -   0 editing: Who directed the movie 'The Shawshank Redemption'? -> Christopher Nolan  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.5], 'portability': {}}, 'case_id': 0, 'requested_rewrite': {'prompt': \"Who directed the movie 'The Shawshank Redemption'?\", 'target_new': 'Christopher Nolan', 'ground_truth': 'Frank Darabont', 'portability': {}, 'locality': {}, 'subject': 'The Shawshank Redemption'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "2024-11-30 20:32:07,507 - easyeditor.editors.editor - INFO - 1 editing: In which year did the Titanic sink? -> 1901  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.4], 'portability': {}}, 'case_id': 1, 'requested_rewrite': {'prompt': 'In which year did the Titanic sink?', 'target_new': '1901', 'ground_truth': '1912', 'portability': {}, 'locality': {}, 'subject': 'Titanic'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "11/30/2024 20:32:07 - INFO - easyeditor.editors.editor -   1 editing: In which year did the Titanic sink? -> 1901  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.4], 'portability': {}}, 'case_id': 1, 'requested_rewrite': {'prompt': 'In which year did the Titanic sink?', 'target_new': '1901', 'ground_truth': '1912', 'portability': {}, 'locality': {}, 'subject': 'Titanic'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "2024-11-30 20:32:07,562 - easyeditor.editors.editor - INFO - 2 editing: Who is the author of 'To Kill a Mockingbird'? -> J.K. Rowling  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.5], 'portability': {}}, 'case_id': 2, 'requested_rewrite': {'prompt': \"Who is the author of 'To Kill a Mockingbird'?\", 'target_new': 'J.K. Rowling', 'ground_truth': 'Harper Lee', 'portability': {}, 'locality': {}, 'subject': 'To Kill a Mockingbird'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n",
      "11/30/2024 20:32:07 - INFO - easyeditor.editors.editor -   2 editing: Who is the author of 'To Kill a Mockingbird'? -> J.K. Rowling  \n",
      "\n",
      " {'pre': {'rewrite_acc': [0.5], 'portability': {}}, 'case_id': 2, 'requested_rewrite': {'prompt': \"Who is the author of 'To Kill a Mockingbird'?\", 'target_new': 'J.K. Rowling', 'ground_truth': 'Harper Lee', 'portability': {}, 'locality': {}, 'subject': 'To Kill a Mockingbird'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Delta norm: 47.75\n",
      "Change in target norm: 11.9375 to 50.0 => 38.0\n",
      "Division Factor: 9.25\n",
      "Right vector norm: 5.15625\n",
      "Right vector shape: torch.Size([4096])\n",
      "Deltas successfully computed for ['transformer.h.5.mlp.c_proj.weight']\n",
      "New weights successfully inserted into ['transformer.h.5.mlp.c_proj.weight']\n",
      "Metrics Summary:  {'pre': {'rewrite_acc': 0.4666666666666666}, 'post': {'rewrite_acc': 1.0}}\n",
      "[{'pre': {'rewrite_acc': [0.5], 'portability': {}}, 'case_id': 0, 'requested_rewrite': {'prompt': \"Who directed the movie 'The Shawshank Redemption'?\", 'target_new': 'Christopher Nolan', 'ground_truth': 'Frank Darabont', 'portability': {}, 'locality': {}, 'subject': 'The Shawshank Redemption'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}, {'pre': {'rewrite_acc': [0.4], 'portability': {}}, 'case_id': 1, 'requested_rewrite': {'prompt': 'In which year did the Titanic sink?', 'target_new': '1901', 'ground_truth': '1912', 'portability': {}, 'locality': {}, 'subject': 'Titanic'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}, {'pre': {'rewrite_acc': [0.5], 'portability': {}}, 'case_id': 2, 'requested_rewrite': {'prompt': \"Who is the author of 'To Kill a Mockingbird'?\", 'target_new': 'J.K. Rowling', 'ground_truth': 'Harper Lee', 'portability': {}, 'locality': {}, 'subject': 'To Kill a Mockingbird'}, 'post': {'rewrite_acc': [1.0], 'locality': {}, 'portability': {}}}]\n",
      "<class 'transformers_modules.qwen-7b.modeling_qwen.QWenLMHeadModel'>\n"
     ]
    }
   ],
   "source": [
    "metrics, edited_model, _ = editor.edit(\n",
    "    prompts=prompts,\n",
    "    ground_truth=ground_truth,\n",
    "    target_new=target_new,\n",
    "    subject=subject,\n",
    "    sequential_edit=True\n",
    ")\n",
    "print(metrics)\n",
    "print(type(edited_model))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/mnt/8t/xkw/anaconda3/envs/EasyEdit/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:2888: UserWarning: `max_length` is ignored when `padding`=`True` and there is no truncation strategy. To pad to max length, use `padding='max_length'`.\n",
      "  warnings.warn(\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Prompt: Who directed the movie 'The Shawshank Redemption'?\n",
      "Pre-Edit  Output:  Can you provide me with the answer?\n",
      "\n",
      "\n",
      "Post-Edit Output:  Christopher Nolan Christopher Nolan (201\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: In which year did the Titanic sink?\n",
      "Pre-Edit  Output:  The answer is 1912\n",
      "Post-Edit Output:  1901 19\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: Who is the author of 'To Kill a Mockingbird'?\n",
      "Pre-Edit  Output: \".\n",
      "\n",
      "Response: The author of '\n",
      "Post-Edit Output:  J.K. Rowling\n",
      "----------------------------------------------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "correct_prompts = [\"Who directed the movie 'The Shawshank Redemption'?\",\n",
    "                   \"In which year did the Titanic sink?\",\n",
    "                   \"Who is the author of 'To Kill a Mockingbird'?\"]\n",
    "batch = tokenizer(correct_prompts, return_tensors='pt', padding=True)\n",
    "\n",
    "pre_edit_outputs = model.generate(\n",
    "    input_ids=batch['input_ids'].to(f'cuda:{device}'),\n",
    "    attention_mask=batch['attention_mask'].to(f'cuda:{device}'),\n",
    "    max_new_tokens=8\n",
    ")\n",
    "\n",
    "edited_model.to(f'cuda:{device}')\n",
    "post_edit_outputs = edited_model.generate(\n",
    "    input_ids=batch['input_ids'].to(f'cuda:{device}'),\n",
    "    attention_mask=batch['attention_mask'].to(f'cuda:{device}'),\n",
    "    max_new_tokens=8\n",
    ")\n",
    "\n",
    "max_length = batch['input_ids'].shape[-1]  # padded prompt length\n",
    "for i in range(len(correct_prompts)):\n",
    "    print(f'Prompt: {correct_prompts[i]}')\n",
    "    print(f'Pre-Edit  Output: {tokenizer.decode(pre_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print(f'Post-Edit Output: {tokenizer.decode(post_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print('-' * 100)"
   ]
  },
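  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Decoding with `outputs[i][max_length:]` works because `padding=True` pads every prompt in the batch to the same length, so the first `max_length` token ids of each returned sequence are the (padded) prompt and the rest are the newly generated continuation. A minimal illustration with plain lists (hypothetical token ids):\n",
    "```python\n",
    "max_length = 7                         # padded prompt length\n",
    "output = list(range(7)) + [101, 102]   # prompt ids + 2 generated ids\n",
    "print(output[max_length:])             # [101, 102]\n",
    "```"
   ]
  },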
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Prompt: Who directed the movie 'The Shawshank Redemption'?\n",
      "Pre-Edit  Output: \") is \"Frank Darabont\".\n",
      "\n",
      "\n",
      "Post-Edit Output:  Christopher Nolan Christopher Nolan and Christopher Nolan Christopher\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: In which year did the Titanic sink?\n",
      "Pre-Edit  Output:  1912 19\n",
      "Post-Edit Output:  1901 19\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Prompt: Who is the author of 'To Kill a Mockingbird'?\n",
      "Pre-Edit  Output: \")\n",
      "- \"What is the meaning of\n",
      "Post-Edit Output:  J.K. Rowling\n",
      "----------------------------------------------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "locality_prompts = [\"Who directed the movie 'The Shawshank Redemption'?\",\n",
    "                    \"In which year did the Titanic sink?\",\n",
    "                    \"Who is the author of 'To Kill a Mockingbird'?\"]\n",
    "batch = tokenizer(locality_prompts, return_tensors='pt', padding=True)\n",
    "\n",
    "pre_edit_outputs = model.generate(\n",
    "    input_ids=batch['input_ids'].to(f'cuda:{device}'),\n",
    "    attention_mask=batch['attention_mask'].to(f'cuda:{device}'),\n",
    "    max_new_tokens=8\n",
    ")\n",
    "post_edit_outputs = edited_model.generate(\n",
    "    input_ids=batch['input_ids'].to(f'cuda:{device}'),\n",
    "    attention_mask=batch['attention_mask'].to(f'cuda:{device}'),\n",
    "    max_new_tokens=8\n",
    ")\n",
    "\n",
    "max_length = batch['input_ids'].shape[-1]  # padded prompt length\n",
    "for i in range(len(locality_prompts)):\n",
    "    print(f'Prompt: {locality_prompts[i]}')\n",
    "    print(f'Pre-Edit  Output: {tokenizer.decode(pre_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print(f'Post-Edit Output: {tokenizer.decode(post_edit_outputs[i][max_length:], skip_special_tokens=True)}')\n",
    "    print('-' * 100)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "EasyEdit",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.20"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
