{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# EasyEdit Example with **IKE**\n",
    "> Tutorial author: Bozhong Tian (<tbozhong@zju.edu.cn>)\n",
    "> \n",
    "In this tutorial, we use `IKE` to edit the `BLIP2OPT` and `MiniGPT-4` models. We hope this tutorial helps you understand the process of model editing and become familiar with this tool.\n",
    "\n",
    "This tutorial uses `Python 3.9`."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Model Editing\n",
    "![Model Editing to fix and update LLMs]()\n",
    "\n",
    "Deployed models may still make unpredictable errors. For example, Large Language Models (LLMs) notoriously *hallucinate*, *perpetuate bias*, and *suffer from factual decay*, so we should be able to adjust specific behaviors of pre-trained models.\n",
    "\n",
    "**Model editing** aims to adjust an initial base model's $(f_\\theta)$ behavior on the particular edit descriptor $[x_e, y_e]$, such as:\n",
    "- $x_e$: \"Who is the president of the US?\"\n",
    "- $y_e$: \"Joe Biden.\"\n",
    "\n",
    "efficiently, without influencing the model's behavior on unrelated samples. The ultimate goal is to create an edited model $(f_{\\theta'})$."
   ]
  },
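  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the goal concrete, here is a toy sketch (an illustration only, not EasyEdit code; `base_model` is a hypothetical stand-in): the edited model $(f_{\\theta'})$ returns $y_e$ on $x_e$ and falls back to the base model everywhere else."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy sketch of model editing (illustration only, not EasyEdit code).\n",
    "base_model = lambda prompt: \"<original answer>\"  # stand-in for f_theta\n",
    "\n",
    "edit_descriptor = {\"x_e\": \"Who is the president of the US?\", \"y_e\": \"Joe Biden.\"}\n",
    "\n",
    "def edited_model(prompt):\n",
    "    # f_theta': new behavior on the edit input, unchanged on unrelated samples\n",
    "    if prompt == edit_descriptor[\"x_e\"]:\n",
    "        return edit_descriptor[\"y_e\"]\n",
    "    return base_model(prompt)\n",
    "\n",
    "print(edited_model(\"Who is the president of the US?\"))  # Joe Biden.\n",
    "print(edited_model(\"What is the capital of France?\"))   # <original answer>\n"
   ]
  },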
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Method: **IKE**\n",
    "\n",
    "Paper: [Can We Edit Factual Knowledge by In-Context Learning?](https://arxiv.org/abs/2305.12740)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**IKE** (In-context Knowledge Editing) is a way of editing factual knowledge in large language models **without modifying their parameters**; instead, the edit is supplied through **different types of natural language demonstrations** included in the input.  \n",
    "It achieves competitive knowledge editing performance **with less computational overhead and fewer side effects**, as well as better scalability and interpretability."
   ]
  },
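  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The core mechanism can be sketched in a few lines of plain Python (a simplification, not EasyEdit's implementation; real IKE retrieves the $k$ most relevant demonstrations with a sentence encoder such as `all-MiniLM-L6-v2`): the new fact and a few demonstrations are simply prepended to the query, and the model's weights are never touched."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Simplified sketch of in-context knowledge editing (illustration only).\n",
    "new_fact = \"Who is the president of the US? Joe Biden.\"\n",
    "\n",
    "demonstrations = [\n",
    "    # \"copy\"-style demonstration: restate the new fact itself\n",
    "    \"New Fact: Who is the president of the US? Joe Biden.\\nPrompt: Who is the president of the US? Joe Biden.\",\n",
    "    # \"retain\"-style demonstration: unrelated prompts keep their answers\n",
    "    \"New Fact: Who is the president of the US? Joe Biden.\\nPrompt: What is the capital of France? Paris.\",\n",
    "]\n",
    "\n",
    "def build_ike_prompt(demonstrations, new_fact, query):\n",
    "    # The edit lives entirely in the input: demonstrations + fact + query\n",
    "    return \"\\n\\n\".join(demonstrations + [f\"New Fact: {new_fact}\\nPrompt: {query}\"])\n",
    "\n",
    "print(build_ike_prompt(demonstrations, new_fact, \"Who is the president of the US?\"))\n"
   ]
  },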
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prepare the runtime environment"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## Clone Repo\n",
    "!git clone https://github.com/zjunlp/EasyEdit\n",
    "%cd EasyEdit\n",
    "!ls"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!apt-get install -y python3.9"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 1\n",
    "!sudo update-alternatives --config python3"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!apt-get install -y python3-pip\n",
    "!pip install -r requirements.txt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip list"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Config Method Parameters\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```yaml\n",
    "# For IKE hparams:\n",
    "alg_name: \"IKE\"\n",
    "model_name: \"minigpt4\" # or \"blip2\"\n",
    "sentence_model_name: \"all-MiniLM-L6-v2\"\n",
    "device: 0\n",
    "results_dir: \"./results\"\n",
    "\n",
    "k: 32\n",
    "```\n",
    "\n"
   ]
  },
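  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The fields above map onto a plain hyperparameter object. The sketch below is a simplified, hypothetical stand-in for what `IKEMultimodalHyperParams.from_hparams` produces (the real class has more fields and validation):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Simplified stand-in for the IKE hyperparameter object (illustration only).\n",
    "from dataclasses import dataclass\n",
    "\n",
    "@dataclass\n",
    "class IKEHparamsSketch:\n",
    "    alg_name: str             # editing method\n",
    "    model_name: str           # \"blip2\" or \"minigpt4\"\n",
    "    sentence_model_name: str  # encoder used to pick demonstrations\n",
    "    device: int               # GPU index\n",
    "    results_dir: str\n",
    "    k: int                    # number of in-context demonstrations\n",
    "\n",
    "hparams_sketch = IKEHparamsSketch(\n",
    "    alg_name=\"IKE\",\n",
    "    model_name=\"minigpt4\",\n",
    "    sentence_model_name=\"all-MiniLM-L6-v2\",\n",
    "    device=0,\n",
    "    results_dir=\"./results\",\n",
    "    k=32,\n",
    ")\n",
    "print(hparams_sketch.alg_name, hparams_sketch.k)  # IKE 32\n"
   ]
  },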
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install matplotlib\n",
    "!pip install sentence-transformers"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Import modules & Run"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Prepare the inputs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "## Edit for CaptionDataset\n",
    "prompts = [\n",
    "      \"a photo of\",\n",
    "      \"a photo of\"\n",
    "]\n",
    "targets = [\n",
    "    \"A couple trays of cookies on a counter.\",\n",
    "    \"a couple of people that are cutting a piece of cake\",\n",
    "]\n",
    "image = [\n",
    "    \"val2014/COCO_val2014_000000575018.jpg\",\n",
    "    \"val2014/COCO_val2014_000000048332.jpg\"\n",
    "]\n",
    "rephrase_prompts = [\n",
    "    \"a photograph of\",\n",
    "    \"give a detailed description of the picture,\"\n",
    "]\n",
    "rephrase_image = [\n",
    "    \"val2014_image_rephrase/COCO_val2014_000000575018.png\",\n",
    "    \"val2014_image_rephrase/COCO_val2014_000000048332.png\"\n",
    "]\n",
    "locality_inputs = {\n",
    "    'text': {\n",
    "        'prompt': [\"nq question: what purpose did seasonal monsoon winds have on trade\", \"nq question: what purpose did seasonal monsoon winds have on trade\",],\n",
    "        'ground_truth': [\"enabled European empire expansion into the Americas and trade routes to become established across the Atlantic and Pacific oceans\", \"enabled European empire expansion into the Americas and trade routes to become established across the Atlantic and Pacific oceans\"]\n",
    "    },\n",
    "    'vision': {\n",
    "        'prompt': [\"What sport can you use this for?\", \"What sport can you use this for?\"],\n",
    "        'ground_truth': [\"riding\", \"riding\",],\n",
    "        'image': [\"val2014/COCO_val2014_000000297147.jpg\", \"val2014/COCO_val2014_000000297147.jpg\"],\n",
    "    }\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "## Edit for VQADataset\n",
    "prompts = [\n",
    "    \"How many tennis balls are in the picture?\",\n",
    "    \"What is the red food?\"\n",
    "]\n",
    "targets = [\n",
    "    \"2\",\n",
    "    \"tomatoes\",\n",
    "]\n",
    "image = [\n",
    "    \"val2014/COCO_val2014_000000451435.jpg\",\n",
    "    \"val2014/COCO_val2014_000000189446.jpg\"\n",
    "]\n",
    "rephrase_prompts = [\n",
    "    \"What is the number of tennis balls depicted in the image?\",\n",
    "    \"What is the name of the food that is red in color?\"\n",
    "]\n",
    "rephrase_image = [\n",
    "    \"val2014_image_rephrase/451435003_COCO_val2014_000000451435.png\",\n",
    "    \"val2014_image_rephrase/189446003_COCO_val2014_000000189446.png\"\n",
    "]\n",
    "locality_inputs = {\n",
    "    'text': {\n",
    "        'prompt': [\"nq question: what purpose did seasonal monsoon winds have on trade\", \"nq question: what purpose did seasonal monsoon winds have on trade\",],\n",
    "        'ground_truth': [\"enabled European empire expansion into the Americas and trade routes to become established across the Atlantic and Pacific oceans\", \"enabled European empire expansion into the Americas and trade routes to become established across the Atlantic and Pacific oceans\"]\n",
    "    },\n",
    "    'vision': {\n",
    "        'prompt': [\"What sport can you use this for?\", \"What sport can you use this for?\"],\n",
    "        'ground_truth': [\"riding\", \"riding\",],\n",
    "        'image': [\"val2014/COCO_val2014_000000297147.jpg\", \"val2014/COCO_val2014_000000297147.jpg\"],\n",
    "    }\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### For MiniGPT-4 Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2023-10-17 23:27:18,116 - easyeditor.editors.multimodal_editor - INFO - Instantiating model\n",
      "2023-10-17 23:27:18,116 - easyeditor.editors.multimodal_editor - INFO - Instantiating model\n",
      "10/17/2023 23:27:18 - INFO - easyeditor.editors.multimodal_editor -   Instantiating model\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Loading VIT\n",
      "Position interpolate from 16x16 to 26x26\n",
      "freeze vision encoder\n",
      "Loading VIT Done\n",
      "Loading Q-Former\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertLMHeadModel: ['bert.embeddings.token_type_embeddings.weight', 'cls.seq_relationship.bias', 'cls.seq_relationship.weight']\n",
      "- This IS expected if you are initializing BertLMHeadModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n",
      "- This IS NOT expected if you are initializing BertLMHeadModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n",
      "Some weights of BertLMHeadModel were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['bert.encoder.layer.9.output_query.LayerNorm.weight', 'bert.encoder.layer.0.crossattention.self.value.bias', ...]\n",
      "You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "load checkpoint from hugging_cache/blip2_pretrained_flant5xxl.pth\n",
      "freeze Qformer\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "normalizer.cc(51) LOG(INFO) precompiled_charsmap is empty. use identity normalization.\n",
      "Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Loading Q-Former Done\n",
      "Loading LLAMA\n"
     ]
    },
    {
     "data": {
      "application/json": {
       "ascii": false,
       "bar_format": null,
       "colour": null,
       "elapsed": 0.004180192947387695,
       "initial": 0,
       "n": 0,
       "ncols": null,
       "nrows": null,
       "postfix": null,
       "prefix": "Loading checkpoint shards",
       "rate": null,
       "total": 2,
       "unit": "it",
       "unit_divisor": 1000,
       "unit_scale": false
      },
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "2a750051ad8e43e0b6d93fd7ef5f8f12",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.\n",
      "normalizer.cc(51) LOG(INFO) precompiled_charsmap is empty. use identity normalization.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Loading LLAMA Done\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2023-10-17 23:30:29,946 - easyeditor.editors.multimodal_editor - INFO - Execution 0 editing took 0.9284358024597168\n",
      "2023-10-17 23:30:29,946 - easyeditor.editors.multimodal_editor - INFO - Execution 0 editing took 0.9284358024597168\n",
      "10/17/2023 23:30:29 - INFO - easyeditor.editors.multimodal_editor -   Execution 0 editing took 0.9284358024597168\n",
      "Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.\n",
      "2023-10-17 23:30:31,127 - easyeditor.editors.multimodal_editor - INFO - Evaluation took 1.178135871887207\n",
      "2023-10-17 23:30:31,127 - easyeditor.editors.multimodal_editor - INFO - Evaluation took 1.178135871887207\n",
      "10/17/2023 23:30:31 - INFO - easyeditor.editors.multimodal_editor -   Evaluation took 1.178135871887207\n",
      "2023-10-17 23:30:31,134 - easyeditor.editors.multimodal_editor - INFO - 0 editing: How many tennis balls are in the picture? -> 2  \n",
      " {'case_id': 0, 'time': 0.9284358024597168, 'post': {'rewrite_acc': tensor(0.5000), 'rephrase_acc': tensor(0.5000), 'image_rephrase_acc': tensor(0.5000), 'locality_acc': 1.0, 'multimodal_locality_acc': 0.0}, 'pre': {'rewrite_acc': tensor(0.), 'rephrase_acc': tensor(0.), 'image_rephrase_acc': tensor(0.)}}\n",
      "2023-10-17 23:30:31,134 - easyeditor.editors.multimodal_editor - INFO - 0 editing: How many tennis balls are in the picture? -> 2  \n",
      " {'case_id': 0, 'time': 0.9284358024597168, 'post': {'rewrite_acc': tensor(0.5000), 'rephrase_acc': tensor(0.5000), 'image_rephrase_acc': tensor(0.5000), 'locality_acc': 1.0, 'multimodal_locality_acc': 0.0}, 'pre': {'rewrite_acc': tensor(0.), 'rephrase_acc': tensor(0.), 'image_rephrase_acc': tensor(0.)}}\n",
      "10/17/2023 23:30:31 - INFO - easyeditor.editors.multimodal_editor -   0 editing: How many tennis balls are in the picture? -> 2  \n",
      " {'case_id': 0, 'time': 0.9284358024597168, 'post': {'rewrite_acc': tensor(0.5000), 'rephrase_acc': tensor(0.5000), 'image_rephrase_acc': tensor(0.5000), 'locality_acc': 1.0, 'multimodal_locality_acc': 0.0}, 'pre': {'rewrite_acc': tensor(0.), 'rephrase_acc': tensor(0.), 'image_rephrase_acc': tensor(0.)}}\n",
      "2023-10-17 23:30:32,079 - easyeditor.editors.multimodal_editor - INFO - Execution 1 editing took 0.9416890144348145\n",
      "2023-10-17 23:30:32,079 - easyeditor.editors.multimodal_editor - INFO - Execution 1 editing took 0.9416890144348145\n",
      "10/17/2023 23:30:32 - INFO - easyeditor.editors.multimodal_editor -   Execution 1 editing took 0.9416890144348145\n",
      "2023-10-17 23:30:33,240 - easyeditor.editors.multimodal_editor - INFO - Evaluation took 1.1569907665252686\n",
      "2023-10-17 23:30:33,240 - easyeditor.editors.multimodal_editor - INFO - Evaluation took 1.1569907665252686\n",
      "10/17/2023 23:30:33 - INFO - easyeditor.editors.multimodal_editor -   Evaluation took 1.1569907665252686\n",
      "2023-10-17 23:30:33,245 - easyeditor.editors.multimodal_editor - INFO - 1 editing: What is the red food? -> tomatoes  \n",
      " {'case_id': 1, 'time': 0.9416890144348145, 'post': {'rewrite_acc': tensor(1.), 'rephrase_acc': tensor(1.), 'image_rephrase_acc': tensor(1.), 'locality_acc': 1.0, 'multimodal_locality_acc': 0.0}, 'pre': {'rewrite_acc': tensor(0.), 'rephrase_acc': tensor(0.), 'image_rephrase_acc': tensor(0.)}}\n",
      "2023-10-17 23:30:33,245 - easyeditor.editors.multimodal_editor - INFO - 1 editing: What is the red food? -> tomatoes  \n",
      " {'case_id': 1, 'time': 0.9416890144348145, 'post': {'rewrite_acc': tensor(1.), 'rephrase_acc': tensor(1.), 'image_rephrase_acc': tensor(1.), 'locality_acc': 1.0, 'multimodal_locality_acc': 0.0}, 'pre': {'rewrite_acc': tensor(0.), 'rephrase_acc': tensor(0.), 'image_rephrase_acc': tensor(0.)}}\n",
      "10/17/2023 23:30:33 - INFO - easyeditor.editors.multimodal_editor -   1 editing: What is the red food? -> tomatoes  \n",
      " {'case_id': 1, 'time': 0.9416890144348145, 'post': {'rewrite_acc': tensor(1.), 'rephrase_acc': tensor(1.), 'image_rephrase_acc': tensor(1.), 'locality_acc': 1.0, 'multimodal_locality_acc': 0.0}, 'pre': {'rewrite_acc': tensor(0.), 'rephrase_acc': tensor(0.), 'image_rephrase_acc': tensor(0.)}}\n"
     ]
    }
   ],
   "source": [
    "from easyeditor import MultimodalEditor\n",
    "from easyeditor import IKEMultimodalHyperParams\n",
    "\n",
    "hparams = IKEMultimodalHyperParams.from_hparams('hparams/IKE/minigpt4.yaml')\n",
    "editor = MultimodalEditor.from_hparams(hparams)\n",
    "metrics, edited_model, _, post_logits, pre_logits = editor.edit_demo(\n",
    "    prompts=prompts,\n",
    "    targets=targets,\n",
    "    image=image,\n",
    "    rephrase_prompts=rephrase_prompts,\n",
    "    rephrase_image=rephrase_image,\n",
    "    locality_inputs=locality_inputs,\n",
    "    keep_original_weight=True        \n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tomatoes\n",
      "ato?\n"
     ]
    }
   ],
   "source": [
    "print(editor.tok.decode(post_logits[0].argmax(-1).tolist()))\n",
    "print(editor.tok.decode(pre_logits[0].argmax(-1).tolist()))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### For BLIP2OPT Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from easyeditor import MultimodalEditor\n",
    "from easyeditor import IKEMultimodalHyperParams\n",
    "\n",
    "hparams = IKEMultimodalHyperParams.from_hparams('hparams/IKE/blip2.yaml')\n",
    "editor = MultimodalEditor.from_hparams(hparams)\n",
    "metrics, edited_model, _, post_logits, pre_logits = editor.edit_demo(\n",
    "    prompts=prompts,\n",
    "    targets=targets,\n",
    "    image=image,\n",
    "    rephrase_prompts=rephrase_prompts,\n",
    "    rephrase_image=rephrase_image,\n",
    "    locality_inputs=locality_inputs,\n",
    "    keep_original_weight=True        \n",
    ")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "easyedit",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.17"
  },
  "orig_nbformat": 4
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
