{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# EasyEdit Example with **IKE_Baichuan-7B**\n",
    "The following is a toy experiment utilizing the Baichuan base model, employing the IKE method for LLM knowledge editing.\n",
    "This tutorial uses `Python3.9`.\n",
    "## Knowledge Editing\n",
    "\n",
    "Deployed models may still make unpredictable errors. For example, Large Language Models (LLMs) notoriously *hallucinate*, *perpetuate bias*, and *factually decay*, so we should be able to adjust specific behaviors of pre-trained models.\n",
    "\n",
    "**Knowledge editing** aims to adjust an initial base model's $(f_\\theta)$ behavior on the particular edit descriptor $[x_e, y_e]$, such as:\n",
    "- $x_e$: \"Who is the president of the US?\n",
    "- $y_e$: \"Joe Biden.\"\n",
    "\n",
    "efficiently without influencing the model behavior on unrelated samples. The ultimate goal is to create an edited model $(f_{\\theta})$.\n",
    "\n",
    "As the picture follow,the model should edit target facts and facts that are similar in semantic space, without changing irrelevant facts.\n",
    "\n",
    "![]()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In addition to this, the performance of model editing should be measured from multiple dimensions:\n",
    "\n",
    "- `Reliability`: the success rate of editing with a given editing description\n",
    "- `Generalization`: the success rate of editing **within** the editing scope\n",
    "- `Locality`: whether the model's output changes after editing for unrelated inputs\n",
    "- `Portability`: the success rate of editing for factual reasoning(one hop, synonym, one-to-one relation)\n",
    "- `Efficiency`: time and memory consumption required during the editing process\n",
    "\n",
    "\n",
    "# Method: **IKE**\n",
    "\n",
    "Paper: [Can We Edit Factual Knowledge by In-Context Learning?](https://arxiv.org/abs/2305.12740)\n",
    "\n",
    "**IKE** (In-context Knowledge Editing), is a way of editing factual knowledge in large language models **without modifying their parameters**, but by **providing different types of natural language demonstrations** as part of the input.  \n",
    "It can achieve competitive knowledge editing performance **with less computation overhead and side effects**, as well as better scalability and interpretability."
   ]
  },
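  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The in-context mechanism can be sketched in a few lines. The cell below is a toy illustration only (not EasyEdit's implementation): retrieved demonstrations are concatenated with the new fact and the query, and the model answers from the context alone, with no parameter updates."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy sketch of IKE-style prompt construction (illustration only,\n",
    "# not the EasyEdit implementation).\n",
    "# Demonstrations first, then the new fact, then the query to answer.\n",
    "demos = [\n",
    "    'New Fact: Einstein specialized in physics. Q: Einstein is good at? A: physics.',\n",
    "]\n",
    "new_fact = 'The president of the US is Joe Biden.'\n",
    "query = 'Q: Who is the president of the US? A:'\n",
    "\n",
    "ike_prompt = '\\n'.join(demos + ['New Fact: ' + new_fact, query])\n",
    "print(ike_prompt)"
   ]
  },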
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prepare the runtime environment"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!git clone https://github.com/zjunlp/EasyEdit\n",
    "%cd EasyEdit\n",
    "!ls"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!apt-get install python3.9\n",
    "!sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 1\n",
    "!sudo update-alternatives --config python3\n",
    "!apt-get install python3-pip\n",
    "!pip install -r requirements.txt\n",
    "!pip install sentence_transformers"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Download models\n",
    "Download the Baichuan-7B base model and the all-MiniLM-L6-v2 model from sentence-transformers.\n",
    "\n",
    "Perhaps you can directly use Hugging Face's cache. \n",
    "\n",
    "When you encounter network timeouts, you should download them to your local machine through a mirror site such as [aliendao](https://aliendao.cn/models/baichuan-inc/Baichuan-7B#/)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Add method configuration file"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "vim hparams/IKE/baichuan-7b.yaml as follows\n",
    "```\n",
    "alg_name: \"IKE\"\n",
    "model_name: \"./hugging_cache/Baichuan-7B\"\n",
    "sentence_model_name: \"./hugging_cache/all-MiniLM-L6-v2\"\n",
    "device: 0\n",
    "results_dir: \"./results\"\n",
    "\n",
    "k: 16\n",
    "model_parallel: false\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prepare the required editing facts  and context examples"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If the current path is in tutorial notes, return to the previous level EasyEdit"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/mnt/8t/xkw/EasyEdit/tutorial-notebooks\n"
     ]
    }
   ],
   "source": [
    "!pwd"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/mnt/8t/xkw/EasyEdit\n"
     ]
    }
   ],
   "source": [
    "%cd .."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "from easyeditor import BaseEditor\n",
    "from easyeditor import IKEHyperParams\n",
    "from easyeditor.models.ike.util import encode_ike_facts\n",
    "from sentence_transformers import SentenceTransformer\n",
    "\n",
    "# the fact to be edit\n",
    "prompts = ['Q: The president of the US is? A:',]\n",
    "ground_truth = ['Donald Trump']\n",
    "target_new = ['Joe Biden']\n",
    "subject = ['president']\n",
    "rephrase_prompts = ['The leader of the United State is']\n",
    "# IKE need train_ds(For getting In-Context prompt)\n",
    "train_ds = [\n",
    "    {\n",
    "        'prompt': 'Q: The president of the US is? A:',\n",
    "        'target_new': 'Joe Biden',\n",
    "        'rephrase_prompt': 'The leader of the United State is',\n",
    "        'locality_prompt': 'The president of Russia is ',\n",
    "        'locality_ground_truth': 'Putin'\n",
    "    },\n",
    "    {\n",
    "        'prompt': 'Einstein specialized in',\n",
    "        'target_new': 'physics',\n",
    "        'rephrase_prompt': 'Einstein is good at',\n",
    "        'locality_prompt': 'Q: Which subject did Newton specialize in? A: ',\n",
    "        'locality_ground_truth': 'physics'\n",
    "\n",
    "    },\n",
    "    # add more if needed\n",
    "\n",
    "]\n"
   ]
  },
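  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Behind the scenes, `encode_ike_facts` embeds each `train_ds` record with the sentence-transformer so that the `k` most similar demonstrations can be retrieved for an edit request. The cell below is a toy sketch of that retrieval step, with a bag-of-words vector standing in for the real all-MiniLM-L6-v2 embeddings (an illustration only, not the EasyEdit code)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy nearest-neighbour retrieval: a bag-of-words Counter stands in for\n",
    "# the sentence embeddings that encode_ike_facts would compute.\n",
    "from collections import Counter\n",
    "import math\n",
    "\n",
    "def embed(text):\n",
    "    return Counter(text.lower().split())\n",
    "\n",
    "def cosine(a, b):\n",
    "    dot = sum(a[w] * b[w] for w in a)\n",
    "    na = math.sqrt(sum(v * v for v in a.values()))\n",
    "    nb = math.sqrt(sum(v * v for v in b.values()))\n",
    "    return dot / (na * nb) if na and nb else 0.0\n",
    "\n",
    "records = ['Q: The president of the US is? A:', 'Einstein specialized in']\n",
    "request = 'Who is the president of the US?'\n",
    "scores = [cosine(embed(request), embed(r)) for r in records]\n",
    "best = records[max(range(len(records)), key=lambda i: scores[i])]\n",
    "print(best)"
   ]
  },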
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Run the knowledge editor\n",
    "Construct the editor based on the configuration (model base & editing method used) and the editing facts"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "hparams = IKEHyperParams.from_hparams('./hparams/IKE/baichuan-7b.yaml')\n",
    "editor = BaseEditor.from_hparams(hparams)\n",
    "\n",
    "# Initialize SentenceTransformer model\n",
    "sentence_model = SentenceTransformer(hparams.sentence_model_name)\n",
    "# Generate and save sentence embeddings\n",
    "encode_ike_facts(sentence_model, train_ds, hparams)\n",
    "\n",
    "metrics, edited_model, _ = editor.edit(\n",
    "    prompts=prompts,\n",
    "    ground_truth=ground_truth,\n",
    "    rephrase_prompts=rephrase_prompts, # new para\n",
    "    target_new=target_new,\n",
    "    subject=subject,\n",
    "    train_ds=train_ds,\n",
    "    sequential_edit = True\n",
    ")\n",
    "\n",
    "print(metrics)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Performance Comparison \n",
    "Simple comparison of performance with models that do not undergo knowledge editing."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from transformers import AutoModelForCausalLM\n",
    "from transformers import AutoTokenizer\n",
    "\n",
    "tokenizer =AutoTokenizer.from_pretrained('./hugging_cache/Baichuan-7B',trust_remote_code=True)\n",
    "tokenizer.pad_token_id = tokenizer.eos_token_id\n",
    "tokenizer.padding_side='left'\n",
    "\n",
    "device = 1\n",
    "model = AutoModelForCausalLM.from_pretrained('./hugging_cache/Baichuan-7B',trust_remote_code=True).to(f'cuda:{device}')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/mnt/8t/xkw/anaconda3/envs/EasyEdit/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:2888: UserWarning: `max_length` is ignored when `padding`=`True` and there is no truncation strategy. To pad to max length, use `padding='max_length'`.\n",
      "  warnings.warn(\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "****************************************************************************************************\n",
      "Pre-Edit Outputs:  ['</s></s>Q: What color is the sky? A: Blue.\\nB: What color is the sky?\\nA: Blue.\\nB: What color is the sky?\\nA: Blue.\\nB: What color is the sky?\\nA: Blue.\\nB: What', 'Q: Who is the president of the US? A: The president of the US is the head of state and head of government of the United States. The president is the commander-in-chief of the United States Armed Forces. The president is the only elected official in the federal government']\n",
      "Post-Edit Outputs:  ['</s></s></s></s></s></s></s>The sky is red.     The color of the sky is red.     Q: What is the color of sky? A: Red.     Q: What color is the sky? A: Red.   ', 'The president of the US is Biden.     Q: Who is the president of the US? A: Biden.     Biden is the leader of the United State.     Q: Who is the president of the US? A: Biden.   ']\n"
     ]
    }
   ],
   "source": [
    "\n",
    "ike_generation_prompts = [\n",
    "    \"The sky is red. \\\n",
    "    The color of the sky is red. \\\n",
    "    Q: What is the color of sky? A: Red. \\\n",
    "    Q: What color is the sky? A:\",\n",
    "    \"The president of the US is Biden. \\\n",
    "    Q: Who is the president of the US? A: Biden. \\\n",
    "    Biden is the leader of the United State. \\\n",
    "    Q: Who is the president of the US? A:\",\n",
    "]\n",
    "generation_prompts = [\n",
    "    \"Q: What color is the sky? A:\",\n",
    "    \"Q: Who is the president of the US? A:\",\n",
    "]\n",
    "\n",
    "max_length = 60 \n",
    "edited_batch = tokenizer(ike_generation_prompts, return_tensors='pt', padding=True, max_length=max_length)\n",
    "batch = tokenizer(generation_prompts, return_tensors='pt', padding=True, max_length=max_length)\n",
    "\n",
    "pre_edit_outputs = model.generate(\n",
    "    input_ids=batch['input_ids'].to(model.device),\n",
    "    attention_mask=batch['attention_mask'].to(model.device),\n",
    "    max_new_tokens=15\n",
    ")\n",
    "post_edit_outputs = edited_model.generate(\n",
    "    input_ids=edited_batch['input_ids'].to(edited_model.device),\n",
    "    attention_mask=edited_batch['attention_mask'].to(edited_model.device),\n",
    "    max_new_tokens=15\n",
    ")\n",
    "print('*'*100)\n",
    "\n",
    "generation_max_length = batch['input_ids'].shape[-1]\n",
    "edited_max_length = edited_batch['input_ids'].shape[-1]\n",
    "for i in range(len(ike_generation_prompts)):\n",
    "    print(f'Pre-Edit  Output: {tokenizer.decode( pre_edit_outputs[i][generation_max_length :], skip_special_tokens=True)}')\n",
    "    print(f'Post-Edit Output: {tokenizer.decode(post_edit_outputs[i][edited_max_length :], skip_special_tokens=True)}')\n",
    "    print('--'*50 )"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "EasyEdit",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.20"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
