{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Fine-tune FLAN-T5 for chat & dialogue summarization\n",
    "\n",
    "In this blog, you will learn how to fine-tune [google/flan-t5-xl](https://huggingface.co/google/flan-t5-xl) for chat & dialogue summarization using Hugging Face Transformers. If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1000 additional tasks covering also more languages. \n",
    "\n",
    "In this example we will use the [samsum](https://huggingface.co/datasets/samsum) dataset a collection of about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English.\n",
    "\n",
    "You will learn how to:\n",
    "\n",
    "1. [Setup Development Environment](#1-setup-development-environment)\n",
    "2. [Load and prepare samsum dataset](#2-load-and-prepare-samsum-dataset)\n",
    "3. [Fine-tune and evaluate FLAN-T5](#3-fine-tune-and-evaluate-flan-t5)\n",
    "4. [Run Inference and summarize ChatGPT dialogues](#4-run-inference-and-summarize-chatgpt-dialogues)\n",
    "\n",
    "Before we can start, make sure you have a [Hugging Face Account](https://huggingface.co/join) to save artifacts and experiments. \n",
    "\n",
    "## Quick intro: FLAN-T5, just a better T5\n",
    "\n",
    "FLAN-T5 released with the [Scaling Instruction-Finetuned Language Models](https://arxiv.org/pdf/2210.11416.pdf) paper is an enhanced version of T5 that has been finetuned in a mixture of tasks. The paper explores instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. The paper discovers that overall instruction finetuning is a general method for improving the performance and usability of pretrained language models. \n",
    "\n",
    "![flan-t5](../assets/flan-t5.png)\n",
    "\n",
    "* Paper: https://arxiv.org/abs/2210.11416\n",
    "* Official repo: https://github.com/google-research/t5x\n",
    "\n",
    "--- \n",
    "\n",
    "Now we know what FLAN-T5 is, let's get started. 🚀\n",
    "\n",
    "_Note: This tutorial was created and run on a g4dn.xlarge AWS EC2 Instance including a NVIDIA T4._"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. Setup Development Environment\n",
    "\n",
    "Our first step is to install the Hugging Face Libraries, including transformers and datasets. Running the following cell will install all the required packages. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# python\n",
    "!pip install pytesseract transformers datasets rouge-score nltk tensorboard py7zr --upgrade"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# install git-fls for pushing model and logs to the hugging face hub\n",
    "!sudo apt-get install git-lfs --yes"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This example will use the [Hugging Face Hub](https://huggingface.co/models) as a remote model versioning service. To be able to push our model to the Hub, you need to register on the [Hugging Face](https://huggingface.co/join). \n",
    "If you already have an account, you can skip this step. \n",
    "After you have an account, we will use the `notebook_login` util from the `huggingface_hub` package to log into our account and store our token (access key) on the disk. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from huggingface_hub import notebook_login\n",
    "\n",
    "notebook_login()"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. Load and prepare samsum dataset\n",
    "\n",
    "we will use the [samsum](https://huggingface.co/datasets/samsum) dataset a collection of about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English.\n",
    "\n",
    "```json\n",
    "{\n",
    "  \"id\": \"13818513\",\n",
    "  \"summary\": \"Amanda baked cookies and will bring Jerry some tomorrow.\",\n",
    "  \"dialogue\": \"Amanda: I baked cookies. Do you want some?\\r\\nJerry: Sure!\\r\\nAmanda: I'll bring you tomorrow :-)\"\n",
    "}\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset_id = \"samsum\""
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To load the `samsum` dataset, we use the `load_dataset()` method from the 🤗 Datasets library.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from datasets import load_dataset\n",
    "\n",
    "# Load dataset from the hub\n",
    "dataset = load_dataset(dataset_id)\n",
    "\n",
    "print(f\"Train dataset size: {len(dataset['train'])}\")\n",
    "print(f\"Test dataset size: {len(dataset['test'])}\")\n",
    "\n",
    "# Train dataset size: 14732\n",
    "# Test dataset size: 819"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Lets checkout an example of the dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from random import randrange        \n",
    "\n",
    "\n",
    "sample = dataset['train'][randrange(len(dataset[\"train\"]))]\n",
    "print(f\"dialogue: \\n{sample['dialogue']}\\n---------------\")\n",
    "print(f\"summary: \\n{sample['summary']}\\n---------------\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To train our model we need to convert our inputs (text) to token IDs. This is done by a 🤗 Transformers Tokenizer. If you are not sure what this means check out [chapter 6](https://huggingface.co/course/chapter6/1?fw=tf) of the Hugging Face Course."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "from transformers import AutoTokenizer, AutoModelForSeq2SeqLM\n",
    "\n",
    "model_id=\"google/flan-t5-base\"\n",
    "\n",
    "# Load tokenizer of FLAN-t5-base\n",
    "tokenizer = AutoTokenizer.from_pretrained(model_id)\n"
   ]
  },
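  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick, purely illustrative sanity check, we can tokenize a short text and decode the token IDs back (the sentence below is the summary from the dataset example shown earlier):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative: convert a short text to token IDs and back\n",
    "example = \"Amanda baked cookies and will bring Jerry some tomorrow.\"\n",
    "token_ids = tokenizer(example)[\"input_ids\"]\n",
    "print(token_ids)\n",
    "print(tokenizer.decode(token_ids, skip_special_tokens=True))"
   ]
  },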
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "before we can start training we need to preprocess our data. Abstractive Summarization is a text2text-generation task. This means our model will take a text as input and generate a summary as output. For this we want to understand how long our input and output will be to be able to efficiently batch our data. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from datasets import concatenate_datasets\n",
    "\n",
    "# The maximum total input sequence length after tokenization. \n",
    "# Sequences longer than this will be truncated, sequences shorter will be padded.\n",
    "tokenized_inputs = concatenate_datasets([dataset[\"train\"], dataset[\"test\"]]).map(lambda x: tokenizer(x[\"dialogue\"], truncation=True), batched=True, remove_columns=[\"dialogue\", \"summary\"])\n",
    "max_source_length = max([len(x) for x in tokenized_inputs[\"input_ids\"]])\n",
    "print(f\"Max source length: {max_source_length}\")\n",
    "\n",
    "# The maximum total sequence length for target text after tokenization. \n",
    "# Sequences longer than this will be truncated, sequences shorter will be padded.\"\n",
    "tokenized_targets = concatenate_datasets([dataset[\"train\"], dataset[\"test\"]]).map(lambda x: tokenizer(x[\"summary\"], truncation=True), batched=True, remove_columns=[\"dialogue\", \"summary\"])\n",
    "max_target_length = max([len(x) for x in tokenized_targets[\"input_ids\"]])\n",
    "print(f\"Max target length: {max_target_length}\")"
   ]
  },
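  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Taking the absolute maximum means every padded example is as long as the single longest one. As a sketch of a common alternative (not required for this tutorial), you can pick a high percentile of the length distribution instead, trading a little truncation for shorter sequences:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Illustrative alternative: percentile-based lengths are more robust to outliers\n",
    "source_lengths = [len(x) for x in tokenized_inputs[\"input_ids\"]]\n",
    "target_lengths = [len(x) for x in tokenized_targets[\"input_ids\"]]\n",
    "print(f\"85th percentile source length: {int(np.percentile(source_lengths, 85))}\")\n",
    "print(f\"90th percentile target length: {int(np.percentile(target_lengths, 90))}\")"
   ]
  },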
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def preprocess_function(sample,padding=\"max_length\"):\n",
    "    # add prefix to the input for t5\n",
    "    inputs = [\"summarize: \" + item for item in sample[\"dialogue\"]]\n",
    "\n",
    "    # tokenize inputs\n",
    "    model_inputs = tokenizer(inputs, max_length=max_source_length, padding=padding, truncation=True)\n",
    "\n",
    "    # Tokenize targets with the `text_target` keyword argument\n",
    "    labels = tokenizer(text_target=sample[\"summary\"], max_length=max_target_length, padding=padding, truncation=True)\n",
    "\n",
    "    # If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore\n",
    "    # padding in the loss.\n",
    "    if padding == \"max_length\":\n",
    "        labels[\"input_ids\"] = [\n",
    "            [(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels[\"input_ids\"]\n",
    "        ]\n",
    "\n",
    "    model_inputs[\"labels\"] = labels[\"input_ids\"]\n",
    "    return model_inputs\n",
    "\n",
    "tokenized_dataset = dataset.map(preprocess_function, batched=True, remove_columns=[\"dialogue\", \"summary\", \"id\"])\n",
    "print(f\"Keys of tokenized dataset: {list(tokenized_dataset['train'].features)}\")"
   ]
  },
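  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To verify the preprocessing did what we expect, we can decode the first training example back to text; it should start with the `summarize: ` prefix (purely an illustrative check):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative: decode one preprocessed example to verify the \"summarize: \" prefix\n",
    "print(tokenizer.decode(tokenized_dataset[\"train\"][0][\"input_ids\"], skip_special_tokens=True)[:200])"
   ]
  },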
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. Fine-tune and evaluate FLAN-T5\n",
    "\n",
    "After we have processed our dataset, we can start training our model. Therefore we first need to load our [FLAN-T5](https://huggingface.co/models?search=flan-t5) from the Hugging Face Hub. In the example we are using a instance with a NVIDIA V100 meaning that we will fine-tune the `base` version of the model. \n",
    "_I plan to do a follow-up post on how to fine-tune the `xxl` version of the model using Deepspeed._\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "from transformers import AutoModelForSeq2SeqLM\n",
    "\n",
    "# huggingface hub model id\n",
    "model_id=\"google/flan-t5-base\"\n",
    "\n",
    "# load model from the hub\n",
    "model = AutoModelForSeq2SeqLM.from_pretrained(model_id)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We want to evaluate our model during training. The `Trainer` supports evaluation during training by providing a `compute_metrics`.  \n",
    "The most commonly used metrics to evaluate summarization task is [rogue_score](https://en.wikipedia.org/wiki/ROUGE_(metric)) short for Recall-Oriented Understudy for Gisting Evaluation). This metric does not behave like the standard accuracy: it will compare a generated summary against a set of reference summaries\n",
    "\n",
    "We are going to use `evaluate` library to evaluate the `rogue` score."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import evaluate\n",
    "import nltk\n",
    "import numpy as np\n",
    "from nltk.tokenize import sent_tokenize\n",
    "nltk.download(\"punkt\")\n",
    "\n",
    "# Metric\n",
    "metric = evaluate.load(\"rouge\")\n",
    "\n",
    "# helper function to postprocess text\n",
    "def postprocess_text(preds, labels):\n",
    "    preds = [pred.strip() for pred in preds]\n",
    "    labels = [label.strip() for label in labels]\n",
    "\n",
    "    # rougeLSum expects newline after each sentence\n",
    "    preds = [\"\\n\".join(sent_tokenize(pred)) for pred in preds]\n",
    "    labels = [\"\\n\".join(sent_tokenize(label)) for label in labels]\n",
    "\n",
    "    return preds, labels\n",
    "\n",
    "def compute_metrics(eval_preds):\n",
    "    preds, labels = eval_preds\n",
    "    if isinstance(preds, tuple):\n",
    "        preds = preds[0]\n",
    "    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)\n",
    "    # Replace -100 in the labels as we can't decode them.\n",
    "    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)\n",
    "    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)\n",
    "\n",
    "    # Some simple post-processing\n",
    "    decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)\n",
    "\n",
    "    result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)\n",
    "    result = {k: round(v * 100, 4) for k, v in result.items()}\n",
    "    prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]\n",
    "    result[\"gen_len\"] = np.mean(prediction_lens)\n",
    "    return result"
   ]
  },
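  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To get a feeling for what ROUGE measures, here is a minimal, purely illustrative example on a toy prediction/reference pair (`rouge1`/`rouge2` measure unigram/bigram overlap, `rougeL` the longest common subsequence):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative: compute ROUGE on a single toy prediction/reference pair\n",
    "toy_scores = metric.compute(\n",
    "    predictions=[\"Amanda baked cookies and will bring Jerry some tomorrow.\"],\n",
    "    references=[\"Amanda baked cookies and will bring some to Jerry tomorrow.\"],\n",
    "    use_stemmer=True,\n",
    ")\n",
    "print(toy_scores)"
   ]
  },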
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before we can start training is to create a `DataCollator` that will take care of padding our inputs and labels. We will use the `DataCollatorForSeq2Seq` from the 🤗 Transformers library. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "from transformers import DataCollatorForSeq2Seq\n",
    "\n",
    "# we want to ignore tokenizer pad token in the loss\n",
    "label_pad_token_id = -100\n",
    "# Data collator\n",
    "data_collator = DataCollatorForSeq2Seq(\n",
    "    tokenizer,\n",
    "    model=model,\n",
    "    label_pad_token_id=label_pad_token_id,\n",
    "    pad_to_multiple_of=8\n",
    ")\n"
   ]
  },
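  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick, illustrative check, we can collate two preprocessed examples into a batch. Since our preprocessing already padded everything to `max_length`, the collator here mainly stacks the features into tensors; with dynamic padding it would also pad each batch to its longest example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative: collate two preprocessed examples into a single batch of tensors\n",
    "features = [tokenized_dataset[\"train\"][i] for i in range(2)]\n",
    "batch = data_collator(features)\n",
    "print({k: v.shape for k, v in batch.items()})"
   ]
  },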
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The last step is to define the hyperparameters (`TrainingArguments`) we want to use for our training. We are leveraging the [Hugging Face Hub](https://huggingface.co/models) integration of the `Trainer` to automatically push our checkpoints, logs and metrics during training into a repository."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "from huggingface_hub import HfFolder\n",
    "from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments\n",
    "\n",
    "# Hugging Face repository id\n",
    "repository_id = f\"{model_id.split('/')[1]}-{dataset_id}\"\n",
    "\n",
    "# Define training args\n",
    "training_args = Seq2SeqTrainingArguments(\n",
    "    output_dir=repository_id,\n",
    "    per_device_train_batch_size=8,\n",
    "    per_device_eval_batch_size=8,\n",
    "    predict_with_generate=True,\n",
    "    fp16=False, # Overflows with fp16\n",
    "    learning_rate=5e-5,\n",
    "    num_train_epochs=5,\n",
    "    # logging & evaluation strategies\n",
    "    logging_dir=f\"{repository_id}/logs\",\n",
    "    logging_strategy=\"steps\",\n",
    "    logging_steps=500,\n",
    "    evaluation_strategy=\"epoch\",\n",
    "    save_strategy=\"epoch\",\n",
    "    save_total_limit=2,\n",
    "    load_best_model_at_end=True,\n",
    "    # metric_for_best_model=\"overall_f1\",\n",
    "    # push to hub parameters\n",
    "    report_to=\"tensorboard\",\n",
    "    push_to_hub=False,\n",
    "    hub_strategy=\"every_save\",\n",
    "    hub_model_id=repository_id,\n",
    "    hub_token=HfFolder.get_token(),\n",
    ")\n",
    "\n",
    "# Create Trainer instance\n",
    "trainer = Seq2SeqTrainer(\n",
    "    model=model,\n",
    "    args=training_args,\n",
    "    data_collator=data_collator,\n",
    "    train_dataset=tokenized_dataset[\"train\"],\n",
    "    eval_dataset=tokenized_dataset[\"test\"],\n",
    "    compute_metrics=compute_metrics,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can start our training by using the `train` method of the `Trainer`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Start training\n",
    "trainer.train()"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "![flan-t5-tensorboard](../assets/flan-t5-tensorboard.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Nice, we have trained our model. 🎉 Lets run evaluate the best model again on the test set.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "***** Running Evaluation *****\n",
      "  Num examples = 819\n",
      "  Batch size = 8\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "\n",
       "    <div>\n",
       "      \n",
       "      <progress value='103' max='103' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
       "      [103/103 01:46]\n",
       "    </div>\n",
       "    "
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "{'eval_loss': 1.3715944290161133,\n",
       " 'eval_rouge1': 47.2358,\n",
       " 'eval_rouge2': 23.5135,\n",
       " 'eval_rougeL': 39.6266,\n",
       " 'eval_rougeLsum': 43.3458,\n",
       " 'eval_gen_len': 17.39072039072039,\n",
       " 'eval_runtime': 108.99,\n",
       " 'eval_samples_per_second': 7.514,\n",
       " 'eval_steps_per_second': 0.945,\n",
       " 'epoch': 5.0}"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "trainer.evaluate()"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The best score we achieved is an `rouge1` score of `47.23`. \n",
    "\n",
    "Lets save our results and tokenizer to the Hugging Face Hub and create a model card. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Save our tokenizer and create model card\n",
    "tokenizer.save_pretrained(repository_id)\n",
    "trainer.create_model_card()\n",
    "# Push the results to the hub\n",
    "trainer.push_to_hub()"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4. Run Inference\n",
    "\n",
    "Now we have a trained model, we can use it to run inference. We will use the `pipeline` API from transformers and a `test` example from our dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "dialogue: \n",
      "Abby: Have you talked to Miro?\n",
      "Dylan: No, not really, I've never had an opportunity\n",
      "Brandon: me neither, but he seems a nice guy\n",
      "Brenda: you met him yesterday at the party?\n",
      "Abby: yes, he's so interesting\n",
      "Abby: told me the story of his father coming from Albania to the US in the early 1990s\n",
      "Dylan: really, I had no idea he is Albanian\n",
      "Abby: he is, he speaks only Albanian with his parents\n",
      "Dylan: fascinating, where does he come from in Albania?\n",
      "Abby: from the seacoast\n",
      "Abby: Duress I believe, he told me they are not from Tirana\n",
      "Dylan: what else did he tell you?\n",
      "Abby: That they left kind of illegally\n",
      "Abby: it was a big mess and extreme poverty everywhere\n",
      "Abby: then suddenly the border was open and they just left \n",
      "Abby: people were boarding available ships, whatever, just to get out of there\n",
      "Abby: he showed me some pictures, like <file_photo>\n",
      "Dylan: insane\n",
      "Abby: yes, and his father was among the people\n",
      "Dylan: scary but interesting\n",
      "Abby: very!\n",
      "---------------\n",
      "flan-t5-base summary:\n",
      "Abby met Miro yesterday at the party. Miro's father came from Albania to the US in the early 1990s. He speaks Albanian with his parents. The border was open and people were boarding ships to get out of there.\n"
     ]
    }
   ],
   "source": [
    "from transformers import pipeline\n",
    "from random import randrange        \n",
    "\n",
    "# load model and tokenizer from huggingface hub with pipeline\n",
    "summarizer = pipeline(\"summarization\", model=\"philschmid/flan-t5-base-samsum\", device=0)\n",
    "\n",
    "# select a random test sample\n",
    "sample = dataset['test'][randrange(len(dataset[\"test\"]))]\n",
    "print(f\"dialogue: \\n{sample['dialogue']}\\n---------------\")\n",
    "\n",
    "# summarize dialogue\n",
    "res = summarizer(sample[\"dialogue\"])\n",
    "\n",
    "print(f\"flan-t5-base summary:\\n{res[0]['summary_text']}\")"
   ]
  },
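  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can also summarize a dialogue the model has never seen. The conversation below is made up for illustration; any chat-style text in the same format should work:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative: summarize a made-up dialogue with the fine-tuned model\n",
    "custom_dialogue = \"\"\"Lena: did you finish the report?\n",
    "Tom: almost, I still need the numbers from marketing\n",
    "Lena: I can send them over in an hour\n",
    "Tom: perfect, then I'll submit it tonight\"\"\"\n",
    "\n",
    "res = summarizer(custom_dialogue)\n",
    "print(f\"custom summary:\\n{res[0]['summary_text']}\")"
   ]
  },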
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "pytorch",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.15 | packaged by conda-forge | (main, Nov 22 2022, 15:55:03) \n[GCC 10.4.0]"
  },
  "orig_nbformat": 4,
  "vscode": {
   "interpreter": {
    "hash": "2d58e898dde0263bc564c6968b04150abacfd33eed9b19aaa8e45c040360e146"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
