{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "01f614c6-d39a-4439-8f96-9434c3ab7cb4",
   "metadata": {},
   "source": [
    "# Code Llama"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0319e974-0ad4-466a-982f-d7225a1a7e31",
   "metadata": {},
   "source": [
    "Here are the names of the Code Llama models provided by Together.ai:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2d652420-3d6d-4e6a-9d26-5a608352b3c7",
   "metadata": {},
   "source": [
    "- ```togethercomputer/CodeLlama-7b```\n",
    "- ```togethercomputer/CodeLlama-13b```\n",
    "- ```togethercomputer/CodeLlama-34b```\n",
    "- ```togethercomputer/CodeLlama-7b-Python```\n",
    "- ```togethercomputer/CodeLlama-13b-Python```\n",
    "- ```togethercomputer/CodeLlama-34b-Python```\n",
    "- ```togethercomputer/CodeLlama-7b-Instruct```\n",
    "- ```togethercomputer/CodeLlama-13b-Instruct```\n",
    "- ```togethercomputer/CodeLlama-34b-Instruct```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e238efc5-46c8-4fee-a00a-4cd5587c0343",
   "metadata": {},
   "source": [
    "### Import helper functions\n",
    "\n",
    "- You can examine the code_llama helper function using the menu above and selections File -> Open -> utils.py.\n",
    "- By default, the `code_llama` functions uses the CodeLlama-7b-Instruct model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7a089f79-c375-4149-bdca-e7859feb6f0c",
   "metadata": {
    "height": 29
   },
   "outputs": [],
   "source": [
    "from utils import llama, code_llama"
   ]
  },
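  {
   "cell_type": "markdown",
   "id": "b3f1a2c4-5d6e-4f70-8a91-b2c3d4e5f601",
   "metadata": {},
   "source": [
    "- To try one of the other Code Llama variants listed above, you may be able to pass its name to the helper. The `model` keyword used below is an assumption about how the helper in `utils.py` is written; open the file to confirm its actual signature."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b3f1a2c4-5d6e-4f70-8a91-b2c3d4e5f602",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical usage sketch: assumes code_llama accepts a `model` keyword\n",
    "# argument (check utils.py before uncommenting).\n",
    "# response = code_llama(\"Write a haiku about Python.\",\n",
    "#                       model=\"togethercomputer/CodeLlama-34b-Instruct\")\n",
    "# print(response)"
   ]
  },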
  {
   "cell_type": "markdown",
   "id": "d4677029-ff04-4643-9cc5-69d0651c0700",
   "metadata": {},
   "source": [
    "### Writing code to solve a math problem\n",
    "\n",
    "Lists of daily minimum and maximum temperatures:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f1bba890-b3e0-4272-acae-65d115959008",
   "metadata": {
    "height": 63
   },
   "outputs": [],
   "source": [
    "temp_min = [42, 52, 47, 47, 53, 48, 47, 53, 55, 56, 57, 50, 48, 45]\n",
    "temp_max = [55, 57, 59, 59, 58, 62, 65, 65, 64, 63, 60, 60, 62, 62]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "516a0cc6-f5b0-4601-b319-3a59ee5a26d2",
   "metadata": {},
   "source": [
    "- Ask the Llama 7B model to determine the day with the lowest temperature."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d3c3d723-f68c-4398-afb5-84b6bad187f5",
   "metadata": {
    "height": 165
   },
   "outputs": [],
   "source": [
    "prompt = f\"\"\"\n",
    "Below is the 14 day temperature forecast in fahrenheit degree:\n",
    "14-day low temperatures: {temp_min}\n",
    "14-day high temperatures: {temp_max}\n",
    "Which day has the lowest temperature?\n",
    "\"\"\"\n",
    "\n",
    "response = llama(prompt)\n",
    "print(response)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b7d3a6d0-6c55-4251-988f-39bfb732e280",
   "metadata": {},
   "source": [
    "- Ask Code Llama to write a python function to determine the minimum temperature."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "62dfe2e7-ef13-41db-8d84-f45230cb91e5",
   "metadata": {
    "height": 131
   },
   "outputs": [],
   "source": [
    "prompt_2 = f\"\"\"\n",
    "Write Python code that can calculate\n",
    "the minimum of the list temp_min\n",
    "and the maximum of the list temp_max\n",
    "\"\"\"\n",
    "response_2 = code_llama(prompt_2)\n",
    "print(response_2)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "41a25ce0-9512-4321-bd8f-71ebd37b4896",
   "metadata": {},
   "source": [
    "- Use the function on the temperature lists above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "683f5868-6faa-4006-b4c3-14bf2746f51e",
   "metadata": {
    "height": 46
   },
   "outputs": [],
   "source": [
    "def get_min_max(temp_min, temp_max):\n",
    "    return min(temp_min), max(temp_max)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c86d5a7d-be05-49f0-826c-6b7dce2ba50d",
   "metadata": {
    "height": 114
   },
   "outputs": [],
   "source": [
    "temp_min = [42, 52, 47, 47, 53, 48, 47, 53, 55, 56, 57, 50, 48, 45]\n",
    "temp_max = [55, 57, 59, 59, 58, 62, 65, 65, 64, 63, 60, 60, 62, 62]\n",
    "\n",
    "results = get_min_max(temp_min, temp_max)\n",
    "print(results)"
   ]
  },
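  {
   "cell_type": "markdown",
   "id": "c4d2b3e5-6e7f-4a81-9ba2-c3d4e5f6a703",
   "metadata": {},
   "source": [
    "To double-check the model's earlier answer about the coldest day, a couple of lines of plain Python suffice (days are numbered from 1 here):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c4d2b3e5-6e7f-4a81-9ba2-c3d4e5f6a704",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Find the (1-based) day with the lowest minimum temperature\n",
    "coldest_day = temp_min.index(min(temp_min)) + 1\n",
    "print(f\"Day {coldest_day} has the lowest temperature: {min(temp_min)}F\")"
   ]
  },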
  {
   "cell_type": "markdown",
   "id": "428c12b6-6054-47c2-ad93-9f4a6514fe32",
   "metadata": {},
   "source": [
    "### Code in-filling\n",
    "\n",
    "- Use Code Llama to fill in partially completed code.\n",
    "- Notice the `[INST]` and `[/INST]` tags that have been added to the prompt."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "98c3067b-5c23-4208-9888-34e3860d8bc7",
   "metadata": {
    "height": 335
   },
   "outputs": [],
   "source": [
    "prompt = \"\"\"\n",
    "def star_rating(n):\n",
    "'''\n",
    "  This function returns a rating given the number n,\n",
    "  where n is an integers from 1 to 5.\n",
    "'''\n",
    "\n",
    "    if n == 1:\n",
    "        rating=\"poor\"\n",
    "    <FILL>\n",
    "    elif n == 5:\n",
    "        rating=\"excellent\"\n",
    "\n",
    "    return rating\n",
    "\"\"\"\n",
    "\n",
    "response = code_llama(prompt,\n",
    "                      verbose=True)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "45fc6fd2-8f5f-4778-a0fd-edfc3602fa85",
   "metadata": {
    "height": 29
   },
   "outputs": [],
   "source": [
    "print(response)"
   ]
  },
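  {
   "cell_type": "markdown",
   "id": "d5e3c4f6-7f80-4b92-8cb3-d4e5f6a7b805",
   "metadata": {},
   "source": [
    "The in-filled text varies from run to run. One plausible completed function looks like the following; the branches for ratings 2 through 4 are our own illustration, not necessarily the model's output:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d5e3c4f6-7f80-4b92-8cb3-d4e5f6a7b806",
   "metadata": {},
   "outputs": [],
   "source": [
    "# One plausible completion of the <FILL> region (illustrative only)\n",
    "def star_rating(n):\n",
    "    '''\n",
    "    This function returns a rating given the number n,\n",
    "    where n is an integer from 1 to 5.\n",
    "    '''\n",
    "    if n == 1:\n",
    "        rating = \"poor\"\n",
    "    elif n == 2:\n",
    "        rating = \"below average\"\n",
    "    elif n == 3:\n",
    "        rating = \"average\"\n",
    "    elif n == 4:\n",
    "        rating = \"good\"\n",
    "    elif n == 5:\n",
    "        rating = \"excellent\"\n",
    "    return rating\n",
    "\n",
    "print(star_rating(3))"
   ]
  },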
  {
   "cell_type": "markdown",
   "id": "58aea160-3ce3-4df7-b87f-ce5af449fe18",
   "metadata": {},
   "source": [
    "### Write code to calculate the nth Fibonacci number\n",
    "\n",
    "Here is the Fibonacci sequence:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "583c6d54-dc7e-434d-986a-a0b385288488",
   "metadata": {
    "height": 63
   },
   "outputs": [],
   "source": [
    "# 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610...\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5bd158fc-0285-4b09-8b52-71a58da8c2ec",
   "metadata": {},
   "source": [
    "Each number (after the starting 0 and 1) is equal to the sum of the two numbers that precede it."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "79251b2d-2bdd-4b55-8bc7-e565dbea1de3",
   "metadata": {},
   "source": [
    "#### Use Code Llama to write a Fibonacci number\n",
    "- Write a natural language prompt that asks the model to write code."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1144cf40-add0-4025-a116-c16b332238cc",
   "metadata": {
    "height": 114
   },
   "outputs": [],
   "source": [
    "prompt = \"\"\"\n",
    "Provide a function that calculates the n-th fibonacci number.\n",
    "\"\"\"\n",
    "\n",
    "response = code_llama(prompt, verbose=True)\n",
    "print(response)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4246ec10-4750-4855-a745-ff4e8e1ddbbb",
   "metadata": {},
   "source": [
    "### Make the code more efficient\n",
    "\n",
    "- Ask Code Llama to critique its initial response."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d448ae23-d997-450c-93ec-874ee3951727",
   "metadata": {
    "height": 267
   },
   "outputs": [],
   "source": [
    "code = \"\"\"\n",
    "def fibonacci(n):\n",
    "    if n <= 1:\n",
    "        return n\n",
    "    else:\n",
    "        return fibonacci(n-1) + fibonacci(n-2)\n",
    "\"\"\"\n",
    "\n",
    "prompt_1 = f\"\"\"\n",
    "For the following code: {code}\n",
    "Is this implementation efficient?\n",
    "Please explain.\n",
    "\"\"\"\n",
    "response_1 = code_llama(prompt_1, verbose=True)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "68702444-3baf-45a9-8d4e-0140a825f102",
   "metadata": {
    "height": 29
   },
   "outputs": [],
   "source": [
    "print(response_1)"
   ]
  },
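  {
   "cell_type": "markdown",
   "id": "e6f4d5a7-8091-4ca3-9dc4-e5f6a7b8c907",
   "metadata": {},
   "source": [
    "A standard fix the model often suggests is memoization: cache each result so the recursion computes every value only once. A minimal sketch using `functools.lru_cache` (our own example, not necessarily the model's response):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e6f4d5a7-8091-4ca3-9dc4-e5f6a7b8c908",
   "metadata": {},
   "outputs": [],
   "source": [
    "from functools import lru_cache\n",
    "\n",
    "@lru_cache(maxsize=None)\n",
    "def fibonacci_memo(n):\n",
    "    # Same recursion as above, but each n is computed only once\n",
    "    if n <= 1:\n",
    "        return n\n",
    "    return fibonacci_memo(n - 1) + fibonacci_memo(n - 2)\n",
    "\n",
    "print(fibonacci_memo(40))"
   ]
  },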
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7d0f6fd5-94cb-4500-a532-df94cf8c5ce6",
   "metadata": {
    "height": 29
   },
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "id": "ce6f037f-78ac-4c5f-b492-234e195be397",
   "metadata": {},
   "source": [
    "### Compare the original and more efficient implementations"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f6908e90-76c1-465a-af8c-013a5643b825",
   "metadata": {
    "height": 97
   },
   "outputs": [],
   "source": [
    "def fibonacci(n):\n",
    "    if n <= 1:\n",
    "        return n\n",
    "    else:\n",
    "        return fibonacci(n-1) + fibonacci(n-2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1b38980f-fa28-494a-bd9a-b3852543fbf2",
   "metadata": {
    "height": 29
   },
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9e9733fc-37e3-44f8-aa4a-dfeb2b8a3deb",
   "metadata": {
    "height": 114
   },
   "outputs": [],
   "source": [
    "def fibonacci_fast(n):\n",
    "    a, b = 0, 1\n",
    "    for i in range(n):\n",
    "        a, b = b, a + b\n",
    "    return a\n"
   ]
  },
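  {
   "cell_type": "markdown",
   "id": "f7a5e6b8-91a2-4db4-8ed5-f6a7b8c9d009",
   "metadata": {},
   "source": [
    "A quick sanity check that the two implementations agree (keeping `n` small so the recursive version stays cheap):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f7a5e6b8-91a2-4db4-8ed5-f6a7b8c9d010",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Compare the two implementations where recursion is still cheap\n",
    "for n in range(10):\n",
    "    assert fibonacci(n) == fibonacci_fast(n)\n",
    "print(\"Both implementations agree for n = 0..9\")"
   ]
  },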
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5f121a53-b920-4428-ae3b-eb065e84d389",
   "metadata": {
    "height": 29
   },
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "id": "7c32169e-55ed-41cc-9c84-4f3bd8bb221a",
   "metadata": {},
   "source": [
    "#### Compare the runtimes of the two functions\n",
    "- Start by asking Code Llama to write Python code that calculates how long a piece of code takes to execute:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cbfdfbcf-2d17-4c88-9458-ce5485a60893",
   "metadata": {
    "height": 131
   },
   "outputs": [],
   "source": [
    "prompt = f\"\"\"\n",
    "Provide sample code that calculates the runtime \\\n",
    "of a Python function call.\n",
    "\"\"\"\n",
    "\n",
    "response = code_llama(prompt, verbose=True)\n",
    "print (response)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "31ae5c51-25cb-458f-8557-89d9e036020d",
   "metadata": {},
   "source": [
    "Let's use the first suggestion from Code Llama to calcuate the run time.\n",
    "\n",
    "    Here is an example of how you can calculate the runtime of a Python function call using the `time` module:\n",
    "    ```\n",
    "    import time\n",
    "    \n",
    "    def my_function():\n",
    "        # do something\n",
    "        pass\n",
    "    \n",
    "    start_time = time.time()\n",
    "    my_function()\n",
    "    end_time = time.time()\n",
    "    \n",
    "    print(\"Runtime:\", end_time - start_time)\n",
    "    ```\n"
   ]
  },
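  {
   "cell_type": "markdown",
   "id": "a8b6f7c9-a2b3-4ec5-9fe6-a7b8c9d0e111",
   "metadata": {},
   "source": [
    "As an aside, for short-running functions a single `time.time()` delta is noisy; the standard-library `timeit` module averages over many runs. A minimal sketch:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a8b6f7c9-a2b3-4ec5-9fe6-a7b8c9d0e112",
   "metadata": {},
   "outputs": [],
   "source": [
    "import timeit\n",
    "\n",
    "# Average the runtime of fibonacci_fast(40) over 1000 runs\n",
    "total = timeit.timeit(lambda: fibonacci_fast(40), number=1000)\n",
    "print(f\"average runtime: {total / 1000:.6f} seconds\")"
   ]
  },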
  {
   "cell_type": "markdown",
   "id": "603a3440-6e64-4da2-9bcd-2acd11160a97",
   "metadata": {},
   "source": [
    "#### Run the original Fibonacci code\n",
    "- This will take approximately 45 seconds.\n",
    "- The video has been edited so you don't have to wait for the code to exectute."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8d9841ef-a585-46fd-b910-ac68a7f0854a",
   "metadata": {
    "height": 131
   },
   "outputs": [],
   "source": [
    "import time\n",
    "n=40\n",
    "start_time = time.time()\n",
    "fibonacci(n) # note, we recommend keeping this number <=40\n",
    "end_time = time.time()\n",
    "print(f\"recursive fibonacci({n}) \")\n",
    "print(f\"runtime in seconds: {end_time-start_time}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bdd0ad04-be67-4edb-b205-0bdae38cd401",
   "metadata": {
    "height": 29
   },
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "id": "e25e4e90-c58f-46d0-802e-e0fb605a8e3d",
   "metadata": {},
   "source": [
    "#### Run the efficient implementation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6b16fd73-0b07-48f0-a243-2295bc14efb2",
   "metadata": {
    "height": 131
   },
   "outputs": [],
   "source": [
    "import time\n",
    "n=40\n",
    "start_time = time.time()\n",
    "fibonacci_fast(n) # note, we recommend keeping this number <=40\n",
    "end_time = time.time()\n",
    "print(f\"non-recursive fibonacci({n}) \")\n",
    "print(f\"runtime in seconds: {end_time-start_time}\")"
   ]
  },
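  {
   "cell_type": "markdown",
   "id": "b9c7a8d0-b3c4-4fd6-8af7-b8c9d0e1f213",
   "metadata": {},
   "source": [
    "You can also time both calls in a single cell; we use a smaller `n` here so the recursive version finishes quickly:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b9c7a8d0-b3c4-4fd6-8af7-b8c9d0e1f214",
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "\n",
    "def time_call(fn, arg):\n",
    "    # Return the wall-clock runtime of fn(arg) in seconds\n",
    "    start = time.time()\n",
    "    fn(arg)\n",
    "    return time.time() - start\n",
    "\n",
    "n = 30  # small enough for the recursive version to finish quickly\n",
    "print(f\"recursive:  {time_call(fibonacci, n):.4f} seconds\")\n",
    "print(f\"iterative:  {time_call(fibonacci_fast, n):.6f} seconds\")"
   ]
  },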
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "07789995-feb7-454d-a279-f2c770496825",
   "metadata": {
    "height": 29
   },
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "id": "9865891e-3dc6-4a6c-b0f9-fb7cb4414c95",
   "metadata": {},
   "source": [
    "### Code Llama can take in longer text\n",
    "\n",
    "- Code Llama models can handle much larger input text than the Llama Chat models - more than 20,000 characters.\n",
    "- The size of the input text is known as the **context window**."
   ]
  },
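  {
   "cell_type": "markdown",
   "id": "c0d8b9e1-c4d5-4ae7-9ba8-c9d0e1f2a315",
   "metadata": {},
   "source": [
    "Before sending a long document, you can check its size in characters. A rough rule of thumb is about 4 characters per token, though the exact count depends on the model's tokenizer:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c0d8b9e1-c4d5-4ae7-9ba8-c9d0e1f2a316",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Rough size check; ~4 characters per token is a rule of thumb,\n",
    "# not an exact tokenizer count\n",
    "with open(\"TheVelveteenRabbit.txt\", 'r', encoding='utf-8') as file:\n",
    "    text = file.read()\n",
    "print(f\"{len(text)} characters, roughly {len(text) // 4} tokens\")"
   ]
  },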
  {
   "cell_type": "markdown",
   "id": "5ebcb748-92a0-43a6-b67b-9310d1481e22",
   "metadata": {},
   "source": [
    "#### Response from Llama 2 7B Chat model\n",
    "- The following code will return an error because the sum of the input and output tokens is larger than the limit of the model.\n",
    "- You can revisit L2 for more details."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2a856c3b-9a51-4e1b-9c47-ec84d78c7b1e",
   "metadata": {
    "height": 216
   },
   "outputs": [],
   "source": [
    "with open(\"TheVelveteenRabbit.txt\", 'r', encoding='utf-8') as file:\n",
    "    text = file.read()\n",
    "\n",
    "prompt=f\"\"\"\n",
    "Give me a summary of the following text in 50 words:\\n\\n \n",
    "{text}\n",
    "\"\"\"\n",
    "\n",
    "# Ask the 7B model to respond\n",
    "response = llama(prompt)\n",
    "print(response)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3fe9d5ce-9daa-4f32-8030-f587a8545d51",
   "metadata": {
    "height": 29
   },
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "id": "8627474c-1566-4573-89bf-d8722f26e0ba",
   "metadata": {},
   "source": [
    "#### Response from Code Llama 7B Instruct model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "19c45fb4-8f87-44df-9035-92540f301164",
   "metadata": {
    "height": 216
   },
   "outputs": [],
   "source": [
    "from utils import llama\n",
    "with open(\"TheVelveteenRabbit.txt\", 'r', encoding='utf-8') as file:\n",
    "    text = file.read()\n",
    "\n",
    "prompt=f\"\"\"\n",
    "Give me a summary of the following text in 50 words:\\n\\n \n",
    "{text}\n",
    "\"\"\"\n",
    "response = code_llama(prompt)\n",
    "print(response)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bde454d9-378a-4822-9f70-a36a4b3994c6",
   "metadata": {
    "height": 29
   },
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "id": "5e80ce43-2dd9-4906-a842-f710926f59a5",
   "metadata": {},
   "source": [
    "### Thoughts on Code Llama's summarization performance\n",
    "\n",
    "Note that while the Code Llama model could handle the longer text, the output here isn't that great - the response is very repetitive.\n",
    "- Code Llama's primary skill is writing code.\n",
    "- Experiment to see if you can prompt the Code Llama model to improve its output.\n",
    "- You may need to trade off performance and input text size depending on your task.\n",
    "- You could ask Llama 2 70B chat to help you evaluate how well the Code Llama model is doing!"
   ]
  },
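  {
   "cell_type": "markdown",
   "id": "d1e9c0f2-d5e6-4bf8-8cb9-d0e1f2a3b417",
   "metadata": {},
   "source": [
    "For example, one experiment is to add explicit instructions against repetition. The prompt wording below is our own suggestion, not a guaranteed fix:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d1e9c0f2-d5e6-4bf8-8cb9-d0e1f2a3b418",
   "metadata": {},
   "outputs": [],
   "source": [
    "prompt = f\"\"\"\n",
    "Summarize the following text in about 50 words.\n",
    "Do not repeat sentences or phrases.\n",
    "\n",
    "{text}\n",
    "\"\"\"\n",
    "\n",
    "response = code_llama(prompt)\n",
    "print(response)"
   ]
  },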
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fd75b045-d5b6-4148-afbf-c0335dcd95d1",
   "metadata": {
    "height": 29
   },
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.14"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
