{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Build text generation apps\n",
    "\n",
     "You've seen throughout this curriculum that there are core concepts like prompts, and even a whole discipline called \"prompt engineering\". Many tools you can interact with, like ChatGPT, Office 365, Microsoft Power Platform and more, let you use prompts to accomplish tasks.\n",
    "\n",
     "To add such an experience to an app of your own, you need to understand concepts like prompts and completions, and choose a library to work with. That's exactly what you'll learn in this chapter.\n",
    "\n",
    "## Introduction\n",
    "\n",
    "In this chapter, you will:\n",
    "\n",
    "- Learn about the openai library and its core concepts.\n",
    "- Build a text generation app using openai.\n",
    "- Understand how to use concepts like prompt, temperature, and tokens to build a text generation app.\n",
    "\n",
    "## Learning goals\n",
    "\n",
    "At the end of this lesson, you'll be able to:\n",
    "\n",
    "- Explain what a text generation app is.\n",
    "- Build a text generation app using openai.\n",
     "- Configure your app to use more or fewer tokens and change the temperature, for varied output.\n",
    "\n",
    "## What is a text generation app?\n",
    "\n",
    "Normally when you build an app it has some kind of interface like the following:\n",
    "\n",
    "- Command-based. Console apps are typical apps where you type a command and it carries out a task. For example, `git` is a command-based app.\n",
    "- User interface (UI). Some apps have graphical user interfaces (GUIs) where you click buttons, input text, select options and more.\n",
    "\n",
    "### Console and UI apps are limited\n",
    "\n",
     "Both kinds of interface have limitations. Consider a command-based app where you type a command:\n",
    "\n",
    "- **It's limited**. You can't just type any command, only the ones that the app supports.\n",
    "- **Language specific**. Some apps support many languages, but by default the app is built for a specific language, even if you can add more language support. \n",
    "\n",
    "### Benefits of text generation apps\n",
    "\n",
    "So how is a text generation app different?\n",
    "\n",
     "In a text generation app, you have more flexibility: you're not limited to a set of commands or a specific input language. Instead, you can use natural language to interact with the app. Another benefit is that you're interacting with a model that has been trained on a vast corpus of information, whereas a traditional app might be limited to what's in a database. \n",
    "\n",
    "### What can I build with a text generation app?\n",
    "\n",
    "There are many things you can build. For example:\n",
    "\n",
     "- **A chatbot**. A chatbot answering questions about topics like your company and its products could be a good match.\n",
    "- **Helper**. LLMs are great at things like summarizing text, getting insights from text, producing text like resumes and more.\n",
    "- **Code assistant**. Depending on the language model you use, you can build a code assistant that helps you write code. For example, you can use a product like GitHub Copilot as well as ChatGPT to help you write code.\n",
    "\n",
    "## How can I get started?\n",
    "\n",
     "Well, you need to find a way to integrate with an LLM, which usually means one of two approaches:\n",
     "\n",
     "- Use an API. Here you construct web requests with your prompt and get generated text back.\n",
     "- Use a library. Libraries encapsulate the API calls and make them easier to use.\n",
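     "\n",
     "To make the difference concrete, here is a sketch of the \"raw API\" approach. The endpoint URL and header names below are illustrative assumptions, not any specific provider's documented API, and no request is actually sent:\n",
     "\n",
     "```python\n",
     "import json\n",
     "\n",
     "# Illustrative only: the endpoint and header names are assumptions,\n",
     "# not a specific provider's documented API. No request is sent here.\n",
     "endpoint = \"https://example-llm-provider.com/v1/chat/completions\"\n",
     "headers = {\n",
     "    \"Authorization\": \"Bearer <your-api-key>\",\n",
     "    \"Content-Type\": \"application/json\",\n",
     "}\n",
     "\n",
     "# The request body carries your prompt as a list of messages.\n",
     "body = {\n",
     "    \"model\": \"gpt-4o\",\n",
     "    \"messages\": [\n",
     "        {\"role\": \"user\", \"content\": \"Complete the following: Once upon a time there was a\"}\n",
     "    ],\n",
     "    \"max_tokens\": 1000,\n",
     "}\n",
     "\n",
     "# With the raw API you serialize and send this yourself; a library\n",
     "# builds the request for you and hands back a typed response.\n",
     "print(json.dumps(body, indent=2))\n",
     "```\n",
     "\n",
     "A library hides this plumbing behind a client object, which is the approach the rest of this chapter takes.\n",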
    "\n",
    "## Libraries/SDKs\n",
    "\n",
     "There are a few well-known libraries for working with LLMs, like:\n",
    "\n",
     "- **openai**. This library makes it easy to connect to your model and send in prompts.\n",
    "\n",
    "Then there are libraries that operate on a higher level like:\n",
    "\n",
     "- **LangChain**. LangChain is well known and supports Python.\n",
    "- **Semantic Kernel**. Semantic Kernel is a library by Microsoft supporting the languages C#, Python, and Java.\n",
    "\n",
    "## First app using GitHub Models Playground and Azure AI Inference SDK\n",
    "\n",
     "Let's see how we can build our first app, what libraries we need, how much code is required, and so on.\n",
    "\n",
    "### What is GitHub Models?\n",
    "\n",
     "Welcome to [GitHub Models](https://github.com/marketplace/models?WT.mc_id=academic-105485-koreyst)! Everything is fired up and ready for you to explore different AI models hosted on Azure AI, free to try, all accessible via a playground on GitHub or seamlessly from your favorite code IDE.\n",
    "\n",
    "### What do I need?\n",
    "\n",
    "* A GitHub Account: [github.com/signup](https://github.com/signup?WT.mc_id=academic-105485-koreyst)\n",
    "* Sign Up for GitHub Models: [github.com/marketplace/models/waitlist](https://GitHub.com/marketplace/models/waitlist?WT.mc_id=academic-105485-koreyst)\n",
    "\n",
     "Let's get started!\n",
    "\n",
    "### Find a model and test it\n",
    "\n",
    "Navigate to [GitHub Models in the Marketplace](https://github.com/marketplace/models?WT.mc_id=academic-105485-koreyst)\n",
    "\n",
    "![GitHub Models main screen showing a list of model cards such as Cohere, Meta llama, Mistral and GPT models](../images/GithubModelsMainScreen.png?WT.mc_id=academic-105485-koreyst)\n",
    "\n",
     "Choose a model - for example [OpenAI GPT-4o](https://github.com/marketplace/models/azure-openai/gpt-4o?WT.mc_id=academic-105485-koreyst)\n",
    "\n",
     "Here you will see the model card. You can:\n",
     "* Interact with the model right there by entering a message in the text box\n",
     "* Read details about the model in the README, Evaluation, Transparency and License tabs\n",
     "* Review the 'About' section on the right for details on model access\n",
    "\n",
    "![GitHub Models GPT-4o Model Card](../images/GithubModels-modelcard.png?WT.mc_id=academic-105485-koreyst)\n",
    "\n",
     "But we will go straight to the playground by clicking the ['Playground' button, top right](https://github.com/marketplace/models/azure-openai/gpt-4o/playground?WT.mc_id=academic-105485-koreyst). You can interact with the model here, add system prompts and change parameter details - but also get all the code you need to run this from anywhere. Available as of September 2024: Python, JavaScript, C# and REST.\n",
    "\n",
     "![GitHub Models Playground experience with code and languages shown](../images/GithubModels-plagroundcode.png?WT.mc_id=academic-105485-koreyst)  \n",
    "\n",
    "\n",
     "### Let's use the model in our own IDE\n",
    "\n",
    "Two options here:\n",
    "1. **GitHub Codespaces** - seamless integration with Codespaces and no token needed to get started\n",
     "2. **VS Code (or any favorite IDE)** - requires a [personal access token from GitHub](https://github.com/settings/tokens?WT.mc_id=academic-105485-koreyst)\n",
    "\n",
    "\n",
    "Either way, instructions are provided via the 'Get started' green button on the top right.\n",
    "\n",
    "![Get Started screen showing you how to access Codespaces or use a personal access token to setup in your own IDE](../images/GithubModels-getstarted.png?WT.mc_id=academic-105485-koreyst)\n",
    "\n",
     "### 1. Codespaces\n",
    "\n",
    "* From the 'Get started' window choose \"Run codespace\"\n",
     "* Create a new codespace (or use an existing one)\n",
    "* VS Code will open in your browser with a set of sample notebooks in multiple languages you can try\n",
    "* Run the sample ```./githubmodels-app.py```. \n",
    "\n",
     "> Note: In Codespaces there is no need to set the GitHub token variable, so skip this step.\n",
    "\n",
    "**Now move to 'Generate Text' section below to continue this assignment**\n",
    "\n",
    "### 2. VS Code (or any favorite IDE)\n",
    "\n",
     "From the 'Get started' green button you have all the information you need to run in your favorite IDE. This example will show VS Code.\n",
    "\n",
    "* Select the language and SDK - in this example we choose Python and Azure AI Inference SDK\n",
    "* Create a personal access token on GitHub. This sits in the Developer Settings section. You do not need to give any permissions to the token. Note that the token will be sent to a Microsoft service.\n",
     "* Create an environment variable to store your GitHub personal access token - samples available in bash, PowerShell and Windows Command Prompt\n",
    "* Install dependencies: ```pip install azure-ai-inference```\n",
    "* Copy basic sample code into a .py file\n",
     "* Navigate to where your code is saved and run the file: ```python filename.py```\n",
    "\n",
     "Don't forget: by using the Azure AI Inference SDK, you can easily experiment with different models by modifying the value of `model_name` in the code. \n",
    "\n",
    "The following models are available in the GitHub Models service as of September 2024:\n",
    "\n",
    "* AI21 Labs: AI21-Jamba-1.5-Large, AI21-Jamba-1.5-Mini, AI21-Jamba-Instruct\n",
    "* Cohere: Cohere-Command-R, Cohere-Command-R-Plus, Cohere-Embed-v3-Multilingual, Cohere-Embed-v3-English\n",
    "* Meta: Meta-Llama-3-70B-Instruct, Meta-Llama-3-8B-Instruct, Meta-Llama-3.1-405B-Instruct, Meta-Llama-3.1-70B-Instruct, Meta-Llama-3.1-8B-Instruct\n",
    "* Mistral AI: Mistral-Large, Mistral-Large-2407, Mistral-Nemo, Mistral-Small\n",
    "* Microsoft: Phi-3-mini-4k-instruct, Phi-3.5-mini-128k-instruct, Phi-3-small-4k-instruct, Phi-3-small-128k-instruct, Phi-3-medium-4k-instruct, Phi-3-medium-128k-instruct, Phi-3.5-vision-128k-instruct\n",
    "* OpenAI: OpenAI-GPT-4o, Open-AI-GPT-4o-mini, OpenAI-Textembedding-3-large, OpenAI-Textembedding-3-small\n",
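     "\n",
     "Since all of these models sit behind the same endpoint, switching between them is just a string change. A minimal sketch (`pick_model` is a hypothetical helper, not part of the SDK; the model names come from the list above):\n",
     "\n",
     "```python\n",
     "# Hypothetical helper for experimenting with different GitHub Models;\n",
     "# pick_model is not part of the Azure AI Inference SDK.\n",
     "AVAILABLE_MODELS = {\n",
     "    \"small\": \"Phi-3-mini-4k-instruct\",\n",
     "    \"medium\": \"Mistral-Small\",\n",
     "    \"large\": \"gpt-4o\",\n",
     "}\n",
     "\n",
     "def pick_model(size: str) -> str:\n",
     "    \"\"\"Return a model name for the given size, defaulting to 'large'.\"\"\"\n",
     "    return AVAILABLE_MODELS.get(size, AVAILABLE_MODELS[\"large\"])\n",
     "\n",
     "# The returned string is what you pass as model_name in client.complete().\n",
     "model_name = pick_model(\"small\")\n",
     "print(model_name)  # Phi-3-mini-4k-instruct\n",
     "```\n",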
    "\n",
    "\n",
    "\n",
    "**Now move to 'Generate Text' section below to continue this assignment**\n",
    "\n",
    "## Generate text with ChatCompletions\n",
    "\n",
    "The way to generate text is to use the `ChatCompletionsClient` class. \n",
     "In `samples/python/azure_ai_inference/basic.py`, in the response section of the code, update the user role by changing the content parameter as below:\n",
    "\n",
    "```python\n",
    "\n",
    "response = client.complete(\n",
    "    messages=[\n",
    "        {\n",
    "            \"role\": \"system\",\n",
    "            \"content\": \"You are a helpful assistant.\",\n",
    "        },\n",
    "        {\n",
    "            \"role\": \"user\",\n",
    "            \"content\": \"Complete the following: Once upon a time there was a\",\n",
    "        },\n",
    "    ],\n",
    "    model=model_name,\n",
    "    # Optional parameters\n",
    "    temperature=1.,\n",
    "    max_tokens=1000,\n",
    "    top_p=1.    \n",
    ")\n",
    "\n",
    "```\n",
    "\n",
     "Run the updated file and print `response.choices[0].message.content` to see the output."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Different types of prompts, for different things\n",
    "\n",
    "Now you've seen how to generate text using a prompt. You even have a program up and running that you can modify and change to generate different types of text. \n",
    "\n",
    "Prompts can be used for all sorts of tasks. For example:\n",
    "\n",
    "- **Generate a type of text**. For example, you can generate a poem, questions for a quiz etc.\n",
     "- **Look up information**. You can use prompts to look up information, as in the following example: 'What does CORS mean in web development?'.\n",
     "- **Generate code**. You can use prompts to generate code, for example a regular expression to validate emails, or even an entire program, like a web app.  \n",
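     "\n",
     "These task types can be captured as simple prompt templates built with f-strings. A minimal sketch; the template wording is just an example, not a prescribed format:\n",
     "\n",
     "```python\n",
     "# Simple prompt templates for the task types above; the wording of\n",
     "# each template is illustrative, not a prescribed format.\n",
     "def poem_prompt(topic: str) -> str:\n",
     "    return f\"Write a short poem about {topic}.\"\n",
     "\n",
     "def lookup_prompt(term: str, domain: str) -> str:\n",
     "    return f\"What does {term} mean in {domain}?\"\n",
     "\n",
     "def code_prompt(task: str) -> str:\n",
     "    return f\"Write code that {task}.\"\n",
     "\n",
     "# Any of these strings can be sent as the 'user' message content.\n",
     "print(lookup_prompt(\"CORS\", \"web development\"))\n",
     "```\n",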
    "\n",
    "## Exercise: a recipe generator\n",
    "\n",
    "Imagine you have ingredients at home and you want to cook something. For that, you need a recipe. A way to find recipes is to use a search engine or you could use an LLM to do so.\n",
    "\n",
    "You could write a prompt like so:\n",
    "\n",
    "> \"Show me 5 recipes for a dish with the following ingredients: chicken, potatoes, and carrots. Per recipe, list all the ingredients used\"\n",
    "\n",
    "Given the above prompt, you might get a response similar to:\n",
    "\n",
    "```output\n",
    "1. Roasted Chicken and Vegetables: \n",
    "Ingredients: \n",
    "- 4 chicken thighs\n",
    "- 2 potatoes, cut into cubes\n",
    "- 2 carrots, cut into cubes\n",
    "- 2 tablespoons olive oil\n",
    "- 2 cloves garlic, minced\n",
    "- 1 teaspoon dried thyme\n",
    "- 1 teaspoon dried oregano\n",
    "- Salt and pepper, to taste\n",
    "\n",
    "2. Chicken and Potato Stew: \n",
    "Ingredients: \n",
    "- 2 tablespoons olive oil\n",
    "- 1 onion, diced\n",
    "- 2 cloves garlic, minced\n",
    "- 2 chicken breasts, cut into cubes\n",
    "- 2 potatoes, cut into cubes\n",
    "- 2 carrots, cut into cubes\n",
    "- 1 teaspoon dried oregano\n",
    "- 1 teaspoon dried thyme\n",
    "- 1 cup chicken broth\n",
    "- Salt and pepper, to taste\n",
    "\n",
    "3. Chicken and Potato Bake: \n",
    "Ingredients: \n",
    "- 2 tablespoons olive oil\n",
    "- 2 chicken breasts, cut into cubes\n",
    "- 2 potatoes, cut into cubes\n",
    "- 2 carrots, cut into cubes\n",
    "- 1 onion, diced\n",
    "- 2 cloves garlic, minced\n",
    "- 1 teaspoon dried oregano\n",
    "- 1 teaspoon dried thyme\n",
    "- 1 cup chicken broth\n",
    "- Salt and pepper, to taste\n",
    "\n",
    "4. Chicken and Potato Soup: \n",
    "Ingredients: \n",
    "- 2 tablespoons olive oil\n",
    "- 1 onion, diced\n",
    "- 2 cloves garlic, minced\n",
    "- 2 chicken breasts, cut into cubes\n",
    "- 2 potatoes, cut into cubes\n",
    "- 2 carrots, cut into cubes\n",
    "- 1 teaspoon dried oregano\n",
    "- 1 teaspoon dried thyme\n",
    "- 4 cups chicken broth\n",
    "- Salt and pepper, to taste\n",
    "\n",
    "5. Chicken and Potato Hash: \n",
    "Ingredients: \n",
    "- 2 tablespoons olive oil\n",
    "- 2 chicken breasts, cut into cubes\n",
    "- 2 potatoes, cut into cubes\n",
    "- 2 carrots, cut into cubes\n",
    "- 1 onion, diced\n",
    "- 2 cloves garlic, minced\n",
    "- 1 teaspoon dried oregano\n",
    "```\n",
    "\n",
     "This outcome is great, now I know what to cook. At this point, some useful improvements would be:\n",
     "\n",
     "- Filtering out ingredients I don't like or am allergic to.\n",
     "- Producing a shopping list, in case I don't have all the ingredients at home.\n",
    "\n",
    "For the above cases, let's add an additional prompt:\n",
    "\n",
    "> \"Please remove recipes with garlic as I'm allergic and replace it with something else. Also, please produce a shopping list for the recipes, considering I already have chicken, potatoes and carrots at home.\"\n",
    "\n",
    "Now you have a new result, namely:\n",
    "\n",
    "```output\n",
    "1. Roasted Chicken and Vegetables: \n",
    "Ingredients: \n",
    "- 4 chicken thighs\n",
    "- 2 potatoes, cut into cubes\n",
    "- 2 carrots, cut into cubes\n",
    "- 2 tablespoons olive oil\n",
    "- 1 teaspoon dried thyme\n",
    "- 1 teaspoon dried oregano\n",
    "- Salt and pepper, to taste\n",
    "\n",
    "2. Chicken and Potato Stew: \n",
    "Ingredients: \n",
    "- 2 tablespoons olive oil\n",
    "- 1 onion, diced\n",
    "- 2 chicken breasts, cut into cubes\n",
    "- 2 potatoes, cut into cubes\n",
    "- 2 carrots, cut into cubes\n",
    "- 1 teaspoon dried oregano\n",
    "- 1 teaspoon dried thyme\n",
    "- 1 cup chicken broth\n",
    "- Salt and pepper, to taste\n",
    "\n",
    "3. Chicken and Potato Bake: \n",
    "Ingredients: \n",
    "- 2 tablespoons olive oil\n",
    "- 2 chicken breasts, cut into cubes\n",
    "- 2 potatoes, cut into cubes\n",
    "- 2 carrots, cut into cubes\n",
    "- 1 onion, diced\n",
    "- 1 teaspoon dried oregano\n",
    "- 1 teaspoon dried thyme\n",
    "- 1 cup chicken broth\n",
    "- Salt and pepper, to taste\n",
    "\n",
    "4. Chicken and Potato Soup: \n",
    "Ingredients: \n",
    "- 2 tablespoons olive oil\n",
    "- 1 onion, diced\n",
    "- 2 chicken breasts, cut into cubes\n",
    "- 2 potatoes, cut into cubes\n",
    "- 2 carrots, cut into cubes\n",
    "- 1 teaspoon dried oregano\n",
    "- 1 teaspoon dried thyme\n",
    "- 4 cups chicken broth\n",
    "- Salt and pepper, to taste\n",
    "\n",
    "5. Chicken and Potato Hash: \n",
    "Ingredients: \n",
    "- 2 tablespoons olive oil\n",
    "- 2 chicken breasts, cut into cubes\n",
    "- 2 potatoes, cut into cubes\n",
    "- 2 carrots, cut into cubes\n",
    "- 1 onion, diced\n",
    "- 1 teaspoon dried oregano\n",
    "\n",
    "Shopping List: \n",
    "- Olive oil\n",
    "- Onion\n",
    "- Thyme\n",
    "- Oregano\n",
    "- Salt\n",
    "- Pepper\n",
    "```\n",
    "\n",
    "That's your five recipes, with no garlic mentioned and you also have a shopping list considering what you already have at home. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Exercise - build a recipe generator\n",
    "\n",
     "Now that we have played out a scenario, let's write code to match it. To do so, follow these steps:\n",
    "\n",
    "1. Use the existing file as a starting point\n",
    "1. Create a `prompt` variable and change the sample code as below:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "ename": "ModuleNotFoundError",
     "evalue": "No module named 'azure'",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mModuleNotFoundError\u001b[0m                       Traceback (most recent call last)",
      "Cell \u001b[0;32mIn[1], line 2\u001b[0m\n\u001b[1;32m      1\u001b[0m \u001b[38;5;28;01mimport\u001b[39;00m \u001b[38;5;21;01mos\u001b[39;00m\n\u001b[0;32m----> 2\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mazure\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mai\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01minference\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m ChatCompletionsClient\n\u001b[1;32m      3\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mazure\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mai\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01minference\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mmodels\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m SystemMessage, UserMessage\n\u001b[1;32m      4\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mazure\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mcore\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mcredentials\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m AzureKeyCredential\n",
      "\u001b[0;31mModuleNotFoundError\u001b[0m: No module named 'azure'"
     ]
    }
   ],
   "source": [
    "import os\n",
    "from azure.ai.inference import ChatCompletionsClient\n",
    "from azure.ai.inference.models import SystemMessage, UserMessage\n",
    "from azure.core.credentials import AzureKeyCredential\n",
    "\n",
    "token = os.environ[\"GITHUB_TOKEN\"]\n",
    "endpoint = \"https://models.inference.ai.azure.com\"\n",
    "\n",
    "model_name = \"gpt-4o\"\n",
    "\n",
    "client = ChatCompletionsClient(\n",
    "    endpoint=endpoint,\n",
    "    credential=AzureKeyCredential(token),\n",
    ")\n",
    "\n",
    "prompt = \"Show me 5 recipes for a dish with the following ingredients: chicken, potatoes, and carrots. Per recipe, list all the ingredients used\"\n",
    "\n",
    "response = client.complete(\n",
    "    messages=[\n",
    "        {\n",
    "            \"role\": \"system\",\n",
    "            \"content\": \"You are a helpful assistant.\",\n",
    "        },\n",
    "        {\n",
    "            \"role\": \"user\",\n",
    "            \"content\": prompt,\n",
    "        },\n",
    "    ],\n",
    "    model=model_name,\n",
    "    # Optional parameters\n",
    "    temperature=1.,\n",
    "    max_tokens=1000,\n",
    "    top_p=1.    \n",
    ")\n",
    "\n",
    "print(response.choices[0].message.content)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you now run the code, you should see an output similar to:\n",
    "\n",
    "```output\n",
    "### Recipe 1: Classic Chicken Stew\n",
    "#### Ingredients:\n",
    "- 2 lbs chicken thighs or drumsticks, skinless\n",
    "- 4 cups chicken broth\n",
    "- 4 medium potatoes, peeled and diced\n",
    "- 4 large carrots, peeled and sliced\n",
    "- 1 large onion, chopped\n",
    "- 2 cloves garlic, minced\n",
    "- 2 celery stalks, sliced\n",
    "- 1 tsp dried thyme\n",
    "- 1 tsp dried rosemary\n",
    "- Salt and pepper to taste\n",
    "- 2 tbsp olive oil\n",
    "- 2 tbsp flour (optional, for thickening)\n",
    "\n",
    "### Recipe 2: Chicken and Vegetable Roast\n",
    "#### Ingredients:\n",
    "- 4 chicken breasts or thighs\n",
    "- 4 medium potatoes, cut into wedges\n",
    "- 4 large carrots, cut into sticks\n",
    "- 1 large onion, cut into wedges\n",
    "- 3 cloves garlic, minced\n",
    "- 1/4 cup olive oil \n",
    "- 1 tsp paprika\n",
    "- 1 tsp dried oregano\n",
    "- Salt and pepper to taste\n",
    "- Juice of 1 lemon\n",
    "- Fresh parsley, chopped (for garnish)\n",
    "(continued ...)\n",
    "```\n",
    "\n",
     "> Note: LLMs are nondeterministic, so you might get different results every time you run the program.\n",
    "\n",
     "Great, let's see how we can improve things. We want to make the code flexible, so the ingredients and the number of recipes can be changed. \n",
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1. Let's change the code in the following way:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "from azure.ai.inference import ChatCompletionsClient\n",
    "from azure.ai.inference.models import SystemMessage, UserMessage\n",
    "from azure.core.credentials import AzureKeyCredential\n",
    "\n",
    "token = os.environ[\"GITHUB_TOKEN\"]\n",
    "endpoint = \"https://models.inference.ai.azure.com\"\n",
    "\n",
    "model_name = \"gpt-4o\"\n",
    "\n",
    "client = ChatCompletionsClient(\n",
    "    endpoint=endpoint,\n",
    "    credential=AzureKeyCredential(token),\n",
    ")\n",
    "\n",
    "no_recipes = input(\"No of recipes (for example, 5): \")\n",
    "\n",
    "ingredients = input(\"List of ingredients (for example, chicken, potatoes, and carrots): \")\n",
    "\n",
     "# interpolate the number of recipes and the ingredients into the prompt\n",
    "prompt = f\"Show me {no_recipes} recipes for a dish with the following ingredients: {ingredients}. Per recipe, list all the ingredients used\"\n",
    "\n",
    "response = client.complete(\n",
    "    messages=[\n",
    "        {\n",
    "            \"role\": \"system\",\n",
    "            \"content\": \"You are a helpful assistant.\",\n",
    "        },\n",
    "        {\n",
    "            \"role\": \"user\",\n",
    "            \"content\": prompt,\n",
    "        },\n",
    "    ],\n",
    "    model=model_name,\n",
    "    # Optional parameters\n",
    "    temperature=1.,\n",
    "    max_tokens=1000,\n",
    "    top_p=1.    \n",
    ")\n",
    "\n",
    "print(response.choices[0].message.content)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
     "Taking the code for a test run could look like this:\n",
    "    \n",
    "```output\n",
    "No of recipes (for example, 5): 2\n",
    "List of ingredients (for example, chicken, potatoes, and carrots): milk, strawberries\n",
    "\n",
    "Sure! Here are two recipes featuring milk and strawberries:\n",
    "\n",
    "### Recipe 1: Strawberry Milkshake\n",
    "\n",
    "#### Ingredients:\n",
    "- 1 cup milk\n",
    "- 1 cup strawberries, hulled and sliced\n",
    "- 2 tablespoons sugar (optional, to taste)\n",
    "- 1/2 teaspoon vanilla extract\n",
    "- 5-6 ice cubes\n",
    "\n",
    "#### Instructions:\n",
    "1. Combine the milk, strawberries, sugar (if using), and vanilla extract in a blender.\n",
    "2. Blend on high until smooth and creamy.\n",
    "3. Add the ice cubes and blend again until the ice is fully crushed and the milkshake is frothy.\n",
    "4. Pour into a glass and serve immediately.\n",
    "\n",
    "### Recipe 2: Strawberry Panna Cotta\n",
    "\n",
    "#### Ingredients:\n",
    "- 1 cup milk\n",
    "- 1 cup strawberries, hulled and pureed\n",
    "- 1/4 cup sugar\n",
    "- 1 teaspoon vanilla extract\n",
    "- 1 envelope unflavored gelatin (about 2 1/2 teaspoons)\n",
    "- 2 tablespoons cold water\n",
    "- 1 cup heavy cream\n",
    "\n",
    "#### Instructions:\n",
    "1. Sprinkle the gelatin over the cold water in a small bowl and let it stand for about 5-10 minutes to soften.\n",
    "2. In a saucepan, combine the milk, heavy cream, and sugar. Cook over medium heat, stirring frequently until the sugar is dissolved and the mixture begins to simmer. Do not let it boil.\n",
    "3. Remove the saucepan from the heat and stir in the softened gelatin until completely dissolved.\n",
    "4. Stir in the vanilla extract and allow the mixture to cool slightly.\n",
    "5. Divide the mixture evenly into serving cups or molds and refrigerate for at least 4 hours or until set.\n",
    "6. To prepare the strawberry puree, blend the strawberries until smooth.\n",
    "7. Once the panna cotta is set, spoon the strawberry puree over the top of each panna cotta.\n",
    "8. Serve chilled.\n",
    "\n",
    "Enjoy these delightful recipes!\n",
    "```\n",
    "\n",
    "### Improve by adding filter and shopping list\n",
    "\n",
    "We now have a working app capable of producing recipes and it's flexible as it relies on inputs from the user, both on the number of recipes but also the ingredients used.\n",
    "\n",
    "To further improve it, we want to add the following:\n",
    "\n",
    "- **Filter out ingredients**. We want to be able to filter out ingredients we don't like or are allergic to. To accomplish this change, we can edit our existing prompt and add a filter condition to the end of it like so:\n",
    "\n",
    "    ```python\n",
     "    filter = input(\"Filter (for example, vegetarian, vegan, or gluten-free): \")\n",
    "\n",
    "    prompt = f\"Show me {no_recipes} recipes for a dish with the following ingredients: {ingredients}. Per recipe, list all the ingredients used, no {filter}\"\n",
    "    ```\n",
    "\n",
    "    Above, we add `{filter}` to the end of the prompt and we also capture the filter value from the user.\n",
    "\n",
    "    An example input of running the program can now look like so:\n",
    "    \n",
    "    ```output    \n",
    "    No of recipes (for example, 5): 2\n",
    "    List of ingredients (for example, chicken, potatoes, and carrots): onion, milk\n",
     "    Filter (for example, vegetarian, vegan, or gluten-free): no milk\n",
    "    Certainly! Here are two recipes using onion but omitting milk:\n",
    "    \n",
    "    ### Recipe 1: Caramelized Onions\n",
    "    \n",
    "    #### Ingredients:\n",
    "    - 4 large onions, thinly sliced\n",
    "    - 2 tablespoons olive oil\n",
    "    - 1 tablespoon butter\n",
    "    - 1 teaspoon salt\n",
    "    - 1 teaspoon sugar (optional)\n",
    "    - 1 tablespoon balsamic vinegar (optional)\n",
    "    \n",
    "    #### Instructions:\n",
    "    1. Heat the olive oil and butter in a large skillet over medium heat until the butter is melted.\n",
    "    2. Add the onions and stir to coat them with the oil and butter mixture.\n",
    "    3. Add salt (and sugar if using) to the onions.\n",
    "    4. Cook the onions, stirring occasionally, for about 45 minutes to an hour until they are golden brown and caramelized.\n",
    "    5. If using, add balsamic vinegar during the last 5 minutes of cooking.\n",
    "    6. Remove from heat and serve as a topping for burgers, steak, or as a side dish.\n",
    "    \n",
    "    ### Recipe 2: French Onion Soup\n",
    "    \n",
    "    #### Ingredients:\n",
    "    - 4 large onions, thinly sliced\n",
    "    - 3 tablespoons unsalted butter\n",
    "    - 2 cloves garlic, minced\n",
    "    - 1 teaspoon sugar\n",
    "    - 1 teaspoon salt\n",
    "    - 1/4 cup dry white wine (optional)\n",
    "    - 4 cups beef broth\n",
    "    - 4 cups chicken broth\n",
    "    - 1 bay leaf\n",
    "    - 1 teaspoon fresh thyme, chopped (or 1/2 teaspoon dried thyme)\n",
    "    - 1 baguette, sliced\n",
    "    - 2 cups Gruyère cheese, grated\n",
    "    \n",
    "    #### Instructions:\n",
    "    1. Melt the butter in a large pot over medium heat.\n",
    "    2. Add the onions, garlic, sugar, and salt, and cook, stirring frequently, until the onions are deeply caramelized (about 30-35 minutes).\n",
    "    3. If using, add the white wine and cook until it evaporates, about 3-5 minutes.\n",
    "    4. Add the beef and chicken broths, bay leaf, and thyme. Bring to a simmer and cook for another 30 minutes. Remove the bay leaf.\n",
    "    5. Preheat the oven to 400°F (200°C).\n",
    "    6. Place the baguette slices on a baking sheet and toast them in the preheated oven until golden brown, about 5 minutes.\n",
    "    7. Ladle the soup into oven-safe bowls and place a slice of toasted baguette on top of each bowl.\n",
    "    8. Sprinkle the grated Gruyère cheese generously over the baguette slices.\n",
    "    9. Place the bowls under the broiler until the cheese is melted and bubbly, about 3-5 minutes.\n",
    "    10. Serve hot.\n",
    "    \n",
    "    Enjoy your delicious onion dishes!\n",
    "    ```\n",
    "    \n",
    "- **Produce a shopping list**. We want to produce a shopping list, considering what we already have at home.\n",
    "\n",
     "    For this functionality, we could either try to solve everything in one prompt or split it up into two prompts. Let's try the latter approach: we add a second prompt, and for it to work, we pass the result of the first prompt as context to the second. \n",
    "\n",
    "    Locate the part in the code that prints out the result from the first prompt and add the following code below:\n",
    "    \n",
    "    ```python\n",
    "    old_prompt_result = response.choices[0].message.content\n",
    "    prompt = \"Produce a shopping list for the generated recipes and please don't include ingredients that I already have.\"\n",
    "        \n",
    "    new_prompt = f\"{old_prompt_result} {prompt}\"\n",
    "    \n",
    "    response = client.complete(\n",
    "        messages=[\n",
    "            {\n",
    "                \"role\": \"system\",\n",
    "                \"content\": \"You are a helpful assistant.\",\n",
    "            },\n",
    "            {\n",
    "                \"role\": \"user\",\n",
    "                \"content\": new_prompt,\n",
    "            },\n",
    "        ],\n",
    "        model=model_name,\n",
    "        # Optional parameters\n",
    "        temperature=1.,\n",
    "        max_tokens=1200,\n",
    "        top_p=1.    \n",
    "    )\n",
    "        \n",
    "    # print response\n",
    "    print(\"Shopping list:\")\n",
    "    print(response.choices[0].message.content)\n",
    "    ```\n",
    "\n",
    "\n",
    "    Note the following:\n",
    "\n",
    "    - We're constructing a new prompt by adding the result from the first prompt to the new prompt: \n",
    "    \n",
    "        ```python\n",
    "        new_prompt = f\"{old_prompt_result} {prompt}\"\n",
    "        messages = [{\"role\": \"user\", \"content\": new_prompt}]\n",
    "        ```\n",
    "\n",
    "    - We make a new request, but we also account for the tokens consumed by the first prompt, so this time we set `max_tokens` to 1200. **A word on token length**. We should consider how many tokens we need to generate the text we want. Tokens cost money, so where possible, we should be economical with the number of tokens we use. For example, can we phrase the prompt so that it uses fewer tokens?\n",
    "\n",
    "        ```python\n",
    "        response = client.complete(\n",
    "            messages=[\n",
    "                {\n",
    "                    \"role\": \"system\",\n",
    "                    \"content\": \"You are a helpful assistant.\",\n",
    "                },\n",
    "                {\n",
    "                    \"role\": \"user\",\n",
    "                    \"content\": new_prompt,\n",
    "                },\n",
    "            ],\n",
    "            model=model_name,\n",
    "            # Optional parameters\n",
    "            temperature=1.0,\n",
    "            max_tokens=1200,\n",
    "            top_p=1.0\n",
    "        )    \n",
    "        ```  \n",
    "\n",
    "        Taking this code for a spin, we now arrive at the following output:\n",
    "\n",
    "        ```output\n",
    "        No of recipes (for example, 5): 1\n",
    "        List of ingredients (for example, chicken, potatoes, and carrots): strawberry, milk\n",
    "        Filter (for example, vegetarian, vegan, or gluten-free): nuts\n",
    "        \n",
    "        Certainly! Here's a simple and delicious recipe for a strawberry milkshake using strawberry and milk as primary ingredients:\n",
    "        \n",
    "        ### Strawberry Milkshake\n",
    "        \n",
    "        #### Ingredients:\n",
    "        - 1 cup fresh strawberries, hulled\n",
    "        - 1 cup cold milk\n",
    "        - 1 tablespoon honey or sugar (optional, to taste)\n",
    "        - 1/2 teaspoon vanilla extract (optional)\n",
    "        - 3-4 ice cubes\n",
    "        \n",
    "        #### Instructions:\n",
    "        1. Wash and hull the strawberries, then slice them in half.\n",
    "        2. In a blender, combine the strawberries, cold milk, honey or sugar (if using), vanilla extract (if using), and ice cubes.\n",
    "        3. Blend until smooth and frothy.\n",
    "        4. Pour the milkshake into a glass.\n",
    "        5. Serve immediately and enjoy your refreshing strawberry milkshake!\n",
    "        \n",
    "        This recipe is nut-free and makes for a delightful and quick treat!\n",
    "        Shopping list:\n",
    "        Sure! Here’s the shopping list for the Strawberry Milkshake recipe based on the ingredients provided. Please adjust based on what you already have at home:\n",
    "        \n",
    "        ### Shopping List:\n",
    "        - Fresh strawberries (1 cup)\n",
    "        - Milk (1 cup)\n",
    "        \n",
    "        Optional:\n",
    "        - Honey or sugar (1 tablespoon)\n",
    "        - Vanilla extract (1/2 teaspoon)\n",
    "        - Ice cubes (3-4)\n",
    "        \n",
    "        Feel free to omit the optional ingredients if you prefer or if you already have them on hand. Enjoy your delicious strawberry milkshake!\n",
    "        ```\n",
    "        \n",
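    "        The token-economy note above can be made concrete. Here's a minimal sketch that estimates a prompt's token count before sending it, using the common rule of thumb of roughly four characters per token (for exact counts you'd use the tokenizer that matches your model):\n",
    "\n",
    "        ```python\n",
    "        # Rough token estimate: ~4 characters per token is a common rule of thumb.\n",
    "        # For exact counts, use a tokenizer that matches your model.\n",
    "        def estimate_tokens(text: str) -> int:\n",
    "            return max(1, len(text) // 4)\n",
    "\n",
    "        prompt = \"Produce a shopping list for the generated recipes.\"\n",
    "        print(f\"Estimated tokens: {estimate_tokens(prompt)}\")\n",
    "        ```\n",
    "\n",
    "        Trimming filler words from a prompt lowers this estimate directly, which is one easy way to save tokens.\n",
    "\n",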
    "- **Experimenting with temperature**. We haven't mentioned temperature so far, but it's an important parameter for how our program behaves. The higher the temperature value, the more random the output; conversely, the lower the temperature value, the more predictable the output. Consider whether you want variation in your output or not.\n",
    "\n",
    "   To alter the temperature, you can use the `temperature` parameter. For example, if you want to use a temperature of 0.5, you would do:\n",
    "\n",
    "    ```python\n",
    "    response = client.complete(\n",
    "        messages=[\n",
    "            {\n",
    "                \"role\": \"system\",\n",
    "                \"content\": \"You are a helpful assistant.\",\n",
    "            },\n",
    "            {\n",
    "                \"role\": \"user\",\n",
    "                \"content\": new_prompt,\n",
    "            },\n",
    "        ],\n",
    "        model=model_name,\n",
    "        # Optional parameters\n",
    "        temperature=0.5,\n",
    "        max_tokens=1200,\n",
    "        top_p=1.0\n",
    "    )\n",
    "    ```\n",
    "\n",
    "    > Note: the closer the temperature is to 1.0, the more varied the output.\n",
    "\n",
    "\n",
    "## Assignment\n",
    "\n",
    "For this assignment, you can choose what to build.\n",
    "\n",
    "Here are some suggestions:\n",
    "\n",
    "- Tweak the recipe generator app to improve it further. Play around with temperature values and the prompts to see what you can come up with.\n",
    "- Build a \"study buddy\". This app should be able to answer questions about a topic, for example Python. You could have prompts like \"What is a certain topic in Python?\", or a prompt that says \"Show me code for a certain topic\", and so on.\n",
    "- Build a history bot. Make history come alive: instruct the bot to play a certain historical character and ask it questions about its life and times. \n",
    "\n",
    "## Solution\n",
    "\n",
    "### Study buddy\n",
    "\n",
    "- \"You're an expert on the Python language\n",
    "\n",
    "    Suggest a beginner lesson for Python in the following format:\n",
    "    \n",
    "    Format:\n",
    "    - concepts:\n",
    "    - brief explanation of the lesson:\n",
    "    - exercise in code with solutions\"\n",
    "\n",
    "Above is a starter prompt; see how you can use it and tweak it to your liking.\n",
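    "One way to wire the prompt up is to reuse the same `client` and `model_name` from the recipe app. The `build_messages` helper below is a hypothetical name, introduced here just to keep the message assembly separate and easy to test:\n",
    "\n",
    "```python\n",
    "def build_messages(system_prompt: str, question: str) -> list:\n",
    "    # Assemble the chat messages: starter prompt as system, question as user.\n",
    "    return [\n",
    "        {\"role\": \"system\", \"content\": system_prompt},\n",
    "        {\"role\": \"user\", \"content\": question},\n",
    "    ]\n",
    "\n",
    "STUDY_PROMPT = (\n",
    "    \"You're an expert on the Python language. \"\n",
    "    \"Suggest a beginner lesson for Python in the following format: \"\n",
    "    \"Format: - concepts: - brief explanation of the lesson: \"\n",
    "    \"- exercise in code with solutions\"\n",
    ")\n",
    "\n",
    "messages = build_messages(STUDY_PROMPT, \"What is a list comprehension?\")\n",
    "# Then send as before:\n",
    "# response = client.complete(messages=messages, model=model_name,\n",
    "#                            temperature=0.5, max_tokens=1200)\n",
    "# print(response.choices[0].message.content)\n",
    "```\n",
    "\n",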
    "\n",
    "### History bot\n",
    "\n",
    "Here are some prompts you could use:\n",
    "\n",
    "- \"You are Abe Lincoln, tell me about yourself in 3 sentences, and respond using grammar and words like Abe would have used\"\n",
    "- \"You are Abe Lincoln, respond using grammar and words like Abe would have used:\n",
    "\n",
    "   Tell me about your greatest accomplishments, in 300 words:\"\n",
    "\n",
    "## Knowledge check\n",
    "\n",
    "What does the concept temperature do?\n",
    "\n",
    "1. It controls how random the output is.\n",
    "1. It controls how big the response is.\n",
    "1. It controls how many tokens are used.\n",
    "\n",
    "A: 1\n",
    "\n",
    "What's a good way to store secrets like API keys?\n",
    "\n",
    "1. In code.\n",
    "1. In a file.\n",
    "1. In environment variables.\n",
    "\n",
    "A: 3, because environment variables are not stored in the code and can be read by the code at runtime."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.9"
  },
  "orig_nbformat": 4
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
