## Beyond ChatGPT: Build Your First LLM Application

:wave: Welcome to Beyond ChatGPT!!

For a step-by-step YouTube video walkthrough, watch this: [Deploying Chainlit app on Hugging Face](https://www.youtube.com/live/pRbbZcL0NMI?si=NAYhMZ_suAY84f06&t=2119)

![Beyond ChatGPT: Build Your First LLM Application](https://github.com/AI-Maker-Space/Beyond-ChatGPT/assets/48775140/cb7a74b8-28af-4d12-a008-8f5a51d47b4c)

## 🤖 Your First LLM App

> If you need an introduction to `git`, or information on how to set up API keys for the tools we'll be using in this repository, check out our [Interactive Dev Environment for LLM Development](https://github.com/AI-Maker-Space/Interactive-Dev-Environment-for-LLM-Development/tree/main), which has everything you need to get started!

In this repository, we'll walk you through the steps to create a Large Language Model (LLM) application using Chainlit, containerize it with Docker, and finally deploy it on Hugging Face Spaces.

Are you ready? Let's get started!
## 🖥️ Accessing "gpt-3.5-turbo" (ChatGPT) like a developer

1. Head to [this notebook](https://colab.research.google.com/drive/1mOzbgf4a2SP5qQj33ZxTz2a01-5eXqk2?usp=sharing) and follow along with the instructions!

2. Complete the notebook and try out your own system/assistant messages!

That's it! Head to the next step and start building your application!
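The notebook walks through calling the Chat Completions API directly. As a rough sketch of that pattern — assuming the `openai` Python package (v1+) and an `OPENAI_API_KEY` in your environment; `build_messages` and `ask_gpt` are hypothetical helpers, not part of the repo:

```python
def build_messages(system_prompt, user_prompt):
    # The "system" message steers the model's behavior;
    # the "user" message carries the actual query.
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]


def ask_gpt(system_prompt, user_prompt):
    # Live call: requires the openai package and OPENAI_API_KEY to be set.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=build_messages(system_prompt, user_prompt),
    )
    return response.choices[0].message.content
```

For example, `ask_gpt("You are a helpful assistant.", "Say hello!")` would return the model's greeting as a string.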
## 🏗️ Building Your First LLM App

1. Clone [this](https://github.com/AI-Maker-Space/Beyond-ChatGPT/tree/main) repo.

   ```bash
   git clone https://github.com/AI-Maker-Space/Beyond-ChatGPT.git
   ```

2. Navigate inside the repo.

   ```bash
   cd Beyond-ChatGPT
   ```

3. Install the packages required for this Python environment from `requirements.txt`.

   ```bash
   pip install -r requirements.txt
   ```

4. Open your `.env` file. Replace the `###` in your `.env` file with your OpenAI API key and save the file.

   ```bash
   OPENAI_API_KEY=sk-###
   ```

5. Let's try deploying it locally. Make sure you're in the Python environment where you installed Chainlit and OpenAI. Run the app using Chainlit. This may take a minute to run.

   ```bash
   chainlit run app.py -w
   ```
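The `.env` file from step 4 is just plain `KEY=value` lines. Libraries like `python-dotenv` load it into the process environment for you; purely for illustration, here is a minimal stdlib-only sketch of what that loading does (`load_env` is a hypothetical helper, not the repo's code):

```python
import os


def load_env(path=".env"):
    # Minimal .env loader: one KEY=value per line; blank lines and
    # '#' comments are skipped. Existing environment variables win,
    # mirroring python-dotenv's default (override=False) behavior.
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

After calling `load_env()`, the app can read the key with `os.environ["OPENAI_API_KEY"]`.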

Great work! Let's see if we can interact with our chatbot.

Awesome! Time to throw it into a docker container and prepare it for shipping!
## 🐳 Containerizing our App

1. Let's build the Docker image. We'll tag our image as `llm-app` using the `-t` parameter. The `.` at the end means we want all of the files in our current directory to be added to our image.

   ```bash
   docker build -t llm-app .
   ```

2. Run and test the Docker image locally using the `run` command. The `-p` parameter maps the **host port** on the left of the `:` to the **container port** on the right.

   ```bash
   docker run -p 7860:7860 llm-app
   ```

3. Visit http://localhost:7860 in your browser to see if the app runs correctly.
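The repo already ships the Dockerfile that `docker build` picks up. For orientation, a typical Dockerfile for a Chainlit app bound to port 7860 (the port Hugging Face Spaces routes traffic to) looks roughly like this — a sketch, not necessarily the repo's exact file:

```dockerfile
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Hugging Face Spaces expects the app to listen on port 7860
EXPOSE 7860
CMD ["chainlit", "run", "app.py", "--host", "0.0.0.0", "--port", "7860"]
```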

Great! Time to ship!
## 🚀 Deploying Your First LLM App

1. Let's create a new Hugging Face Space. Navigate to [Hugging Face](https://huggingface.co) and click on your profile picture in the top right. Then click on `New Space`.

2. Set up your Space as shown below:

   - Owner: Your username
   - Space Name: `llm-app`
   - License: `Openrail`
   - Select the Space SDK: `Docker`
   - Docker Template: `Blank`
   - Space Hardware: `CPU basic - 2 vCPU - 16 GB - Free`
   - Repo type: `Public`

3. We're now ready to send our files to our Hugging Face Space. Clone the Space repo, move your project files into it, and push them. You DO NOT need to create a Dockerfile; the one already in this repo works as-is. Make sure NOT to push your `.env` file. It should be ignored automatically.
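A plausible command sequence for that push (hypothetical; replace `<username>` with your Hugging Face username, and note that pushing to Spaces may prompt for a token):

```bash
# Clone the (empty) Space repo you just created
git clone https://huggingface.co/spaces/<username>/llm-app
cd llm-app

# Copy your project files in (everything except .env), then:
git add .
git commit -m "Add Chainlit LLM app"
git push
```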

4. After pushing all files, navigate to the settings in the top right to add your OpenAI API key.

5. Scroll down to `Variables and secrets` and click on `New secret` on the top right.

6. Set the name to `OPENAI_API_KEY` and add your OpenAI key under `Value`. Click save.

7. To ensure your key is being used, we recommend you `Restart this Space`.

8. Congratulations! You just deployed your first LLM application! 🚀🚀🚀 Get on LinkedIn and post your results and experience! Make sure to tag us with #AIMakerspace!
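One last note on the secret from step 6: inside the running container, Hugging Face Spaces exposes it as an ordinary environment variable, which your app can read like this (a sketch; `get_api_key` is a hypothetical helper):

```python
import os


def get_api_key(name="OPENAI_API_KEY"):
    # Spaces injects each secret as an environment variable of the same name.
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"{name} is not set; add it under the Space's secrets.")
    return value
```

Failing loudly when the variable is missing makes a forgotten secret obvious in the Space's logs.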

### That's it for now! And so it begins.... :)