---
title: Demo Beyond ChatGPT
emoji: 🔥
colorFrom: green
colorTo: pink
sdk: docker
pinned: false
license: openrail
---
> If you need an introduction to `git`, or information on how to set up API keys for the tools we'll be using in this repository, check out our [Interactive Dev Environment for LLM Development](https://github.com/AI-Maker-Space/Interactive-Dev-Environment-for-LLM-Development/tree/main), which has everything you need to get started!
In this repository, we'll walk you through the steps to create a Large Language Model (LLM) application with Chainlit, containerize it with Docker, and deploy it on Hugging Face Spaces.
- 🤝 Breakout Room #1:
1. Getting Started
2. Setting Environment Variables
3. Using the OpenAI Python Library
4. Prompt Engineering Principles
5. Testing Your Prompt
Complete the notebook in this repository, or head to [this notebook](https://colab.research.google.com/drive/1VMyF3WOCETYbRx01z99QBjycB4cRwquv?usp=sharing) and follow along with the instructions!
- 🤝 Breakout Room #2:
  1. 🏗️ Building Your First LLM App
  2. 🐳 Containerizing our App
  3. 🚀 Deploying Your First LLM App
### 🏗️ Building Your First LLM App
1. Clone [this](https://github.com/AI-Maker-Space/Beyond-ChatGPT/tree/main) repo.
``` bash
git clone https://github.com/AI-Maker-Space/Beyond-ChatGPT.git
```
2. Navigate into the repo.
``` bash
cd Beyond-ChatGPT
```
3. Install the packages required for this Python environment, listed in `requirements.txt`.
``` bash
pip install -r requirements.txt
```
4. Open your `.env` file. Replace the `###` with your OpenAI API key and save the file.
``` bash
OPENAI_API_KEY=sk-###
```
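A `.env` file is just a list of `KEY=value` lines. As an illustration only (this is not the app's actual loading code; many Chainlit templates rely on `python-dotenv` to do this automatically), here is a minimal sketch of how such a line gets parsed:

```python
def parse_env_line(line: str):
    """Parse one KEY=value line from a .env file; returns None for blanks/comments."""
    line = line.strip()
    if not line or line.startswith("#") or "=" not in line:
        return None
    key, _, value = line.partition("=")
    return key.strip(), value.strip().strip('"')

print(parse_env_line("OPENAI_API_KEY=sk-abc123"))  # ('OPENAI_API_KEY', 'sk-abc123')
```

If the app ever complains that the key is missing, checking your `.env` against this format (no spaces around `=`, no stray quotes) is a good first debugging step.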
5. Let's try running it locally. Make sure you're in the Python environment where you installed Chainlit and OpenAI, then run the app using Chainlit. The `-w` flag watches your files and reloads the app when they change. This may take a minute to start.
``` bash
chainlit run app.py -w
```
### 🐳 Containerizing our App
1. Let's build the Docker image. We'll tag our image as `llm-app` using the `-t` parameter. The `.` at the end sets the build context to the current directory, so the files here are available to `COPY` instructions in the Dockerfile.
``` bash
docker build -t llm-app .
```
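For reference, the build above is driven by the repo's `Dockerfile`. A minimal sketch of what such a file might look like is below; the base image and exact commands are assumptions and may differ from the actual file in this repo, but port 7860 is what Hugging Face Spaces expects:

```dockerfile
# Hedged sketch -- the repo's actual Dockerfile may differ.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Hugging Face Spaces routes traffic to port 7860 by default.
EXPOSE 7860
CMD ["chainlit", "run", "app.py", "--host", "0.0.0.0", "--port", "7860"]
```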
2. Run and test the Docker image locally using the `run` command. The `-p` parameter maps the **host port** (left of the `:`) to the **container port** (right of the `:`).
``` bash
docker run -p 7860:7860 llm-app
```
3. Visit http://localhost:7860 in your browser to verify the app runs correctly.
### 🚀 Deploying Your First LLM App
1. Let's create a new Hugging Face Space. Navigate to [Hugging Face](https://huggingface.co), click your profile picture in the top right, then click `New Space`.