---
title: Backend
emoji: 🐢
colorFrom: pink
colorTo: blue
sdk: docker
pinned: false
license: mit
app_port: 8000
---
This is a [LlamaIndex](https://www.llamaindex.ai/) project using [FastAPI](https://fastapi.tiangolo.com/) bootstrapped with [`create-llama`](https://github.com/run-llama/LlamaIndexTS/tree/main/packages/create-llama).
## Getting Started
First, set up the environment with Poetry:
> **_Note:_** This step is not needed if you are using the dev-container.
```
poetry install
poetry shell
```
Then check the parameters that have been pre-configured in the `.env` file in this directory (e.g. you might need to configure an `OPENAI_API_KEY` if you're using OpenAI as the model provider).
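A typical `.env` for the OpenAI provider might look like the following; the key names other than `OPENAI_API_KEY` are assumptions, so check the generated file for the exact parameters:

```
# Hypothetical values; your generated .env lists the exact keys for
# your chosen model provider.
OPENAI_API_KEY=<your-openai-api-key>
MODEL=gpt-4o-mini
```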
If you are using any tools or data sources, you can update their config files in the `config` folder.
Second, generate the embeddings of the documents in the `./data` directory (if this folder exists - otherwise, skip this step):
```
poetry run generate
```
Third, run the development server:
```
python main.py
```
The example provides two different API endpoints:
1. `/api/chat` - a streaming chat endpoint
2. `/api/chat/request` - a non-streaming chat endpoint
You can test the streaming endpoint with the following curl request:
```
curl --location 'localhost:8000/api/chat' \
--header 'Content-Type: application/json' \
--data '{ "messages": [{ "role": "user", "content": "Hello" }] }'
```
And for the non-streaming endpoint run:
```
curl --location 'localhost:8000/api/chat/request' \
--header 'Content-Type: application/json' \
--data '{ "messages": [{ "role": "user", "content": "Hello" }] }'
```
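For a programmatic client, here is a minimal Python sketch that exercises both endpoints. It assumes the server is running on `localhost:8000` and that the `requests` package is installed; the exact shape of the responses depends on your configuration:

```python
# Minimal client for both endpoints (assumes `pip install requests`).
import requests

payload = {"messages": [{"role": "user", "content": "Hello"}]}

# Streaming endpoint: print chunks as they arrive.
with requests.post(
    "http://localhost:8000/api/chat", json=payload, stream=True
) as resp:
    resp.raise_for_status()
    for chunk in resp.iter_content(chunk_size=None, decode_unicode=True):
        print(chunk, end="", flush=True)
print()

# Non-streaming endpoint: the full answer arrives in a single response.
resp = requests.post("http://localhost:8000/api/chat/request", json=payload)
resp.raise_for_status()
print(resp.text)
```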
You can start editing the API endpoints by modifying `app/api/routers/chat.py`. The endpoints auto-update as you save the file. You can delete the endpoint you're not using.
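For orientation, a stripped-down route in the style of `app/api/routers/chat.py` might look like the sketch below. This is illustrative only: the class and field names are assumptions, and the real file wires the request into a LlamaIndex chat engine rather than echoing it back.

```python
# Illustrative sketch, not the actual contents of app/api/routers/chat.py.
from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter()

class Message(BaseModel):
    role: str
    content: str

class ChatData(BaseModel):
    messages: list[Message]

@router.post("/request")
async def chat_request(data: ChatData):
    # Replace this echo with a call into your LlamaIndex chat engine.
    return {"result": f"You said: {data.messages[-1].content}"}
```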
Open [http://localhost:8000/docs](http://localhost:8000/docs) with your browser to see the Swagger UI of the API.
The API allows CORS for all origins to simplify development. You can change this behavior by setting the `ENVIRONMENT` environment variable to `prod`:
```
ENVIRONMENT=prod python main.py
```
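Under the hood, a toggle like this is typically wired up with FastAPI's CORS middleware. A minimal sketch, assuming the `ENVIRONMENT` variable defaults to a development mode:

```python
# Hypothetical sketch of the ENVIRONMENT toggle in main.py.
import os
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

if os.getenv("ENVIRONMENT", "dev") == "dev":
    # Development mode: allow any origin so a local frontend can call the API.
    app.add_middleware(
        CORSMiddleware,
        allow_origins=["*"],
        allow_methods=["*"],
        allow_headers=["*"],
    )
```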
## Local Postgres database setup
To set up a local Postgres database:
1. Build the docker image:
```bash
make build-postgres
```
2. Start the docker container:
```bash
make run-postgres
```
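To verify the container is reachable, a quick SQLAlchemy check can help. The connection string below is an assumption, so match it to the credentials used by the Makefile targets; it also requires a driver such as `psycopg2` to be installed:

```python
# Quick connectivity check (pip install sqlalchemy psycopg2-binary).
# The connection string is a placeholder; use your container's credentials.
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://postgres:password@localhost:5432/postgres")
with engine.connect() as conn:
    print(conn.execute(text("SELECT version()")).scalar())
```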
## Running Migrations
To generate new migrations, run:
```bash
alembic revision --autogenerate -m "<your_comment>"
```
To locally verify your changes, run:
```bash
alembic upgrade head
```
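For reference, an autogenerated revision file roughly follows this shape (the table and column names here are hypothetical, and Alembic fills in the revision identifiers):

```python
# Hypothetical autogenerated revision; Alembic assigns the identifiers.
from alembic import op
import sqlalchemy as sa

revision = "abc123def456"
down_revision = None

def upgrade() -> None:
    op.add_column("chats", sa.Column("title", sa.String(), nullable=True))

def downgrade() -> None:
    op.drop_column("chats", "title")
```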
## Using Docker
1. Build an image for the FastAPI app:
```
docker build -t <your_backend_image_name> .
```
2. Generate embeddings:
Parse the data and generate the vector embeddings if the `./data` folder exists - otherwise, skip this step:
```
# Mount .env and config to reuse your local configuration, ./data to
# read the documents, and ./storage to persist the vector store.
docker run \
  --rm \
  -v $(pwd)/.env:/app/.env \
  -v $(pwd)/config:/app/config \
  -v $(pwd)/data:/app/data \
  -v $(pwd)/storage:/app/storage \
  <your_backend_image_name> \
  poetry run generate
```
3. Start the API:
```
# Mount .env and config to reuse your local configuration, and
# ./storage to read the previously generated vector store.
docker run \
  -v $(pwd)/.env:/app/.env \
  -v $(pwd)/config:/app/config \
  -v $(pwd)/storage:/app/storage \
  -p 8000:8000 \
  <your_backend_image_name>
```
## Learn More
To learn more about LlamaIndex, take a look at the following resources:
- [LlamaIndex Documentation](https://docs.llamaindex.ai) - learn about LlamaIndex.
You can check out [the LlamaIndex GitHub repository](https://github.com/run-llama/llama_index) - your feedback and contributions are welcome!