---
title: Backend
emoji: 🐢
colorFrom: pink
colorTo: blue
sdk: docker
pinned: false
license: mit
app_port: 8000
---

This is a [LlamaIndex](https://www.llamaindex.ai/) project using [FastAPI](https://fastapi.tiangolo.com/) bootstrapped with `create-llama`.

## Getting Started

First, set up the environment with Poetry:

> **Note**: This step is not needed if you are using the dev-container.

```
poetry install
poetry shell
```

Then check the parameters that have been pre-configured in the `.env` file in this directory (e.g. you might need to configure an `OPENAI_API_KEY` if you're using OpenAI as the model provider).
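For reference, setting the key in `.env` could look like this (a minimal sketch; your generated `.env` may define additional parameters):

```
# Required when using OpenAI as the model provider
OPENAI_API_KEY=<your_openai_api_key>
```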

If you are using any tools or data sources, you can update their config files in the `config` folder.

Second, generate the embeddings of the documents in the `./data` directory (if this folder exists; otherwise, skip this step):

```
poetry run generate
```

Third, run the development server:

```
python main.py
```

The example provides two different API endpoints:

1. `/api/chat` - a streaming chat endpoint
2. `/api/chat/request` - a non-streaming chat endpoint

You can test the streaming endpoint with the following curl request:

```
curl --location 'localhost:8000/api/chat' \
--header 'Content-Type: application/json' \
--data '{ "messages": [{ "role": "user", "content": "Hello" }] }'
```

And for the non-streaming endpoint, run:

```
curl --location 'localhost:8000/api/chat/request' \
--header 'Content-Type: application/json' \
--data '{ "messages": [{ "role": "user", "content": "Hello" }] }'
```

You can start editing the API endpoints by modifying `app/api/routers/chat.py`. The endpoints auto-update as you save the file. You can delete the endpoint you're not using.
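As a rough sketch of what adding your own route could look like (the actual contents of `app/api/routers/chat.py` are generated by create-llama, so the router name below is an assumption; adapt it to your file):

```python
from fastapi import APIRouter

# create-llama typically defines a router like this and mounts it
# under /api/chat from the main application.
chat_router = APIRouter()


@chat_router.get("/ping")
async def ping() -> dict:
    # Hypothetical extra endpoint for illustration; if the router is
    # mounted at /api/chat, this becomes GET /api/chat/ping.
    return {"status": "ok"}
```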

Open [http://localhost:8000/docs](http://localhost:8000/docs) with your browser to see the Swagger UI of the API.

The API allows CORS for all origins to simplify development. You can change this behavior by setting the `ENVIRONMENT` environment variable to `prod`:

```
ENVIRONMENT=prod python main.py
```
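A sketch of how such a toggle is commonly implemented with FastAPI's `CORSMiddleware` (the actual logic lives in the generated `main.py` and may differ):

```python
import os

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Allow all origins in development; leave CORS locked down in prod.
if os.getenv("ENVIRONMENT", "dev") != "prod":
    app.add_middleware(
        CORSMiddleware,
        allow_origins=["*"],
        allow_methods=["*"],
        allow_headers=["*"],
    )
```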

## Local Postgres database setup

To set up a local Postgres database:

1. Build the Docker image:

   ```
   make build-postgres
   ```

2. Start the Docker container:

   ```
   make run-postgres
   ```
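Once the container is up, make sure the application's connection settings point at it. The variable name and credentials below are assumptions for illustration only; check the generated `.env` and the `Makefile` for the actual values:

```
# Hypothetical connection string for the local Postgres container
DATABASE_URL=postgresql://postgres:password@localhost:5432/postgres
```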

## Running Migrations

To generate new migrations, run:

```
make generate-migrations migration_title="<name_for_migration>"
```
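For example, to generate a migration with a hypothetical title:

```
make generate-migrations migration_title="add_pnl_reports_table"
```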

To locally verify your changes, run:

```
make run-migrations
```

## Using Docker

1. Build an image for the FastAPI app:

   ```
   docker build -t <your_backend_image_name> .
   ```
2. Generate embeddings:

   Parse the data and generate the vector embeddings if the `./data` folder exists (otherwise, skip this step). The mounts use your local `.env` and `config` for configuration, `./data` as the document source, and `./storage` for the vector database:

   ```
   docker run \
     --rm \
     -v $(pwd)/.env:/app/.env \
     -v $(pwd)/config:/app/config \
     -v $(pwd)/data:/app/data \
     -v $(pwd)/storage:/app/storage \
     <your_backend_image_name> \
     poetry run generate
   ```
3. Start the API, mounting `.env`, `config`, and `storage` as above and publishing port 8000:

   ```
   docker run \
     -v $(pwd)/.env:/app/.env \
     -v $(pwd)/config:/app/config \
     -v $(pwd)/storage:/app/storage \
     -p 8000:8000 \
     <your_backend_image_name>
   ```

## Learn More

To learn more about LlamaIndex, take a look at the following resources:

- [LlamaIndex Documentation](https://docs.llamaindex.ai) - learn about LlamaIndex.

You can check out [the LlamaIndex GitHub repository](https://github.com/run-llama/llama_index) - your feedback and contributions are welcome!