---
title: Frontend
emoji: 🐢
colorFrom: pink
colorTo: blue
sdk: docker
pinned: false
license: mit
app_port: 3000
---

This is a [LlamaIndex](https://www.llamaindex.ai/) project using [Next.js](https://nextjs.org/) bootstrapped with `create-llama`.

## Getting Started

First, install the dependencies:

```sh
npm install
```

Second, generate embeddings for the documents in the `./data` directory (skip this step if the folder does not exist):

```sh
npm run generate
```
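Under the hood, the generate script parses the documents and persists a vector index. The following is a minimal sketch of that flow using LlamaIndexTS, assuming an `OPENAI_API_KEY` in `.env` and `./cache` as the persist directory (the scaffold's defaults; the actual script ships with the generated code):

```ts
// Sketch of what `npm run generate` typically does in a create-llama project.
// Assumes the `llamaindex` package and an OPENAI_API_KEY in `.env`; the
// `./cache` persist directory matches the folder mounted in the Docker steps.
import {
  SimpleDirectoryReader,
  VectorStoreIndex,
  storageContextFromDefaults,
} from "llamaindex";

async function generate() {
  // Load every document found under ./data.
  const documents = await new SimpleDirectoryReader().loadData({
    directoryPath: "./data",
  });

  // Persist the resulting vector index to ./cache so the app can load it later.
  const storageContext = await storageContextFromDefaults({
    persistDir: "./cache",
  });
  await VectorStoreIndex.fromDocuments(documents, { storageContext });
}

generate().catch(console.error);
```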

Third, run the development server:

```sh
npm run dev
```

Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.

You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.
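For example, replacing the page with a stripped-down component like the hypothetical one below is enough to see hot reloading in action:

```tsx
// app/page.tsx - a hypothetical minimal page; the scaffolded project ships a
// full chat UI here. Save the file and the browser updates without a reload.
export default function Home() {
  return (
    <main>
      <h1>Hello, LlamaIndex</h1>
    </main>
  );
}
```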

This project uses `next/font` to automatically optimize and load Inter, a custom Google Font.
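In practice this is the standard `next/font` pattern in `app/layout.tsx`, roughly:

```tsx
// app/layout.tsx (abridged) - the standard next/font pattern for loading Inter.
// Next.js self-hosts the font at build time, so no request goes to Google.
import { Inter } from "next/font/google";

const inter = Inter({ subsets: ["latin"] });

export default function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      <body className={inter.className}>{children}</body>
    </html>
  );
}
```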

## Using Docker

1. Build an image for the Next.js app:

   ```sh
   docker build -t <your_app_image_name> .
   ```

2. Generate embeddings:

   Parse the data and generate the vector embeddings if the `./data` folder exists (otherwise, skip this step). The `.env` and `config` mounts pull environment variables and configuration from your file system, and the `cache` mount stores the vector database there:

   ```sh
   docker run \
     --rm \
     -v $(pwd)/.env:/app/.env \
     -v $(pwd)/config:/app/config \
     -v $(pwd)/data:/app/data \
     -v $(pwd)/cache:/app/cache \
     <your_app_image_name> \
     npm run generate
   ```

3. Start the app with the same mounts, publishing port 3000 (a smoke-test sketch follows these steps):

   ```sh
   docker run \
     --rm \
     -v $(pwd)/.env:/app/.env \
     -v $(pwd)/config:/app/config \
     -v $(pwd)/cache:/app/cache \
     -p 3000:3000 \
     <your_app_image_name>
   ```
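Once the container is running, a quick way to verify the backend is to POST to the chat endpoint. The route and payload below follow the create-llama defaults (`/api/chat` with a `messages` array); verify them against your generated `app/api/chat/route.ts`:

```ts
// Hypothetical smoke test for the running container. The /api/chat path and
// the { messages: [...] } payload match the create-llama default scaffold,
// but verify them against your generated app/api/chat/route.ts.
const res = await fetch("http://localhost:3000/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    messages: [{ role: "user", content: "What is in my documents?" }],
  }),
});
console.log(res.status, await res.text());
```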

## Learn More

To learn more about LlamaIndex, take a look at the following resources:

- [LlamaIndex Documentation](https://docs.llamaindex.ai) - learn about LlamaIndex (Python features).
- [LlamaIndexTS Documentation](https://ts.llamaindex.ai) - learn about LlamaIndex (TypeScript features).

You can check out [the LlamaIndexTS GitHub repository](https://github.com/run-llama/LlamaIndexTS) - your feedback and contributions are welcome!