---
title: Frontend
emoji: 🐢
colorFrom: pink
colorTo: blue
sdk: docker
pinned: false
license: mit
app_port: 3000
---
This is a [LlamaIndex](https://www.llamaindex.ai/) project using [Next.js](https://nextjs.org/), bootstrapped with `create-llama`.
## Getting Started

First, install the dependencies:

```
npm install
```
Second, generate the embeddings of the documents in the `./data` directory (if this folder exists; otherwise, skip this step):

```
npm run generate
```
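The generate script calls out to your model provider, so it expects credentials in a `.env` file at the project root. A minimal sketch, assuming the default OpenAI provider (the exact variable set depends on how the project was scaffolded):

```
# .env: minimal sketch, assuming the default OpenAI provider.
# Variables beyond OPENAI_API_KEY depend on your scaffolding choices.
OPENAI_API_KEY=sk-...
```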
Third, run the development server:

```
npm run dev
```
Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.
You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.
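For example, a small edit like the following shows up in the browser immediately (an illustrative sketch only; the page that `create-llama` generates will look different):

```
// app/page.tsx: an illustrative edit, not the generated page itself
export default function Home() {
  return (
    <main className="flex min-h-screen items-center justify-center">
      <h1 className="text-2xl font-bold">My LlamaIndex chat app</h1>
    </main>
  );
}
```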
This project uses `next/font` to automatically optimize and load Inter, a custom Google Font.
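The usual pattern lives in the root layout. A sketch of how `next/font` loads Inter (the generated `app/layout.tsx` may differ in its details):

```
// app/layout.tsx: illustrative sketch of the standard next/font pattern
import { Inter } from "next/font/google";

const inter = Inter({ subsets: ["latin"] });

export default function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      <body className={inter.className}>{children}</body>
    </html>
  );
}
```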
## Using Docker

- Build an image for the Next.js app:

  ```
  docker build -t <your_app_image_name> .
  ```
- Generate embeddings:

  Parse the data and generate the vector embeddings if the `./data` folder exists; otherwise, skip this step:

  ```
  # Mounts: .env and config supply your environment variables and configuration,
  # data holds the source documents, and cache stores the vector database.
  docker run \
    --rm \
    -v $(pwd)/.env:/app/.env \
    -v $(pwd)/config:/app/config \
    -v $(pwd)/data:/app/data \
    -v $(pwd)/cache:/app/cache \
    <your_app_image_name> \
    npm run generate
  ```
- Start the app (a Compose equivalent is sketched after this list):

  ```
  # Mounts: .env and config supply your environment variables and configuration,
  # cache holds the vector database generated in the previous step.
  docker run \
    --rm \
    -v $(pwd)/.env:/app/.env \
    -v $(pwd)/config:/app/config \
    -v $(pwd)/cache:/app/cache \
    -p 3000:3000 \
    <your_app_image_name>
  ```
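If you prefer not to retype the mounts, the start command can also be expressed as a Compose service. A sketch, assuming the image built above (the service name and file layout are illustrative):

```
# docker-compose.yml: sketch equivalent to the "start the app" command above
services:
  frontend:
    image: <your_app_image_name>
    ports:
      - "3000:3000"
    volumes:
      - ./.env:/app/.env
      - ./config:/app/config
      - ./cache:/app/cache
```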
## Learn More
To learn more about LlamaIndex, take a look at the following resources:
- [LlamaIndex Documentation](https://docs.llamaindex.ai) - learn about LlamaIndex (Python features).
- [LlamaIndexTS Documentation](https://ts.llamaindex.ai) - learn about LlamaIndex (TypeScript features).
You can check out [the LlamaIndexTS GitHub repository](https://github.com/run-llama/LlamaIndexTS) - your feedback and contributions are welcome!