# ai-playground
The repo currently consists of:

- `forum-gpt/data-creation`: a package for data creation and manipulation
- `forum-gpt/evaluation-app`: a simple evaluation app
- `forum-gpt/training`: saved axolotl training configurations
## Setup

Use Node >= 20 with npm >= 10.

```shell
npm ci
```
## Quick start: Evaluation App

Set `OPEN_API_KEY` in your environment variables. If you don't intend to use OpenAI's GPT models, you can set an arbitrary value, e.g. `export OPEN_API_KEY=foobar`.
Configure the models to chat with in `bots.config.json`, then build and start the app:

```shell
npm run build
npm run start
```

Open the app at `localhost:5173`.
## Deploy a model in Runpod

The Evaluation App works against OpenAI's API. We recommend vLLM for deploying your own models.
A simple configuration may look like this:

- Docker Image Name: `vllm/vllm-openai:latest`
- Container Start Command: `--model mistralai/Mistral-7B-Instruct-v0.1`
  - The model name can be derived from HuggingFace.
  - In case you are using a private model, add an environment variable named `HUGGING_FACE_HUB_TOKEN` with your token to your pod.
- Expose HTTP Ports: `8000`
- Disk sizes: whatever is appropriate, e.g. 2x 50 GB
- Volume Mount Path: `/root/.cache/huggingface`
  - Recommended mount when using vLLM images to avoid downloading the model whenever the pod is restarted.
Use this Runpod link to start with a configuration for Mistral-7B-Instruct-v0.2 model. You can use "Edit Pod Template" to adjust the template before using it.
Once the pod is started for the first time, Runpod assigns it a random id, e.g. `g9q3ycbfk2yorr`.
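That id determines the pod's public HTTP endpoint. Assuming Runpod's standard HTTP proxy scheme (`https://{podId}-{port}.proxy.runpod.net`), the endpoint for the example pod above can be derived like this:

```shell
# Build the public proxy URL for a Runpod pod.
# Assumption: Runpod's https://{podId}-{port}.proxy.runpod.net proxy scheme.
POD_ID=g9q3ycbfk2yorr
PORT=8000
echo "https://${POD_ID}-${PORT}.proxy.runpod.net"
```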
Configure the pod in `bots.config.json`:

- `id` must be unique between pods
- `type: runpod`
- `modelId` must be the same as used in the Container Start Command above
- `runpodId` is the id assigned by Runpod
In case of Mistral-based models, disable the system prompt with `systemPrompt: null`, as these models don't support it.
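Putting the fields above together, a `bots.config.json` entry for the example pod might look like the following sketch (the exact schema is an assumption pieced together from the fields described here; adjust it to the app's actual format):

```json
[
  {
    "id": "mistral-7b-runpod",
    "type": "runpod",
    "modelId": "mistralai/Mistral-7B-Instruct-v0.1",
    "runpodId": "g9q3ycbfk2yorr",
    "systemPrompt": null
  }
]
```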