
ai-playground

The repo currently consists of:

Setup

Use Node >= 20 with npm >= 10.

npm ci

Quick start: Evaluation App

Set OPEN_API_KEY in your environment variables. You can set an arbitrary value such as foobar if you don't intend to use OpenAI's GPT models, e.g. export OPEN_API_KEY=foobar.

Configure the models to chat with in bots.config.json

npm run build
npm run start

Open the app at localhost:5173.

Deploy a model in Runpod

The Evaluation App works against OpenAI's API. We recommend vLLM for deploying your own models.

A simple configuration may look like this:

  • Docker Image Name: vllm/vllm-openai:latest
  • Container Start Command: --model mistralai/Mistral-7B-Instruct-v0.1
    • The model name is the model's repo id on Hugging Face
    • If you are using a private model, add an environment variable named HUGGING_FACE_HUB_TOKEN with your token to your pod
  • Expose HTTP Ports: 8000
  • Disk sizes: Whatever is appropriate, e.g. 2x 50 GB
  • Volume Mount Path: /root/.cache/huggingface
    • Recommended when using vLLM images so the model does not have to be downloaded again every time the pod restarts
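
For reference, the Runpod settings above map roughly onto the following local docker run command (a sketch, assuming Docker with GPU support; adjust the cache path and model id to your setup):

docker run --gpus all \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  vllm/vllm-openai:latest \
  --model mistralai/Mistral-7B-Instruct-v0.1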

Use this Runpod link to start with a configuration for the Mistral-7B-Instruct-v0.2 model. You can use "Edit Pod Template" to adjust the template before using it.

Once the pod is started for the first time, Runpod assigns it a random id, e.g. g9q3ycbfk2yorr.
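
To check that the pod is serving requests, you can query its OpenAI-compatible API through Runpod's HTTP proxy (a sketch; the URL uses the example pod id from above, so substitute your own):

curl https://g9q3ycbfk2yorr-8000.proxy.runpod.net/v1/models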

Configure the pod in bots.config.json:

  • id must be unique across pods
  • type: runpod
  • modelId must be the same as used in the Container Start Command above
  • runpodId is the id assigned by Runpod

For Mistral-based models, disable the system prompt with systemPrompt: null, as these models don't support one.
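
Putting the fields together, a pod entry in bots.config.json might look like the sketch below; the exact schema and surrounding structure depend on the repo, and the id value is just an example:

{
  "id": "mistral-7b-runpod",
  "type": "runpod",
  "modelId": "mistralai/Mistral-7B-Instruct-v0.1",
  "runpodId": "g9q3ycbfk2yorr",
  "systemPrompt": null
}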
