
Mistral 7B fine-tuned on the airoboros dataset!

The actual dataset is airoboros 2.2, but it appears to have been replaced on Hugging Face with 2.2.1.

Prompt Format:

```
USER: <prompt>
ASSISTANT:
```
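
For example, here is a minimal inference sketch using the `transformers` library; the exact whitespace around the prompt tags and the generation settings are assumptions, not part of the card:

```python
# Minimal sketch of inference with the USER:/ASSISTANT: prompt format.
# Generation settings (temperature, max_new_tokens) are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "teknium/airoboros-mistral2.2-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Assumed layout: user turn on one line, "ASSISTANT:" on the next.
prompt = "USER: Write a haiku about autumn.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# The model continues the text after "ASSISTANT:".
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```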

TruthfulQA:

```
hf-causal-experimental (pretrained=/home/teknium/dakota/lm-evaluation-harness/airoboros2.2-mistral/,dtype=float16), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8
```
|    Task     |Version|Metric|Value |   |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc|      1|mc1   |0.3562|±  |0.0168|
|             |       |mc2   |0.5217|±  |0.0156|
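
For reference, a rough sketch of rerunning this benchmark with the current lm-evaluation-harness Python API; the numbers above came from the older `hf-causal-experimental` backend, so task names and defaults differ slightly and results may not match exactly:

```python
# Sketch of a TruthfulQA run with lm-evaluation-harness (0.4+ API assumed).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=teknium/airoboros-mistral2.2-7b,dtype=float16",
    tasks=["truthfulqa_mc1", "truthfulqa_mc2"],  # split into two tasks in newer harness versions
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])
```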

Wandb training charts: https://wandb.ai/teknium1/airoboros-mistral-7b/runs/airoboros-mistral-1?workspace=user-teknium1

More info to come
