---
base_model: mistralai/Mistral-7B-v0.1
tags:
  - llama-2
  - instruct
  - finetune
  - alpaca
  - gpt4
  - synthetic data
  - distillation
datasets:
  - jondurbin/airoboros-2.2.1
model-index:
  - name: airoboros2.2-mistral-7b
    results: []
license: mit
language:
  - en
---

Mistral trained with the airoboros dataset!


The actual training dataset is airoboros 2.2, but it appears to have been replaced on Hugging Face with 2.2.1, which is what the metadata above links to.
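
The model should load with the standard transformers API. A minimal sketch, assuming the repo id `teknium/airoboros2.2-mistral-7b` (taken from the model-index name above) and float16 weights to match the eval below; the prompt template is an assumption based on airoboros conventions, so check the dataset card for the exact format:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id is assumed from the model-index name above; adjust if the card moves.
model_id = "teknium/airoboros2.2-mistral-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # same dtype as the TruthfulQA run below
    device_map="auto",          # requires the accelerate package
)

# Prompt format is an assumption based on airoboros-style chats;
# verify against the dataset before relying on it.
prompt = "A chat.\nUSER: What is the airoboros dataset?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```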

TruthfulQA:

```
hf-causal-experimental (pretrained=/home/teknium/dakota/lm-evaluation-harness/airoboros2.2-mistral/,dtype=float16), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8
```
|    Task     |Version|Metric|Value |   |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc|      1|mc1   |0.3562|±  |0.0168|
|             |       |mc2   |0.5217|±  |0.0156|
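
For reference, the settings above correspond to the older `main.py` interface of EleutherAI's lm-evaluation-harness. A hedged reconstruction of the invocation; the `pretrained` path is local to the author's machine, so substitute your own checkpoint directory:

```bash
# Sketch of the eval run implied by the settings above
# (lm-evaluation-harness, pre-0.4 main.py interface).
python main.py \
  --model hf-causal-experimental \
  --model_args pretrained=/home/teknium/dakota/lm-evaluation-harness/airoboros2.2-mistral/,dtype=float16 \
  --tasks truthfulqa_mc \
  --num_fewshot 0 \
  --batch_size 8
```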

More info to come.