speechless-code-mistral-7b-v2.0

Code: https://github.com/uukuguy/speechless
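
For quick reference, a minimal inference sketch using the transformers library (the prompt and generation settings are illustrative assumptions, not a recommended configuration):

```python
# Minimal sketch: load the model and generate a code completion.
# Assumes a recent transformers release and a CUDA-capable GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "uukuguy/speechless-code-mistral-7b-v2.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Write a Python function that reverses a singly linked list."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```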

The following datasets were used to fine-tune mistralai/Mistral-7B-v0.1 to improve the model's reasoning and planning abilities (a filtering sketch follows the list).

Total: 343,370 samples (603 MB)

  • jondurbin/airoboros-2.2: filtered to categories related to coding, reasoning, and planning; 21,923 samples.
  • Open-Orca/OpenOrca: filtered to the 'cot' category of the 1M GPT4 split; 62,973 samples.
  • garage-bAInd/Open-Platypus: used in full (100%); 22,760 samples.
  • WizardLM/WizardLM_evol_instruct_V2_196k: coding conversation part; 30,077 samples.
  • TokenBender/python_eval_instruct_51k: samples with "python" in the output; 39,596 samples.
  • OpenHermes: samples with a code block in the output; 18,969 samples.
  • CollectiveCognition-2023-09-27: 200 samples.
  • ise-uiuc/Magicoder-OSS-Instruct-75K: 75,197 samples.
  • meta-math/MetaMathQA: 20% of the 395K samples; 71,706 samples.
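
A sketch of the kind of per-dataset filtering described above, using the datasets library. The filter predicates and column names are assumptions based on the public dataset cards, not the authors' published pipeline:

```python
from datasets import load_dataset

# Open-Orca/OpenOrca: keep only chain-of-thought samples; ids in the public
# dataset are prefixed by source, e.g. "cot.73425" (prefix format assumed).
orca = load_dataset("Open-Orca/OpenOrca", split="train")
orca_cot = orca.filter(lambda x: x["id"].startswith("cot."))

# TokenBender/python_eval_instruct_51k: keep samples whose output mentions
# "python" ("output" column name assumed).
py51k = load_dataset("TokenBender/python_eval_instruct_51k", split="train")
py51k_python = py51k.filter(lambda x: "python" in x["output"].lower())
```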

HumanEval

| Metric | Value |
| --- | --- |
| humaneval-python | |

Big Code Models Leaderboard

| Model | Score |
| --- | --- |
| CodeLlama-34B-Python | 53.29 |
| CodeLlama-34B-Instruct | 50.79 |
| CodeLlama-13B-Instruct | 50.6 |
| CodeLlama-34B | 45.11 |
| CodeLlama-13B-Python | 42.89 |
| CodeLlama-13B | 35.07 |
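
A sketch of a pass@1 HumanEval run with OpenAI's human-eval package (https://github.com/openai/human-eval), using greedy decoding and one sample per task; the prompting and post-processing behind the leaderboard numbers above are assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from human_eval.data import read_problems, write_jsonl

model_id = "uukuguy/speechless-code-mistral-7b-v2.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

def complete(prompt: str) -> str:
    # Greedy completion; return only the generated continuation.
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=384, do_sample=False)
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)

samples = [
    dict(task_id=tid, completion=complete(problem["prompt"]))
    for tid, problem in read_problems().items()
]
write_jsonl("samples.jsonl", samples)
# Score afterwards with: evaluate_functional_correctness samples.jsonl
```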

lm-evaluation-harness (Open LLM Leaderboard)

| Metric | Value |
| --- | --- |
| ARC | |
| HellaSwag | |
| MMLU | |
| TruthfulQA | |
| Average | |
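
A sketch of reproducing these metrics with EleutherAI's lm-evaluation-harness (v0.4-style Python API); the task names and settings here are assumptions and may differ from the leaderboard's exact configuration:

```python
import lm_eval

# Evaluate the model on the Open LLM Leaderboard task set.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=uukuguy/speechless-code-mistral-7b-v2.0,dtype=float16",
    tasks=["arc_challenge", "hellaswag", "mmlu", "truthfulqa_mc2"],
)
print(results["results"])
```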