speechless-mistral-hermes-code-7b

Code: https://github.com/uukuguy/speechless

The following datasets were used to fine-tune mistralai/Mistral-7B-v0.1 to improve the model's reasoning and planning abilities.

986k samples in total:

  • teknium/OpenHermes-2.5
  • TokenBender/python_eval_instruct_51k
  • Spider
  • codefuse-ai/Evol-instruction-66k
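
As a rough illustration of how chat-style samples from these datasets can be flattened into the Alpaca format the model expects, here is a sketch assuming an OpenHermes-style `conversations` schema (the field names and helper are assumptions for illustration, not taken from the actual training code):

```python
# Sketch: flatten an OpenHermes-style chat sample into one
# Alpaca-format training string. The "conversations" layout with
# from/value keys is an assumed dataset schema, not confirmed here.

def chat_to_alpaca(sample):
    system = ""
    instruction = ""
    response = ""
    for turn in sample["conversations"]:
        if turn["from"] == "system":
            system = turn["value"]
        elif turn["from"] == "human":
            instruction = turn["value"]
        elif turn["from"] == "gpt":
            response = turn["value"]
    parts = []
    if system:
        parts.append(system)
    parts.append(f"### Instruction:\n{instruction}")
    parts.append(f"### Response:\n{response}")
    return "\n\n".join(parts)

sample = {
    "conversations": [
        {"from": "system", "value": "You are an intelligent programming assistant."},
        {"from": "human", "value": "Implement a linked list in C++"},
        {"from": "gpt", "value": "struct Node { int data; Node* next; };"},
    ]
}
print(chat_to_alpaca(sample))
```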

How to Prompt the Model

This model accepts the Alpaca instruction format.

For example:

You are an intelligent programming assistant.

### Instruction:
Implement a linked list in C++

### Response:

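A minimal sketch of assembling this prompt string in Python (the helper name and the commented generation call are illustrative, not part of the released code; only the prompt layout comes from the card):

```python
# Sketch: build an Alpaca-format prompt for this model.
# The helper name is an assumption; the layout matches the example above.

def build_alpaca_prompt(instruction: str,
                        system: str = "You are an intelligent programming assistant.") -> str:
    return f"{system}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

prompt = build_alpaca_prompt("Implement a linked list in C++")
print(prompt)

# With transformers (typical usage, assumed rather than documented here):
# from transformers import pipeline
# pipe = pipeline("text-generation", model="uukuguy/speechless-mistral-hermes-code-7b")
# print(pipe(prompt, max_new_tokens=256)[0]["generated_text"])
```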
HumanEval

| Metric           | Value |
|------------------|-------|
| humaneval-python |       |

lm-evaluation-harness

{'ARC (acc_norm)': ,
'HellaSwag (acc_norm)': ,
'MMLU (acc)': ,
'TruthfulQA (mc2)': ,
'Winogrande (acc)': ,
'GSM8K (acc)': ,
'DROP (f1)': ,
'Open LLM Score': }

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|-------|
| Avg.                |       |
| ARC (25-shot)       |       |
| HellaSwag (10-shot) |       |
| MMLU (5-shot)       |       |
| TruthfulQA (0-shot) |       |
| Winogrande (5-shot) |       |
| GSM8K (5-shot)      |       |
| DROP (3-shot)       |       |
Model size: 7.24B params (FP16, Safetensors)
