speechless-thoughts-mistral-7b-v1.0

speechless-thoughts-mistral-7b-v1.0 is fine-tuned as a baseline for speechless-sparsetral-16x7b-MoE, using the following hyperparameters:

learning_rate=2e-4
lora_r=64
lora_alpha=16
model_max_length=8192
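
For reference, these values map onto a PEFT LoraConfig roughly as sketched below. The target modules and dropout are assumptions not stated in this card; typical Mistral LoRA fine-tunes target the attention projections.

from peft import LoraConfig

# Minimal sketch of a LoRA config matching the hyperparameters above.
# target_modules and lora_dropout are assumptions (not from this card).
lora_config = LoraConfig(
    r=64,                # lora_r above
    lora_alpha=16,       # lora_alpha above
    lora_dropout=0.05,   # assumed, not from the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)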

The fine-tuning data (speechless-thoughts-252k) is drawn from the following datasets (a sketch of this kind of filtering follows the list):

  • jondurbin/airoboros-2.2: filtered to categories related to coding, reasoning, and planning; 23,462 samples.
  • Open-Orca/OpenOrca: filtered to the 'cot' category of the 1M GPT-4 subset; 74,440 samples.
  • garage-bAInd/Open-Platypus: used in full; 24,926 samples.
  • WizardLM/WizardLM_evol_instruct_V2_196k: coding-conversation portion; 30,185 samples.
  • TokenBender/python_eval_instruct_51k: samples with "python" in the output; 40,309 samples.
  • Spider: 8,659 samples.
  • codefuse-ai/Evol-Instruction-66k: used in full; 66,862 samples.
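
As an illustration of the selection above, a category filter over one of these datasets might look like the following. The exact pipeline behind speechless-thoughts-252k is not published, so treat this as an assumption-laden sketch; the "id" column is from the public Open-Orca/OpenOrca schema, but the filter predicate is an assumption.

from datasets import load_dataset

# Hypothetical reconstruction of the OpenOrca 'cot' filter above.
ds = load_dataset("Open-Orca/OpenOrca", split="train")
cot = ds.filter(lambda row: row["id"].startswith("cot."))  # chain-of-thought rows
print(len(cot))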

Alpaca Prompt Format

### Instruction:
<instruction>

### Response:
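
As a small convenience (not part of the original card), prompt assembly can be wrapped in a helper; the system preamble below mirrors the one used in the Usage section:

def format_prompt(instruction: str) -> str:
    # Assemble an Alpaca-style prompt; the model's answer follows "### Response:".
    system = "Below is an instruction that describes a task.\nWrite a response that appropriately completes the request."
    return f"{system}\n\n### Instruction:\n{instruction}\n\n### Response:"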

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name_or_path = "uukuguy/speechless-thoughts-mistral-7b-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=True).eval()

# Build an Alpaca-style prompt.
system = "Below is an instruction that describes a task.\nWrite a response that appropriately completes the request."
instruction = "Write a Python function that checks whether a string is a palindrome."  # example instruction
prompt = f"{system}\n\n### Instruction:\n{instruction}\n\n### Response:"

# Tokenize on the model's device and sample a completion.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
pred = model.generate(**inputs, max_length=4096, do_sample=True, top_k=50, top_p=0.99, temperature=0.9, num_return_sequences=1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
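
For interactive use, transformers can also stream tokens as they are generated. This variant is a sketch that reuses the model, tokenizer, and inputs defined above:

from transformers import TextStreamer

# Print the completion token by token instead of waiting for the full output.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**inputs, streamer=streamer, max_length=4096, do_sample=True, top_p=0.99, temperature=0.9)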

HumanEval

Metric Value
humaneval-python
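
The card leaves this score blank. To reproduce a humaneval-python number, one option is OpenAI's human-eval harness, sketched below; generate_completion is a hypothetical helper that wraps the Usage code above and returns only the code continuation for a given prompt.

from human_eval.data import read_problems, write_jsonl

# Sketch: score the model with OpenAI's human-eval package (pip install human-eval).
problems = read_problems()
samples = [
    {"task_id": task_id, "completion": generate_completion(problems[task_id]["prompt"])}
    for task_id in problems
]
write_jsonl("samples.jsonl", samples)
# Then run: evaluate_functional_correctness samples.jsonl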

lm-evaluation-harness

ARC (acc_norm):
HellaSwag (acc_norm):
MMLU (acc):
TruthfulQA (mc2):
Winogrande (acc):
GSM8K (acc):
DROP (f1):
Open LLM Score:
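
These metrics come from EleutherAI's lm-evaluation-harness. A minimal invocation of its Python API (v0.4-style; the exact harness version and settings behind this card are not stated) would look roughly like this, using the leaderboard's 25-shot ARC setting:

import lm_eval

# Sketch of an lm-evaluation-harness run for one of the metrics above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=uukuguy/speechless-thoughts-mistral-7b-v1.0,dtype=float16",
    tasks=["arc_challenge"],
    num_fewshot=25,  # matches the 25-shot ARC setting below
    batch_size=8,
)
print(results["results"]["arc_challenge"])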

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

Metric                 Value
Avg.                   59.36
ARC (25-shot)          58.53
HellaSwag (10-shot)    81.25
MMLU (5-shot)          54.59
TruthfulQA (0-shot)    48.09
Winogrande (5-shot)    78.14
GSM8K (5-shot)         35.18