---
base_model: Locutusque/TinyMistral-248M
datasets:
  - Skylion007/openwebtext
inference: false
language:
  - en
license: apache-2.0
model_creator: Locutusque
model_name: TinyMistral-248M
pipeline_tag: text-generation
quantized_by: afrideva
tags:
  - gguf
  - ggml
  - quantized
  - q2_k
  - q3_k_m
  - q4_k_m
  - q5_k_m
  - q6_k
  - q8_0
---

# Locutusque/TinyMistral-248M-GGUF

Quantized GGUF model files for TinyMistral-248M from Locutusque
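These GGUF files can be run locally with llama.cpp or its Python bindings. A minimal sketch with llama-cpp-python follows; the filename is hypothetical, so check the repository's file listing for the exact name of the quant you download (q2_k, q3_k_m, q4_k_m, q5_k_m, q6_k, or q8_0).

```python
# Minimal sketch of loading one of these GGUF quants with llama-cpp-python.
# The model_path below is a hypothetical filename; use the actual file
# downloaded from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="tinymistral-248m.q4_k_m.gguf",  # hypothetical filename
    n_ctx=2048,  # the original card suggests up to ~32,768 may be possible
)

# Note: the base model is intended for fine-tuning, so raw completions
# from these quants will be rough.
out = llm("Once upon a time", max_tokens=32)
print(out["choices"][0]["text"])
```

Smaller quants (q2_k, q3_k_m) trade output quality for a smaller file and lower memory use; q8_0 is closest to the original weights.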

## Original Model Card

This is a pre-trained language model based on the Mistral 7B architecture, scaled down to approximately 248 million parameters. So far it has been trained on 2,120,000 examples, and the batch size will remain low for future epochs. The model is not intended for direct use but for fine-tuning on a downstream task. It should support a context length of around 32,768 tokens.

During evaluation on InstructMix, this model achieved an average perplexity score of 6.3. More training sessions are planned for this model.

### Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                | Value |
|-----------------------|-------|
| Avg.                  | 24.18 |
| ARC (25-shot)         | 20.82 |
| HellaSwag (10-shot)   | 26.98 |
| MMLU (5-shot)         | 23.11 |
| TruthfulQA (0-shot)   | 46.89 |
| Winogrande (5-shot)   | 50.75 |
| GSM8K (5-shot)        | 0.0   |
| DROP (3-shot)         | 0.74  |