smol_llama-101M-GQA

A small decoder-only model with 101M parameters (total). This is the first version of the model.

  • 768 hidden size, 6 layers
  • GQA (24 query heads, 8 key-value heads), context length 1024
  • trained from scratch (see the config sketch after this list)
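
To make the shape above concrete, here is a minimal sketch of a corresponding transformers `LlamaConfig`. It only sets the fields listed above; anything not listed (vocabulary size, MLP intermediate size, etc.) is left at library defaults and is an assumption, not necessarily what the released checkpoint uses.

```python
from transformers import LlamaConfig

# Illustrative config matching the bullet points above.
# Fields not stated in the list (vocab size, MLP intermediate size, ...)
# are left at transformers' defaults and may differ from the actual model.
config = LlamaConfig(
    hidden_size=768,               # hidden size
    num_hidden_layers=6,           # 6 layers
    num_attention_heads=24,        # 24 query heads
    num_key_value_heads=8,         # 8 key-value heads -> grouped-query attention
    max_position_embeddings=1024,  # context length 1024
)
print(config)
```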

Features

Some cool facts about this model:

  • this model was pretrained on one GPU for 5 compute-days. You can DIY pretrain too!
  • 0% of the training data (to our knowledge) comes from OpenAI synthetic generation

Notes

This checkpoint is the 'raw' pre-trained model and has not been tuned to a more specific task. It should be fine-tuned before use in most cases.
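
For reference, here is a minimal sketch of loading this raw checkpoint with the Hugging Face transformers library; the prompt and sampling settings are arbitrary examples, not recommendations.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BEE-spoke-data/smol_llama-101M-GQA"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The raw checkpoint only does next-token prediction; expect generic
# continuations rather than instruction following.
inputs = tokenizer("The best way to learn programming is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```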

Checkpoints & Links

  • smol-er 81M parameter checkpoint with in/out embeddings tied: here
  • Fine-tuned on pypi to generate Python code - link
  • For the chat version of this model, please see here

Citation Info

If you find this experiment useful and would like to add an entry to your .bib file, it would make us happy.

@misc{beespoke_data_2023,
    author    = {Peter Szemraj and Vincent Haines},
    title     = {smol_llama-101M-GQA (Revision 9c9c090)},
    year      = {2023},
    url       = {https://huggingface.co/BEE-spoke-data/smol_llama-101M-GQA},
    doi       = {10.57967/hf/1440},
    publisher = {Hugging Face}
}

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric              | Value |
|---------------------|-------|
| Avg.                | 25.32 |
| ARC (25-shot)       | 23.55 |
| HellaSwag (10-shot) | 28.77 |
| MMLU (5-shot)       | 24.24 |
| TruthfulQA (0-shot) | 45.76 |
| Winogrande (5-shot) | 50.67 |
| GSM8K (5-shot)      | 0.83  |
| DROP (3-shot)       | 3.39  |
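
As a quick sanity check, assuming the reported average is the unweighted mean of the seven per-task scores above:

```python
# Per-task scores from the table above.
scores = {
    "ARC (25-shot)": 23.55,
    "HellaSwag (10-shot)": 28.77,
    "MMLU (5-shot)": 24.24,
    "TruthfulQA (0-shot)": 45.76,
    "Winogrande (5-shot)": 50.67,
    "GSM8K (5-shot)": 0.83,
    "DROP (3-shot)": 3.39,
}
avg = sum(scores.values()) / len(scores)
print(f"{avg:.2f}")  # 25.32, matching the reported Avg.
```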