Pico Decoder Medium

pico-decoder-medium is a 181M parameter model in the pico-decoder suite, balancing scale and analyzability. Built with pico-train and instrumented with pico-analyze, it enables detailed studies of layer-wise learning behavior during language model pretraining.

🔧 Model Details

| Field             | Value                                  |
|-------------------|----------------------------------------|
| Architecture      | Decoder-only transformer (LLaMA-style) |
| Parameters        | 181M                                   |
| Layers            | 12                                     |
| Hidden Size       | 768                                    |
| Feed Forward Size | 3072                                   |
| Attention Heads   | 12                                     |
| Key/Value Heads   | 4                                      |
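As a sanity check, the architecture table above roughly reproduces the stated 181M parameter count. The sketch below assumes details the card does not state: a vocabulary size of 50,280 (an OLMo-style tokenizer), a SwiGLU feed-forward block, untied input/output embeddings, and bias-free linear layers.

```python
# Hypothetical parameter-count estimate from the architecture table.
# Vocabulary size, SwiGLU FFN, and untied embeddings are assumptions,
# not confirmed by this model card.
hidden = 768
ffn = 3072
layers = 12
heads = 12
kv_heads = 4
head_dim = hidden // heads          # 64
vocab = 50_280                      # assumed (OLMo-style tokenizer)

# Attention: Q and O projections are hidden x hidden; K and V are
# smaller under grouped-query attention (4 KV heads).
attn = 2 * hidden * hidden + 2 * hidden * (kv_heads * head_dim)
# SwiGLU feed-forward uses three projection matrices.
ff = 3 * hidden * ffn
per_layer = attn + ff

embeddings = 2 * vocab * hidden     # untied input + output embeddings (assumed)
total = layers * per_layer + embeddings
print(f"{total / 1e6:.0f}M parameters")  # prints: 181M parameters
```

Under these assumptions the transformer stack contributes about 104M parameters and the embeddings about 77M, landing on the stated 181M.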

📚 Training

  • Dataset: pretokenized-dolma
  • Training steps: 200,000
  • Batch size: 1024
  • Sequence length: 2048
  • Optimizer: AdamW
  • Learning rate schedule: Linear decay with warmup
  • Compute: 16 A100-SXM4-80GB GPUs
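The training configuration above implies a total token budget, and the learning rate schedule can be sketched as linear warmup followed by linear decay. The peak learning rate and warmup length below are illustrative placeholders, not values from the card.

```python
# Token budget implied by the configuration above.
steps = 200_000
batch_size = 1024    # sequences per step
seq_len = 2048
tokens = steps * batch_size * seq_len
print(f"{tokens / 1e9:.0f}B tokens")  # prints: 419B tokens

def lr_at(step, peak_lr=3e-4, warmup=2_500, total=200_000):
    """Linear warmup to peak_lr, then linear decay to zero.

    peak_lr and warmup are assumed for illustration; only the
    schedule shape (linear decay with warmup) comes from the card.
    """
    if step < warmup:
        return peak_lr * step / warmup
    return peak_lr * (total - step) / (total - warmup)
```

For example, `lr_at(0)` is 0, `lr_at(2_500)` is the peak rate, and `lr_at(200_000)` returns to 0.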

📈 Evaluation and Analysis

This model supports fine-grained analysis with pico-analyze, a companion tool that lets researchers study how learning unfolds over training, even at small scale.

We also report the model's perplexity on the pico-paloma-tinsy dataset.
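For reference, perplexity is the exponential of the mean per-token cross-entropy loss. The helper below is a minimal sketch of that relationship, not the actual pico-analyze evaluation code.

```python
import math

def perplexity(total_nll: float, num_tokens: int) -> float:
    """Perplexity from a summed negative log-likelihood (in nats)
    over num_tokens tokens: exp(mean cross-entropy)."""
    return math.exp(total_nll / num_tokens)

# A mean loss of 2.0 nats/token corresponds to ppl = e^2 ≈ 7.39.
ppl = perplexity(2.0 * 1000, 1000)
```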

📄 Citation

@software{pico2025,
    author = {Diehl Martinez, Richard},
    title = {Pico: A Lightweight Framework for Studying Language Model Learning Dynamics},
    year = {2025},
    url = {https://github.com/pico-lm}
}