Pico Decoder Large

pico-decoder-large is the largest model (570M) in the current pico-decoder suite. It is a full-scale research model designed for in-depth interpretability studies of transformer learning. Trained with pico-train and fully compatible with pico-analyze, it offers rich checkpointing and analytical insight into large-scale LM behavior.

πŸ”§ Model Details

| Field | Value |
|---|---|
| Architecture | Decoder-only transformer (LLaMA-style) |
| Parameters | 570M |
| Layers | 12 |
| Hidden Size | 1536 |
| Feed Forward Size | 6144 |
| Attention Heads | 12 |
| Key/Value Heads | 4 |
| Tensor Type | F32 |
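As a sanity check, the 570M parameter count can be roughly reproduced from the hyperparameters in the table. The sketch below assumes a LLaMA-style block (grouped-query attention, gated SwiGLU MLP, no biases) with untied input/output embeddings and a vocabulary of about 50k tokens; the vocabulary size and embedding tying are assumptions, not values stated in this card.

```python
# Rough parameter estimate for pico-decoder-large from the table above.
# Assumptions (not stated in the card): vocab_size ~= 50304, untied
# input/output embeddings, SwiGLU MLP (gate + up + down projections),
# no bias terms, grouped-query attention.

vocab_size = 50_304          # assumed
hidden = 1536
layers = 12
ffn = 6144
heads = 12
kv_heads = 4
head_dim = hidden // heads   # 128
kv_dim = kv_heads * head_dim # 512 (grouped-query K/V width)

embeddings = 2 * vocab_size * hidden               # input + output (untied)
attn = 2 * hidden * hidden + 2 * hidden * kv_dim   # Q, O + K, V projections
mlp = 3 * hidden * ffn                             # gate, up, down
total = embeddings + layers * (attn + mlp)

print(f"{total / 1e6:.0f}M parameters")  # → 570M parameters
```

Under these assumptions the estimate lands within a million parameters of the reported 570M, which suggests the table is internally consistent.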

πŸ“š Training

  • Dataset: pretokenized-dolma
  • Training steps: 200,000
  • Batch size: 1024
  • Sequence length: 2048
  • Optimizer: AdamW
  • Learning rate schedule: Linear decay with warmup
  • Compute: 16 A100-SXM4-80GB GPUs
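The "linear decay with warmup" schedule above can be sketched as a simple function of the step number. The warmup length and peak learning rate below are illustrative placeholders — the card does not specify them — while `max_steps` matches the 200,000 training steps listed.

```python
def lr_at(step, max_steps=200_000, warmup_steps=2_500, peak_lr=3e-4):
    """Linear warmup to peak_lr, then linear decay to zero.

    warmup_steps and peak_lr are illustrative assumptions; only
    max_steps (200,000) comes from the card.
    """
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    # Linear decay from peak_lr at warmup_steps down to 0 at max_steps.
    progress = (step - warmup_steps) / (max_steps - warmup_steps)
    return peak_lr * (1.0 - progress)

# lr_at(0) == 0.0; lr_at(2_500) == peak_lr; lr_at(200_000) == 0.0
```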

πŸ“ˆ Evaluation and Analysis

This model supports fine-grained analysis using pico-analyze, which enables researchers to study how learning unfolds over the course of training across model scales.

We also evaluate the model's perplexity on the pico-paloma-tinsy dataset.
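Perplexity here is the exponential of the mean per-token cross-entropy (negative log-likelihood) over the evaluation set. A minimal, framework-independent sketch of that reduction:

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean negative log-likelihood per token).

    token_nlls: iterable of per-token NLLs in nats, e.g. collected from
    a language model's cross-entropy loss over an evaluation corpus.
    """
    nlls = list(token_nlls)
    return math.exp(sum(nlls) / len(nlls))

# A uniform model over a 50k-token vocabulary has NLL ln(50000) per
# token, so its perplexity is ~50000 (lower is better).
print(perplexity([math.log(50_000)] * 4))
```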

πŸ“„ Citation

```bibtex
@software{pico2025,
    author = {Diehl Martinez, Richard},
    title = {Pico: A Lightweight Framework for Studying Language Model Learning Dynamics},
    year = {2025},
    url = {https://github.com/pico-lm}
}
```