# 🚀 **Pico Train**

Pico Train is a lightweight framework for training language models—from tiny-scale (~1M parameters) to mid-scale (~1B parameters)—with built-in rich checkpointing that captures activations, gradients, and model states, enabling detailed learning dynamics research.

Our **suite of pre-trained models** is already publicly available on our [Hugging Face organization](https://huggingface.co/pico-lm), and a dedicated companion library for advanced analysis—[**pico-analyze**](https://github.com/pico-lm/pico-analyze)—is fully released for deeper checkpoint studies.

> For a **detailed run-through**, check out the **full tutorial** on our website at [picolm.io](https://picolm.io).

---

## **Key Features**

1. **Pico Decoder: LLAMA-style Transformer Architecture**
   - RMSNorm, RoPE, multi-head self-attention with KV-cache, and SwiGLU activations (a minimal sketch of two of these blocks follows this list)
   - Currently supports the **pico-decoder** model, with future expansions planned (pico-diffusion, pico-statespace, etc.)

2. **Comprehensive Checkpoints**
   - Saves model states, optimizer states, and training metadata
   - Enriched with **activation and gradient** snapshots for interpretability

3. **Focused Scale Range**
   - Optimized to train models from **1M to 1B parameters**, where learning dynamics research is most viable

4. **Clean, Pre-tokenized Data**
   - Uses a pre-tokenized, pre-shuffled version of [Dolma](https://allenai.org/dolma) that we make available on [Hugging Face](https://huggingface.co/datasets/pico-lm/pretokenized-dolma)
   - Facilitates training models using identical data for **consistency** and **comparability**

5. **Research Ready**
   - Minimal, well-documented code suitable for **forking and tailoring**
   - Logs essential metrics (e.g., perplexity) throughout training
   - Works seamlessly with [pico-analyze](https://github.com/pico-lm/pico-analyze) for advanced post-training interpretation
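
To make the architecture concrete, here is a minimal, self-contained sketch of two of the building blocks named above (RMSNorm and SwiGLU). It is illustrative only; the real implementations live in `src/model/pico_decoder.py`, and the class names and dimensions here are assumptions.

```python
# Illustrative sketch of two pico-decoder building blocks; not the actual
# code in src/model/pico_decoder.py, and the names/dimensions are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RMSNorm(nn.Module):
    """Root-mean-square normalization: rescale features by their RMS."""

    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.weight * (x * rms)


class SwiGLU(nn.Module):
    """SwiGLU feed-forward block: a SiLU-gated MLP, as in LLAMA-style models."""

    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.w_gate = nn.Linear(dim, hidden_dim, bias=False)
        self.w_up = nn.Linear(dim, hidden_dim, bias=False)
        self.w_down = nn.Linear(hidden_dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w_down(F.silu(self.w_gate(x)) * self.w_up(x))
```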

---

## **Training Philosophy**

All models in the Pico suite (both pre-trained and user-trained):

- Employ **identical architectures** and **optimizer settings**
- **Share** the same data order and tokens
- Automatically log **rich checkpoint data** (including activations and gradients)
- Facilitate **direct cross-scale comparisons**

This uniformity means you can isolate model size as the primary variable, giving you clearer insights into **how model capacity affects learning**.

---

## **Resources**

- **Pre-trained Models** (1M–1B parameters), publicly hosted on [Hugging Face](https://huggingface.co/pico-lm)
- **Pre-tokenized Datasets** for straightforward streaming-based training (see the streaming sketch after this list)
- **Extensive Checkpoints** logging activation and gradient snapshots
- **Evaluation Metrics** (perplexity and more) tracked at each checkpoint
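
The pre-tokenized data can be consumed without downloading it in full. Below is a rough sketch of streaming it with the Hugging Face `datasets` library; the exact fields each example contains are not asserted here, so inspect the first record.

```python
# Sketch: stream the pre-tokenized, pre-shuffled Dolma data referenced above.
# Column names are not asserted here; print the first example to inspect them.
from datasets import load_dataset

dataset = load_dataset(
    "pico-lm/pretokenized-dolma",
    split="train",
    streaming=True,  # iterate over shards without a full local copy
)

for example in dataset.take(1):
    print(example.keys())
```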

---

## **Core Components**

- **Pico-Decoder Model**
  - LLAMA-style auto-regressive transformer
  - RMSNorm
  - RoPE (Rotary Positional Embeddings)
  - Multi-head attention with KV-cache
  - SwiGLU activation

  *Future plans include additional architectures like pico-diffusion and pico-statespace.*

- **Training & Checkpointing**
  - Automatic storage of model and optimizer states
  - Periodic hooks for saving **learning dynamics** (activations, gradients); see the hook sketch after this list
  - Optional logging to Weights & Biases

- **Config-Driven Setup**
  - Specify architecture, optimizer, dataset, and logging settings in YAML
  - Straightforward to extend or modify
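
The learning-dynamics hooks mentioned above can be pictured with standard PyTorch forward hooks. The sketch below shows the general mechanism only; it is not the actual checkpointing code in `src/checkpointing`.

```python
# Generic sketch of capturing activations with forward hooks; pico-train's
# real logic lives in src/checkpointing and may differ in the details.
import torch
import torch.nn as nn

activations: dict[str, torch.Tensor] = {}

def make_hook(name: str):
    def hook(module, inputs, output):
        activations[name] = output.detach().cpu()  # snapshot for later analysis
    return hook

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        module.register_forward_hook(make_hook(name))

model(torch.randn(2, 8))  # one forward pass fills the dict
print({k: tuple(v.shape) for k, v in activations.items()})
```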

---

## **Quick Start**

1. **Clone the Repository**

   ```bash
   git clone https://github.com/pico-lm/pico-train
   cd pico-train
   ```

2. **Configure Environment**

   Create a `.env` file at the root with your Hugging Face and Weights & Biases tokens:

   ```bash
   export HF_TOKEN=your_huggingface_token
   export WANDB_API_KEY=your_wandb_key
   ```

3. **Install Dependencies**

   ```bash
   source setup.sh
   ```

   This script checks your environment, installs necessary tools, and sets up a Poetry virtual environment.

4. **Train Your Model Suite**

   - Edit (or create) a config file (e.g., `configs/demo.yaml`) to specify your architecture and training preferences.
   - Then run:

     ```bash
     poetry run train --config_path configs/demo.yaml
     ```

   - This launches training, automatically checkpointing states and saving learning dynamics data.

5. **Explore Checkpoints** (a loading sketch follows this list)
   - By default, checkpoints are stored under `runs/YOUR_RUN_NAME/checkpoints/`.
   - Each checkpoint contains:
     - **Model state** (PyTorch + Hugging Face formats)
     - **Optimizer state**
     - **Gradients and activations** for interpretability
     - **Evaluation logs** (e.g., perplexity) and metrics
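
As a starting point for poking at a checkpoint, the sketch below loads a saved model state with plain PyTorch. The step directory and file name are hypothetical placeholders; list your own run's checkpoint folder to see the actual layout.

```python
# Sketch: inspect a saved model state with plain PyTorch. The step directory
# and file name below are hypothetical; list your checkpoint folder first.
from pathlib import Path

import torch

ckpt_dir = Path("runs/YOUR_RUN_NAME/checkpoints")
print(sorted(p.name for p in ckpt_dir.iterdir()))  # see what was actually saved

state_dict = torch.load(ckpt_dir / "step_1000" / "model.pt",  # hypothetical names
                        map_location="cpu")
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))
```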

---

## **Repository Structure**

- **`src/model/pico_decoder.py`**
  - Core LLAMA-style decoder implementation (attention, RMSNorm, RoPE, etc.)

- **`src/training/trainer.py`**
  - Main training loop
  - Manages distributed and multi-node settings
  - Collects/logs metrics
  - Orchestrates checkpoint saving

- **`src/checkpointing`**
  - Logic for saving model states, gradients, and activations
  - Tools for uploading checkpoints to Hugging Face

- **`src/config`**
  - Flexible dataclass-based config system (model and training hyperparameters, checkpointing, logging); see the sketch after this list

- **`configs/demo.yaml`**
  - Example config with default values for quick experimentation
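
To illustrate the config-driven pattern, here is a rough sketch of how YAML values might override dataclass defaults. The field names are invented for the example and do not reflect the actual schema in `src/config` or `configs/demo.yaml`.

```python
# Sketch of a dataclass-driven config pattern in the spirit of src/config.
# Field names are invented for illustration, not the real Pico schema.
from dataclasses import dataclass, field

import yaml  # PyYAML


@dataclass
class ModelConfig:
    d_model: int = 512
    n_layers: int = 8
    n_heads: int = 8


@dataclass
class TrainingConfig:
    learning_rate: float = 3e-4
    batch_size: int = 32
    model: ModelConfig = field(default_factory=ModelConfig)


def load_config(path: str) -> TrainingConfig:
    """Read a YAML file and override the dataclass defaults with its values."""
    with open(path) as f:
        overrides = yaml.safe_load(f) or {}
    model = ModelConfig(**overrides.pop("model", {}))
    return TrainingConfig(model=model, **overrides)
```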

---

## **Advanced Analysis with Pico Analyze**

For deeper checkpoint analysis—comparing gradients, tracking representation shifts, measuring sparsity—use our companion repository [**pico-analyze**](https://github.com/pico-lm/pico-analyze). It automatically processes **pico-train** checkpoints and applies advanced metrics like **CKA**, **PWCCA**, **Gini**, **Hoyer**, and more to reveal **how** your models learn over time.
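
As a flavor of what such metrics compute, the snippet below implements the standard linear CKA score between two activation matrices. It is a generic reference implementation for orientation, not pico-analyze's code.

```python
# Generic linear CKA between two activation matrices of shape
# (n_samples, n_features); pico-analyze ships its own implementation.
import numpy as np


def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    x = x - x.mean(axis=0, keepdims=True)  # center each feature column
    y = y - y.mean(axis=0, keepdims=True)
    cross = np.linalg.norm(y.T @ x, "fro") ** 2
    return float(cross / (np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro")))


a = np.random.randn(256, 64)
print(linear_cka(a, a))                         # identical representations score 1.0
print(linear_cka(a, np.random.randn(256, 64)))  # unrelated ones score near 0
```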

---

## **License**

Pico is open-source under the [Apache License 2.0](LICENSE).

---

## **Citation**

If you use **Pico** in your research, please cite:

```bibtex
@software{pico2025,
  author = {Diehl Martinez, Richard},
  title  = {Pico: A Lightweight Framework for Studying Language Model Learning Dynamics},
  year   = {2025},
  url    = {https://github.com/pico-lm}
}
```

**Happy Training!** For more information and tutorials, visit our website at [picolm.io](https://picolm.io).