davanstrien (HF Staff) and Claude Opus 4.5 committed
Commit 3eb3121 · 1 Parent(s): 1cdb76a

Update README for streaming training script


🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

Files changed (1)
  1. README.md +57 -305
README.md CHANGED
@@ -3,345 +3,97 @@ viewer: false
  tags:
  - uv-script
  - training
- - vlm
  - unsloth
- - iconclass
  - fine-tuning
  ---

- # VLM Training with Unsloth

- Fine-tune Vision-Language Models efficiently using [Unsloth](https://github.com/unslothai/unsloth) - get 2x faster training with lower memory usage!

- ## 🎨 Example: Iconclass VLM

- This directory contains scripts for fine-tuning VLMs to generate [Iconclass](https://iconclass.org) metadata codes from artwork images. Iconclass is a hierarchical classification system used in art history and cultural heritage.

- ### What You'll Train

- Given an artwork image, the model outputs structured JSON:
-
- ```json
- {
-   "iconclass-codes": ["25H213", "25H216", "25I"]
- }
- ```
-
- Where codes represent:
- - `25H213`: river
- - `25H216`: waterfall
- - `25I`: city-view with man-made constructions
-
- ## 🚀 Quick Start
-
- ### Option 1: Run on HF Jobs (Recommended)
-
- ```bash
- # Set your HF token
- export HF_TOKEN=your_token_here
-
- # Submit training job
- python submit_training_job.py
- ```
-
- That's it! Your model will train on cloud GPUs and automatically push to the Hub.
-
- ### Option 2: Run Locally (Requires GPU)

  ```bash
- # Install UV (if not already installed)
- curl -LsSf https://astral.sh/uv/install.sh | sh
-
- # Run training
- uv run iconclass-vlm-sft.py \
-   --base-model Qwen/Qwen3-VL-8B-Instruct \
-   --dataset davanstrien/iconclass-vlm-sft \
-   --output-model your-username/iconclass-vlm
- ```
-
- ### Option 3: Quick Test (100 steps)
-
- ```bash
- uv run iconclass-vlm-sft.py \
-   --base-model Qwen/Qwen3-VL-8B-Instruct \
-   --dataset davanstrien/iconclass-vlm-sft \
-   --output-model your-username/iconclass-vlm-test \
-   --max-steps 100
- ```
-
- ## 📋 Requirements
-
- ### For HF Jobs
- - Hugging Face account with Jobs access
- - HF token with write permissions
-
- ### For Local Training
- - CUDA-capable GPU (A100 recommended, A10G works)
- - 40GB+ VRAM for 8B models (with 4-bit quantization)
- - Python 3.11+
- - [UV](https://docs.astral.sh/uv/) installed
-
- ## 🎛️ Configuration

- ### Quick Config via Python Script
-
- Edit `submit_training_job.py`:
-
- ```python
- # Model and dataset
- BASE_MODEL = "Qwen/Qwen3-VL-8B-Instruct"
- DATASET = "davanstrien/iconclass-vlm-sft"
- OUTPUT_MODEL = "your-username/iconclass-vlm"
-
- # Training settings
- BATCH_SIZE = 2
- GRADIENT_ACCUMULATION = 8
- LEARNING_RATE = 2e-5
- MAX_STEPS = None  # Auto-calculate for 1 epoch
-
- # LoRA settings
- LORA_R = 16
- LORA_ALPHA = 32
-
- # GPU
- GPU_FLAVOR = "a100-large"  # or "a100", "a10g-large"
  ```

- ### Full CLI Options

- ```bash
- uv run iconclass-vlm-sft.py --help
- ```

- Key arguments:

  | Argument | Default | Description |
  |----------|---------|-------------|
- | `--base-model` | Required | Base VLM (e.g., Qwen/Qwen3-VL-8B-Instruct) |
- | `--dataset` | Required | Training dataset on HF Hub |
- | `--output-model` | Required | Where to push your model |
- | `--lora-r` | 16 | LoRA rank (higher = more capacity) |
- | `--lora-alpha` | 32 | LoRA alpha (usually 2×r) |
- | `--learning-rate` | 2e-5 | Learning rate |
- | `--batch-size` | 2 | Per-device batch size |
- | `--gradient-accumulation` | 8 | Gradient accumulation steps |
- | `--max-steps` | Auto | Total training steps |
- | `--num-epochs` | 1.0 | Epochs (if max-steps not set) |
-
- ## 🏗️ Architecture
-
- ### What Makes This Fast?
-
- 1. **Unsloth Optimizations**: 2x faster training through:
-    - Optimized CUDA kernels
-    - Better memory management
-    - Efficient gradient checkpointing
-
- 2. **4-bit Quantization**:
-    - Loads model in 4-bit precision
-    - Dramatically reduces VRAM usage
-    - Minimal impact on quality with LoRA
-
- 3. **LoRA (Low-Rank Adaptation)**:
-    - Only trains 0.1-1% of parameters
-    - Much faster than full fine-tuning
-    - Easy to merge back or share
-
- ### Training Flow
-
- ```
- Dataset (HF Hub)
-
- FastVisionModel.from_pretrained (4-bit)
-
- Apply LoRA adapters
-
- SFTTrainer (Unsloth-optimized)
-
- Push to Hub with model card
- ```
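For orientation, the flow above maps onto a handful of Unsloth/TRL calls. The snippet below is a minimal sketch based on Unsloth's published vision fine-tuning recipe, not the actual contents of `iconclass-vlm-sft.py`; the collator choice, LoRA arguments, and `SFTConfig` fields are assumptions.

```python
from unsloth import FastVisionModel
from unsloth.trainer import UnslothVisionDataCollator
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

# Dataset from the Hub (assumed to already be in chat-message format with images)
dataset = load_dataset("davanstrien/iconclass-vlm-sft", split="train")

# Load the base VLM in 4-bit
model, tokenizer = FastVisionModel.from_pretrained(
    "Qwen/Qwen3-VL-8B-Instruct",
    load_in_4bit=True,
)

# Attach LoRA adapters (r / alpha mirror the CLI defaults above)
model = FastVisionModel.get_peft_model(model, r=16, lora_alpha=32)

# Unsloth-optimized supervised fine-tuning
FastVisionModel.for_training(model)
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    data_collator=UnslothVisionDataCollator(model, tokenizer),
    args=SFTConfig(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        learning_rate=2e-5,
        num_train_epochs=1,
        remove_unused_columns=False,                  # keep image columns
        dataset_text_field="",                        # collator builds the text
        dataset_kwargs={"skip_prepare_dataset": True},
        max_seq_length=2048,
        output_dir="outputs",
    ),
)
trainer.train()

# Push the LoRA adapters and tokenizer to the Hub
model.push_to_hub("your-username/iconclass-vlm")
tokenizer.push_to_hub("your-username/iconclass-vlm")
```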
-
- ## 📊 Expected Performance
-
- ### Training Time (Qwen3-VL-8B on A100)
-
- | Dataset Size | Batch Config | Time | Cost (est.) |
- |--------------|--------------|------|-------------|
- | 44K samples | BS=2, GA=8 | ~4h | $16 |
- | 10K samples | BS=2, GA=8 | ~1h | $4 |
- | 1K samples | BS=2, GA=8 | ~10min | $0.70 |
-
- *BS = Batch Size, GA = Gradient Accumulation*
-
- ### GPU Requirements

- | Model Size | Min GPU | Recommended | VRAM Usage |
- |------------|---------|-------------|------------|
- | 3B-4B | A10G | A100 | ~20GB |
- | 7B-8B | A100 | A100 | ~35GB |
- | 13B+ | A100 (80GB) | A100 (80GB) | ~60GB |

- ## 🔍 Monitoring Your Job

- ### Via CLI

- ```bash
- # Check status
- hfjobs status your-job-id
-
- # Stream logs
- hfjobs logs your-job-id --follow
-
- # List all jobs
- hfjobs list
- ```
-
- ### Via Python
-
- ```python
- from huggingface_hub import HfApi
-
- api = HfApi()
- job = api.get_job("your-job-id")
-
- print(job.status)
- print(job.logs())
- ```
-
- ### Via Web
-
- Your job URL: `https://huggingface.co/jobs/your-username/your-job-id`
-
- ## 🎯 Using Your Fine-Tuned Model
-
- ```python
- from unsloth import FastVisionModel
- from PIL import Image
-
- # Load your model
- model, tokenizer = FastVisionModel.from_pretrained(
-     model_name="your-username/iconclass-vlm",
-     load_in_4bit=True,
-     max_seq_length=2048,
- )
- FastVisionModel.for_inference(model)
-
- # Prepare input
- image = Image.open("artwork.jpg")
- prompt = "Extract ICONCLASS labels for this image."
-
- messages = [
-     {
-         "role": "user",
-         "content": [
-             {"type": "image"},
-             {"type": "text", "text": prompt},
-         ],
-     }
- ]
-
- # Apply chat template
- inputs = tokenizer.apply_chat_template(
-     messages,
-     add_generation_prompt=True,
-     return_tensors="pt",
- ).to("cuda")
-
- # Generate
- outputs = model.generate(
-     **inputs,
-     max_new_tokens=256,
-     temperature=0.7,
-     top_p=0.9,
- )
-
- response = tokenizer.decode(outputs[0], skip_special_tokens=True)
- print(response)
- # {"iconclass-codes": ["31A235", "31A24(+1)", "61B(+54)"]}
- ```
-
- ## 📦 Files in This Directory
-
- | File | Purpose |
- |------|---------|
- | `iconclass-vlm-sft.py` | Main training script (UV script) |
- | `submit_training_job.py` | Helper to submit HF Jobs |
- | `README.md` | This file |
-
- ## 🛠️ Troubleshooting
-
- ### Out of Memory?
-
- Reduce batch size or increase gradient accumulation:
- ```bash
- --batch-size 1 --gradient-accumulation 16
- ```
-
- ### Training Too Slow?
-
- Increase batch size if you have VRAM:
- ```bash
- --batch-size 4 --gradient-accumulation 4
- ```
-
- ### Model Not Learning?

- Try adjusting learning rate:
- ```bash
- --learning-rate 5e-5  # Higher
- --learning-rate 1e-5  # Lower
- ```

- Or increase LoRA rank:
  ```bash
- --lora-r 32 --lora-alpha 64
- ```

- ### Jobs Failing?

- Check logs:
- ```bash
- hfjobs logs your-job-id
  ```

- Common issues:
- - HF_TOKEN not set correctly
- - Output model repo doesn't exist (create it first)
- - GPU out of memory (reduce batch size)

- ## 🔗 Related Resources

- - **Unsloth**: https://github.com/unslothai/unsloth
- - **Unsloth Docs**: https://docs.unsloth.ai/
- - **TRL**: https://github.com/huggingface/trl
- - **HF Jobs**: https://huggingface.co/docs/hub/spaces-sdks-jobs
- - **UV**: https://docs.astral.sh/uv/
- - **Iconclass**: https://iconclass.org
- - **Blog Post**: https://danielvanstrien.xyz/posts/2025/iconclass-vlm-sft/
-
- ## 💡 Tips
-
- 1. **Start Small**: Test with `--max-steps 100` before full training
- 2. **Use Wandb**: Add `--report-to wandb` for better monitoring
- 3. **Save Often**: Use `--save-steps 50` for checkpoints
- 4. **Multiple GPUs**: Script automatically uses all available GPUs
- 5. **Resume Training**: Load from checkpoint with `--resume-from-checkpoint`
-
- ## 📝 Citation

- If you use this training setup, please cite:

- ```bibtex
- @misc{iconclass-vlm-training,
-   author = {Daniel van Strien},
-   title = {Efficient VLM Fine-tuning with Unsloth for Art History},
-   year = {2025},
-   publisher = {GitHub},
-   howpublished = {\url{https://github.com/davanstrien/uv-scripts}}
- }
- ```

  ---

- Made with 🦥 [Unsloth](https://github.com/unslothai/unsloth)
- Powered by 🤖 [UV Scripts](https://huggingface.co/uv-scripts)
  tags:
  - uv-script
  - training
  - unsloth
+ - streaming
  - fine-tuning
+ - llm
  ---

+ # Streaming LLM Training with Unsloth

+ Train on massive datasets without downloading anything - data streams directly from the Hub.

+ ## 🦥 Latin LLM Example

+ Teaches Qwen Latin using 1.47M texts from FineWeb-2, streamed directly from the Hub.

+ **Blog post:** [Train on Massive Datasets Without Downloading](https://danielvanstrien.xyz/posts/2026/hf-streaming-unsloth/train-massive-datasets-without-downloading.html)

+ ### Quick Start

  ```bash
+ # Run on HF Jobs (recommended - 2x faster streaming)
+ hf jobs uv run latin-llm-streaming.py \
+   --flavor a100-large \
+   --timeout 2h \
+   --secrets HF_TOKEN \
+   -- \
+   --max-steps 500 \
+   --output-repo your-username/qwen-latin

+ # Run locally
+ uv run latin-llm-streaming.py \
+   --max-steps 100 \
+   --output-repo your-username/qwen-latin-test
  ```

+ ### Why Streaming?

+ - **No disk space needed** - train on TB-scale datasets without downloading
+ - **Works everywhere** - Colab, Kaggle, HF Jobs
+ - **Any language** - FineWeb-2 has 90+ languages available
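For reference, the streaming behaviour described above is 🤗 Datasets' `streaming=True` mode: shards are fetched over HTTP as you iterate, so nothing is materialised on disk. A minimal sketch (the FineWeb-2 subset name `lat_Latn` for Latin is an assumption; check the dataset card for the exact config):

```python
from datasets import load_dataset

# Stream FineWeb-2 Latin directly from the Hub - nothing is written to disk.
dataset = load_dataset(
    "HuggingFaceFW/fineweb-2",
    name="lat_Latn",   # assumed subset name for Latin; see the dataset card
    split="train",
    streaming=True,
)

# IterableDataset: shuffle via a buffer and peek at a few examples
for example in dataset.shuffle(seed=42, buffer_size=1_000).take(3):
    print(example["text"][:100])
```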
 
+ ### Options

  | Argument | Default | Description |
  |----------|---------|-------------|
+ | `--base-model` | `unsloth/Qwen3-0.6B-Base-unsloth-bnb-4bit` | Base model |
+ | `--max-steps` | 500 | Training steps |
+ | `--batch-size` | 4 | Per-device batch size |
+ | `--gradient-accumulation` | 4 | Gradient accumulation steps |
+ | `--learning-rate` | 2e-4 | Learning rate |
+ | `--output-repo` | Required | Where to push model |
+ | `--wandb-project` | None | Wandb project for logging |

+ ### Performance

+ | Environment | Speed | Why |
+ |-------------|-------|-----|
+ | Colab A100 | ~0.36 it/s | Network latency |
+ | HF Jobs A100 | ~0.74 it/s | Co-located compute |

+ Streaming is ~2x faster on HF Jobs because compute is co-located with the data.

+ ---

+ ## 🚀 Running on HF Jobs

  ```bash
+ # Basic usage
+ hf jobs uv run latin-llm-streaming.py --flavor a100-large --secrets HF_TOKEN

+ # With timeout for long training
+ hf jobs uv run latin-llm-streaming.py --flavor a100-large --timeout 2h --secrets HF_TOKEN

+ # Pass script arguments after --
+ hf jobs uv run latin-llm-streaming.py --flavor a100-large -- --max-steps 1000 --batch-size 8
  ```
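The same submission can also be scripted from Python via the `huggingface_hub` Jobs API. The sketch below assumes a `run_uv_job` helper and its parameters as described in the HF Jobs docs linked under Resources; treat it as illustrative rather than authoritative.

```python
from huggingface_hub import run_uv_job  # Jobs API (assumed import; see the HF Jobs docs)

# Hypothetical Python equivalent of the `hf jobs uv run` commands above
job = run_uv_job(
    "latin-llm-streaming.py",
    script_args=["--max-steps", "500", "--output-repo", "your-username/qwen-latin"],
    flavor="a100-large",
    secrets={"HF_TOKEN": "hf_..."},  # or read the token from your environment
)
print(job.id)  # follow progress with: hf jobs logs <job-id>
```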
 
+ ### Available Flavors

+ - `a100-large` - 80GB VRAM (recommended)
+ - `a10g-large` - 24GB VRAM
+ - `t4-small` - 16GB VRAM

+ ---

+ ## 🔗 Resources

+ - [Unsloth](https://github.com/unslothai/unsloth) - 2x faster training
+ - [HF Jobs Docs](https://huggingface.co/docs/huggingface_hub/guides/jobs)
+ - [Datasets Streaming](https://huggingface.co/docs/datasets/stream)
+ - [Streaming Datasets Blog](https://huggingface.co/blog/streaming-datasets)

  ---

+ Made with 🦥 [Unsloth](https://github.com/unslothai/unsloth)