---
viewer: false
tags:
- uv-script
- training
- unsloth
- streaming
- fine-tuning
- llm
---
|
|
|
|
|
# Streaming LLM Training with Unsloth |
|
|
|
|
|
Train on massive datasets without downloading anything - data streams directly from the Hub. |
|
|
|
|
|
## 🦥 Latin LLM Example |
|
|
|
|
|
Teaches Qwen Latin using 1.47M texts from FineWeb-2, streamed directly from the Hub. |
|
|
|
|
|
**Blog post:** [Train on Massive Datasets Without Downloading](https://danielvanstrien.xyz/posts/2026/hf-streaming-unsloth/train-massive-datasets-without-downloading.html) |
|
|
|
|
|
### Quick Start |
|
|
|
|
|
```bash
# Run on HF Jobs (recommended - 2x faster streaming)
hf jobs uv run latin-llm-streaming.py \
  --flavor a100-large \
  --timeout 2h \
  --secrets HF_TOKEN \
  -- \
  --max-steps 500 \
  --output-repo your-username/qwen-latin

# Run locally
uv run latin-llm-streaming.py \
  --max-steps 100 \
  --output-repo your-username/qwen-latin-test
```
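Plain `uv run` works here because the script declares its dependencies inline using PEP 723 metadata. A minimal sketch of what such a header looks like - the exact dependency list in `latin-llm-streaming.py` may differ:

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "unsloth",
#     "datasets",
#     "trl",
# ]
# ///
```

`uv` reads this block, builds an isolated environment with those packages, and runs the script in it - no manual `pip install` step.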
|
|
|
|
|
### Why Streaming? |
|
|
|
|
|
- **No disk space needed** - train on TB-scale datasets without downloading
- **Works everywhere** - Colab, Kaggle, HF Jobs
- **Any language** - FineWeb-2 has 90+ languages available
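A streamed dataset can't be shuffled up front, since the full corpus never sits on disk; streaming loaders instead approximate shuffling with a fixed-size buffer. A pure-Python sketch of that idea (the function name and parameters are illustrative, not the `datasets` API):

```python
import random

def buffered_shuffle(stream, buffer_size, seed=42):
    """Approximate shuffling for a stream that never fits in memory:
    keep a fixed-size buffer and emit a random buffered element as
    each new item arrives."""
    rng = random.Random(seed)
    buffer = []
    for item in stream:
        if len(buffer) < buffer_size:
            buffer.append(item)  # fill the buffer first
        else:
            # swap the new item into a random slot, yield what was there
            idx = rng.randrange(buffer_size)
            buffer[idx], item = item, buffer[idx]
            yield item
    # drain the remaining buffer in random order
    rng.shuffle(buffer)
    yield from buffer

shuffled = list(buffered_shuffle(range(10), buffer_size=4))
```

A larger buffer gives better shuffling at the cost of memory - the same trade-off as the `buffer_size` argument to `IterableDataset.shuffle` in 🤗 Datasets.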
|
|
|
|
|
### Options |
|
|
|
|
|
| Argument | Default | Description |
|----------|---------|-------------|
| `--base-model` | `unsloth/Qwen3-0.6B-Base-unsloth-bnb-4bit` | Base model |
| `--max-steps` | 500 | Training steps |
| `--batch-size` | 4 | Per-device batch size |
| `--gradient-accumulation` | 4 | Gradient accumulation steps |
| `--learning-rate` | 2e-4 | Learning rate |
| `--output-repo` | Required | Where to push model |
| `--wandb-project` | None | Wandb project for logging |
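With the defaults above, the quantities that matter for sizing a run follow directly:

```python
# Defaults from the options table
batch_size = 4   # --batch-size
grad_accum = 4   # --gradient-accumulation
max_steps = 500  # --max-steps

effective_batch = batch_size * grad_accum      # sequences per optimizer step
total_sequences = effective_batch * max_steps  # sequences seen over a full run
print(effective_batch, total_sequences)  # 16 8000
```

So the default run makes 500 optimizer steps over 8,000 streamed sequences - a small slice of the 1.47M available texts, which is why longer runs just need a larger `--max-steps`.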
|
|
|
|
|
### Performance |
|
|
|
|
|
| Environment | Speed | Why |
|-------------|-------|-----|
| Colab A100 | ~0.36 it/s | Network latency |
| HF Jobs A100 | ~0.74 it/s | Co-located compute |
|
|
|
|
|
Streaming is ~2x faster on HF Jobs because compute is co-located with the data. |
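The ~2x figure follows directly from the measured throughput in the table:

```python
colab_its = 0.36    # it/s, Colab A100 (from the table above)
hf_jobs_its = 0.74  # it/s, HF Jobs A100
speedup = hf_jobs_its / colab_its
print(f"{speedup:.2f}x")  # 2.06x
```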
|
|
|
|
|
--- |
|
|
|
|
|
## 🚀 Running on HF Jobs |
|
|
|
|
|
```bash
# Basic usage
hf jobs uv run latin-llm-streaming.py --flavor a100-large --secrets HF_TOKEN

# With timeout for long training
hf jobs uv run latin-llm-streaming.py --flavor a100-large --timeout 2h --secrets HF_TOKEN

# Pass script arguments after --
hf jobs uv run latin-llm-streaming.py --flavor a100-large -- --max-steps 1000 --batch-size 8
```
|
|
|
|
|
### Available Flavors |
|
|
|
|
|
- `a100-large` - 80GB VRAM (recommended)
- `a10g-large` - 24GB VRAM
- `t4-small` - 16GB VRAM
|
|
|
|
|
--- |
|
|
|
|
|
## 🔗 Resources |
|
|
|
|
|
- [Unsloth](https://github.com/unslothai/unsloth) - 2x faster training
- [HF Jobs Docs](https://huggingface.co/docs/huggingface_hub/guides/jobs)
- [Datasets Streaming](https://huggingface.co/docs/datasets/stream)
- [Streaming Datasets Blog](https://huggingface.co/blog/streaming-datasets)
|
|
|
|
|
--- |
|
|
|
|
|
Made with 🦥 [Unsloth](https://github.com/unslothai/unsloth) |
|
|
|