---
viewer: false
tags:
- uv-script
- training
- unsloth
- streaming
- fine-tuning
- llm
---

# Streaming LLM Training with Unsloth

Train on massive datasets without downloading anything - data streams directly from the Hub.

## 🦥 Latin LLM Example

Teaches Qwen Latin by fine-tuning on 1.47M texts from FineWeb-2's Latin subset, streamed directly from the Hub.

**Blog post:** [Train on Massive Datasets Without Downloading](https://danielvanstrien.xyz/posts/2026/hf-streaming-unsloth/train-massive-datasets-without-downloading.html)

### Quick Start

```bash
# Run on HF Jobs (recommended - 2x faster streaming)
hf jobs uv run latin-llm-streaming.py \
  --flavor a100-large \
  --timeout 2h \
  --secrets HF_TOKEN \
  -- \
  --max-steps 500 \
  --output-repo your-username/qwen-latin

# Run locally
uv run latin-llm-streaming.py \
  --max-steps 100 \
  --output-repo your-username/qwen-latin-test
```

### Why Streaming?

- **No disk space needed** - train on TB-scale datasets without downloading
- **Works everywhere** - Colab, Kaggle, HF Jobs
- **Any language** - FineWeb-2 has 90+ languages available
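
The streaming behaviour comes from 🤗 Datasets' `streaming=True` mode (see the Datasets Streaming link in Resources). Here is a minimal sketch of the idea, separate from the training script; the `lat_Latn` config name is an assumption for illustration, so check the FineWeb-2 dataset card for exact config names:

```python
# Minimal streaming sketch (illustration only, not latin-llm-streaming.py).
from datasets import load_dataset

# Assumption: the Latin subset of FineWeb-2 uses the config name "lat_Latn".
dataset = load_dataset(
    "HuggingFaceFW/fineweb-2",
    name="lat_Latn",
    split="train",
    streaming=True,  # nothing is downloaded up front; shards are fetched lazily
)

# Iterating pulls data over the network on demand instead of touching local disk.
for i, example in enumerate(dataset):
    print(example["text"][:80])
    if i == 2:
        break
```

Because nothing is materialised locally, the same pattern scales to the full multi-terabyte FineWeb-2 corpus.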

### Options

| Argument | Default | Description |
|----------|---------|-------------|
| `--base-model` | `unsloth/Qwen3-0.6B-Base-unsloth-bnb-4bit` | Base model |
| `--max-steps` | 500 | Training steps |
| `--batch-size` | 4 | Per-device batch size |
| `--gradient-accumulation` | 4 | Gradient accumulation steps |
| `--learning-rate` | 2e-4 | Learning rate |
| `--output-repo` | Required | Hub repo to push the trained model to |
| `--wandb-project` | None | Wandb project for logging |
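
For orientation, here is roughly how options like these usually map onto a TRL `SFTConfig` (Unsloth fine-tuning scripts commonly drive TRL's `SFTTrainer` with such a config). The field names below are standard `transformers`/TRL arguments, but the exact wiring inside the script may differ:

```python
# Illustration only: a plausible mapping from the CLI options above to TRL's SFTConfig.
# The actual argument handling in latin-llm-streaming.py may differ.
import argparse

from trl import SFTConfig

parser = argparse.ArgumentParser()
parser.add_argument("--batch-size", type=int, default=4)
parser.add_argument("--gradient-accumulation", type=int, default=4)
parser.add_argument("--learning-rate", type=float, default=2e-4)
parser.add_argument("--max-steps", type=int, default=500)
parser.add_argument("--output-repo", required=True)
parser.add_argument("--wandb-project", default=None)
args = parser.parse_args()

config = SFTConfig(
    output_dir="outputs",
    per_device_train_batch_size=args.batch_size,
    gradient_accumulation_steps=args.gradient_accumulation,
    learning_rate=args.learning_rate,
    max_steps=args.max_steps,
    push_to_hub=True,                 # upload the result to --output-repo
    hub_model_id=args.output_repo,
    report_to="wandb" if args.wandb_project else "none",
)
```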

### Performance

| Environment | Speed | Why |
|-------------|-------|-----|
| Colab A100 | ~0.36 it/s | Network latency |
| HF Jobs A100 | ~0.74 it/s | Co-located compute |

Streaming is ~2x faster on HF Jobs because compute is co-located with the data.

---

## 🚀 Running on HF Jobs

```bash
# Basic usage
hf jobs uv run latin-llm-streaming.py --flavor a100-large --secrets HF_TOKEN

# With timeout for long training
hf jobs uv run latin-llm-streaming.py --flavor a100-large --timeout 2h --secrets HF_TOKEN

# Pass script arguments after --
hf jobs uv run latin-llm-streaming.py --flavor a100-large -- --max-steps 1000 --batch-size 8
```

### Available Flavors

- `a100-large` - 80GB VRAM (recommended)
- `a10g-large` - 24GB VRAM
- `t4-small` - 16GB VRAM
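
The default base model, `unsloth/Qwen3-0.6B-Base-unsloth-bnb-4bit`, is a pre-quantised bnb-4bit checkpoint, so QLoRA-style fine-tuning of the 0.6B model should fit on any of these flavors; larger GPUs mainly buy throughput and headroom for bigger batches. A rough loading sketch with Unsloth (the sequence length and LoRA settings are illustrative assumptions, not taken from the script):

```python
# Sketch: load the default 4-bit base model and attach LoRA adapters with Unsloth.
# max_seq_length and the LoRA hyperparameters below are illustrative assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3-0.6B-Base-unsloth-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,  # checkpoint is already bnb-4bit; keeps VRAM usage low
)

# Only the small LoRA adapter weights are trained, not the full model.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```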

---

## 🔗 Resources

- [Unsloth](https://github.com/unslothai/unsloth) - 2x faster training
- [HF Jobs Docs](https://huggingface.co/docs/huggingface_hub/guides/jobs)
- [Datasets Streaming](https://huggingface.co/docs/datasets/stream)
- [Streaming Datasets Blog](https://huggingface.co/blog/streaming-datasets)

---

Made with 🦥 [Unsloth](https://github.com/unslothai/unsloth)