evalstate committed · Commit 6527162 · 0 Parent(s)

trl v1 (missing hf merges)
trl/SKILL.md ADDED
---
name: trl
description: This skill should be used when users want to train or fine-tune language models using TRL (Transformer Reinforcement Learning) on Hugging Face Jobs infrastructure. Covers SFT, DPO, GRPO, KTO, reward modeling, and PPO training methods, plus GGUF conversion for local deployment. Includes guidance on the TRL Jobs package, UV scripts with PEP 723 format, dataset preparation and validation, hardware selection, cost estimation, Trackio monitoring, Hub authentication, and model persistence. Should be invoked for tasks involving cloud GPU training, GGUF conversion, or when users mention training on Hugging Face Jobs without local GPU setup.
license: Complete terms in LICENSE.txt
---

# TRL Training on Hugging Face Jobs

## Overview

Train language models using TRL (Transformer Reinforcement Learning) on fully managed Hugging Face infrastructure. No local GPU setup is required; models train on cloud GPUs and results are automatically saved to the Hugging Face Hub.

**TRL provides multiple training methods:**
- **SFT** (Supervised Fine-Tuning) - Standard instruction tuning
- **DPO** (Direct Preference Optimization) - Alignment from preference data
- **GRPO** (Group Relative Policy Optimization) - Online RL training
- **KTO** (Kahneman-Tversky Optimization) - Preference tuning without paired data
- **Reward Modeling** - Train reward models for RLHF
- **PPO** (Proximal Policy Optimization) - Classic RLHF method

**For detailed TRL method documentation:**
```python
hf_doc_search("your query", product="trl")
hf_doc_fetch("https://huggingface.co/docs/trl/sft_trainer")  # SFT
hf_doc_fetch("https://huggingface.co/docs/trl/dpo_trainer")  # DPO
# etc.
```

**See also:** `references/training_methods.md` for method overviews and selection guidance

## When to Use This Skill

Use this skill when users want to:
- Fine-tune language models on cloud GPUs without local infrastructure
- Train with TRL methods (SFT, DPO, GRPO, KTO, etc.)
- Run training jobs on Hugging Face Jobs infrastructure
- Convert trained models to GGUF for local deployment (Ollama, LM Studio, llama.cpp)
- Ensure trained models are permanently saved to the Hub
- Use modern workflows with optimized defaults

## Key Directives

When assisting with training jobs:

1. **Submit jobs directly with inline scripts** - The `script` parameter accepts Python code directly. Do NOT save to local files unless the user explicitly requests it. Pass the script content as a string to `hf_jobs()`.

2. **Always include Trackio** - Every training script should include Trackio for real-time monitoring. Use the example scripts in `scripts/` as templates.

3. **Provide job details after submission** - After submitting, provide the job ID, monitoring URL, and estimated time, and note that the user can request status checks later.

4. **Use example scripts as templates** - Reference `scripts/train_sft_example.py`, `scripts/train_dpo_example.py`, etc. as starting points.

## Prerequisites Checklist

Before starting any training job, verify:

### ✅ **Account & Authentication**
- Hugging Face Account with [Pro](https://hf.co/pro), [Team](https://hf.co/enterprise), or [Enterprise](https://hf.co/enterprise) plan (Jobs require a paid plan)
- Authenticated login: Check with `mcp__huggingface__hf_whoami()`
- **HF_TOKEN for Hub push** ⚠️ CRITICAL - The training environment is ephemeral; the job must push to the Hub or ALL training results are lost
- Token must have write permissions and is automatically available as `$HF_TOKEN` in job secrets

### ✅ **Dataset Requirements**
- Dataset must exist on the Hub or be loadable via `datasets.load_dataset()`
- Format must match the training method (SFT: "messages"/text/prompt-completion; DPO: chosen/rejected; GRPO: prompt-only)
- Use `scripts/validate_dataset.py` to verify the format (a minimal sketch of such a check follows this list) or `hf_doc_fetch("https://huggingface.co/docs/trl/dataset_formats")` for the complete reference
- Size appropriate for hardware (Demo: 50-100 examples on t4-small; Production: 1K-10K+ on a10g-large/a100-large)
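
A minimal sketch of the kind of column check `scripts/validate_dataset.py` performs (illustrative only; the real script may do more):

```python
# Rough format check for common TRL dataset layouts (illustrative)
from datasets import load_dataset

example = load_dataset("trl-lib/Capybara", split="train")[0]

if "messages" in example:
    # Conversational SFT: list of {"role": ..., "content": ...} dicts
    assert all("role" in m and "content" in m for m in example["messages"])
    print("Conversational SFT format")
elif {"prompt", "completion"} <= example.keys():
    print("Prompt-completion format")
elif {"chosen", "rejected"} <= example.keys():
    print("Preference format (DPO / reward modeling)")
else:
    print(f"Unrecognized columns: {list(example.keys())}")
```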

### ⚠️ **Critical Settings**
- **Timeout must exceed expected training time** - The default 30 minutes is TOO SHORT for most training. Minimum recommended: 1-2 hours. The job fails and loses all progress if the timeout is exceeded.
- **Hub push must be enabled** - Config: `push_to_hub=True`, `hub_model_id="username/model-name"`; Job: `secrets={"HF_TOKEN": "$HF_TOKEN"}`

## Asynchronous Job Guidelines

**⚠️ IMPORTANT: Training jobs run asynchronously and can take hours**

### Action Required

**When user requests training:**
1. **Create the training script** with Trackio included (use `scripts/train_sft_example.py` as a template)
2. **Submit immediately** using the `hf_jobs()` MCP tool with the script content inline - don't save to a file unless the user requests it
3. **Report submission** with job ID, monitoring URL, and estimated time
4. **Wait for user** to request status checks - don't poll automatically

### Ground Rules
- **Jobs run in background** - Submission returns immediately; training continues independently
- **Initial logs delayed** - It can take 30-60 seconds for logs to appear
- **User checks status** - Wait for the user to request status updates
- **Avoid polling** - Check logs only on user request; provide monitoring links instead

### After Submission

**Provide to user:**
- ✅ Job ID and monitoring URL
- ✅ Expected completion time
- ✅ Trackio dashboard URL
- ✅ Note that the user can request status checks later

**Example Response:**
```
✅ Job submitted successfully!

Job ID: abc123xyz
Monitor: https://huggingface.co/jobs/username/abc123xyz

Expected time: ~2 hours
Estimated cost: ~$10

The job is running in the background. Ask me to check status/logs when ready!
```

## Quick Start: Three Approaches

### Approach 1: TRL Jobs Package (Easiest; Recommended for Beginners)

The `trl-jobs` package provides optimized defaults and one-liner training:

```bash
# Install (users only, not needed for this environment)
pip install trl-jobs

# Train with SFT (simplest possible)
trl-jobs sft \
  --model_name Qwen/Qwen2.5-0.5B \
  --dataset_name trl-lib/Capybara
```

**Benefits:** Pre-configured settings, automatic Trackio integration, automatic Hub push, one-line commands
**When to use:** User is new to training, standard scenarios, quick experimentation
**Repository:** https://github.com/huggingface/trl-jobs

### Approach 2: UV Scripts (Recommended for Custom Training)

UV scripts use PEP 723 inline dependencies for clean, self-contained training. **Submit script content directly inline:**

```python
hf_jobs("uv", {
    "script": """
# /// script
# dependencies = ["trl>=0.12.0", "peft>=0.7.0", "trackio"]
# ///

from datasets import load_dataset
from peft import LoraConfig
from trl import SFTTrainer, SFTConfig
import trackio

trackio.init(project="my-training", space_id="username/my-dashboard")

dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,
    peft_config=LoraConfig(r=16, lora_alpha=32),
    args=SFTConfig(
        output_dir="my-model",
        push_to_hub=True,
        hub_model_id="username/my-model",
        num_train_epochs=3,
        report_to="trackio",
    )
)

trainer.train()
trainer.push_to_hub()
trackio.finish()
""",
    "flavor": "a10g-large",
    "timeout": "2h",
    "secrets": {"HF_TOKEN": "$HF_TOKEN"}
})
```

**Benefits:** Clean code, dependencies declared inline (PEP 723), no file saving required
**When to use:** Custom training logic, full control over training
**See:** `references/uv_scripts_guide.md` for the complete UV scripts guide

### Approach 3: TRL Maintained Scripts (Run Official Examples)

TRL provides battle-tested scripts for all methods. They can be run directly from URLs:

```python
hf_jobs("uv", {
    "script": "https://raw.githubusercontent.com/huggingface/trl/main/examples/scripts/sft.py",
    "script_args": [
        "--model_name_or_path", "Qwen/Qwen2.5-0.5B",
        "--dataset_name", "trl-lib/Capybara",
        "--output_dir", "my-model",
        "--push_to_hub",
        "--hub_model_id", "username/my-model"
    ],
    "flavor": "a10g-large",
    "timeout": "2h",
    "secrets": {"HF_TOKEN": "$HF_TOKEN"}
})
```

**Benefits:** No code to write, maintained by the TRL team, production-tested
**When to use:** Standard TRL training, quick experiments, no need for custom code
**Available:** sft.py, dpo.py, grpo.py, kto.py, reward.py, ppo.py - https://github.com/huggingface/trl/tree/main/examples/scripts

### Finding More UV Scripts on Hub

The `uv-scripts` organization provides ready-to-use UV scripts stored as datasets on the Hugging Face Hub:

```python
# Discover available UV script collections
dataset_search({"author": "uv-scripts", "sort": "downloads", "limit": 20})

# Explore a specific collection
hub_repo_details(["uv-scripts/classification"], repo_type="dataset", include_readme=True)
```

**Popular collections:** ocr, classification, synthetic-data, vllm, dataset-creation

## Hardware Selection

| Model Size | Recommended Hardware | Cost (approx/hr) | Use Case |
|------------|---------------------|------------------|----------|
| <1B params | `t4-small` | ~$0.75 | Demos, quick tests |
| 1-3B params | `t4-medium`, `l4x1` | ~$1.50-2.50 | Development |
| 3-7B params | `a10g-small`, `a10g-large` | ~$3.50-5.00 | Production training |
| 7-13B params | `a10g-large`, `a100-large` | ~$5-10 | Large models (use LoRA) |
| 13B+ params | `a100-large`, `a10g-largex2` | ~$10-20 | Very large (use LoRA) |

**GPU Flavors:** cpu-basic/upgrade/performance/xl, t4-small/medium, l4x1/x4, a10g-small/large/largex2/largex4, a100-large, h100/h100x8

**Guidelines:**
- Use **LoRA/PEFT** for models >7B to reduce memory
- Multi-GPU is handled automatically by TRL/Accelerate
- Start with smaller hardware for testing

**See:** `references/hardware_guide.md` for detailed specifications

## Critical: Saving Results to Hub

**⚠️ EPHEMERAL ENVIRONMENT: MUST PUSH TO HUB**

The Jobs environment is temporary. All files are deleted when the job ends. If the model isn't pushed to the Hub, **ALL TRAINING IS LOST**.

### Required Configuration

**In training script/config:**
```python
SFTConfig(
    push_to_hub=True,
    hub_model_id="username/model-name",  # MUST specify
    hub_strategy="every_save",           # Optional: push checkpoints
)
```

**In job submission:**
```python
{
    "secrets": {"HF_TOKEN": "$HF_TOKEN"}  # Enables authentication
}
```

### Verification Checklist

Before submitting:
- [ ] `push_to_hub=True` set in config
- [ ] `hub_model_id` includes username/repo-name
- [ ] `secrets` parameter includes HF_TOKEN
- [ ] User has write access to target repo

**See:** `references/hub_saving.md` for detailed troubleshooting

## Timeout Management

**⚠️ DEFAULT: 30 MINUTES (TOO SHORT FOR TRAINING)**

### Setting Timeouts

```python
{
    "timeout": "2h"  # 2 hours (formats: "90m", "2h", "1.5h", or seconds as an integer)
}
```

### Timeout Guidelines

| Scenario | Recommended | Notes |
|----------|-------------|-------|
| Quick demo (50-100 examples) | 10-30 min | Verify setup |
| Development training | 1-2 hours | Small datasets |
| Production (3-7B model) | 4-6 hours | Full datasets |
| Large model with LoRA | 3-6 hours | Depends on dataset |

**Always add a 20-30% buffer** for model/dataset loading, checkpoint saving, Hub push operations, and network delays. A sketch of this calculation follows.

**On timeout:** The job is killed immediately, all unsaved progress is lost, and training must restart from the beginning.
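
A tiny illustrative helper for turning a runtime estimate into a `timeout` value with the recommended buffer (the function name and buffer choice are this guide's, not part of any API):

```python
# Pad an estimated runtime by a safety buffer and format it for the
# Jobs "timeout" parameter (illustrative helper, not part of the skill).
def timeout_with_buffer(estimated_minutes: float, buffer: float = 0.3) -> str:
    return f"{round(estimated_minutes * (1 + buffer))}m"

print(timeout_with_buffer(90))   # -> "117m" for a 90-minute estimate
print(timeout_with_buffer(240))  # -> "312m" (~5.2h) for a 4-hour estimate
```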

## Cost Estimation

**Offer to estimate cost when planning jobs with known parameters.** Use `scripts/estimate_cost.py`:

```bash
python scripts/estimate_cost.py \
  --model meta-llama/Llama-2-7b-hf \
  --dataset trl-lib/Capybara \
  --hardware a10g-large \
  --dataset-size 16000 \
  --epochs 3
```

Output includes estimated time, cost, recommended timeout (with buffer), and optimization suggestions.

**When to offer:** User is planning a job, asks about cost/time, is choosing hardware, or the job will run >1 hour or cost >$5

## Example Training Scripts

**Production-ready templates with all best practices:**

- **`scripts/train_sft_example.py`** - Complete SFT training with Trackio, LoRA, checkpoints
- **`scripts/train_dpo_example.py`** - DPO training for preference learning
- **`scripts/train_grpo_example.py`** - GRPO training for online RL

These scripts demonstrate proper Hub saving, Trackio integration, checkpoint management, and optimized parameters. Pass their content inline to `hf_jobs()` or use them as templates for custom scripts.

## Monitoring and Tracking

**Trackio** provides real-time metrics visualization. See `references/trackio_guide.md` for the complete setup guide.

**Key points:**
- Add `"trackio"` to dependencies
- Initialize with `trackio.init(project="name", space_id="username/dashboard")`
- Configure the trainer with `report_to="trackio"`
- Call `trackio.finish()` after training

**Alternative:** Use `report_to="tensorboard"` for a simpler setup (logs are saved with the model to the Hub)

### Check Job Status

```python
# List all jobs
hf_jobs("ps")

# Inspect specific job
hf_jobs("inspect", {"job_id": "your-job-id"})

# View logs
hf_jobs("logs", {"job_id": "your-job-id"})
```

**Remember:** Wait for the user to request status checks. Avoid polling repeatedly.

## Converting Models to GGUF

After training, convert models to **GGUF format** for use with llama.cpp, Ollama, LM Studio, and other local inference tools.

**What is GGUF:**
- Optimized for CPU/GPU inference with llama.cpp
- Supports quantization (4-bit, 5-bit, 8-bit) to reduce model size
- Compatible with Ollama, LM Studio, Jan, GPT4All, llama.cpp
- Typically 2-8GB for 7B models (vs 14GB unquantized)

**When to convert:**
- Running models locally with Ollama or LM Studio
- Reducing model size with quantization
- Deploying to edge devices
- Sharing models for local-first use

**See:** `references/gguf_conversion.md` for the complete conversion guide, including a production-ready conversion script, quantization options, hardware requirements, usage examples, and troubleshooting.

**Quick conversion:**
```python
hf_jobs("uv", {
    "script": "<see references/gguf_conversion.md for complete script>",
    "flavor": "a10g-large",
    "timeout": "45m",
    "secrets": {"HF_TOKEN": "$HF_TOKEN"},
    "env": {
        "ADAPTER_MODEL": "username/my-finetuned-model",
        "BASE_MODEL": "Qwen/Qwen2.5-0.5B",
        "OUTPUT_REPO": "username/my-model-gguf"
    }
})
```

## Common Training Patterns

See `references/training_patterns.md` for detailed examples including:
- Quick demo (5-10 minutes)
- Production with checkpoints
- Multi-GPU training
- DPO training (preference learning)
- GRPO training (online RL)

## Troubleshooting

**Common issues:**
- Job times out → Increase timeout, reduce epochs/dataset, use a smaller model/LoRA
- Model not saved to Hub → Check `push_to_hub=True`, `hub_model_id`, `secrets={"HF_TOKEN": "$HF_TOKEN"}`
- Out of Memory (OOM) → Reduce batch size, increase gradient accumulation, enable LoRA, use a larger GPU
- Dataset format error → Check the format docs, validate the dataset with `scripts/validate_dataset.py`
- Import/module errors → Add a PEP 723 header with dependencies, verify its format
- Authentication errors → Check `mcp__huggingface__hf_whoami()`, token permissions, and the `secrets` parameter

**See:** `references/troubleshooting.md` for the complete troubleshooting guide

## Resources

### References (In This Skill)
- `references/training_methods.md` - Overview of SFT, DPO, GRPO, KTO, PPO, Reward Modeling
- `references/training_patterns.md` - Common training patterns and examples
- `references/gguf_conversion.md` - Complete GGUF conversion guide
- `references/trackio_guide.md` - Trackio monitoring setup
- `references/uv_scripts_guide.md` - Complete UV scripts guide
- `references/hardware_guide.md` - Hardware specs and selection
- `references/hub_saving.md` - Hub authentication troubleshooting
- `references/troubleshooting.md` - Common issues and solutions

### Scripts (In This Skill)
- `scripts/train_sft_example.py` - Production SFT template
- `scripts/train_dpo_example.py` - Production DPO template
- `scripts/train_grpo_example.py` - Production GRPO template
- `scripts/validate_dataset.py` - Validate dataset format before training
- `scripts/estimate_cost.py` - Estimate time and cost (offer when appropriate)
- `scripts/convert_to_gguf.py` - Complete GGUF conversion script

### External Links
- [TRL Documentation](https://huggingface.co/docs/trl)
- [TRL Jobs Training Guide](https://huggingface.co/docs/trl/en/jobs_training)
- [TRL Jobs Package](https://github.com/huggingface/trl-jobs)
- [HF Jobs Documentation](https://huggingface.co/docs/huggingface_hub/guides/jobs)
- [TRL Example Scripts](https://github.com/huggingface/trl/tree/main/examples/scripts)
- [UV Scripts Guide](https://docs.astral.sh/uv/guides/scripts/)
- [UV Scripts Organization](https://huggingface.co/uv-scripts)

## Key Takeaways

1. **Submit scripts inline** - The `script` parameter accepts Python code directly; no file saving is required unless the user requests it
2. **Jobs are asynchronous** - Don't wait/poll; let the user check when ready
3. **Always set a timeout** - The default 30 min is insufficient; a minimum of 1-2 hours is recommended
4. **Always enable Hub push** - The environment is ephemeral; without a push, all results are lost
5. **Include Trackio** - Use the example scripts as templates for real-time monitoring
6. **Offer cost estimation** - When parameters are known, use `scripts/estimate_cost.py`
7. **Three approaches available:** TRL Jobs package (easiest), UV scripts (custom, modern), TRL maintained scripts (official examples)
8. **Use doc-fetch/doc-search** for the latest TRL documentation
9. **Validate dataset format** before training with `scripts/validate_dataset.py`
10. **Choose appropriate hardware** for the model size; use LoRA for models >7B
trl/references/gguf_conversion.md ADDED
# GGUF Conversion Guide

After training models with TRL on Hugging Face Jobs, convert them to **GGUF format** for use with llama.cpp, Ollama, LM Studio, and other local inference tools.

**This guide provides production-ready, tested code.** All required dependencies are included in the examples below. No additional troubleshooting should be needed when following the templates exactly.

## What is GGUF?

**GGUF** (GPT-Generated Unified Format):
- Optimized format for CPU/GPU inference with llama.cpp
- Supports quantization (4-bit, 5-bit, 8-bit) to reduce model size
- Compatible with: Ollama, LM Studio, Jan, GPT4All, llama.cpp
- Typically 2-8GB for 7B models (vs 14GB unquantized)

## When to Convert to GGUF

**Convert when:**
- Running models locally with Ollama or LM Studio
- Using CPU-optimized inference
- Reducing model size with quantization
- Deploying to edge devices
- Sharing models for local-first use

## Conversion Process

**The conversion requires:**
1. **Merge LoRA adapter** with the base model (if using PEFT)
2. **Convert to GGUF** format using llama.cpp
3. **Quantize** to different bit depths (optional but recommended)
4. **Upload** GGUF files to the Hub

## GGUF Conversion Script Template

See `scripts/convert_to_gguf.py` for a complete, production-ready conversion script.

**Quick conversion job:**

```python
hf_jobs("uv", {
    "script": """
# /// script
# dependencies = [
#     "transformers>=4.36.0",
#     "peft>=0.7.0",
#     "torch>=2.0.0",
#     "huggingface_hub>=0.20.0",
#     "sentencepiece>=0.1.99",
#     "protobuf>=3.20.0",
#     "numpy",
#     "gguf",
# ]
# ///

import os
import torch
import subprocess
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
from huggingface_hub import HfApi

# Configuration from environment
ADAPTER_MODEL = os.environ.get("ADAPTER_MODEL", "username/my-model")
BASE_MODEL = os.environ.get("BASE_MODEL", "Qwen/Qwen2.5-0.5B")
OUTPUT_REPO = os.environ.get("OUTPUT_REPO", "username/my-model-gguf")

print("🔄 Converting to GGUF...")

# Step 1: Load and merge
print("Loading base model...")
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)

print("Loading adapter...")
model = PeftModel.from_pretrained(base, ADAPTER_MODEL)

print("Merging...")
merged = model.merge_and_unload()

# Save merged model
merged_dir = "/tmp/merged"
merged.save_pretrained(merged_dir, safe_serialization=True)
tokenizer = AutoTokenizer.from_pretrained(ADAPTER_MODEL)
tokenizer.save_pretrained(merged_dir)

# Step 2: Install build tools and clone llama.cpp
print("Setting up llama.cpp...")
subprocess.run(["apt-get", "update", "-qq"], check=True, capture_output=True)
subprocess.run(["apt-get", "install", "-y", "-qq", "build-essential", "cmake"], check=True, capture_output=True)

subprocess.run([
    "git", "clone",
    "https://github.com/ggerganov/llama.cpp.git",
    "/tmp/llama.cpp"
], check=True)

subprocess.run([
    "pip", "install", "-r",
    "/tmp/llama.cpp/requirements.txt"
], check=True)

# Convert to GGUF
print("Converting to GGUF...")
subprocess.run([
    "python", "/tmp/llama.cpp/convert_hf_to_gguf.py",
    merged_dir,
    "--outfile", "/tmp/model-f16.gguf",
    "--outtype", "f16"
], check=True)

# Step 3: Build quantization tool with CMake
print("Building quantization tool...")
os.makedirs("/tmp/llama.cpp/build", exist_ok=True)

subprocess.run([
    "cmake", "-B", "/tmp/llama.cpp/build", "-S", "/tmp/llama.cpp",
    "-DGGML_CUDA=OFF"
], check=True)

subprocess.run([
    "cmake", "--build", "/tmp/llama.cpp/build",
    "--target", "llama-quantize", "-j", "4"
], check=True)

quantize = "/tmp/llama.cpp/build/bin/llama-quantize"
quants = ["Q4_K_M", "Q5_K_M", "Q8_0"]

for q in quants:
    print(f"Creating {q} quantization...")
    subprocess.run([
        quantize,
        "/tmp/model-f16.gguf",
        f"/tmp/model-{q.lower()}.gguf",
        q
    ], check=True)

# Step 4: Upload
print("Uploading to Hub...")
api = HfApi()
api.create_repo(OUTPUT_REPO, repo_type="model", exist_ok=True)

for q in ["f16"] + [q.lower() for q in quants]:
    api.upload_file(
        path_or_fileobj=f"/tmp/model-{q}.gguf",
        path_in_repo=f"model-{q}.gguf",
        repo_id=OUTPUT_REPO
    )

print(f"✅ Done! Models at: https://huggingface.co/{OUTPUT_REPO}")
""",
    "flavor": "a10g-large",
    "timeout": "45m",
    "secrets": {"HF_TOKEN": "$HF_TOKEN"},
    "env": {
        "ADAPTER_MODEL": "username/my-finetuned-model",
        "BASE_MODEL": "Qwen/Qwen2.5-0.5B",
        "OUTPUT_REPO": "username/my-model-gguf"
    }
})
```

## Quantization Options

Common quantization formats (from smallest to largest). The sizes below are roughly for the ~0.5B example model above; they scale approximately linearly with parameter count:

| Format | Size | Quality | Use Case |
|--------|------|---------|----------|
| **Q4_K_M** | ~300MB | Good | **Recommended** - best balance of size/quality |
| **Q5_K_M** | ~350MB | Better | Higher quality, slightly larger |
| **Q8_0** | ~500MB | Very High | Near-original quality |
| **F16** | ~1GB | Original | Full precision, largest file |

**Recommendation:** Create Q4_K_M, Q5_K_M, and Q8_0 versions to give users options.

## Hardware Requirements

**For conversion:**
- Small models (<1B): cpu-basic works, but slowly
- Medium models (1-7B): a10g-large recommended
- Large models (7B+): a10g-large or a100-large

**Time estimates:**
- 0.5B model: ~15-25 minutes on A10G
- 3B model: ~30-45 minutes on A10G
- 7B model: ~45-60 minutes on A10G

## Using GGUF Models

**GGUF models work on both CPU and GPU.** They're optimized for CPU inference but can also leverage GPU acceleration when available.

**With Ollama (auto-detects GPU):**
```bash
# Download GGUF
huggingface-cli download username/my-model-gguf model-q4_k_m.gguf

# Create Modelfile
echo "FROM ./model-q4_k_m.gguf" > Modelfile

# Create and run (uses GPU automatically if available)
ollama create my-model -f Modelfile
ollama run my-model
```

**With llama.cpp:**
```bash
# CPU only
./llama-cli -m model-q4_k_m.gguf -p "Your prompt"

# With GPU acceleration (offload 32 layers to GPU)
./llama-cli -m model-q4_k_m.gguf -ngl 32 -p "Your prompt"
```

**With LM Studio:**
1. Download the `.gguf` file
2. Import it into LM Studio
3. Start chatting

## Best Practices

1. **Always create multiple quantizations** - Give users a choice of size/quality
2. **Include a README** - Document which quantization to use for what purpose
3. **Test the GGUF** - Run a quick inference test before uploading
4. **Use an A10G GPU** - Much faster than CPU for loading/merging large models
5. **Clean up temp files** - Conversion creates large intermediate files

## Common Issues

**Out of memory during merge:**
- Use a larger GPU (a10g-large or a100-large)
- Load with `device_map="auto"` for automatic device placement
- Use `torch_dtype=torch.float16` or `torch.bfloat16` instead of float32

**Conversion fails with architecture error:**
- Ensure llama.cpp supports the model architecture
- Check that the model uses a standard architecture (Qwen, Llama, Mistral, etc.)
- Some newer models require the latest llama.cpp from the main branch
- Check llama.cpp issues/docs for model support

**GGUF file doesn't work with llama.cpp:**
- Verify llama.cpp version compatibility
- Download the latest llama.cpp: `git clone https://github.com/ggerganov/llama.cpp.git`
- Rebuild llama.cpp after updating (with CMake, as in the script above)

**Quantization fails:**
- Ensure the `llama-quantize` tool was built (the CMake `--target llama-quantize` step above)
- Check that the FP16 GGUF was created successfully before quantizing
- Some quantization types require specific llama.cpp versions

**Upload fails or times out:**
- Large models (>2GB) may need a longer timeout
- Use `api.upload_file()` with `commit_message` for better tracking
- Consider uploading quantized versions separately

**See:** `scripts/convert_to_gguf.py` for the complete, production-ready conversion script with all dependencies included.
trl/references/hardware_guide.md ADDED
# Hardware Selection Guide

Choosing the right hardware (flavor) is critical for cost-effective training.

## Available Hardware

### CPU
- `cpu-basic` - Basic CPU, testing only
- `cpu-upgrade` - Enhanced CPU

**Use cases:** Dataset validation, preprocessing, testing scripts
**Not recommended for training:** Too slow for any meaningful training

### GPU Options

| Flavor | GPU | Memory | Use Case | Cost/hour |
|--------|-----|--------|----------|-----------|
| `t4-small` | NVIDIA T4 | 16GB | <1B models, demos | ~$0.50-1 |
| `t4-medium` | NVIDIA T4 | 16GB | 1-3B models, development | ~$1-2 |
| `l4x1` | NVIDIA L4 | 24GB | 3-7B models, efficient training | ~$2-3 |
| `l4x4` | 4x NVIDIA L4 | 96GB | Multi-GPU training | ~$8-12 |
| `a10g-small` | NVIDIA A10G | 24GB | 3-7B models, production | ~$3-4 |
| `a10g-large` | NVIDIA A10G | 24GB | 7-13B models | ~$4-6 |
| `a10g-largex2` | 2x NVIDIA A10G | 48GB | Multi-GPU, large models | ~$8-12 |
| `a10g-largex4` | 4x NVIDIA A10G | 96GB | Multi-GPU, very large models | ~$16-24 |
| `a100-large` | NVIDIA A100 | 40GB | 13B+ models, fast training | ~$8-12 |

### TPU Options

| Flavor | Type | Use Case |
|--------|------|----------|
| `v5e-1x1` | TPU v5e | Small TPU workloads |
| `v5e-2x2` | 4x TPU v5e | Medium TPU workloads |
| `v5e-2x4` | 8x TPU v5e | Large TPU workloads |

**Note:** TPUs require TPU-optimized code. Most TRL training uses GPUs.

## Selection Guidelines

### By Model Size

**Tiny Models (<1B parameters)**
- **Recommended:** `t4-small`
- **Example:** Qwen2.5-0.5B, TinyLlama
- **Batch size:** 4-8
- **Training time:** 1-2 hours for 1K examples

**Small Models (1-3B parameters)**
- **Recommended:** `t4-medium` or `a10g-small`
- **Example:** Qwen2.5-1.5B, Phi-2
- **Batch size:** 2-4
- **Training time:** 2-4 hours for 10K examples

**Medium Models (3-7B parameters)**
- **Recommended:** `a10g-small` or `a10g-large`
- **Example:** Qwen2.5-7B, Mistral-7B
- **Batch size:** 1-2 (or LoRA with 4-8)
- **Training time:** 4-8 hours for 10K examples

**Large Models (7-13B parameters)**
- **Recommended:** `a10g-large` or `a100-large`
- **Example:** Llama-3-8B, Mixtral-8x7B (with LoRA)
- **Batch size:** 1 (full fine-tuning) or 2-4 (LoRA)
- **Training time:** 6-12 hours for 10K examples
- **Note:** Always use LoRA/PEFT

**Very Large Models (13B+ parameters)**
- **Recommended:** `a100-large` with LoRA
- **Example:** Llama-3-13B, Llama-3-70B (LoRA only)
- **Batch size:** 1-2 with LoRA
- **Training time:** 8-24 hours for 10K examples
- **Note:** Full fine-tuning is not feasible; use LoRA/PEFT

### By Budget

**Minimal Budget (<$5 total)**
- Use `t4-small`
- Train on a subset of data (100-500 examples)
- Limit to 1-2 epochs
- Use a small model (<1B)

**Small Budget ($5-20)**
- Use `t4-medium` or `a10g-small`
- Train on 1K-5K examples
- 2-3 epochs
- Model up to 3B parameters

**Medium Budget ($20-50)**
- Use `a10g-small` or `a10g-large`
- Train on 5K-20K examples
- 3-5 epochs
- Model up to 7B parameters

**Large Budget ($50-200)**
- Use `a10g-large` or `a100-large`
- Full dataset training
- Multiple epochs
- Model up to 13B parameters with LoRA

### By Training Type

**Quick Demo/Experiment**
- `t4-small`
- 50-100 examples
- 5-10 steps
- ~10-15 minutes

**Development/Iteration**
- `t4-medium` or `a10g-small`
- 1K examples
- 1 epoch
- ~30-60 minutes

**Production Training**
- `a10g-large` or `a100-large`
- Full dataset
- 3-5 epochs
- 4-12 hours

**Research/Experimentation**
- `a100-large`
- Multiple runs
- Various hyperparameters
- Budget for 20-50 hours

## Memory Considerations

### Estimating Memory Requirements

**Full fine-tuning:**
```
Memory (GB) ≈ (Model params in billions) × 20
```

**LoRA fine-tuning:**
```
Memory (GB) ≈ (Model params in billions) × 4
```

**Examples:**
- Qwen2.5-0.5B full: ~10GB ✅ fits t4-small
- Qwen2.5-1.5B full: ~30GB ❌ exceeds most GPUs
- Qwen2.5-1.5B LoRA: ~6GB ✅ fits t4-small
- Qwen2.5-7B full: ~140GB ❌ not feasible
- Qwen2.5-7B LoRA: ~28GB ⚠️ tight on a single A10G (24GB); use gradient checkpointing or a100-large
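These rules of thumb are easy to automate; here is a small illustrative helper (the function and the ×20/×4 factors come from the formulas above, nothing else is implied):

```python
# Back-of-envelope GPU memory estimate from the rules of thumb above
def estimate_memory_gb(params_billions: float, lora: bool = False) -> float:
    return params_billions * (4 if lora else 20)

print(estimate_memory_gb(0.5))             # 10.0 GB -> fits t4-small (16GB)
print(estimate_memory_gb(1.5, lora=True))  # 6.0 GB  -> fits t4-small
print(estimate_memory_gb(7, lora=True))    # 28.0 GB -> tight on a 24GB A10G
```
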
### Memory Optimization

If hitting memory limits:

1. **Use LoRA/PEFT**
   ```python
   peft_config=LoraConfig(r=16, lora_alpha=32)
   ```

2. **Reduce batch size**
   ```python
   per_device_train_batch_size=1
   ```

3. **Increase gradient accumulation**
   ```python
   gradient_accumulation_steps=8  # Effective batch size = 1×8
   ```

4. **Enable gradient checkpointing**
   ```python
   gradient_checkpointing=True
   ```

5. **Use mixed precision**
   ```python
   bf16=True  # or fp16=True
   ```

6. **Upgrade to larger GPU**
   - t4 → a10g → a100

## Cost Estimation

### Formula

```
Total Cost = (Hours of training) × (Cost per hour)
```

### Example Calculations

**Quick demo:**
- Hardware: t4-small ($0.75/hour)
- Time: 15 minutes (0.25 hours)
- Cost: $0.19

**Development training:**
- Hardware: a10g-small ($3.50/hour)
- Time: 2 hours
- Cost: $7.00

**Production training:**
- Hardware: a10g-large ($5/hour)
- Time: 6 hours
- Cost: $30.00

**Large model with LoRA:**
- Hardware: a100-large ($10/hour)
- Time: 8 hours
- Cost: $80.00
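
The same arithmetic as a throwaway Python snippet, using the approximate rates from the calculations above (the rates are illustrative, not an official price list):

```python
# Rough job cost from the formula above: hours × hourly rate
RATES = {"t4-small": 0.75, "a10g-small": 3.50, "a10g-large": 5.00, "a100-large": 10.00}

def estimate_cost(flavor: str, hours: float) -> float:
    return RATES[flavor] * hours

print(f"${estimate_cost('a10g-large', 6):.2f}")   # $30.00 (production example)
print(f"${estimate_cost('t4-small', 0.25):.2f}")  # $0.19 (quick demo)
```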

### Cost Optimization Tips

1. **Start small:** Test on t4-small with a subset
2. **Use LoRA:** 4-5x cheaper than full fine-tuning
3. **Optimize hyperparameters:** Fewer epochs if possible
4. **Set an appropriate timeout:** Don't waste compute on stalled jobs
5. **Use checkpointing:** Resume if a job fails
6. **Monitor costs:** Check running jobs regularly

## Multi-GPU Training

TRL automatically handles multi-GPU training with Accelerate when using multi-GPU flavors.

**Multi-GPU flavors:**
- `l4x4` - 4x L4 GPUs
- `a10g-largex2` - 2x A10G GPUs
- `a10g-largex4` - 4x A10G GPUs

**When to use:**
- Models >13B parameters
- Need faster training (near-linear speedup)
- Large datasets (>50K examples)

**Example:**
```python
hf_jobs("uv", {
    "script": "train.py",
    "flavor": "a10g-largex2",  # 2 GPUs
    "timeout": "4h",
    "secrets": {"HF_TOKEN": "$HF_TOKEN"}
})
```

No code changes needed: TRL/Accelerate handles distribution automatically.

## Choosing Between Options

### a10g vs a100

**Choose a10g when:**
- Model <13B parameters
- Budget conscious
- Training time not critical

**Choose a100 when:**
- Model 13B+ parameters
- Need fastest training
- Memory requirements high
- Budget allows

### Single vs Multi-GPU

**Choose single GPU when:**
- Model <7B parameters
- Budget constrained
- Simpler debugging

**Choose multi-GPU when:**
- Model >13B parameters
- Need faster training
- Large batch sizes required
- Cost-effective for large jobs

## Quick Reference

```python
# Model size → Hardware selection
HARDWARE_MAP = {
    "<1B": "t4-small",
    "1-3B": "a10g-small",
    "3-7B": "a10g-large",
    "7-13B": "a10g-large (LoRA) or a100-large",
    ">13B": "a100-large (LoRA required)"
}
```
trl/references/hub_saving.md ADDED
# Saving Training Results to Hugging Face Hub

**⚠️ CRITICAL:** Training environments are ephemeral. ALL results are lost when a job completes unless pushed to the Hub.

## Why Hub Push is Required

When running on Hugging Face Jobs:
- The environment is temporary
- All files are deleted on job completion
- No local disk persistence
- Cannot access results after the job ends

**Without Hub push, training is completely wasted.**

## Required Configuration

### 1. Training Configuration

In your SFTConfig or trainer config:

```python
SFTConfig(
    push_to_hub=True,                    # Enable Hub push
    hub_model_id="username/model-name",  # Target repository
)
```

### 2. Job Configuration

When submitting the job:

```python
hf_jobs("uv", {
    "script": "train.py",
    "secrets": {"HF_TOKEN": "$HF_TOKEN"}  # Provide authentication
})
```

**The `$HF_TOKEN` placeholder is automatically replaced with your Hugging Face token.**

## Complete Example

```python
# train.py
# /// script
# dependencies = ["trl"]
# ///

from trl import SFTTrainer, SFTConfig
from datasets import load_dataset

dataset = load_dataset("trl-lib/Capybara", split="train")

# Configure with Hub push
config = SFTConfig(
    output_dir="my-model",
    num_train_epochs=3,

    # ✅ CRITICAL: Hub push configuration
    push_to_hub=True,
    hub_model_id="myusername/my-trained-model",

    # Optional: push checkpoints as they are saved
    hub_strategy="every_save",
    hub_token=None,  # Uses the HF_TOKEN from the environment
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,
    args=config,
)

trainer.train()

# ✅ Push final model
trainer.push_to_hub()

print("✅ Model saved to: https://huggingface.co/myusername/my-trained-model")
```

**Submit with authentication:**

```python
hf_jobs("uv", {
    "script": "train.py",
    "flavor": "a10g-large",
    "timeout": "2h",
    "secrets": {"HF_TOKEN": "$HF_TOKEN"}  # ✅ Required!
})
```

## What Gets Saved

When `push_to_hub=True`:

1. **Model weights** - Final trained parameters
2. **Tokenizer** - Associated tokenizer
3. **Configuration** - Model config (config.json)
4. **Training arguments** - Hyperparameters used
5. **Model card** - Auto-generated documentation
6. **Checkpoints** - If `save_strategy="steps"` is enabled

## Checkpoint Saving

Save intermediate checkpoints during training:

```python
SFTConfig(
    output_dir="my-model",
    push_to_hub=True,
    hub_model_id="username/my-model",

    # Checkpoint configuration
    save_strategy="steps",
    save_steps=100,       # Save every 100 steps
    save_total_limit=3,   # Keep only last 3 checkpoints
)
```

**Benefits:**
- Resume training if a job fails (see the sketch below)
- Compare checkpoint performance
- Use intermediate models

**Checkpoints are pushed to:** `username/my-model` (same repo)
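
A minimal sketch of resuming from a saved checkpoint (standard `Trainer` API; it assumes the `trainer` from the example above, and that the checkpoint files are present locally, e.g. re-downloaded from the Hub with `huggingface_hub.snapshot_download`):

```python
# Resume from the most recent checkpoint-* folder in output_dir
trainer.train(resume_from_checkpoint=True)

# Or point at a specific checkpoint directory
trainer.train(resume_from_checkpoint="my-model/checkpoint-300")
```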

## Authentication Methods

### Method 1: Automatic Token (Recommended)

```python
"secrets": {"HF_TOKEN": "$HF_TOKEN"}
```

Uses your logged-in Hugging Face token automatically.

### Method 2: Explicit Token

```python
"secrets": {"HF_TOKEN": "hf_abc123..."}
```

Provides the token explicitly (not recommended for security reasons).

### Method 3: Environment Variable

```python
"env": {"HF_TOKEN": "hf_abc123..."}
```

Passes the token as a regular environment variable (less secure than secrets).

**Always prefer Method 1** for security and convenience.

## Verification Checklist

Before submitting any training job, verify:

- [ ] `push_to_hub=True` in training config
- [ ] `hub_model_id` is specified (format: `username/model-name`)
- [ ] `secrets={"HF_TOKEN": "$HF_TOKEN"}` in job config
- [ ] Repository name doesn't conflict with existing repos
- [ ] You have write access to the target namespace

## Repository Setup

### Automatic Creation

If the repository doesn't exist, it's created automatically on the first push.

### Manual Creation

Create the repository before training:

```python
from huggingface_hub import HfApi

api = HfApi()
api.create_repo(
    repo_id="username/model-name",
    repo_type="model",
    private=False,  # or True for a private repo
)
```

### Repository Naming

**Valid names:**
- `username/my-model`
- `username/model-name`
- `organization/model-name`

**Invalid names:**
- `model-name` (missing username)
- `username/model name` (spaces not allowed)
- `username/MODEL` (uppercase discouraged)

## Troubleshooting

### Error: 401 Unauthorized

**Cause:** HF_TOKEN not provided or invalid

**Solutions:**
1. Verify `secrets={"HF_TOKEN": "$HF_TOKEN"}` in the job config
2. Check you're logged in: `huggingface-cli whoami`
3. Re-login: `huggingface-cli login`

### Error: 403 Forbidden

**Cause:** No write access to the repository

**Solutions:**
1. Check the repository namespace matches your username
2. Verify you're a member of the organization (if using an org namespace)
3. Check the repository isn't private (if accessing an org repo)

### Error: Repository not found

**Cause:** Repository doesn't exist and auto-creation failed

**Solutions:**
1. Manually create the repository first
2. Check the repository name format
3. Verify the namespace exists

### Error: Push failed during training

**Cause:** Network issues or Hub unavailable

**Notes:**
1. Training continues even if an intermediate push fails
2. Checkpoints may already be saved on the Hub
3. Retry the push from within the job before it completes

### Issue: Model saved but not visible

**Possible causes:**
1. Repository is private; check https://huggingface.co/username
2. Wrong namespace; verify `hub_model_id` matches your login
3. Push still in progress; wait a few minutes

## Manual Push After Training

If training finishes but the final push fails, push manually from within the job (e.g., in a retry or fallback step) while the files still exist:

```python
from transformers import AutoModel, AutoTokenizer

# Load from local checkpoint
model = AutoModel.from_pretrained("./output_dir")
tokenizer = AutoTokenizer.from_pretrained("./output_dir")

# Push to Hub
model.push_to_hub("username/model-name", token="hf_abc123...")
tokenizer.push_to_hub("username/model-name", token="hf_abc123...")
```

**Note:** This is only possible while the job environment still exists; once the job completes, the local files are deleted.

## Best Practices

1. **Always enable `push_to_hub=True`**
2. **Use checkpoint saving** for long training runs
3. **Verify the Hub push** in logs before the job completes
4. **Set an appropriate `save_total_limit`** to avoid excessive checkpoints
5. **Use descriptive repo names** (e.g., `qwen-capybara-sft`, not `model1`)
6. **Add a model card** with training details
7. **Tag models** with relevant tags (e.g., `text-generation`, `fine-tuned`)

## Monitoring Push Progress

Check the logs for push progress:

```python
hf_jobs("logs", {"job_id": "your-job-id"})
```

**Look for:**
```
Pushing model to username/model-name...
Upload file pytorch_model.bin: 100%
✅ Model pushed successfully
```

## Example: Full Production Setup

```python
# production_train.py
# /// script
# dependencies = ["trl>=0.12.0", "peft>=0.7.0"]
# ///

from datasets import load_dataset
from peft import LoraConfig
from trl import SFTTrainer, SFTConfig
import os

# Verify token is available
assert "HF_TOKEN" in os.environ, "HF_TOKEN not found in environment!"

# Load dataset
dataset = load_dataset("trl-lib/Capybara", split="train")
print(f"✅ Dataset loaded: {len(dataset)} examples")

# Configure with comprehensive Hub settings
config = SFTConfig(
    output_dir="qwen-capybara-sft",

    # Hub configuration
    push_to_hub=True,
    hub_model_id="myusername/qwen-capybara-sft",
    hub_strategy="checkpoint",  # Push checkpoints

    # Checkpoint configuration
    save_strategy="steps",
    save_steps=100,
    save_total_limit=3,

    # Training settings
    num_train_epochs=3,
    per_device_train_batch_size=4,

    # Logging
    logging_steps=10,
    logging_first_step=True,
)

# Train with LoRA
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,
    args=config,
    peft_config=LoraConfig(r=16, lora_alpha=32),
)

print("🚀 Starting training...")
trainer.train()

print("💾 Pushing final model to Hub...")
trainer.push_to_hub()

print("✅ Training complete!")
print("Model available at: https://huggingface.co/myusername/qwen-capybara-sft")
```

**Submit:**

```python
hf_jobs("uv", {
    "script": "production_train.py",
    "flavor": "a10g-large",
    "timeout": "6h",
    "secrets": {"HF_TOKEN": "$HF_TOKEN"}
})
```

## Key Takeaway

**Without `push_to_hub=True` and `secrets={"HF_TOKEN": "$HF_TOKEN"}`, all training results are permanently lost.**

Always verify both are configured before submitting any training job.
trl/references/trackio_guide.md ADDED
# Trackio Integration for TRL Training

**Trackio** is a local-first experiment tracking library that provides real-time metrics visualization via a Gradio dashboard.

⚠️ **IMPORTANT**: Trackio is local-first, which means:
- It runs a dashboard on the machine where training happens
- For Jobs training, sync to a Hugging Face Space to view metrics
- Without a Space, metrics are only accessible during the job (then lost)

## Setting Up Trackio for Jobs

**Step 1: Add the trackio dependency**
```python
# /// script
# dependencies = [
#     "trl>=0.12.0",
#     "trackio",  # Required!
# ]
# ///
```

**Step 2: Create a Trackio Space (one-time setup)**

**Option A: Let Trackio auto-create (Recommended)**
Pass a `space_id` to `trackio.init()` and Trackio will automatically create the Space if it doesn't exist.

**Option B: Create manually**
- Create the Space via the Hub UI at https://huggingface.co/new-space
- Select the Gradio SDK
- OR use the command: `huggingface-cli repo create my-trackio-dashboard --type space --space_sdk gradio`

**Step 3: Initialize Trackio with the space_id**
```python
import trackio

trackio.init(
    project="my-training",
    space_id="username/my-trackio-dashboard",  # CRITICAL for Jobs!
    config={
        "model": "Qwen/Qwen2.5-0.5B",
        "dataset": "trl-lib/Capybara",
        "learning_rate": 2e-5,
    }
)
```

**Step 4: Configure TRL to use Trackio**
```python
SFTConfig(
    report_to="trackio",
    # ... other config
)
```

**Step 5: Finish tracking**
```python
trainer.train()
trackio.finish()  # Ensures final metrics are synced
```

## What Trackio Tracks

Trackio automatically logs:
- ✅ Training loss
- ✅ Learning rate
- ✅ GPU utilization
- ✅ Memory usage
- ✅ Training throughput
- ✅ Custom metrics (see the sketch below)
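
A minimal sketch of logging a custom metric by hand, assuming Trackio's wandb-style `log` API and that the `trackio.init(...)` call from Step 3 has already run:

```python
import trackio

# Log an extra metric alongside TRL's automatic logs
trackio.log({"eval/custom_score": 0.87, "epoch": 1})
```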

## How It Works with Jobs

1. **Training runs** → Metrics logged to a local SQLite DB
2. **Every 5 minutes** → Trackio syncs the DB to an HF Dataset (Parquet)
3. **Space dashboard** → Reads from the Dataset, displays metrics in real time
4. **Job completes** → Final sync ensures all metrics are persisted

## Viewing the Dashboard

After starting training:
1. Navigate to the Space: `https://huggingface.co/spaces/username/my-trackio-dashboard`
2. The Gradio dashboard shows all tracked experiments
3. Filter by project, compare runs, view charts with smoothing

## Alternative: TensorBoard (Simpler for Jobs)

For a simpler setup without needing a Space:
```python
SFTConfig(
    report_to="tensorboard",  # Logs saved with model to Hub
)
```

TensorBoard logs are saved with the model automatically and can be viewed locally with TensorBoard after downloading.

## Recommendation

- **Trackio**: Best for real-time monitoring during long training runs
- **TensorBoard**: Best for post-training analysis, simpler setup
- **Weights & Biases**: Best for team collaboration, requires an account
trl/references/training_methods.md ADDED
# TRL Training Methods Overview

TRL (Transformer Reinforcement Learning) provides multiple training methods for fine-tuning and aligning language models. This reference provides a brief overview of each method.

## Supervised Fine-Tuning (SFT)

**What it is:** Standard instruction tuning with supervised learning on demonstration data.

**When to use:**
- Initial fine-tuning of base models on task-specific data
- Teaching new capabilities or domains
- Most common starting point for fine-tuning

**Dataset format:** Conversational format with a "messages" field, OR a text field, OR prompt/completion pairs

**Example:**
```python
from trl import SFTTrainer, SFTConfig

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="my-model",
        push_to_hub=True,
        hub_model_id="username/my-model",
    )
)
trainer.train()
```

**Documentation:** `hf_doc_fetch("https://huggingface.co/docs/trl/sft_trainer")`

## Direct Preference Optimization (DPO)

**What it is:** An alignment method that trains directly on preference pairs (chosen vs rejected responses) without requiring a reward model.

**When to use:**
- Aligning models to human preferences
- Improving response quality after SFT
- You have paired preference data (chosen/rejected responses)

**Dataset format:** Preference pairs with "chosen" and "rejected" fields

**Example:**
```python
from trl import DPOTrainer, DPOConfig

trainer = DPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",  # Use an instruct model
    train_dataset=dataset,
    args=DPOConfig(
        output_dir="dpo-model",
        beta=0.1,  # KL penalty coefficient
    )
)
trainer.train()
```

**Documentation:** `hf_doc_fetch("https://huggingface.co/docs/trl/dpo_trainer")`

## Group Relative Policy Optimization (GRPO)

**What it is:** An online RL method that optimizes relative to group performance, useful for tasks with verifiable rewards.

**When to use:**
- Tasks with automatic reward signals (code execution, math verification)
- Online learning scenarios
- When offline DPO data is insufficient

**Dataset format:** Prompt-only format (the model generates responses; rewards are computed online)

**Example:**
```python
# Use the TRL maintained script
hf_jobs("uv", {
    "script": "https://raw.githubusercontent.com/huggingface/trl/main/examples/scripts/grpo.py",
    "script_args": [
        "--model_name_or_path", "Qwen/Qwen2.5-0.5B-Instruct",
        "--dataset_name", "trl-lib/math_shepherd",
        "--output_dir", "grpo-model"
    ],
    "flavor": "a10g-large",
    "timeout": "4h",
    "secrets": {"HF_TOKEN": "$HF_TOKEN"}
})
```

**Documentation:** `hf_doc_fetch("https://huggingface.co/docs/trl/grpo_trainer")`

## Kahneman-Tversky Optimization (KTO)

**What it is:** Preference tuning without paired data; it uses independent positive/negative examples.

**When to use:**
- You have preference data but not paired comparisons
- Simpler data collection than DPO
- You want to incorporate human feedback without explicit pairs

**Dataset format:** Examples with binary labels (desirable/undesirable), but not paired

**Documentation:** `hf_doc_fetch("https://huggingface.co/docs/trl/kto_trainer")`
103
+
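+ **Example** (a minimal sketch; the model and dataset ids are illustrative):
+
+ ```python
+ from datasets import load_dataset
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from trl import KTOTrainer, KTOConfig
+
+ model_id = "Qwen/Qwen2.5-0.5B-Instruct"
+ model = AutoModelForCausalLM.from_pretrained(model_id)
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+ # Each record holds a prompt, a completion, and a boolean desirability label
+ dataset = load_dataset("trl-lib/kto-mix-14k", split="train")
+
+ trainer = KTOTrainer(
+     model=model,
+     processing_class=tokenizer,
+     train_dataset=dataset,
+     args=KTOConfig(output_dir="kto-model"),
+ )
+ trainer.train()
+ ```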
104
+ ## Reward Modeling
105
+
106
+ **What it is:** Train a reward model to score responses, used as a component in RLHF pipelines.
107
+
108
+ **When to use:**
109
+ - Building RLHF pipeline
110
+ - Need automatic quality scoring
111
+ - Creating reward signals for PPO training
112
+
113
+ **Dataset format:** Preference pairs with "chosen" and "rejected" responses
114
+
115
+ **Documentation:** `hf_doc_fetch("https://huggingface.co/docs/trl/reward_trainer")`
116
+
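+ **Example** (a minimal sketch; the reward model is a sequence classifier with a single score head, and the ids are illustrative):
+
+ ```python
+ from datasets import load_dataset
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+ from trl import RewardTrainer, RewardConfig
+
+ model_id = "Qwen/Qwen2.5-0.5B-Instruct"
+ model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=1)
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+ dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")
+
+ trainer = RewardTrainer(
+     model=model,
+     processing_class=tokenizer,
+     train_dataset=dataset,
+     args=RewardConfig(output_dir="reward-model"),
+ )
+ trainer.train()
+ ```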
117
+ ## Proximal Policy Optimization (PPO)
118
+
119
+ **What it is:** Classic RLHF method using a reward model to guide policy optimization.
120
+
121
+ **When to use:**
122
+ - Full RLHF pipeline
123
+ - Have trained reward model
124
+ - Need fine-grained control over optimization
125
+
126
+ **Requirements:** Pre-trained reward model
127
+
128
+ **Note:** PPO is more complex than DPO. For most use cases, start with DPO.
129
+
130
+ **Documentation:** `hf_doc_fetch("https://huggingface.co/docs/trl/ppo_trainer")`
131
+
132
+ ## Method Selection Guide
133
+
134
+ | Method | Complexity | Data Required | Use Case |
135
+ |--------|-----------|---------------|----------|
136
+ | **SFT** | Low | Demonstrations | Initial fine-tuning |
137
+ | **DPO** | Medium | Paired preferences | Post-SFT alignment |
138
+ | **GRPO** | Medium | Prompts + reward fn | Online RL with automatic rewards |
139
+ | **KTO** | Medium | Unpaired preferences | Alignment with simpler data |
140
+ | **Reward** | Medium | Paired preferences | Building RLHF pipeline |
141
+ | **PPO** | High | Demonstrations + reward model | Full RLHF |
142
+
143
+ ## Recommended Pipeline
144
+
145
+ **For most use cases:**
146
+ 1. **Start with SFT** - Fine-tune base model on task data
147
+ 2. **Follow with DPO** - Align to preferences using paired data
148
+ 3. **Optional: GGUF conversion** - Deploy for local inference
149
+
150
+ **For advanced RL scenarios:**
151
+ 1. **Start with SFT** - Fine-tune base model
152
+ 2. **Train reward model** - On preference data
153
+ 3. **Apply GRPO or PPO** - Online RL with reward model
154
+
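+ Condensed into code, the standard pipeline chains the two trainers, with the SFT output becoming the DPO starting point (a sketch; model, dataset, and repo ids are placeholders):
+
+ ```python
+ from datasets import load_dataset
+ from trl import SFTTrainer, SFTConfig, DPOTrainer, DPOConfig
+
+ # Stage 1: SFT on demonstrations
+ sft = SFTTrainer(
+     model="Qwen/Qwen2.5-0.5B",
+     train_dataset=load_dataset("trl-lib/Capybara", split="train"),
+     args=SFTConfig(output_dir="sft-model", push_to_hub=True, hub_model_id="username/sft-model"),
+ )
+ sft.train()
+ sft.push_to_hub()
+
+ # Stage 2: DPO on preference pairs, starting from the stage-1 output
+ dpo = DPOTrainer(
+     model="username/sft-model",
+     train_dataset=load_dataset("trl-lib/ultrafeedback_binarized", split="train"),
+     args=DPOConfig(output_dir="dpo-model", push_to_hub=True, hub_model_id="username/dpo-model"),
+ )
+ dpo.train()
+ dpo.push_to_hub()
+ ```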
155
+ ## Dataset Format Reference
156
+
157
+ For complete dataset format specifications, use:
158
+ ```python
159
+ hf_doc_fetch("https://huggingface.co/docs/trl/dataset_formats")
160
+ ```
161
+
162
+ Or validate your dataset:
163
+ ```python
164
+ # See scripts/validate_dataset.py
165
+ ```
166
+
167
+ ## See Also
168
+
169
+ - `references/training_patterns.md` - Common training patterns and examples
170
+ - `scripts/train_sft_example.py` - Complete SFT template
171
+ - `scripts/train_dpo_example.py` - Complete DPO template
172
+ - `scripts/validate_dataset.py` - Dataset format validation tool
trl/references/training_patterns.md ADDED
@@ -0,0 +1,194 @@
1
+ # Common Training Patterns
2
+
3
+ This guide provides common training patterns and use cases for TRL on Hugging Face Jobs.
4
+
5
+ ## Quick Demo (5-10 minutes)
6
+
7
+ Test setup with minimal training:
8
+
9
+ ```python
10
+ hf_jobs("uv", {
11
+ "script": "https://raw.githubusercontent.com/huggingface/trl/main/examples/scripts/sft.py",
12
+ "script_args": [
13
+ "--model_name_or_path", "Qwen/Qwen2.5-0.5B",
14
+ "--dataset_name", "trl-lib/Capybara",
15
+ "--dataset_train_split", "train[:50]", # Only 50 examples
16
+ "--max_steps", "10",
17
+ "--output_dir", "demo",
18
+ "--push_to_hub",
19
+ "--hub_model_id", "username/demo"
20
+ ],
21
+ "flavor": "t4-small",
22
+ "timeout": "15m",
23
+ "secrets": {"HF_TOKEN": "$HF_TOKEN"}
24
+ })
25
+ ```
26
+
27
+ **Note:** The TRL maintained script above doesn't include Trackio. For production training with monitoring, see `scripts/train_sft_example.py` for a complete template with Trackio integration.
28
+
29
+ ## Production with Checkpoints
30
+
31
+ Full training with intermediate saves. Use this pattern for long training runs where you want to save progress:
32
+
33
+ ```python
34
+ hf_jobs("uv", {
35
+ "script": """
36
+ # /// script
37
+ # dependencies = ["trl>=0.12.0", "peft>=0.7.0", "trackio"]
38
+ # ///
39
+
40
+ from datasets import load_dataset
41
+ from peft import LoraConfig
42
+ from trl import SFTTrainer, SFTConfig
43
+ import trackio
44
+
45
+ trackio.init(project="production-training", space_id="username/my-dashboard")
46
+
47
+ dataset = load_dataset("trl-lib/Capybara", split="train")
48
+
49
+ config = SFTConfig(
50
+ output_dir="my-model",
51
+ push_to_hub=True,
52
+ hub_model_id="username/my-model",
53
+ hub_strategy="every_save", # Push each checkpoint
54
+ save_strategy="steps",
55
+ save_steps=100,
56
+ save_total_limit=3,
57
+ num_train_epochs=3,
58
+ report_to="trackio",
59
+ )
60
+
61
+ trainer = SFTTrainer(
62
+ model="Qwen/Qwen2.5-0.5B",
63
+ train_dataset=dataset,
64
+ args=config,
65
+ peft_config=LoraConfig(r=16, lora_alpha=32),
66
+ )
67
+
68
+ trainer.train()
69
+ trainer.push_to_hub()
70
+ trackio.finish()
71
+ """,
72
+ "flavor": "a10g-large",
73
+ "timeout": "6h",
74
+ "secrets": {"HF_TOKEN": "$HF_TOKEN"}
75
+ })
76
+ ```
77
+
78
+ ## Multi-GPU Training
79
+
80
+ Automatic distributed training across multiple GPUs. TRL/Accelerate handles distribution automatically:
81
+
82
+ ```python
83
+ hf_jobs("uv", {
84
+ "script": """
85
+ # Your training script here (same as single GPU)
86
+ # No changes needed - Accelerate detects multiple GPUs
87
+ """,
88
+ "flavor": "a10g-largex2", # 2x A10G GPUs
89
+ "timeout": "4h",
90
+ "secrets": {"HF_TOKEN": "$HF_TOKEN"}
91
+ })
92
+ ```
93
+
94
+ **Tips for multi-GPU:**
95
+ - No code changes needed
96
+ - Use `per_device_train_batch_size` (per GPU, not total)
97
+ - Effective batch size = `per_device_train_batch_size` × `num_gpus` × `gradient_accumulation_steps` (worked example below)
+ - Monitor GPU utilization to confirm all GPUs are being used
99
+
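+ A worked example of the effective batch size rule on a two-GPU flavor:
+
+ ```python
+ from trl import SFTConfig
+
+ config = SFTConfig(
+     output_dir="my-model",
+     per_device_train_batch_size=4,   # per GPU, not total
+     gradient_accumulation_steps=8,
+ )
+ # On a10g-largex2: effective batch = 4 (per device) × 2 (GPUs) × 8 (accumulation) = 64
+ ```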
100
+ ## DPO Training (Preference Learning)
101
+
102
+ Train with preference data for alignment:
103
+
104
+ ```python
105
+ hf_jobs("uv", {
106
+ "script": """
107
+ # /// script
108
+ # dependencies = ["trl>=0.12.0", "trackio"]
109
+ # ///
110
+
111
+ from datasets import load_dataset
112
+ from trl import DPOTrainer, DPOConfig
113
+ import trackio
114
+
115
+ trackio.init(project="dpo-training", space_id="username/my-dashboard")
116
+
117
+ dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")
118
+
119
+ config = DPOConfig(
120
+ output_dir="dpo-model",
121
+ push_to_hub=True,
122
+ hub_model_id="username/dpo-model",
123
+ num_train_epochs=1,
124
+ beta=0.1, # KL penalty coefficient
125
+ report_to="trackio",
126
+ )
127
+
128
+ trainer = DPOTrainer(
129
+ model="Qwen/Qwen2.5-0.5B-Instruct", # Use instruct model as base
130
+ train_dataset=dataset,
131
+ args=config,
132
+ )
133
+
134
+ trainer.train()
135
+ trainer.push_to_hub()
136
+ trackio.finish()
137
+ """,
138
+ "flavor": "a10g-large",
139
+ "timeout": "3h",
140
+ "secrets": {"HF_TOKEN": "$HF_TOKEN"}
141
+ })
142
+ ```
143
+
144
+ **For DPO documentation:** Use `hf_doc_fetch("https://huggingface.co/docs/trl/dpo_trainer")`
145
+
146
+ ## GRPO Training (Online RL)
147
+
148
+ Group Relative Policy Optimization for online reinforcement learning:
149
+
150
+ ```python
151
+ hf_jobs("uv", {
152
+ "script": "https://raw.githubusercontent.com/huggingface/trl/main/examples/scripts/grpo.py",
153
+ "script_args": [
154
+ "--model_name_or_path", "Qwen/Qwen2.5-0.5B-Instruct",
155
+ "--dataset_name", "trl-lib/math_shepherd",
156
+ "--output_dir", "grpo-model",
157
+ "--push_to_hub",
158
+ "--hub_model_id", "username/grpo-model"
159
+ ],
160
+ "flavor": "a10g-large",
161
+ "timeout": "4h",
162
+ "secrets": {"HF_TOKEN": "$HF_TOKEN"}
163
+ })
164
+ ```
165
+
166
+ **For GRPO documentation:** Use `hf_doc_fetch("https://huggingface.co/docs/trl/grpo_trainer")`
167
+
168
+ ## Pattern Selection Guide
169
+
170
+ | Use Case | Pattern | Hardware | Time |
171
+ |----------|---------|----------|------|
172
+ | Test setup | Quick Demo | t4-small | 5-10 min |
173
+ | Small dataset (<1K) | Production w/ Checkpoints | t4-medium | 30-60 min |
174
+ | Medium dataset (1-10K) | Production w/ Checkpoints | a10g-large | 2-6 hours |
175
+ | Large dataset (>10K) | Multi-GPU | a10g-largex2 | 4-12 hours |
176
+ | Preference learning | DPO Training | a10g-large | 2-4 hours |
177
+ | Online RL | GRPO Training | a10g-large | 3-6 hours |
178
+
179
+ ## Best Practices
180
+
181
+ 1. **Always start with Quick Demo** - Verify setup before long runs
182
+ 2. **Use checkpoints for runs >1 hour** - Protect against failures
183
+ 3. **Enable Trackio** - Monitor progress in real-time
184
+ 4. **Add 20-30% buffer to timeout** - Account for loading/saving overhead
185
+ 5. **Test with small dataset slice first** - Use `"train[:100]"` to verify code
186
+ 6. **Use multi-GPU for large models** - 7B+ models benefit significantly
187
+
188
+ ## See Also
189
+
190
+ - `scripts/train_sft_example.py` - Complete SFT template with Trackio
191
+ - `scripts/train_dpo_example.py` - Complete DPO template
192
+ - `scripts/train_grpo_example.py` - Complete GRPO template
193
+ - `references/hardware_guide.md` - Detailed hardware specifications
194
+ - `references/training_methods.md` - Overview of all TRL training methods
trl/references/troubleshooting.md ADDED
@@ -0,0 +1,212 @@
1
+ # Troubleshooting TRL Training Jobs
2
+
3
+ Common issues and solutions when training with TRL on Hugging Face Jobs.
4
+
5
+ ## Job Times Out
6
+
7
+ **Problem:** Job terminates before training completes, all progress lost.
8
+
9
+ **Solutions:**
10
+ - Increase timeout parameter (e.g., `"timeout": "4h"`)
11
+ - Reduce `num_train_epochs` or use smaller dataset slice
12
+ - Use smaller model or enable LoRA/PEFT to speed up training
13
+ - Add 20-30% buffer to estimated time for loading/saving overhead
14
+
15
+ **Prevention:**
16
+ - Always start with a quick demo run to estimate timing
17
+ - Use `scripts/estimate_cost.py` to get time estimates
18
+ - Monitor first runs closely via Trackio or logs
19
+
20
+ ## Model Not Saved to Hub
21
+
22
+ **Problem:** Training completes but model doesn't appear on Hub - all work lost.
23
+
24
+ **Check:**
25
+ - [ ] `push_to_hub=True` in training config
26
+ - [ ] `hub_model_id` specified with username (e.g., `"username/model-name"`)
27
+ - [ ] `secrets={"HF_TOKEN": "$HF_TOKEN"}` in job submission
28
+ - [ ] User has write access to target repo
29
+ - [ ] Token has write permissions (check at https://huggingface.co/settings/tokens)
30
+ - [ ] Training script calls `trainer.push_to_hub()` at the end
31
+
32
+ **See:** `references/hub_saving.md` for detailed Hub authentication troubleshooting
33
+
34
+ ## Out of Memory (OOM)
35
+
36
+ **Problem:** Job fails with CUDA out of memory error.
37
+
38
+ **Solutions (in order of preference):**
39
+ 1. **Reduce batch size:** Lower `per_device_train_batch_size` (try 4 β†’ 2 β†’ 1)
40
+ 2. **Increase gradient accumulation:** Raise `gradient_accumulation_steps` to maintain effective batch size
41
+ 3. **Enable LoRA/PEFT:** Use `peft_config=LoraConfig(r=16, lora_alpha=32)` to train adapters only
42
+ 4. **Use larger GPU:** Switch from `t4-medium` β†’ `a10g-large` β†’ `a100-large`
43
+ 5. **Enable gradient checkpointing:** Set `gradient_checkpointing=True` in config (slower but saves memory)
44
+ 6. **Use smaller model:** Try a smaller variant (e.g., 0.5B instead of 3B)
45
+
46
+ **Memory guidelines:**
47
+ - T4 (16GB): <1B models with LoRA
48
+ - A10G (24GB): 1-3B models with LoRA, <1B full fine-tune
49
+ - A100 (40GB/80GB): 7B+ models with LoRA, 3B full fine-tune
50
+
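+ A hedged starting point that combines several of the mitigations above:
+
+ ```python
+ from peft import LoraConfig
+ from trl import SFTConfig
+
+ config = SFTConfig(
+     output_dir="my-model",
+     per_device_train_batch_size=1,    # 1: smallest per-device batch
+     gradient_accumulation_steps=16,   # 2: keep the effective batch size at 16
+     gradient_checkpointing=True,      # 5: trade compute for memory
+ )
+ peft_config = LoraConfig(r=16, lora_alpha=32)  # 3: train adapters only
+ ```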
51
+ ## Dataset Format Error
52
+
53
+ **Problem:** Training fails with dataset format errors or missing fields.
54
+
55
+ **Solutions:**
56
+ 1. **Check format documentation:**
57
+ ```python
58
+ hf_doc_fetch("https://huggingface.co/docs/trl/dataset_formats")
59
+ ```
60
+
61
+ 2. **Validate dataset before training:**
62
+ ```bash
63
+ python scripts/validate_dataset.py <dataset-name> <method>
64
+ # e.g., python scripts/validate_dataset.py trl-lib/Capybara sft
65
+ ```
66
+
67
+ 3. **Verify field names:**
68
+ - **SFT:** Needs "messages" field (conversational), OR "text" field, OR "prompt"/"completion"
69
+ - **DPO:** Needs "chosen" and "rejected" fields
70
+ - **GRPO:** Needs prompt-only format
71
+
72
+ 4. **Check dataset split:**
73
+ - Ensure split exists (e.g., `split="train"`)
74
+ - Preview dataset: `load_dataset("name", split="train[:5]")`
75
+
76
+ ## Import/Module Errors
77
+
78
+ **Problem:** Job fails with "ModuleNotFoundError" or import errors.
79
+
80
+ **Solutions:**
81
+ 1. **Add PEP 723 header with dependencies:**
82
+ ```python
83
+ # /// script
84
+ # dependencies = [
85
+ # "trl>=0.12.0",
86
+ # "peft>=0.7.0",
87
+ # "transformers>=4.36.0",
88
+ # ]
89
+ # ///
90
+ ```
91
+
92
+ 2. **Verify exact format:**
93
+ - Must have `# ///` delimiters (with space after `#`)
94
+ - Dependencies must be valid PyPI package names
95
+ - Check spelling and version constraints
96
+
97
+ 3. **Test locally first:**
98
+ ```bash
99
+ uv run train.py # Tests if dependencies are correct
100
+ ```
101
+
102
+ ## Authentication Errors
103
+
104
+ **Problem:** Job fails with authentication or permission errors when pushing to Hub.
105
+
106
+ **Solutions:**
107
+ 1. **Verify authentication:**
108
+ ```python
109
+ mcp__huggingface__hf_whoami() # Check who's authenticated
110
+ ```
111
+
112
+ 2. **Check token permissions:**
113
+ - Go to https://huggingface.co/settings/tokens
114
+ - Ensure token has "write" permission
115
+ - Token must not be "read-only"
116
+
117
+ 3. **Verify token in job:**
118
+ ```python
119
+ "secrets": {"HF_TOKEN": "$HF_TOKEN"} # Must be in job config
120
+ ```
121
+
122
+ 4. **Check repo permissions:**
123
+ - User must have write access to target repo
124
+ - If org repo, user must be member with write access
125
+ - Repo must exist or user must have permission to create
126
+
127
+ ## Job Stuck or Not Starting
128
+
129
+ **Problem:** Job shows "pending" or "starting" for extended period.
130
+
131
+ **Solutions:**
132
+ - Check Jobs dashboard for status: https://huggingface.co/jobs
133
+ - Verify hardware availability (some GPU types may have queues)
134
+ - Try different hardware flavor if one is heavily utilized
135
+ - Check for account billing issues (Jobs requires paid plan)
136
+
137
+ **Typical startup times:**
138
+ - CPU jobs: 10-30 seconds
139
+ - GPU jobs: 30-90 seconds
140
+ - If >3 minutes: likely queued or stuck
141
+
142
+ ## Training Loss Not Decreasing
143
+
144
+ **Problem:** Training runs but loss stays flat or doesn't improve.
145
+
146
+ **Solutions:**
147
+ 1. **Check learning rate:** May be too low (try 2e-5 to 5e-5) or too high (try 1e-6)
148
+ 2. **Verify dataset quality:** Inspect examples to ensure they're reasonable
149
+ 3. **Check model size:** Very small models may not have capacity for task
150
+ 4. **Increase training steps:** May need more epochs or larger dataset
151
+ 5. **Verify dataset format:** Wrong format may cause degraded training
152
+
153
+ ## Logs Not Appearing
154
+
155
+ **Problem:** Cannot see training logs or progress.
156
+
157
+ **Solutions:**
158
+ 1. **Wait 30-60 seconds:** Initial logs can be delayed
159
+ 2. **Check logs via MCP tool:**
160
+ ```python
161
+ hf_jobs("logs", {"job_id": "your-job-id"})
162
+ ```
163
+ 3. **Use Trackio for real-time monitoring:** See `references/trackio_guide.md`
164
+ 4. **Verify job is actually running:**
165
+ ```python
166
+ hf_jobs("inspect", {"job_id": "your-job-id"})
167
+ ```
168
+
169
+ ## Checkpoint/Resume Issues
170
+
171
+ **Problem:** Cannot resume from checkpoint or checkpoint not saved.
172
+
173
+ **Solutions:**
174
+ 1. **Enable checkpoint saving:**
175
+ ```python
176
+ SFTConfig(
177
+ save_strategy="steps",
178
+ save_steps=100,
179
+ hub_strategy="every_save", # Push each checkpoint
180
+ )
181
+ ```
182
+
183
+ 2. **Verify checkpoints pushed to Hub:** Check model repo for checkpoint folders
184
+
185
+ 3. **Resume from checkpoint:**
186
+ ```python
187
+ trainer = SFTTrainer(
188
+ model="username/model-name", # Can be checkpoint path
189
+ resume_from_checkpoint="username/model-name/checkpoint-1000",
190
+ )
191
+ ```
192
+
193
+ ## Getting Help
194
+
195
+ If issues persist:
196
+
197
+ 1. **Check TRL documentation:**
198
+ ```python
199
+ hf_doc_search("your issue", product="trl")
200
+ ```
201
+
202
+ 2. **Check Jobs documentation:**
203
+ ```python
204
+ hf_doc_fetch("https://huggingface.co/docs/huggingface_hub/guides/jobs")
205
+ ```
206
+
207
+ 3. **Review related guides:**
208
+ - `references/hub_saving.md` - Hub authentication issues
209
+ - `references/hardware_guide.md` - Hardware selection and specs
210
+ - `references/uv_scripts_guide.md` - UV script format issues
211
+
212
+ 4. **Ask in HF forums:** https://discuss.huggingface.co/
trl/references/uv_scripts_guide.md ADDED
@@ -0,0 +1,414 @@
1
+ # UV Scripts Guide for TRL Training
2
+
3
+ UV scripts are self-contained Python scripts with inline dependency declarations (PEP 723). They're the modern, recommended approach for custom TRL training.
4
+
5
+ ## What are UV Scripts?
6
+
7
+ UV scripts declare dependencies at the top of the file using special comment syntax:
8
+
9
+ ```python
10
+ # /// script
11
+ # dependencies = [
12
+ # "trl>=0.12.0",
13
+ # "transformers>=4.36.0",
14
+ # ]
15
+ # ///
16
+
17
+ # Your training code here
18
+ from trl import SFTTrainer
19
+ ```
20
+
21
+ ## Benefits
22
+
23
+ 1. **Self-contained**: Dependencies are part of the script
24
+ 2. **Version control**: Pin exact versions for reproducibility
25
+ 3. **No setup files**: No requirements.txt or setup.py needed
26
+ 4. **Portable**: Run anywhere UV is installed
27
+ 5. **Clean**: Much cleaner than bash + pip + python strings
28
+
29
+ ## Creating a UV Script
30
+
31
+ ### Step 1: Define Dependencies
32
+
33
+ Start with dependency declaration:
34
+
35
+ ```python
36
+ # /// script
37
+ # dependencies = [
38
+ # "trl>=0.12.0", # TRL for training
39
+ # "transformers>=4.36.0", # Transformers library
40
+ # "datasets>=2.14.0", # Dataset loading
41
+ # "accelerate>=0.24.0", # Distributed training
42
+ # "peft>=0.7.0", # LoRA/PEFT (optional)
43
+ # ]
44
+ # ///
45
+ ```
46
+
47
+ ### Step 2: Add Training Code
48
+
49
+ ```python
50
+ # /// script
51
+ # dependencies = ["trl", "peft"]
52
+ # ///
53
+
54
+ from datasets import load_dataset
55
+ from peft import LoraConfig
56
+ from trl import SFTTrainer, SFTConfig
57
+
58
+ # Load dataset
59
+ dataset = load_dataset("trl-lib/Capybara", split="train")
60
+
61
+ # Configure training
62
+ config = SFTConfig(
63
+ output_dir="my-model",
64
+ num_train_epochs=3,
65
+ push_to_hub=True,
66
+ hub_model_id="username/my-model",
67
+ )
68
+
69
+ # Train
70
+ trainer = SFTTrainer(
71
+ model="Qwen/Qwen2.5-0.5B",
72
+ train_dataset=dataset,
73
+ args=config,
74
+ peft_config=LoraConfig(r=16, lora_alpha=32),
75
+ )
76
+
77
+ trainer.train()
78
+ trainer.push_to_hub()
79
+ ```
80
+
81
+ ### Step 3: Run on Jobs
82
+
83
+ ```python
84
+ hf_jobs("uv", {
85
+ "script": "train.py", # or URL
86
+ "flavor": "a10g-large",
87
+ "timeout": "2h",
88
+ "secrets": {"HF_TOKEN": "$HF_TOKEN"}
89
+ })
90
+ ```
91
+
92
+ ## Running Scripts from URLs
93
+
94
+ UV scripts can be run directly from URLs:
95
+
96
+ ```python
97
+ hf_jobs("uv", {
98
+ "script": "https://gist.github.com/username/abc123/raw/train.py",
99
+ "flavor": "a10g-large",
100
+ "timeout": "2h",
101
+ "secrets": {"HF_TOKEN": "$HF_TOKEN"}
102
+ })
103
+ ```
104
+
105
+ **Benefits:**
106
+ - Share scripts via GitHub Gists
107
+ - Version control in Git repos
108
+ - Scripts accessible from anywhere
109
+
110
+ ## Working with Local Scripts
111
+
112
+ ⚠️ **Important:** The `hf_jobs("uv", ...)` command does NOT support local file paths directly. You must make scripts accessible via URL.
113
+
114
+ ### Why Local Paths Don't Work
115
+
116
+ The Jobs API runs in isolated Docker containers without access to your local filesystem. Scripts must be:
117
+ - Publicly accessible URLs, OR
118
+ - Accessible via authentication (HF_TOKEN for private repos)
119
+
120
+ **Don't:**
121
+ ```python
122
+ # ❌ These will all fail
123
+ hf_jobs("uv", {"script": "train.py"})
124
+ hf_jobs("uv", {"script": "./scripts/train.py"})
125
+ hf_jobs("uv", {"script": "/path/to/train.py"})
126
+ ```
127
+
128
+ **Do:**
129
+ ```python
130
+ # βœ… These work
131
+ hf_jobs("uv", {"script": "https://huggingface.co/user/repo/resolve/main/train.py"})
132
+ hf_jobs("uv", {"script": "https://raw.githubusercontent.com/user/repo/main/train.py"})
133
+ hf_jobs("uv", {"script": "https://gist.githubusercontent.com/user/id/raw/train.py"})
134
+ ```
135
+
136
+ ### Recommended: Upload to Hugging Face Hub
137
+
138
+ The easiest way to use local scripts is to upload them to a Hugging Face repository:
139
+
140
+ ```bash
141
+ # Create a dedicated scripts repo
142
+ huggingface-cli repo create my-training-scripts --type model
143
+
144
+ # Upload your script
145
+ huggingface-cli upload my-training-scripts ./train.py train.py
146
+
147
+ # If you update the script later
148
+ huggingface-cli upload my-training-scripts ./train.py train.py --commit-message "Updated training params"
149
+
150
+ # Use in jobs
151
+ script_url = "https://huggingface.co/USERNAME/my-training-scripts/resolve/main/train.py"
152
+
153
+ hf_jobs("uv", {
154
+ "script": script_url,
155
+ "flavor": "a10g-large",
156
+ "timeout": "2h",
157
+ "secrets": {"HF_TOKEN": "$HF_TOKEN"}
158
+ })
159
+ ```
160
+
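+ The same upload can be done from Python, a minimal equivalent of the CLI commands above:
+
+ ```python
+ from huggingface_hub import HfApi
+
+ api = HfApi()  # picks up HF_TOKEN from the environment
+ api.create_repo("USERNAME/my-training-scripts", exist_ok=True)
+ api.upload_file(
+     path_or_fileobj="./train.py",
+     path_in_repo="train.py",
+     repo_id="USERNAME/my-training-scripts",
+ )
+ ```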
161
+ **Benefits:**
162
+ - βœ… Version control via Git
163
+ - βœ… Private repos supported (with HF_TOKEN)
164
+ - βœ… Easy to share and update
165
+ - βœ… No external dependencies
166
+ - βœ… Integrates with HF ecosystem
167
+
168
+ **For Private Scripts:**
169
+ ```python
170
+ # Your script is in a private repo
171
+ hf_jobs("uv", {
172
+ "script": "https://huggingface.co/USERNAME/private-scripts/resolve/main/train.py",
173
+ "flavor": "a10g-large",
174
+ "secrets": {"HF_TOKEN": "$HF_TOKEN"} # Allows access to private repo
175
+ })
176
+ ```
177
+
178
+ ### Alternative: GitHub Gist
179
+
180
+ For quick scripts or one-off experiments:
181
+
182
+ ```bash
183
+ # 1. Create a gist at https://gist.github.com
184
+ # 2. Paste your script
185
+ # 3. Click "Create public gist" (or secret gist)
186
+ # 4. Click the "Raw" button to get the raw URL
187
+
188
+ # Use in jobs
189
+ hf_jobs("uv", {
190
+ "script": "https://gist.githubusercontent.com/username/gist-id/raw/train.py",
191
+ "flavor": "a10g-large"
192
+ })
193
+ ```
194
+
195
+ **Benefits:**
196
+ - βœ… Quick and easy
197
+ - βœ… No HF CLI setup needed
198
+ - βœ… Good for sharing examples
199
+
200
+ **Limitations:**
201
+ - ❌ Less version control than Git repos
202
+ - ❌ Secret gists are still publicly accessible via URL
203
+
204
+
205
+ ## Using TRL Example Scripts
206
+
207
+ TRL provides maintained scripts that are UV-compatible:
208
+
209
+ ```python
210
+ hf_jobs("uv", {
211
+ "script": "https://raw.githubusercontent.com/huggingface/trl/main/examples/scripts/sft.py",
212
+ "script_args": [
213
+ "--model_name_or_path", "Qwen/Qwen2.5-0.5B",
214
+ "--dataset_name", "trl-lib/Capybara",
215
+ "--output_dir", "my-model",
216
+ "--push_to_hub",
217
+ "--hub_model_id", "username/my-model"
218
+ ],
219
+ "flavor": "a10g-large",
220
+ "timeout": "2h",
221
+ "secrets": {"HF_TOKEN": "$HF_TOKEN"}
222
+ })
223
+ ```
224
+
225
+ **Available TRL scripts:**
226
+ - `sft.py` - Supervised fine-tuning
227
+ - `dpo.py` - Direct Preference Optimization
228
+ - `kto.py` - KTO training
229
+ - `grpo.py` - GRPO training
230
+ - `reward.py` - Reward model training
231
+ - `prm.py` - Process reward model
232
+
233
+ All at: https://github.com/huggingface/trl/tree/main/examples/scripts
234
+
235
+ ## Best Practices
236
+
237
+ ### 1. Pin Versions
238
+
239
+ Always pin dependency versions for reproducibility:
240
+
241
+ ```python
242
+ # /// script
243
+ # dependencies = [
244
+ # "trl==0.12.0", # Exact version
245
+ # "transformers>=4.36.0", # Minimum version
246
+ # ]
247
+ # ///
248
+ ```
249
+
250
+ ### 2. Add Logging
251
+
252
+ Include progress logging for monitoring:
253
+
254
+ ```python
255
+ print("βœ… Dataset loaded")
256
+ print("πŸš€ Starting training...")
257
+ print(f"πŸ“Š Training on {len(dataset)} examples")
258
+ ```
259
+
260
+ ### 3. Validate Inputs
261
+
262
+ Check dataset and configuration before training:
263
+
264
+ ```python
265
+ dataset = load_dataset("trl-lib/Capybara", split="train")
266
+ assert len(dataset) > 0, "Dataset is empty!"
267
+ print(f"βœ… Dataset loaded: {len(dataset)} examples")
268
+ ```
269
+
270
+ ### 4. Add Comments
271
+
272
+ Document the script for future reference:
273
+
274
+ ```python
275
+ # Train Qwen-0.5B on Capybara dataset using LoRA
276
+ # Expected runtime: ~2 hours on a10g-large
277
+ # Cost estimate: ~$6-8
278
+ ```
279
+
280
+ ### 5. Test Locally First
281
+
282
+ Test scripts locally before running on Jobs:
283
+
284
+ ```bash
285
+ uv run train.py # Runs locally with uv
286
+ ```
287
+
288
+ ## Docker Images
289
+
290
+ ### Default Image
291
+
292
+ UV scripts run on a default Python image with UV installed.
293
+
294
+ ### TRL Image
295
+
296
+ Use official TRL image for faster startup:
297
+
298
+ ```python
299
+ hf_jobs("uv", {
300
+ "script": "train.py",
301
+ "image": "huggingface/trl", # Pre-installed TRL dependencies
302
+ "flavor": "a10g-large",
303
+ "timeout": "2h",
304
+ "secrets": {"HF_TOKEN": "$HF_TOKEN"}
305
+ })
306
+ ```
307
+
308
+ **Benefits:**
309
+ - Faster job startup (no pip install)
310
+ - All TRL dependencies pre-installed
311
+ - Tested and maintained by HF
312
+
313
+ ## Template Scripts
314
+
315
+ ### Basic SFT Template
316
+
317
+ ```python
318
+ # /// script
319
+ # dependencies = ["trl>=0.12.0"]
320
+ # ///
321
+
322
+ from datasets import load_dataset
323
+ from trl import SFTTrainer, SFTConfig
324
+
325
+ dataset = load_dataset("DATASET_NAME", split="train")
326
+
327
+ trainer = SFTTrainer(
328
+ model="MODEL_NAME",
329
+ train_dataset=dataset,
330
+ args=SFTConfig(
331
+ output_dir="OUTPUT_DIR",
332
+ num_train_epochs=3,
333
+ push_to_hub=True,
334
+ hub_model_id="USERNAME/MODEL_NAME",
335
+ )
336
+ )
337
+
338
+ trainer.train()
339
+ trainer.push_to_hub()
340
+ ```
341
+
342
+ ### SFT with LoRA Template
343
+
344
+ ```python
345
+ # /// script
346
+ # dependencies = ["trl>=0.12.0", "peft>=0.7.0"]
347
+ # ///
348
+
349
+ from datasets import load_dataset
350
+ from peft import LoraConfig
351
+ from trl import SFTTrainer, SFTConfig
352
+
353
+ dataset = load_dataset("DATASET_NAME", split="train")
354
+
355
+ trainer = SFTTrainer(
356
+ model="MODEL_NAME",
357
+ train_dataset=dataset,
358
+ peft_config=LoraConfig(r=16, lora_alpha=32),
359
+ args=SFTConfig(
360
+ output_dir="OUTPUT_DIR",
361
+ num_train_epochs=3,
362
+ push_to_hub=True,
363
+ hub_model_id="USERNAME/MODEL_NAME",
364
+ )
365
+ )
366
+
367
+ trainer.train()
368
+ trainer.push_to_hub()
369
+ ```
370
+
371
+ ### DPO Template
372
+
373
+ ```python
374
+ # /// script
375
+ # dependencies = ["trl>=0.12.0"]
376
+ # ///
377
+
378
+ from datasets import load_dataset
379
+ from transformers import AutoTokenizer
380
+ from trl import DPOTrainer, DPOConfig
381
+
382
+ model_name = "MODEL_NAME"
383
+ dataset = load_dataset("DATASET_NAME", split="train")
384
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
385
+
386
+ trainer = DPOTrainer(
387
+ model=model_name,
388
+ train_dataset=dataset,
389
+ processing_class=tokenizer,  # the old `tokenizer` argument was renamed in newer TRL
390
+ args=DPOConfig(
391
+ output_dir="OUTPUT_DIR",
392
+ num_train_epochs=3,
393
+ push_to_hub=True,
394
+ hub_model_id="USERNAME/MODEL_NAME",
395
+ )
396
+ )
397
+
398
+ trainer.train()
399
+ trainer.push_to_hub()
400
+ ```
401
+
402
+ ## Troubleshooting
403
+
404
+ ### Issue: Dependencies not installing
405
+ **Check:** Verify dependency names and versions are correct
406
+
407
+ ### Issue: Script not found
408
+ **Check:** Verify URL is accessible and points to raw file
409
+
410
+ ### Issue: Import errors
411
+ **Solution:** Add missing dependencies to `dependencies` list
412
+
413
+ ### Issue: Slow startup
414
+ **Solution:** Use `image="huggingface/trl"` for pre-installed dependencies
trl/scripts/convert_to_gguf.py ADDED
@@ -0,0 +1,301 @@
1
+ #!/usr/bin/env python3
2
+ # /// script
3
+ # dependencies = [
4
+ # "transformers>=4.36.0",
5
+ # "peft>=0.7.0",
6
+ # "torch>=2.0.0",
7
+ # "accelerate>=0.24.0",
8
+ # "huggingface_hub>=0.20.0",
9
+ # "sentencepiece>=0.1.99",
10
+ # "protobuf>=3.20.0",
11
+ # "numpy",
12
+ # "gguf",
13
+ # ]
14
+ # ///
15
+
16
+ import os
17
+ import torch
18
+ from transformers import AutoModelForCausalLM, AutoTokenizer
19
+ from peft import PeftModel
20
+ from huggingface_hub import HfApi, snapshot_download
21
+ import subprocess
22
+
23
+ print("πŸ”„ GGUF Conversion Script")
24
+ print("=" * 60)
25
+
26
+ # Configuration
27
+ ADAPTER_MODEL = "evalstate/qwen-capybara-medium"
28
+ BASE_MODEL = "Qwen/Qwen2.5-0.5B"
29
+ OUTPUT_MODEL_NAME = "evalstate/qwen-capybara-medium-gguf"
30
+ username = os.environ.get("HF_USERNAME", "evalstate")
31
+
32
+ print(f"\nπŸ“¦ Configuration:")
33
+ print(f" Base model: {BASE_MODEL}")
34
+ print(f" Adapter model: {ADAPTER_MODEL}")
35
+ print(f" Output repo: {OUTPUT_MODEL_NAME}")
36
+
37
+ # Step 1: Load base model and adapter
38
+ print("\nπŸ”§ Step 1: Loading base model and LoRA adapter...")
39
+ print(" (This may take a few minutes)")
40
+
41
+ base_model = AutoModelForCausalLM.from_pretrained(
42
+ BASE_MODEL,
43
+ torch_dtype=torch.float16,
44
+ device_map="auto",
45
+ trust_remote_code=True,
46
+ )
47
+ print(" βœ… Base model loaded")
48
+
49
+ # Load and merge adapter
50
+ print(" Loading LoRA adapter...")
51
+ model = PeftModel.from_pretrained(base_model, ADAPTER_MODEL)
52
+ print(" βœ… Adapter loaded")
53
+
54
+ print(" Merging adapter with base model...")
55
+ merged_model = model.merge_and_unload()
56
+ print(" βœ… Models merged!")
57
+
58
+ # Load tokenizer
59
+ tokenizer = AutoTokenizer.from_pretrained(ADAPTER_MODEL, trust_remote_code=True)
60
+ print(" βœ… Tokenizer loaded")
61
+
62
+ # Step 2: Save merged model temporarily
63
+ print("\nπŸ’Ύ Step 2: Saving merged model...")
64
+ merged_dir = "/tmp/merged_model"
65
+ merged_model.save_pretrained(merged_dir, safe_serialization=True)
66
+ tokenizer.save_pretrained(merged_dir)
67
+ print(f" βœ… Merged model saved to {merged_dir}")
68
+
69
+ # Step 3: Install llama.cpp for conversion
70
+ print("\nπŸ“₯ Step 3: Setting up llama.cpp for GGUF conversion...")
71
+ print(" Cloning llama.cpp repository...")
72
+ subprocess.run(
73
+ ["git", "clone", "https://github.com/ggerganov/llama.cpp.git", "/tmp/llama.cpp"],
74
+ check=True,
75
+ capture_output=True
76
+ )
77
+ print(" βœ… llama.cpp cloned")
78
+
79
+ print(" Installing Python dependencies...")
80
+ subprocess.run(
81
+ ["pip", "install", "-r", "/tmp/llama.cpp/requirements.txt"],
82
+ check=True,
83
+ capture_output=True
84
+ )
85
+ # Also need sentencepiece for tokenizer conversion
86
+ subprocess.run(
87
+ ["pip", "install", "sentencepiece", "protobuf"],
88
+ check=True,
89
+ capture_output=True
90
+ )
91
+ print(" βœ… Dependencies installed")
92
+
93
+ # Step 4: Convert to GGUF (FP16)
94
+ print("\nπŸ”„ Step 4: Converting to GGUF format (FP16)...")
95
+ gguf_output_dir = "/tmp/gguf_output"
96
+ os.makedirs(gguf_output_dir, exist_ok=True)
97
+
98
+ convert_script = "/tmp/llama.cpp/convert_hf_to_gguf.py"
99
+ gguf_file = f"{gguf_output_dir}/qwen-capybara-medium-f16.gguf"
100
+
101
+ print(f" Running: python {convert_script} {merged_dir}")
102
+ try:
103
+ result = subprocess.run(
104
+ [
105
+ "python", convert_script,
106
+ merged_dir,
107
+ "--outfile", gguf_file,
108
+ "--outtype", "f16"
109
+ ],
110
+ check=True,
111
+ capture_output=True,
112
+ text=True
113
+ )
114
+ print(result.stdout)
115
+ if result.stderr:
116
+ print("Warnings:", result.stderr)
117
+ except subprocess.CalledProcessError as e:
118
+ print(f"❌ Conversion failed!")
119
+ print("STDOUT:", e.stdout)
120
+ print("STDERR:", e.stderr)
121
+ raise
122
+ print(f" βœ… FP16 GGUF created: {gguf_file}")
123
+
124
+ # Step 5: Quantize to different formats
125
+ print("\nβš™οΈ Step 5: Creating quantized versions...")
126
+ quantize_bin = "/tmp/llama.cpp/llama-quantize"
127
+
128
+ # Build quantize tool first
129
+ print(" Building quantize tool...")
130
+ subprocess.run(
131
+ ["make", "-C", "/tmp/llama.cpp", "llama-quantize"],
132
+ check=True,
133
+ capture_output=True
134
+ )
135
+ print(" βœ… Quantize tool built")
136
+
137
+ # Common quantization formats
138
+ quant_formats = [
139
+ ("Q4_K_M", "4-bit, medium quality (recommended)"),
140
+ ("Q5_K_M", "5-bit, higher quality"),
141
+ ("Q8_0", "8-bit, very high quality"),
142
+ ]
143
+
144
+ quantized_files = []
145
+ for quant_type, description in quant_formats:
146
+ print(f" Creating {quant_type} quantization ({description})...")
147
+ quant_file = f"{gguf_output_dir}/qwen-capybara-medium-{quant_type.lower()}.gguf"
148
+
149
+ subprocess.run(
150
+ [quantize_bin, gguf_file, quant_file, quant_type],
151
+ check=True,
152
+ capture_output=True
153
+ )
154
+ quantized_files.append((quant_file, quant_type))
155
+
156
+ # Get file size
157
+ size_mb = os.path.getsize(quant_file) / (1024 * 1024)
158
+ print(f" βœ… {quant_type}: {size_mb:.1f} MB")
159
+
160
+ # Step 6: Upload to Hub
161
+ print("\n☁️ Step 6: Uploading to Hugging Face Hub...")
162
+ api = HfApi()
163
+
164
+ # Create repo
165
+ print(f" Creating repository: {OUTPUT_MODEL_NAME}")
166
+ try:
167
+ api.create_repo(repo_id=OUTPUT_MODEL_NAME, repo_type="model", exist_ok=True)
168
+ print(" βœ… Repository created")
169
+ except Exception as e:
170
+ print(f" ��️ Repository may already exist: {e}")
171
+
172
+ # Upload FP16 version
173
+ print(" Uploading FP16 GGUF...")
174
+ api.upload_file(
175
+ path_or_fileobj=gguf_file,
176
+ path_in_repo="qwen-capybara-medium-f16.gguf",
177
+ repo_id=OUTPUT_MODEL_NAME,
178
+ )
179
+ print(" βœ… FP16 uploaded")
180
+
181
+ # Upload quantized versions
182
+ for quant_file, quant_type in quantized_files:
183
+ print(f" Uploading {quant_type}...")
184
+ api.upload_file(
185
+ path_or_fileobj=quant_file,
186
+ path_in_repo=f"qwen-capybara-medium-{quant_type.lower()}.gguf",
187
+ repo_id=OUTPUT_MODEL_NAME,
188
+ )
189
+ print(f" βœ… {quant_type} uploaded")
190
+
191
+ # Create README
192
+ print("\nπŸ“ Creating README...")
193
+ readme_content = f"""---
194
+ base_model: {BASE_MODEL}
195
+ tags:
196
+ - gguf
197
+ - llama.cpp
198
+ - quantized
199
+ - trl
200
+ - sft
201
+ ---
202
+
203
+ # {OUTPUT_MODEL_NAME.split('/')[-1]}
204
+
205
+ This is a GGUF conversion of [{ADAPTER_MODEL}](https://huggingface.co/{ADAPTER_MODEL}), which is a LoRA fine-tuned version of [{BASE_MODEL}](https://huggingface.co/{BASE_MODEL}).
206
+
207
+ ## Model Details
208
+
209
+ - **Base Model:** {BASE_MODEL}
210
+ - **Fine-tuned Model:** {ADAPTER_MODEL}
211
+ - **Training:** Supervised Fine-Tuning (SFT) with TRL
212
+ - **Format:** GGUF (for llama.cpp, Ollama, LM Studio, etc.)
213
+
214
+ ## Available Quantizations
215
+
216
+ | File | Quant | Size | Description | Use Case |
217
+ |------|-------|------|-------------|----------|
218
+ | qwen-capybara-medium-f16.gguf | F16 | ~1GB | Full precision | Best quality, slower |
219
+ | qwen-capybara-medium-q8_0.gguf | Q8_0 | ~500MB | 8-bit | High quality |
220
+ | qwen-capybara-medium-q5_k_m.gguf | Q5_K_M | ~350MB | 5-bit medium | Good quality, smaller |
221
+ | qwen-capybara-medium-q4_k_m.gguf | Q4_K_M | ~300MB | 4-bit medium | Recommended - good balance |
222
+
223
+ ## Usage
224
+
225
+ ### With llama.cpp
226
+
227
+ ```bash
228
+ # Download model
229
+ huggingface-cli download {OUTPUT_MODEL_NAME} qwen-capybara-medium-q4_k_m.gguf
230
+
231
+ # Run with llama.cpp
232
+ ./llama-cli -m qwen-capybara-medium-q4_k_m.gguf -p "Your prompt here"
233
+ ```
234
+
235
+ ### With Ollama
236
+
237
+ 1. Create a `Modelfile`:
238
+ ```
239
+ FROM ./qwen-capybara-medium-q4_k_m.gguf
240
+ ```
241
+
242
+ 2. Create the model:
243
+ ```bash
244
+ ollama create qwen-capybara -f Modelfile
245
+ ollama run qwen-capybara
246
+ ```
247
+
248
+ ### With LM Studio
249
+
250
+ 1. Download the `.gguf` file
251
+ 2. Import into LM Studio
252
+ 3. Start chatting!
253
+
254
+ ## Training Details
255
+
256
+ This model was fine-tuned using:
257
+ - **Dataset:** trl-lib/Capybara (1,000 examples)
258
+ - **Method:** Supervised Fine-Tuning with LoRA
259
+ - **Epochs:** 3
260
+ - **LoRA rank:** 16
261
+ - **Hardware:** A10G Large GPU
262
+
263
+ ## License
264
+
265
+ Inherits the license from the base model: {BASE_MODEL}
266
+
267
+ ## Citation
268
+
269
+ ```bibtex
270
+ @misc{{qwen-capybara-medium-gguf,
271
+ author = {{{username}}},
272
+ title = {{Qwen Capybara Medium GGUF}},
273
+ year = {{2025}},
274
+ publisher = {{Hugging Face}},
275
+ url = {{https://huggingface.co/{OUTPUT_MODEL_NAME}}}
276
+ }}
277
+ ```
278
+
279
+ ---
280
+
281
+ *Converted to GGUF format using llama.cpp*
282
+ """
283
+
284
+ api.upload_file(
285
+ path_or_fileobj=readme_content.encode(),
286
+ path_in_repo="README.md",
287
+ repo_id=OUTPUT_MODEL_NAME,
288
+ )
289
+ print(" βœ… README uploaded")
290
+
291
+ print("\n" + "=" * 60)
292
+ print("βœ… GGUF Conversion Complete!")
293
+ print(f"πŸ“¦ Repository: https://huggingface.co/{OUTPUT_MODEL_NAME}")
294
+ print("\nπŸ“₯ Download with:")
295
+ print(f" huggingface-cli download {OUTPUT_MODEL_NAME} qwen-capybara-medium-q4_k_m.gguf")
296
+ print("\nπŸš€ Use with Ollama:")
297
+ print(" 1. Download the GGUF file")
298
+ print(" 2. Create Modelfile: FROM ./qwen-capybara-medium-q4_k_m.gguf")
299
+ print(" 3. ollama create qwen-capybara -f Modelfile")
300
+ print(" 4. ollama run qwen-capybara")
301
+ print("=" * 60)
trl/scripts/estimate_cost.py ADDED
@@ -0,0 +1,149 @@
1
+ #!/usr/bin/env python3
2
+ # /// script
3
+ # dependencies = []
4
+ # ///
5
+ """
6
+ Estimate training time and cost for TRL jobs.
7
+
8
+ Usage:
9
+ python estimate_cost.py --model <model> --dataset <dataset> --hardware <flavor>
10
+
11
+ Example:
12
+ python estimate_cost.py --model Qwen/Qwen2.5-0.5B --dataset trl-lib/Capybara --hardware a10g-large
13
+ """
14
+
15
+ import argparse
16
+
17
+ # Hardware costs per hour (approximate)
18
+ HARDWARE_COSTS = {
19
+ "t4-small": 0.75,
20
+ "t4-medium": 1.50,
21
+ "l4x1": 2.50,
22
+ "a10g-small": 3.50,
23
+ "a10g-large": 5.00,
24
+ "a10g-largex2": 10.00,
25
+ "a10g-largex4": 20.00,
26
+ "a100-large": 10.00,
27
+ }
28
+
29
+ # Model sizes in billions of parameters
30
+ MODEL_SIZES = {
31
+ "0.5B": 0.5,
32
+ "1.5B": 1.5,
33
+ "3B": 3,
34
+ "7B": 7,
35
+ "13B": 13,
36
+ }
37
+
38
+ def estimate_training_time(model_params, dataset_size, epochs, hardware):
39
+ """Estimate training time in hours."""
40
+ # Rough estimates based on empirical observations
41
+ # These are approximations and actual times will vary
42
+
43
+ base_time_per_1k_examples = 0.1 # hours for 1B model on a10g-large
44
+
45
+ # Adjust for model size
46
+ time = base_time_per_1k_examples * model_params * (dataset_size / 1000) * epochs
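+ # Worked example: 0.5B params, 16k examples, 3 epochs
+ # => 0.1 * 0.5 * (16000 / 1000) * 3 = 2.4 hours before the hardware multiplier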
47
+
48
+ # Adjust for hardware (relative to a10g-large baseline)
49
+ hardware_multipliers = {
50
+ "t4-small": 2.0,
51
+ "t4-medium": 1.5,
52
+ "l4x1": 1.2,
53
+ "a10g-small": 1.3,
54
+ "a10g-large": 1.0,
55
+ "a10g-largex2": 0.6,
56
+ "a10g-largex4": 0.4,
57
+ "a100-large": 0.7,
58
+ }
59
+
60
+ multiplier = hardware_multipliers.get(hardware, 1.0)
61
+ time *= multiplier
62
+
63
+ return time
64
+
65
+ def parse_args():
66
+ parser = argparse.ArgumentParser(description="Estimate training cost for TRL jobs")
67
+ parser.add_argument("--model", required=True, help="Model name or size (e.g., 'Qwen/Qwen2.5-0.5B' or '0.5B')")
68
+ parser.add_argument("--dataset", required=True, help="Dataset name")
69
+ parser.add_argument("--hardware", required=True, choices=HARDWARE_COSTS.keys(), help="Hardware flavor")
70
+ parser.add_argument("--dataset-size", type=int, help="Override dataset size (number of examples)")
71
+ parser.add_argument("--epochs", type=int, default=3, help="Number of training epochs")
72
+ return parser.parse_args()
73
+
74
+ def extract_model_size(model_name):
75
+ """Extract model size from name or return parsed value."""
76
+ for size_str, size_val in MODEL_SIZES.items():
77
+ if size_str in model_name:
78
+ return size_val
79
+
80
+ # Try to parse directly
81
+ try:
82
+ if "B" in model_name:
83
+ return float(model_name.replace("B", ""))
84
+ except ValueError:
85
+ pass
86
+
87
+ return 1.0 # Default to 1B if can't determine
88
+
89
+ def main():
90
+ args = parse_args()
91
+
92
+ # Extract model parameters
93
+ model_params = extract_model_size(args.model)
94
+ print(f"πŸ“Š Model: {args.model} (~{model_params}B parameters)")
95
+
96
+ # Estimate dataset size (would need to load to get real size)
97
+ if args.dataset_size:
98
+ dataset_size = args.dataset_size
99
+ else:
100
+ # Common dataset sizes (approximations)
101
+ dataset_sizes = {
102
+ "trl-lib/Capybara": 16000,
103
+ "Anthropic/hh-rlhf": 160000,
104
+ }
105
+ dataset_size = dataset_sizes.get(args.dataset, 10000)
106
+
107
+ print(f"πŸ“¦ Dataset: {args.dataset} (~{dataset_size} examples)")
108
+ print(f"πŸ”„ Epochs: {args.epochs}")
109
+ print(f"πŸ’» Hardware: {args.hardware}")
110
+ print()
111
+
112
+ # Estimate training time
113
+ estimated_hours = estimate_training_time(model_params, dataset_size, args.epochs, args.hardware)
114
+ estimated_cost = estimated_hours * HARDWARE_COSTS[args.hardware]
115
+
116
+ # Recommend timeout with buffer
117
+ recommended_timeout_hours = estimated_hours * 1.3 # 30% buffer
118
+
119
+ print(f"⏱️ Estimated training time: {estimated_hours:.1f} hours")
120
+ print(f"πŸ’° Estimated cost: ${estimated_cost:.2f}")
121
+ print(f"⏰ Recommended timeout: {recommended_timeout_hours:.1f}h (with 30% buffer)")
122
+ print()
123
+
124
+ # Warnings and recommendations
125
+ if estimated_hours > 4:
126
+ print("⚠️ Long training time - consider:")
127
+ print(" - Using faster hardware")
128
+ print(" - Reducing epochs")
129
+ print(" - Using a smaller dataset subset for testing")
130
+
131
+ if model_params >= 7 and args.hardware not in ["a10g-largex2", "a10g-largex4", "a100-large"]:
132
+ print("⚠️ Large model - consider using:")
133
+ print(" - Larger GPU (a100-large)")
134
+ print(" - Multi-GPU setup (a10g-largex2 or a10g-largex4)")
135
+ print(" - LoRA/PEFT for memory efficiency")
136
+
137
+ print()
138
+ print("πŸ“‹ Example job configuration:")
139
+ print(f"""
140
+ hf_jobs("uv", {{
141
+ "script": "your_training_script.py",
142
+ "flavor": "{args.hardware}",
143
+ "timeout": "{recommended_timeout_hours:.0f}h",
144
+ "secrets": {{"HF_TOKEN": "$HF_TOKEN"}}
145
+ }})
146
+ """)
147
+
148
+ if __name__ == "__main__":
149
+ main()
trl/scripts/train_dpo_example.py ADDED
@@ -0,0 +1,98 @@
1
+ #!/usr/bin/env python3
2
+ # /// script
3
+ # dependencies = [
4
+ # "trl>=0.12.0",
5
+ # "transformers>=4.36.0",
6
+ # "accelerate>=0.24.0",
7
+ # "trackio",
8
+ # ]
9
+ # ///
10
+
11
+ """
12
+ Production-ready DPO training example for preference learning.
13
+
14
+ DPO (Direct Preference Optimization) trains models on preference pairs
15
+ (chosen vs rejected responses) without requiring a reward model.
16
+
17
+ Usage with hf_jobs MCP tool:
18
+ hf_jobs("uv", {
19
+ "script": '''<paste this entire file>''',
20
+ "flavor": "a10g-large",
21
+ "timeout": "3h",
22
+ "secrets": {"HF_TOKEN": "$HF_TOKEN"},
23
+ })
24
+
25
+ Or submit the script content directly inline without saving to a file.
26
+ """
27
+
28
+ import trackio
29
+ from datasets import load_dataset
30
+ from trl import DPOTrainer, DPOConfig
31
+
32
+ # Initialize Trackio for real-time monitoring
33
+ trackio.init(
34
+ project="qwen-dpo-alignment",
35
+ space_id="username/my-trackio-dashboard",
36
+ config={
37
+ "model": "Qwen/Qwen2.5-0.5B-Instruct",
38
+ "dataset": "trl-lib/ultrafeedback_binarized",
39
+ "method": "DPO",
40
+ "beta": 0.1,
41
+ "num_epochs": 1,
42
+ }
43
+ )
44
+
45
+ # Load preference dataset
46
+ dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")
47
+ print(f"βœ… Dataset loaded: {len(dataset)} preference pairs")
48
+
49
+ # Training configuration
50
+ config = DPOConfig(
51
+ # CRITICAL: Hub settings
52
+ output_dir="qwen-dpo-aligned",
53
+ push_to_hub=True,
54
+ hub_model_id="username/qwen-dpo-aligned",
55
+ hub_strategy="every_save",
56
+
57
+ # DPO-specific parameters
58
+ beta=0.1, # KL penalty coefficient (higher = stay closer to reference)
59
+
60
+ # Training parameters
61
+ num_train_epochs=1, # DPO typically needs fewer epochs than SFT
62
+ per_device_train_batch_size=4,
63
+ gradient_accumulation_steps=4,
64
+ learning_rate=5e-7, # DPO uses much lower LR than SFT
65
+
66
+ # Logging & checkpointing
67
+ logging_steps=10,
68
+ save_strategy="steps",
69
+ save_steps=100,
70
+ save_total_limit=2,
71
+
72
+ # Optimization
73
+ warmup_ratio=0.1,
74
+ lr_scheduler_type="cosine",
75
+
76
+ # Monitoring
77
+ report_to="trackio",
78
+ )
79
+
80
+ # Initialize and train
81
+ # Note: DPO requires an instruct-tuned model as the base
82
+ trainer = DPOTrainer(
83
+ model="Qwen/Qwen2.5-0.5B-Instruct", # Use instruct model, not base model
84
+ train_dataset=dataset,
85
+ args=config,
86
+ )
87
+
88
+ print("πŸš€ Starting DPO training...")
89
+ trainer.train()
90
+
91
+ print("πŸ’Ύ Pushing to Hub...")
92
+ trainer.push_to_hub()
93
+
94
+ # Finish Trackio tracking
95
+ trackio.finish()
96
+
97
+ print("βœ… Complete! Model at: https://huggingface.co/username/qwen-dpo-aligned")
98
+ print("πŸ“Š View metrics at: https://huggingface.co/spaces/username/my-trackio-dashboard")
trl/scripts/train_grpo_example.py ADDED
@@ -0,0 +1,97 @@
1
+ #!/usr/bin/env python3
2
+ # /// script
3
+ # dependencies = [
4
+ # "trl>=0.12.0",
5
+ # "transformers>=4.36.0",
6
+ # "accelerate>=0.24.0",
7
+ # "trackio",
8
+ # ]
9
+ # ///
10
+
11
+ """
12
+ Production-ready GRPO training example for online RL.
13
+
14
+ GRPO (Group Relative Policy Optimization) is an online RL method that
15
+ optimizes relative to group performance. Best for tasks with automatic
16
+ reward signals like code execution or math verification.
17
+
18
+ Usage with hf_jobs MCP tool:
19
+ hf_jobs("uv", {
20
+ "script": '''<paste this entire file>''',
21
+ "flavor": "a10g-large",
22
+ "timeout": "4h",
23
+ "secrets": {"HF_TOKEN": "$HF_TOKEN"},
24
+ })
25
+
26
+ Or submit the script content directly inline without saving to a file.
27
+
28
+ Note: For most GRPO use cases, the TRL maintained script is recommended:
29
+ https://raw.githubusercontent.com/huggingface/trl/main/examples/scripts/grpo.py
30
+ """
31
+
32
+ import trackio
33
+ from datasets import load_dataset
34
+ from trl import GRPOTrainer, GRPOConfig
35
+
36
+ # Initialize Trackio for real-time monitoring
37
+ trackio.init(
38
+ project="qwen-grpo-math",
39
+ space_id="username/my-trackio-dashboard",
40
+ config={
41
+ "model": "Qwen/Qwen2.5-0.5B-Instruct",
42
+ "dataset": "trl-lib/math_shepherd",
43
+ "method": "GRPO",
44
+ }
45
+ )
46
+
47
+ # Load dataset (GRPO uses prompt-only format)
48
+ dataset = load_dataset("trl-lib/math_shepherd", split="train")
49
+ print(f"βœ… Dataset loaded: {len(dataset)} prompts")
50
+
51
+ # Training configuration
52
+ config = GRPOConfig(
53
+ # CRITICAL: Hub settings
54
+ output_dir="qwen-grpo-math",
55
+ push_to_hub=True,
56
+ hub_model_id="username/qwen-grpo-math",
57
+ hub_strategy="every_save",
58
+
59
+ # Training parameters
60
+ num_train_epochs=1,
61
+ per_device_train_batch_size=4,
62
+ gradient_accumulation_steps=4,
63
+ learning_rate=1e-6,
64
+
65
+ # Logging & checkpointing
66
+ logging_steps=10,
67
+ save_strategy="steps",
68
+ save_steps=100,
69
+ save_total_limit=2,
70
+
71
+ # Optimization
72
+ warmup_ratio=0.1,
73
+ lr_scheduler_type="cosine",
74
+
75
+ # Monitoring
76
+ report_to="trackio",
77
+ )
78
+
79
+ # Initialize and train
80
+ # Note: GRPO requires an instruct-tuned model as the base
81
+ trainer = GRPOTrainer(
82
+ model="Qwen/Qwen2.5-0.5B-Instruct",
83
+ train_dataset=dataset,
84
+ args=config,
85
+ )
86
+
87
+ print("πŸš€ Starting GRPO training...")
88
+ trainer.train()
89
+
90
+ print("πŸ’Ύ Pushing to Hub...")
91
+ trainer.push_to_hub()
92
+
93
+ # Finish Trackio tracking
94
+ trackio.finish()
95
+
96
+ print("βœ… Complete! Model at: https://huggingface.co/username/qwen-grpo-math")
97
+ print("πŸ“Š View metrics at: https://huggingface.co/spaces/username/my-trackio-dashboard")
trl/scripts/train_sft_example.py ADDED
@@ -0,0 +1,111 @@
1
+ #!/usr/bin/env python3
2
+ # /// script
3
+ # dependencies = [
4
+ # "trl>=0.12.0",
5
+ # "peft>=0.7.0",
6
+ # "transformers>=4.36.0",
7
+ # "accelerate>=0.24.0",
8
+ # "trackio", # For real-time monitoring
9
+ # ]
10
+ # ///
11
+
12
+ """
13
+ Production-ready SFT training example with all best practices.
14
+
15
+ This script demonstrates:
16
+ - Trackio integration for real-time monitoring
17
+ - LoRA/PEFT for efficient training
18
+ - Proper Hub saving configuration
19
+ - Checkpoint management
20
+ - Optimized training parameters
21
+
22
+ Usage with hf_jobs MCP tool:
23
+ hf_jobs("uv", {
24
+ "script": '''<paste this entire file>''',
25
+ "flavor": "a10g-large",
26
+ "timeout": "3h",
27
+ "secrets": {"HF_TOKEN": "$HF_TOKEN"},
28
+ })
29
+
30
+ Or submit the script content directly inline without saving to a file.
31
+ """
32
+
33
+ import trackio
34
+ from datasets import load_dataset
35
+ from peft import LoraConfig
36
+ from trl import SFTTrainer, SFTConfig
37
+
38
+ # Initialize Trackio for real-time monitoring
39
+ trackio.init(
40
+ project="qwen-capybara-sft",
41
+ space_id="username/my-trackio-dashboard", # Creates Space if it doesn't exist
42
+ config={
43
+ "model": "Qwen/Qwen2.5-0.5B",
44
+ "dataset": "trl-lib/Capybara",
45
+ "learning_rate": 2e-5,
46
+ "num_epochs": 3,
47
+ "peft_method": "LoRA",
48
+ }
49
+ )
50
+
51
+ # Load and validate
52
+ dataset = load_dataset("trl-lib/Capybara", split="train")
53
+ print(f"βœ… Dataset loaded: {len(dataset)} examples")
54
+
55
+ # Training configuration
56
+ config = SFTConfig(
57
+ # CRITICAL: Hub settings
58
+ output_dir="qwen-capybara-sft",
59
+ push_to_hub=True,
60
+ hub_model_id="username/qwen-capybara-sft",
61
+ hub_strategy="every_save", # Push checkpoints
62
+
63
+ # Training parameters
64
+ num_train_epochs=3,
65
+ per_device_train_batch_size=4,
66
+ gradient_accumulation_steps=4,
67
+ learning_rate=2e-5,
68
+
69
+ # Logging & checkpointing
70
+ logging_steps=10,
71
+ save_strategy="steps",
72
+ save_steps=100,
73
+ save_total_limit=2,
74
+
75
+ # Optimization
76
+ warmup_ratio=0.1,
77
+ lr_scheduler_type="cosine",
78
+
79
+ # Monitoring
80
+ report_to="trackio", # Integrate with Trackio
81
+ )
82
+
83
+ # LoRA configuration
84
+ peft_config = LoraConfig(
85
+ r=16,
86
+ lora_alpha=32,
87
+ lora_dropout=0.05,
88
+ bias="none",
89
+ task_type="CAUSAL_LM",
90
+ target_modules=["q_proj", "v_proj"],
91
+ )
92
+
93
+ # Initialize and train
94
+ trainer = SFTTrainer(
95
+ model="Qwen/Qwen2.5-0.5B",
96
+ train_dataset=dataset,
97
+ args=config,
98
+ peft_config=peft_config,
99
+ )
100
+
101
+ print("πŸš€ Starting training...")
102
+ trainer.train()
103
+
104
+ print("πŸ’Ύ Pushing to Hub...")
105
+ trainer.push_to_hub()
106
+
107
+ # Finish Trackio tracking
108
+ trackio.finish()
109
+
110
+ print("βœ… Complete! Model at: https://huggingface.co/username/qwen-capybara-sft")
111
+ print("πŸ“Š View metrics at: https://huggingface.co/spaces/username/my-trackio-dashboard")
trl/scripts/validate_dataset.py ADDED
@@ -0,0 +1,175 @@
1
+ #!/usr/bin/env python3
2
+ # /// script
3
+ # dependencies = [
4
+ # "datasets>=2.14.0",
5
+ # ]
6
+ # ///
7
+ """
8
+ Validate dataset format for TRL training.
9
+
10
+ Usage:
11
+ python validate_dataset.py <dataset_name> <method>
12
+
13
+ Examples:
14
+ python validate_dataset.py trl-lib/Capybara sft
15
+ python validate_dataset.py Anthropic/hh-rlhf dpo
16
+ """
17
+
18
+ import sys
19
+ from datasets import load_dataset
20
+
21
+ def validate_sft_dataset(dataset):
22
+ """Validate SFT dataset format."""
+     print("πŸ” Validating SFT dataset...")
+
+     # Check for common fields
+     columns = dataset.column_names
+     print(f"πŸ“‹ Columns: {columns}")
+
+     has_messages = "messages" in columns
+     has_text = "text" in columns
+
+     if not (has_messages or has_text):
+         print("❌ Dataset must have 'messages' or 'text' field")
+         return False
+
+     # Check first example
+     example = dataset[0]
+
+     if has_messages:
+         messages = example["messages"]
+         if not isinstance(messages, list):
+             print("❌ 'messages' field must be a list")
+             return False
+
+         if len(messages) == 0:
+             print("❌ 'messages' field is empty")
+             return False
+
+         # Check message format
+         msg = messages[0]
+         if not isinstance(msg, dict):
+             print("❌ Messages must be dictionaries")
+             return False
+
+         if "role" not in msg or "content" not in msg:
+             print("❌ Messages must have 'role' and 'content' keys")
+             return False
+
+         print("βœ… Messages format valid")
+         print(f"   First message: {msg['role']}: {msg['content'][:50]}...")
+
+     if has_text:
+         text = example["text"]
+         if not isinstance(text, str):
+             print("❌ 'text' field must be a string")
+             return False
+
+         if len(text) == 0:
+             print("❌ 'text' field is empty")
+             return False
+
+         print("βœ… Text format valid")
+         print(f"   First text: {text[:100]}...")
+
+     return True
+
+ def validate_dpo_dataset(dataset):
+     """Validate DPO dataset format."""
+     print("πŸ” Validating DPO dataset...")
+
+     columns = dataset.column_names
+     print(f"πŸ“‹ Columns: {columns}")
+
+     required = ["prompt", "chosen", "rejected"]
+     missing = [col for col in required if col not in columns]
+
+     if missing:
+         print(f"❌ Missing required fields: {missing}")
+         return False
+
+     # Check first example
+     example = dataset[0]
+
+     for field in required:
+         value = example[field]
+         if isinstance(value, str):
+             if len(value) == 0:
+                 print(f"❌ '{field}' field is empty")
+                 return False
+             print(f"βœ… '{field}' format valid (string)")
+         elif isinstance(value, list):
+             if len(value) == 0:
+                 print(f"❌ '{field}' field is empty")
+                 return False
+             print(f"βœ… '{field}' format valid (list of messages)")
+         else:
+             print(f"❌ '{field}' must be string or list")
+             return False
+
+     return True
+
+ def validate_kto_dataset(dataset):
+     """Validate KTO dataset format."""
+     print("πŸ” Validating KTO dataset...")
+
+     columns = dataset.column_names
+     print(f"πŸ“‹ Columns: {columns}")
+
+     required = ["prompt", "completion", "label"]
+     missing = [col for col in required if col not in columns]
+
+     if missing:
+         print(f"❌ Missing required fields: {missing}")
+         return False
+
+     # Check first example
+     example = dataset[0]
+
+     if not isinstance(example["label"], bool):
+         print("❌ 'label' field must be boolean")
+         return False
+
+     print("βœ… KTO format valid")
+     return True
+
+ def main():
+     if len(sys.argv) != 3:
+         print("Usage: python validate_dataset.py <dataset_name> <method>")
+         print("Methods: sft, dpo, kto")
+         sys.exit(1)
+
+     dataset_name = sys.argv[1]
+     method = sys.argv[2].lower()
+
+     print(f"πŸ“¦ Loading dataset: {dataset_name}")
+     try:
+         dataset = load_dataset(dataset_name, split="train")
+         print(f"βœ… Dataset loaded: {len(dataset)} examples")
+     except Exception as e:
+         print(f"❌ Failed to load dataset: {e}")
+         sys.exit(1)
+
+     validators = {
+         "sft": validate_sft_dataset,
+         "dpo": validate_dpo_dataset,
+         "kto": validate_kto_dataset,
+     }
+
+     if method not in validators:
+         print(f"❌ Unknown method: {method}")
+         print(f"Supported methods: {list(validators.keys())}")
+         sys.exit(1)
+
+     validator = validators[method]
+     valid = validator(dataset)
+
+     if valid:
+         print(f"\nβœ… Dataset is valid for {method.upper()} training")
+         sys.exit(0)
+     else:
+         print(f"\n❌ Dataset is NOT valid for {method.upper()} training")
+         sys.exit(1)
+
+ if __name__ == "__main__":
+     main()
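+
+ # Note: thanks to the PEP 723 header above, this script can also be run with
+ # `uv run validate_dataset.py <dataset_name> <method>` with no manual installs.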