evalstate committed
Commit 7093da7 · 1 Parent(s): eb23fb4

update preferences prefer uv

trl/SKILL.md CHANGED
@@ -40,7 +40,7 @@ Use this skill when users want to:
 
 When assisting with training jobs:
 
-1. **Submit jobs directly with inline scripts** - The `script` parameter accepts Python code directly. Do NOT save to local files unless the user explicitly requests it. Pass the script content as a string to `hf_jobs()`. If user asks to "train a model", "fine-tune", or similar requests, you MUST create the training script AND submit the job immediately.
+1. **ALWAYS use `hf_jobs()` MCP tool** - Submit jobs using `hf_jobs("uv", {...})`, NOT bash `trl-jobs` commands. The `script` parameter accepts Python code directly. Do NOT save to local files unless the user explicitly requests it. Pass the script content as a string to `hf_jobs()`. If user asks to "train a model", "fine-tune", or similar requests, you MUST create the training script AND submit the job immediately using `hf_jobs()`.
 
 2. **Always include Trackio** - Every training script should include Trackio for real-time monitoring. Use example scripts in `scripts/` as templates.
 
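For illustration, a minimal submission in the spirit of the new point 1 might look like the sketch below; the model, dataset, flavor, and hub id are placeholders, not part of this commit:

```python
# Hypothetical minimal example: the training script travels inline as a
# string, so nothing is saved to the local filesystem.
script = """
# /// script
# dependencies = ["trl"]
# ///
from datasets import load_dataset
from trl import SFTTrainer, SFTConfig

dataset = load_dataset("trl-lib/Capybara", split="train")
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,
    args=SFTConfig(output_dir="my-model", push_to_hub=True,
                   hub_model_id="USERNAME/my-model"),
)
trainer.train()
trainer.push_to_hub()
"""

hf_jobs("uv", {
    "script": script,                      # inline code, not a file path
    "flavor": "a10g-large",                # placeholder hardware flavor
    "timeout": "2h",
    "secrets": {"HF_TOKEN": "$HF_TOKEN"},  # required for the Hub push
})
```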
@@ -116,27 +116,9 @@ The job is running in the background. Ask me to check status/logs when ready!
 
 ## Quick Start: Three Approaches
 
-### Approach 1: TRL Jobs Package (Easiest - Recommended for Beginners)
-
-The `trl-jobs` package provides optimized defaults and one-liner training:
-
-```bash
-# Install (users only, not needed for this environment)
-pip install trl-jobs
-
-# Train with SFT (simplest possible)
-trl-jobs sft \
-    --model_name Qwen/Qwen2.5-0.5B \
-    --dataset_name trl-lib/Capybara
-```
-
-**Benefits:** Pre-configured settings, automatic Trackio integration, automatic Hub push, one-line commands
-**When to use:** User is new to training, standard scenarios, quick experimentation
-**Repository:** https://github.com/huggingface/trl-jobs
-
-### Approach 2: UV Scripts (Recommended for Custom Training)
-
-UV scripts use PEP 723 inline dependencies for clean, self-contained training. **Submit script content directly inline:**
+### Approach 1: UV Scripts (Recommended - Default Choice)
+
+UV scripts use PEP 723 inline dependencies for clean, self-contained training. **This is the primary approach for Claude Code.**
 
 ```python
 hf_jobs("uv", {
@@ -183,11 +165,50 @@ trackio.finish()
 })
 ```
 
-**Benefits:** Clean code, dependencies declared inline (PEP 723), no file saving required
-**When to use:** Custom training logic, full control over training
-**See:** `references/uv_scripts_guide.md` for complete UV scripts guide
+**Benefits:** Direct MCP tool usage, clean code, dependencies declared inline (PEP 723), no file saving required, full control
+**When to use:** Default choice for all training tasks in Claude Code, custom training logic, any scenario requiring `hf_jobs()`
+
+#### Working with Scripts
+
+⚠️ **Important:** The `script` parameter accepts either inline code (as shown above) OR a URL. **Local file paths do NOT work.**
+
+**Why local paths don't work:**
+Jobs run in isolated Docker containers without access to your local filesystem. Scripts must be:
+- Inline code (recommended for custom training)
+- Publicly accessible URLs
+- Private repo URLs (with HF_TOKEN)
+
+**Common mistakes:**
+```python
+# ❌ These will all fail
+hf_jobs("uv", {"script": "train.py"})
+hf_jobs("uv", {"script": "./scripts/train.py"})
+hf_jobs("uv", {"script": "/path/to/train.py"})
+```
+
+**Correct approaches:**
+```python
+# ✅ Inline code (recommended)
+hf_jobs("uv", {"script": "# /// script\n# dependencies = [...]\n# ///\n\n<your code>"})
+
+# ✅ From Hugging Face Hub
+hf_jobs("uv", {"script": "https://huggingface.co/user/repo/resolve/main/train.py"})
+
+# ✅ From GitHub
+hf_jobs("uv", {"script": "https://raw.githubusercontent.com/user/repo/main/train.py"})
+
+# ✅ From Gist
+hf_jobs("uv", {"script": "https://gist.githubusercontent.com/user/id/raw/train.py"})
+```
+
+**To use local scripts:** Upload to HF Hub first:
+```bash
+huggingface-cli repo create my-training-scripts --type model
+huggingface-cli upload my-training-scripts ./train.py train.py
+# Use: https://huggingface.co/USERNAME/my-training-scripts/resolve/main/train.py
+```
 
-### Approach 3: TRL Maintained Scripts (Run Official Examples)
+### Approach 2: TRL Maintained Scripts (Official Examples)
 
 TRL provides battle-tested scripts for all methods. Can be run from URLs:
 
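The inline script body itself is elided between the two hunks above. A representative sketch of such a script follows (placeholders throughout; the Trackio wiring is assumed from the `trackio.finish()` context in the hunk header, and is not taken from the commit):

```python
hf_jobs("uv", {
    "script": """
# /// script
# dependencies = ["trl>=0.12.0", "peft>=0.7.0", "trackio"]
# ///
import trackio
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTTrainer, SFTConfig

trackio.init(project="sft-demo")  # assumed project name

dataset = load_dataset("trl-lib/Capybara", split="train")
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,
    peft_config=LoraConfig(r=16, lora_alpha=32),
    args=SFTConfig(
        output_dir="my-model",
        push_to_hub=True,                  # environment is ephemeral
        hub_model_id="USERNAME/my-model",
        report_to="trackio",               # assumed Trackio integration
    ),
)
trainer.train()
trainer.push_to_hub()
trackio.finish()  # matches the context shown in the hunk header
""",
    "flavor": "a10g-large",
    "timeout": "2h",
    "secrets": {"HF_TOKEN": "$HF_TOKEN"},
})
```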
@@ -225,6 +246,26 @@ hub_repo_details(["uv-scripts/classification"], repo_type="dataset", include_rea
 
 **Popular collections:** ocr, classification, synthetic-data, vllm, dataset-creation
 
+### Approach 3: TRL Jobs Package (For Terminal Use)
+
+The `trl-jobs` package provides optimized defaults and one-liner training. **Note: This approach uses bash commands, not the `hf_jobs()` MCP tool.**
+
+```bash
+# Install (users only, not needed for this environment)
+pip install trl-jobs
+
+# Train with SFT (simplest possible)
+trl-jobs sft \
+    --model_name Qwen/Qwen2.5-0.5B \
+    --dataset_name trl-lib/Capybara
+```
+
+**Benefits:** Pre-configured settings, automatic Trackio integration, automatic Hub push, one-line commands
+**When to use:** User working in terminal directly (not Claude Code context), quick local experimentation
+**Repository:** https://github.com/huggingface/trl-jobs
+
+⚠️ **In Claude Code context, use Approach 1 (UV Scripts) with `hf_jobs()` instead.**
+
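For comparison, the `trl-jobs sft` one-liner above corresponds roughly to this `hf_jobs()` submission of the maintained `sft.py` script from Approach 2 (flavor and hub id are placeholders):

```python
# Assumed hf_jobs() equivalent of the trl-jobs one-liner above.
hf_jobs("uv", {
    "script": "https://raw.githubusercontent.com/huggingface/trl/main/examples/scripts/sft.py",
    "script_args": [
        "--model_name_or_path", "Qwen/Qwen2.5-0.5B",
        "--dataset_name", "trl-lib/Capybara",
        "--push_to_hub",
        "--hub_model_id", "USERNAME/my-model",
    ],
    "flavor": "a10g-large",
    "timeout": "2h",
    "secrets": {"HF_TOKEN": "$HF_TOKEN"},
})
```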
 ## Hardware Selection
 
  | Model Size | Recommended Hardware | Cost (approx/hr) | Use Case |
@@ -545,7 +586,6 @@ Add to PEP 723 header:
 - `references/training_patterns.md` - Common training patterns and examples
 - `references/gguf_conversion.md` - Complete GGUF conversion guide
 - `references/trackio_guide.md` - Trackio monitoring setup
-- `references/uv_scripts_guide.md` - Complete UV scripts guide
 - `references/hardware_guide.md` - Hardware specs and selection
 - `references/hub_saving.md` - Hub authentication troubleshooting
 - `references/troubleshooting.md` - Common issues and solutions
@@ -577,7 +617,7 @@ Add to PEP 723 header:
 4. **Always enable Hub push** - Environment is ephemeral; without push, all results lost
 5. **Include Trackio** - Use example scripts as templates for real-time monitoring
 6. **Offer cost estimation** - When parameters are known, use `scripts/estimate_cost.py`
-7. **Three approaches available:** TRL Jobs package (easiest), UV scripts (custom, modern), TRL maintained scripts (official examples)
+7. **Use UV scripts (Approach 1)** - Default to `hf_jobs("uv", {...})` with inline scripts; TRL maintained scripts for standard training; avoid bash `trl-jobs` commands in Claude Code
 8. **Use hf_doc_fetch/hf_doc_search** for latest TRL documentation
 9. **Validate dataset format** before training with dataset inspector (see Dataset Validation section)
 10. **Choose appropriate hardware** for model size; use LoRA for models >7B
 
trl/references/troubleshooting.md CHANGED
@@ -257,7 +257,7 @@ If issues persist:
 3. **Review related guides:**
    - `references/hub_saving.md` - Hub authentication issues
    - `references/hardware_guide.md` - Hardware selection and specs
-   - `references/uv_scripts_guide.md` - UV script format issues
    - `references/training_patterns.md` - Eval dataset requirements
+   - SKILL.md "Working with Scripts" section - Script format and URL issues
 
 4. **Ask in HF forums:** https://discuss.huggingface.co/
 
trl/references/uv_scripts_guide.md DELETED
@@ -1,414 +0,0 @@
-# UV Scripts Guide for TRL Training
-
-UV scripts are self-contained Python scripts with inline dependency declarations (PEP 723). They're the modern, recommended approach for custom TRL training.
-
-## What are UV Scripts?
-
-UV scripts declare dependencies at the top of the file using special comment syntax:
-
-```python
-# /// script
-# dependencies = [
-#     "trl>=0.12.0",
-#     "transformers>=4.36.0",
-# ]
-# ///
-
-# Your training code here
-from trl import SFTTrainer
-```
-
-## Benefits
-
-1. **Self-contained**: Dependencies are part of the script
-2. **Version control**: Pin exact versions for reproducibility
-3. **No setup files**: No requirements.txt or setup.py needed
-4. **Portable**: Run anywhere UV is installed
-5. **Clean**: Much cleaner than bash + pip + python strings
-
-## Creating a UV Script
-
-### Step 1: Define Dependencies
-
-Start with dependency declaration:
-
-```python
-# /// script
-# dependencies = [
-#     "trl>=0.12.0",           # TRL for training
-#     "transformers>=4.36.0",  # Transformers library
-#     "datasets>=2.14.0",      # Dataset loading
-#     "accelerate>=0.24.0",    # Distributed training
-#     "peft>=0.7.0",           # LoRA/PEFT (optional)
-# ]
-# ///
-```
-
-### Step 2: Add Training Code
-
-```python
-# /// script
-# dependencies = ["trl", "peft"]
-# ///
-
-from datasets import load_dataset
-from peft import LoraConfig
-from trl import SFTTrainer, SFTConfig
-
-# Load dataset
-dataset = load_dataset("trl-lib/Capybara", split="train")
-
-# Configure training
-config = SFTConfig(
-    output_dir="my-model",
-    num_train_epochs=3,
-    push_to_hub=True,
-    hub_model_id="username/my-model",
-)
-
-# Train
-trainer = SFTTrainer(
-    model="Qwen/Qwen2.5-0.5B",
-    train_dataset=dataset,
-    args=config,
-    peft_config=LoraConfig(r=16, lora_alpha=32),
-)
-
-trainer.train()
-trainer.push_to_hub()
-```
-
-### Step 3: Run on Jobs
-
-```python
-hf_jobs("uv", {
-    "script": "train.py",  # or URL
-    "flavor": "a10g-large",
-    "timeout": "2h",
-    "secrets": {"HF_TOKEN": "$HF_TOKEN"}
-})
-```
-
-## Running Scripts from URLs
-
-UV scripts can be run directly from URLs:
-
-```python
-hf_jobs("uv", {
-    "script": "https://gist.github.com/username/abc123/raw/train.py",
-    "flavor": "a10g-large",
-    "timeout": "2h",
-    "secrets": {"HF_TOKEN": "$HF_TOKEN"}
-})
-```
-
-**Benefits:**
-- Share scripts via GitHub Gists
-- Version control in Git repos
-- Scripts accessible from anywhere
-
-## Working with Local Scripts
-
-⚠️ **Important:** The `hf_jobs("uv", ...)` command does NOT support local file paths directly. You must make scripts accessible via URL.
-
-### Why Local Paths Don't Work
-
-The Jobs API runs in isolated Docker containers without access to your local filesystem. Scripts must be:
-- Publicly accessible URLs, OR
-- Accessible via authentication (HF_TOKEN for private repos)
-
-**Don't:**
-```python
-# ❌ These will all fail
-hf_jobs("uv", {"script": "train.py"})
-hf_jobs("uv", {"script": "./scripts/train.py"})
-hf_jobs("uv", {"script": "/path/to/train.py"})
-```
-
-**Do:**
-```python
-# ✅ These work
-hf_jobs("uv", {"script": "https://huggingface.co/user/repo/resolve/main/train.py"})
-hf_jobs("uv", {"script": "https://raw.githubusercontent.com/user/repo/main/train.py"})
-hf_jobs("uv", {"script": "https://gist.githubusercontent.com/user/id/raw/train.py"})
-```
-
-### Recommended: Upload to Hugging Face Hub
-
-The easiest way to use local scripts is to upload them to a Hugging Face repository:
-
-```bash
-# Create a dedicated scripts repo
-huggingface-cli repo create my-training-scripts --type model
-
-# Upload your script
-huggingface-cli upload my-training-scripts ./train.py train.py
-
-# If you update the script later
-huggingface-cli upload my-training-scripts ./train.py train.py --commit-message "Updated training params"
-
-# Use in jobs
-script_url = "https://huggingface.co/USERNAME/my-training-scripts/resolve/main/train.py"
-
-hf_jobs("uv", {
-    "script": script_url,
-    "flavor": "a10g-large",
-    "timeout": "2h",
-    "secrets": {"HF_TOKEN": "$HF_TOKEN"}
-})
-```
-
-**Benefits:**
-- ✅ Version control via Git
-- ✅ Private repos supported (with HF_TOKEN)
-- ✅ Easy to share and update
-- ✅ No external dependencies
-- ✅ Integrates with HF ecosystem
-
-**For Private Scripts:**
-```python
-# Your script is in a private repo
-hf_jobs("uv", {
-    "script": "https://huggingface.co/USERNAME/private-scripts/resolve/main/train.py",
-    "flavor": "a10g-large",
-    "secrets": {"HF_TOKEN": "$HF_TOKEN"}  # Allows access to private repo
-})
-```
-
-### Alternative: GitHub Gist
-
-For quick scripts or one-off experiments:
-
-```bash
-# 1. Create a gist at https://gist.github.com
-# 2. Paste your script
-# 3. Click "Create public gist" (or secret gist)
-# 4. Click the "Raw" button to get the raw URL
-
-# Use in jobs
-hf_jobs("uv", {
-    "script": "https://gist.githubusercontent.com/username/gist-id/raw/train.py",
-    "flavor": "a10g-large"
-})
-```
-
-**Benefits:**
-- ✅ Quick and easy
-- ✅ No HF CLI setup needed
-- ✅ Good for sharing examples
-
-**Limitations:**
-- ❌ Less version control than Git repos
-- ❌ Secret gists are still publicly accessible via URL
-
-
-## Using TRL Example Scripts
-
-TRL provides maintained scripts that are UV-compatible:
-
-```python
-hf_jobs("uv", {
-    "script": "https://raw.githubusercontent.com/huggingface/trl/main/examples/scripts/sft.py",
-    "script_args": [
-        "--model_name_or_path", "Qwen/Qwen2.5-0.5B",
-        "--dataset_name", "trl-lib/Capybara",
-        "--output_dir", "my-model",
-        "--push_to_hub",
-        "--hub_model_id", "username/my-model"
-    ],
-    "flavor": "a10g-large",
-    "timeout": "2h",
-    "secrets": {"HF_TOKEN": "$HF_TOKEN"}
-})
-```
-
-**Available TRL scripts:**
-- `sft.py` - Supervised fine-tuning
-- `dpo.py` - Direct Preference Optimization
-- `kto.py` - KTO training
-- `grpo.py` - GRPO training
-- `reward.py` - Reward model training
-- `prm.py` - Process reward model
-
-All at: https://github.com/huggingface/trl/tree/main/examples/scripts
-
-## Best Practices
-
-### 1. Pin Versions
-
-Always pin dependency versions for reproducibility:
-
-```python
-# /// script
-# dependencies = [
-#     "trl==0.12.0",           # Exact version
-#     "transformers>=4.36.0",  # Minimum version
-# ]
-# ///
-```
-
-### 2. Add Logging
-
-Include progress logging for monitoring:
-
-```python
-print("✅ Dataset loaded")
-print("🚀 Starting training...")
-print(f"📊 Training on {len(dataset)} examples")
-```
-
-### 3. Validate Inputs
-
-Check dataset and configuration before training:
-
-```python
-dataset = load_dataset("trl-lib/Capybara", split="train")
-assert len(dataset) > 0, "Dataset is empty!"
-print(f"✅ Dataset loaded: {len(dataset)} examples")
-```
-
-### 4. Add Comments
-
-Document the script for future reference:
-
-```python
-# Train Qwen-0.5B on Capybara dataset using LoRA
-# Expected runtime: ~2 hours on a10g-large
-# Cost estimate: ~$6-8
-```
-
-### 5. Test Locally First
-
-Test scripts locally before running on Jobs:
-
-```bash
-uv run train.py  # Runs locally with uv
-```
-
-## Docker Images
-
-### Default Image
-
-UV scripts run on default Python image with UV installed.
-
-### TRL Image
-
-Use official TRL image for faster startup:
-
-```python
-hf_jobs("uv", {
-    "script": "train.py",
-    "image": "huggingface/trl",  # Pre-installed TRL dependencies
-    "flavor": "a10g-large",
-    "timeout": "2h",
-    "secrets": {"HF_TOKEN": "$HF_TOKEN"}
-})
-```
-
-**Benefits:**
-- Faster job startup (no pip install)
-- All TRL dependencies pre-installed
-- Tested and maintained by HF
-
-## Template Scripts
-
-### Basic SFT Template
-
-```python
-# /// script
-# dependencies = ["trl>=0.12.0"]
-# ///
-
-from datasets import load_dataset
-from trl import SFTTrainer, SFTConfig
-
-dataset = load_dataset("DATASET_NAME", split="train")
-
-trainer = SFTTrainer(
-    model="MODEL_NAME",
-    train_dataset=dataset,
-    args=SFTConfig(
-        output_dir="OUTPUT_DIR",
-        num_train_epochs=3,
-        push_to_hub=True,
-        hub_model_id="USERNAME/MODEL_NAME",
-    )
-)
-
-trainer.train()
-trainer.push_to_hub()
-```
-
-### SFT with LoRA Template
-
-```python
-# /// script
-# dependencies = ["trl>=0.12.0", "peft>=0.7.0"]
-# ///
-
-from datasets import load_dataset
-from peft import LoraConfig
-from trl import SFTTrainer, SFTConfig
-
-dataset = load_dataset("DATASET_NAME", split="train")
-
-trainer = SFTTrainer(
-    model="MODEL_NAME",
-    train_dataset=dataset,
-    peft_config=LoraConfig(r=16, lora_alpha=32),
-    args=SFTConfig(
-        output_dir="OUTPUT_DIR",
-        num_train_epochs=3,
-        push_to_hub=True,
-        hub_model_id="USERNAME/MODEL_NAME",
-    )
-)
-
-trainer.train()
-trainer.push_to_hub()
-```
-
-### DPO Template
-
-```python
-# /// script
-# dependencies = ["trl>=0.12.0"]
-# ///
-
-from datasets import load_dataset
-from transformers import AutoTokenizer
-from trl import DPOTrainer, DPOConfig
-
-model_name = "MODEL_NAME"
-dataset = load_dataset("DATASET_NAME", split="train")
-tokenizer = AutoTokenizer.from_pretrained(model_name)
-
-trainer = DPOTrainer(
-    model=model_name,
-    train_dataset=dataset,
-    tokenizer=tokenizer,
-    args=DPOConfig(
-        output_dir="OUTPUT_DIR",
-        num_train_epochs=3,
-        push_to_hub=True,
-        hub_model_id="USERNAME/MODEL_NAME",
-    )
-)
-
-trainer.train()
-trainer.push_to_hub()
-```
-
-## Troubleshooting
-
-### Issue: Dependencies not installing
-**Check:** Verify dependency names and versions are correct
-
-### Issue: Script not found
-**Check:** Verify URL is accessible and points to raw file
-
-### Issue: Import errors
-**Solution:** Add missing dependencies to `dependencies` list
-
-### Issue: Slow startup
-**Solution:** Use `image="huggingface/trl"` for pre-installed dependencies