davanstrien (HF Staff) committed
Commit eb70165 · 1 Parent(s): d1d4f16

Add Unsloth VLM training script for Iconclass


- Add iconclass-vlm-sft.py: Main training script with Unsloth optimizations
- Add submit_training_job.py: Helper for HF Jobs submission
- Add README.md: Comprehensive usage guide

Features:
- 2x faster training with Unsloth
- 4-bit quantization + LoRA
- Self-contained UV script
- Works with HF Jobs + Unsloth Docker image

Files changed (3)
  1. README.md +336 -0
  2. iconclass-vlm-sft.py +656 -0
  3. submit_training_job.py +187 -0
README.md ADDED
@@ -0,0 +1,336 @@
# VLM Training with Unsloth

Fine-tune Vision-Language Models efficiently using [Unsloth](https://github.com/unslothai/unsloth) - get 2x faster training with lower memory usage!

## 🎨 Example: Iconclass VLM

This directory contains scripts for fine-tuning VLMs to generate [Iconclass](https://iconclass.org) metadata codes from artwork images. Iconclass is a hierarchical classification system used in art history and cultural heritage.

### What You'll Train

Given an artwork image, the model outputs structured JSON:

```json
{
  "iconclass-codes": ["25H213", "25H216", "25I"]
}
```

Where codes represent:
- `25H213`: river
- `25H216`: waterfall
- `25I`: city-view with man-made constructions

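If you want to consume this output programmatically, a small parser along the lines of the sketch below works. It assumes the response contains exactly one JSON object with an `iconclass-codes` key; the helper name is illustrative and not part of the scripts in this directory.

```python
import json


def parse_iconclass_output(raw: str) -> list[str]:
    """Extract the list of Iconclass codes from the model's raw text output."""
    start, end = raw.find("{"), raw.rfind("}") + 1
    if start == -1 or end == 0:
        raise ValueError(f"No JSON object found in model output: {raw!r}")
    payload = json.loads(raw[start:end])
    codes = payload.get("iconclass-codes", [])
    if not isinstance(codes, list):
        raise ValueError("Expected a list under 'iconclass-codes'")
    return [str(code) for code in codes]


print(parse_iconclass_output('{"iconclass-codes": ["25H213", "25H216", "25I"]}'))
# ['25H213', '25H216', '25I']
```
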
## 🚀 Quick Start

### Option 1: Run on HF Jobs (Recommended)

```bash
# Set your HF token
export HF_TOKEN=your_token_here

# Submit training job
python submit_training_job.py
```

That's it! Your model will train on cloud GPUs and automatically push to the Hub.

### Option 2: Run Locally (Requires GPU)

```bash
# Install UV (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Run training
uv run iconclass-vlm-sft.py \
  --base-model Qwen/Qwen3-VL-8B-Instruct \
  --dataset davanstrien/iconclass-vlm-sft \
  --output-model your-username/iconclass-vlm
```

### Option 3: Quick Test (100 steps)

```bash
uv run iconclass-vlm-sft.py \
  --base-model Qwen/Qwen3-VL-8B-Instruct \
  --dataset davanstrien/iconclass-vlm-sft \
  --output-model your-username/iconclass-vlm-test \
  --max-steps 100
```

## 📋 Requirements

### For HF Jobs
- Hugging Face account with Jobs access
- HF token with write permissions

### For Local Training
- CUDA-capable GPU (A100 recommended, A10G works)
- 40GB+ VRAM for 8B models (with 4-bit quantization)
- Python 3.11+
- [UV](https://docs.astral.sh/uv/) installed

## 🎛️ Configuration

### Quick Config via Python Script

Edit `submit_training_job.py`:

```python
# Model and dataset
BASE_MODEL = "Qwen/Qwen3-VL-8B-Instruct"
DATASET = "davanstrien/iconclass-vlm-sft"
OUTPUT_MODEL = "your-username/iconclass-vlm"

# Training settings
BATCH_SIZE = 2
GRADIENT_ACCUMULATION = 8
LEARNING_RATE = 2e-5
MAX_STEPS = None  # Auto-calculate for 1 epoch

# LoRA settings
LORA_R = 16
LORA_ALPHA = 32

# GPU
GPU_FLAVOR = "a100-large"  # or "a100", "a10g-large"
```

### Full CLI Options

```bash
uv run iconclass-vlm-sft.py --help
```

Key arguments:

| Argument | Default | Description |
|----------|---------|-------------|
| `--base-model` | Required | Base VLM (e.g., Qwen/Qwen3-VL-8B-Instruct) |
| `--dataset` | Required | Training dataset on HF Hub |
| `--output-model` | Required | Where to push your model |
| `--lora-r` | 16 | LoRA rank (higher = more capacity) |
| `--lora-alpha` | 32 | LoRA alpha (usually 2×r) |
| `--learning-rate` | 2e-5 | Learning rate |
| `--batch-size` | 2 | Per-device batch size |
| `--gradient-accumulation` | 8 | Gradient accumulation steps |
| `--max-steps` | Auto | Total training steps |
| `--num-epochs` | 1.0 | Epochs (if max-steps not set) |

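As a concrete example, a run that overrides several of these defaults might look like this (illustrative values only; every flag shown is defined in `iconclass-vlm-sft.py`):

```bash
uv run iconclass-vlm-sft.py \
  --base-model Qwen/Qwen3-VL-8B-Instruct \
  --dataset davanstrien/iconclass-vlm-sft \
  --output-model your-username/iconclass-vlm \
  --lora-r 32 --lora-alpha 64 \
  --learning-rate 1e-5 \
  --batch-size 4 --gradient-accumulation 4 \
  --num-epochs 2 \
  --save-steps 200
```
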
## 🏗️ Architecture

### What Makes This Fast?

1. **Unsloth Optimizations**: 2x faster training through:
   - Optimized CUDA kernels
   - Better memory management
   - Efficient gradient checkpointing

2. **4-bit Quantization**:
   - Loads model in 4-bit precision
   - Dramatically reduces VRAM usage
   - Minimal impact on quality with LoRA

3. **LoRA (Low-Rank Adaptation)**:
   - Only trains 0.1-1% of parameters (see the check below)
   - Much faster than full fine-tuning
   - Easy to merge back or share

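If you want to verify the "0.1-1% of parameters" figure for your own run, a generic PyTorch check is enough; this is a standalone sketch, not part of the training script:

```python
import torch.nn as nn


def trainable_fraction(model: nn.Module) -> float:
    """Fraction of parameters that will actually receive gradient updates."""
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return trainable / total


# Call this after FastVisionModel.get_peft_model(...) has applied the adapters:
# print(f"Trainable parameters: {trainable_fraction(model):.4%}")
```
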
### Training Flow

```
Dataset (HF Hub)
        ↓
FastVisionModel.from_pretrained (4-bit)
        ↓
Apply LoRA adapters
        ↓
SFTTrainer (Unsloth-optimized)
        ↓
Push to Hub with model card
```

## 📊 Expected Performance

### Training Time (Qwen3-VL-8B on A100)

| Dataset Size | Batch Config | Time | Cost (est.) |
|--------------|--------------|------|-------------|
| 44K samples | BS=2, GA=8 | ~4h | $16 |
| 10K samples | BS=2, GA=8 | ~1h | $4 |
| 1K samples | BS=2, GA=8 | ~10min | $0.70 |

*BS = Batch Size, GA = Gradient Accumulation*

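These timings follow from the number of optimizer steps, which you can sanity-check with the same arithmetic the script uses to derive `max_steps`:

```python
samples = 44_000   # training examples (first row of the table)
batch_size = 2     # per-device batch size
grad_accum = 8     # gradient accumulation steps

steps_per_epoch = samples // (batch_size * grad_accum)
print(steps_per_epoch)  # 2750 optimizer steps for one epoch
```
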
### GPU Requirements

| Model Size | Min GPU | Recommended | VRAM Usage |
|------------|---------|-------------|------------|
| 3B-4B | A10G | A100 | ~20GB |
| 7B-8B | A100 | A100 | ~35GB |
| 13B+ | A100 (80GB) | A100 (80GB) | ~60GB |

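To see which row applies to your local machine before launching, you can query the visible GPU with a standalone snippet (independent of the training script):

```python
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: {props.total_memory / 1024**3:.0f} GB VRAM")
else:
    print("No CUDA GPU visible")
```
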
## 🔍 Monitoring Your Job

### Via CLI

```bash
# Check status
hfjobs status your-job-id

# Stream logs
hfjobs logs your-job-id --follow

# List all jobs
hfjobs list
```

### Via Python

```python
from huggingface_hub import HfApi

api = HfApi()
job = api.get_job("your-job-id")

print(job.status)
print(job.logs())
```

### Via Web

Your job URL: `https://huggingface.co/jobs/your-username/your-job-id`

## 🎯 Using Your Fine-Tuned Model

```python
from unsloth import FastVisionModel
from PIL import Image

# Load your model
model, tokenizer = FastVisionModel.from_pretrained(
    model_name="your-username/iconclass-vlm",
    load_in_4bit=True,
    max_seq_length=2048,
)
FastVisionModel.for_inference(model)

# Prepare input
image = Image.open("artwork.jpg")
prompt = "Extract ICONCLASS labels for this image."

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": prompt},
        ],
    }
]

# Apply the chat template, then tokenize the image and text together
input_text = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
inputs = tokenizer(
    image,
    input_text,
    add_special_tokens=False,
    return_tensors="pt",
).to("cuda")

# Generate
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    temperature=0.7,
    top_p=0.9,
)

response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
# {"iconclass-codes": ["31A235", "31A24(+1)", "61B(+54)"]}
```

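Note that `tokenizer.decode(outputs[0], ...)` returns the whole sequence, prompt included. If you only want the model's completion before parsing it as JSON, one option (a sketch, not part of the committed example) is to slice off the prompt tokens first:

```python
# `inputs` and `outputs` come from the example above.
prompt_length = inputs["input_ids"].shape[1]
completion = tokenizer.decode(outputs[0][prompt_length:], skip_special_tokens=True)
print(completion)  # just the model's answer, e.g. {"iconclass-codes": [...]}
```
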
## 📦 Files in This Directory

| File | Purpose |
|------|---------|
| `iconclass-vlm-sft.py` | Main training script (UV script) |
| `submit_training_job.py` | Helper to submit HF Jobs |
| `README.md` | This file |

## 🛠️ Troubleshooting

### Out of Memory?

Reduce batch size or increase gradient accumulation:
```bash
--batch-size 1 --gradient-accumulation 16
```

### Training Too Slow?

Increase the batch size if you have VRAM to spare:
```bash
--batch-size 4 --gradient-accumulation 4
```

### Model Not Learning?

Try adjusting the learning rate:
```bash
--learning-rate 5e-5  # Higher
--learning-rate 1e-5  # Lower
```

Or increase the LoRA rank:
```bash
--lora-r 32 --lora-alpha 64
```

### Jobs Failing?

Check the logs:
```bash
hfjobs logs your-job-id
```

Common issues:
- `HF_TOKEN` not set correctly
- Output model repo doesn't exist (create it first)
- GPU out of memory (reduce batch size)

## 🔗 Related Resources

- **Unsloth**: https://github.com/unslothai/unsloth
- **Unsloth Docs**: https://docs.unsloth.ai/
- **TRL**: https://github.com/huggingface/trl
- **HF Jobs**: https://huggingface.co/docs/hub/spaces-sdks-jobs
- **UV**: https://docs.astral.sh/uv/
- **Iconclass**: https://iconclass.org
- **Blog Post**: https://danielvanstrien.xyz/posts/2025/iconclass-vlm-sft/

## 💡 Tips

1. **Start Small**: Test with `--max-steps 100` before full training
2. **Use Weights & Biases**: The script sets `report_to="none"`; change it to `"wandb"` (or `"tensorboard"`) in the `SFTConfig` for richer monitoring
3. **Save Often**: Use `--save-steps 50` for more frequent checkpoints
4. **Multiple GPUs**: Script automatically uses all available GPUs
5. **Resume Training**: Checkpoints are written to `./iconclass-vlm-outputs` every `--save-steps` steps; resuming requires passing `resume_from_checkpoint` to `trainer.train()` (not currently exposed as a CLI flag)

## 📝 Citation

If you use this training setup, please cite:

```bibtex
@misc{iconclass-vlm-training,
  author = {Daniel van Strien},
  title = {Efficient VLM Fine-tuning with Unsloth for Art History},
  year = {2025},
  publisher = {GitHub},
  howpublished = {\url{https://github.com/davanstrien/uv-scripts}}
}
```

---

Made with 🦥 [Unsloth](https://github.com/unslothai/unsloth) •
Powered by 🤖 [UV Scripts](https://huggingface.co/uv-scripts)
iconclass-vlm-sft.py ADDED
@@ -0,0 +1,656 @@
# /// script
# requires-python = ">=3.11"
# dependencies = [
#     "datasets",
#     "transformers==4.57.0",
#     "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git",
#     "trl==0.22.2",
#     "huggingface-hub[hf_transfer]",
#     "pillow",
#     "torch",
#     "peft",
#     "bitsandbytes",
#     "accelerate",
# ]
#
# ///

"""
Fine-tune Vision-Language Models for Iconclass metadata generation using Unsloth.

This script trains VLMs to generate structured Iconclass codes from artwork images,
using Unsloth's optimized training for 2x speed and lower memory usage.

Features:
- 🚀 2x faster training with Unsloth optimizations
- 💾 4-bit quantization for efficient memory usage
- 📊 LoRA fine-tuning for parameter efficiency
- 🎨 Specialized for art history metadata (Iconclass)
- 🤗 Seamless HF Hub integration
"""

import argparse
import logging
import os
import sys
from datetime import datetime

import torch
from datasets import load_dataset
from huggingface_hub import ModelCard, login
from trl import SFTConfig, SFTTrainer
from unsloth import FastVisionModel, UnslothVisionDataCollator

logging.basicConfig(
    level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s"
)
logger = logging.getLogger(__name__)


def check_cuda_availability():
    """Check if CUDA is available and exit if not."""
    if not torch.cuda.is_available():
        logger.error("CUDA is not available. This script requires a GPU.")
        logger.error("Please run on a machine with a CUDA-capable GPU.")
        sys.exit(1)
    else:
        logger.info(f"CUDA is available. GPU: {torch.cuda.get_device_name(0)}")


def create_model_card(
    base_model: str,
    dataset: str,
    num_samples: int,
    training_time: str,
    lora_r: int,
    lora_alpha: int,
    learning_rate: float,
    batch_size: int,
    gradient_accumulation: int,
    max_steps: int,
) -> str:
    """Create a comprehensive model card for the fine-tuned model."""
    model_name = base_model.split("/")[-1]

    return f"""---
base_model: {base_model}
tags:
- vision
- vlm
- iconclass
- art-history
- unsloth
- fine-tuned
- lora
library_name: transformers
license: mit
---

# Iconclass VLM - Fine-tuned {model_name}

This model generates [Iconclass](https://iconclass.org) metadata codes from artwork images.
Fine-tuned using [Unsloth](https://github.com/unslothai/unsloth) for efficient training.

## Model Details

- **Base Model**: [{base_model}](https://huggingface.co/{base_model})
- **Training Method**: Supervised Fine-Tuning with LoRA
- **Training Framework**: Unsloth + TRL
- **Task**: Structured metadata generation (JSON output)
- **Domain**: Art history / Cultural heritage

## Training Details

### Dataset

- **Source**: [{dataset}](https://huggingface.co/datasets/{dataset})
- **Samples**: {num_samples:,}
- **Format**: Vision-language pairs with Iconclass labels
- **Training Time**: {training_time}
- **Training Date**: {datetime.now().strftime("%Y-%m-%d")}

### Configuration

**LoRA Settings**
- Rank (r): {lora_r}
- Alpha: {lora_alpha}
- Dropout: 0.1
- Target modules: Language layers + Attention

**Training Hyperparameters**
- Learning rate: {learning_rate}
- Batch size: {batch_size}
- Gradient accumulation: {gradient_accumulation}
- Effective batch size: {batch_size * gradient_accumulation}
- Max steps: {max_steps:,}
- Optimizer: AdamW 8-bit
- Precision: bfloat16

**Efficiency**
- Quantization: 4-bit (Unsloth)
- Training speedup: ~2x (vs standard training)
- Memory optimization: Gradient checkpointing

## Usage

```python
from unsloth import FastVisionModel
from PIL import Image

# Load model
model, tokenizer = FastVisionModel.from_pretrained(
    model_name="your-username/this-model",
    load_in_4bit=True,
    max_seq_length=2048,
)
FastVisionModel.for_inference(model)

# Prepare input
image = Image.open("artwork.jpg")
prompt = "Extract ICONCLASS labels for this image."

messages = [
    {{
        "role": "user",
        "content": [
            {{"type": "image"}},
            {{"type": "text", "text": prompt}},
        ],
    }}
]

# Apply the chat template, then tokenize the image and text together
input_text = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
inputs = tokenizer(
    image,
    input_text,
    add_special_tokens=False,
    return_tensors="pt",
).to("cuda")

# Generate
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    temperature=0.7,
    top_p=0.9,
)

response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)  # {{"iconclass-codes": ["25H213", "25H216", "25I"]}}
```

## Output Format

The model outputs JSON with Iconclass codes:

```json
{{
  "iconclass-codes": ["31A235", "31A24(+1)", "61B(+54)"]
}}
```

## Iconclass System

Iconclass is a hierarchical classification system for art and iconography:
- **2** Nature (landscapes, animals, plants)
- **3** Human Being (portraits, figures, anatomy)
- **4** Society & Civilization (architecture, tools)
- **7** Bible (religious scenes)
- **9** Classical Mythology

Learn more: [iconclass.org](https://iconclass.org)

## Limitations

- Trained specifically on Western art history
- Best performance on artworks with existing Iconclass labels
- May struggle with contemporary or non-Western art
- Outputs should be validated by domain experts

## Training Script

Trained using UV script for reproducibility:

```bash
uv run https://huggingface.co/datasets/uv-scripts/training/raw/main/iconclass-vlm-sft.py \\
    --base-model {base_model} \\
    --dataset {dataset} \\
    --output-model your-username/iconclass-vlm \\
    --lora-r {lora_r} \\
    --learning-rate {learning_rate}
```

## Citation

If you use this model, please cite:

```bibtex
@misc{{iconclass-vlm-{datetime.now().year},
  author = {{Your Name}},
  title = {{Iconclass VLM: Vision-Language Model for Art History Metadata}},
  year = {{{datetime.now().year}}},
  publisher = {{Hugging Face}},
  howpublished = {{\\url{{https://huggingface.co/your-username/this-model}}}}
}}
```

---

Fine-tuned with 🦥 [Unsloth](https://github.com/unslothai/unsloth) •
Trained using 🤖 [UV Scripts](https://huggingface.co/uv-scripts)
"""


def main(
    base_model: str,
    dataset: str,
    output_model: str,
    lora_r: int = 16,
    lora_alpha: int = 32,
    lora_dropout: float = 0.1,
    learning_rate: float = 2e-5,
    batch_size: int = 2,
    gradient_accumulation: int = 8,
    max_steps: int = None,
    num_epochs: float = 1.0,
    warmup_ratio: float = 0.1,
    logging_steps: int = 10,
    save_steps: int = 100,
    eval_steps: int = 100,
    max_seq_length: int = 2048,
    hf_token: str = None,
    dataset_split: str = "train",
    eval_split: str = "valid",
    private: bool = False,
    push_to_hub: bool = True,
):
    """Train a vision-language model for Iconclass metadata generation."""

    # Check CUDA availability first
    check_cuda_availability()

    # Track start time
    start_time = datetime.now()

    # Enable HF_TRANSFER for faster downloads
    os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

    # Login to HF if token provided
    HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
    if HF_TOKEN:
        login(token=HF_TOKEN)
    else:
        logger.warning("No HF token provided. Push to Hub will fail without auth.")

    # Load dataset
    logger.info(f"Loading dataset: {dataset}")
    train_dataset = load_dataset(dataset, split=dataset_split)
    eval_dataset = load_dataset(dataset, split=eval_split) if eval_split else None

    logger.info(f"Training samples: {len(train_dataset):,}")
    if eval_dataset:
        logger.info(f"Evaluation samples: {len(eval_dataset):,}")

    # Calculate max_steps if not provided
    if max_steps is None:
        steps_per_epoch = len(train_dataset) // (batch_size * gradient_accumulation)
        max_steps = int(steps_per_epoch * num_epochs)
        logger.info(
            f"Calculated max_steps: {max_steps:,} ({num_epochs} epoch(s), {steps_per_epoch} steps/epoch)"
        )

    # Load model with Unsloth
    logger.info(f"Loading model: {base_model}")
    model, tokenizer = FastVisionModel.from_pretrained(
        model_name=base_model,
        max_seq_length=max_seq_length,
        load_in_4bit=True,
        dtype=None,  # Auto-detect
        fast_inference=False,  # For training
        gpu_memory_utilization=0.8,
    )

    # Apply LoRA
    logger.info("Configuring LoRA...")
    model = FastVisionModel.get_peft_model(
        model,
        finetune_vision_layers=False,  # Only finetune language layers
        finetune_language_layers=True,
        finetune_attention_modules=True,
        finetune_mlp_modules=True,
        r=lora_r,
        lora_alpha=lora_alpha,
        lora_dropout=lora_dropout,
        bias="none",
        random_state=42,
        use_rslora=False,
        use_gradient_checkpointing="unsloth",
    )

    # Prepare model for training
    model = FastVisionModel.for_training(model)

    # Configure training
    logger.info("Configuring training...")
    training_args = SFTConfig(
        output_dir="./iconclass-vlm-outputs",
        per_device_train_batch_size=batch_size,
        per_device_eval_batch_size=batch_size,
        gradient_accumulation_steps=gradient_accumulation,
        max_steps=max_steps,
        learning_rate=learning_rate,
        warmup_ratio=warmup_ratio,
        logging_steps=logging_steps,
        save_steps=save_steps,
        eval_steps=eval_steps if eval_dataset else None,
        eval_strategy="steps" if eval_dataset else "no",
        save_strategy="steps",
        bf16=True,
        optim="adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="cosine",
        seed=42,
        remove_unused_columns=False,  # Required for Unsloth VLM
        dataset_text_field="",  # Required for Unsloth VLM
        dataset_kwargs={"skip_prepare_dataset": True},  # Required for Unsloth VLM
        max_seq_length=max_seq_length,
        gradient_checkpointing=True,
        gradient_checkpointing_kwargs={"use_reentrant": False},
        hub_model_id=output_model if push_to_hub else None,
        push_to_hub=push_to_hub,
        hub_private_repo=private,
        hub_token=HF_TOKEN,
        report_to="none",  # Can change to "tensorboard" or "wandb"
    )

    # Initialize trainer
    logger.info("Initializing trainer...")
    trainer = SFTTrainer(
        model=model,
        args=training_args,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
        data_collator=UnslothVisionDataCollator(model, tokenizer),
        processing_class=tokenizer,
    )

    # Train!
    logger.info("Starting training...")
    logger.info(f"Total steps: {max_steps:,}")
    logger.info(
        f"Effective batch size: {batch_size * gradient_accumulation * torch.cuda.device_count()}"
    )

    trainer.train()

    logger.info("Training complete!")

    # Calculate training time
    end_time = datetime.now()
    training_duration = end_time - start_time
    training_time = f"{training_duration.total_seconds() / 60:.1f} minutes"
    logger.info(f"Training time: {training_time}")

    # Save model
    logger.info("Saving model...")
    trainer.save_model(training_args.output_dir)

    # Create and push model card
    if push_to_hub:
        logger.info("Creating model card...")
        card_content = create_model_card(
            base_model=base_model,
            dataset=dataset,
            num_samples=len(train_dataset),
            training_time=training_time,
            lora_r=lora_r,
            lora_alpha=lora_alpha,
            learning_rate=learning_rate,
            batch_size=batch_size,
            gradient_accumulation=gradient_accumulation,
            max_steps=max_steps,
        )

        card = ModelCard(card_content)
        card.push_to_hub(output_model, token=HF_TOKEN)
        logger.info("✅ Model card created and pushed!")

        logger.info("✅ Training complete!")
        logger.info(f"Model available at: https://huggingface.co/{output_model}")
    else:
        logger.info(f"✅ Training complete! Model saved to {training_args.output_dir}")


if __name__ == "__main__":
    # Show example usage if no arguments
    if len(sys.argv) == 1:
        print("=" * 80)
        print("Unsloth VLM Fine-tuning for Iconclass Metadata")
        print("=" * 80)
        print("\nFine-tune vision-language models to generate Iconclass codes from")
        print("artwork images using Unsloth's 2x faster training.")
        print("\nFeatures:")
        print("- 🚀 2x faster training with Unsloth optimizations")
        print("- 💾 4-bit quantization for efficient memory usage")
        print("- 📊 LoRA fine-tuning for parameter efficiency")
        print("- 🎨 Specialized for art history metadata (Iconclass)")
        print("\nExample usage:")
        print("\n1. Basic training:")
        print("  uv run iconclass-vlm-sft.py \\")
        print("    --base-model Qwen/Qwen3-VL-8B-Instruct \\")
        print("    --dataset davanstrien/iconclass-vlm-sft \\")
        print("    --output-model your-username/iconclass-vlm")
        print("\n2. Custom LoRA settings:")
        print("  uv run iconclass-vlm-sft.py \\")
        print("    --base-model Qwen/Qwen3-VL-8B-Instruct \\")
        print("    --dataset davanstrien/iconclass-vlm-sft \\")
        print("    --output-model your-username/iconclass-vlm \\")
        print("    --lora-r 32 \\")
        print("    --lora-alpha 64 \\")
        print("    --learning-rate 1e-5")
        print("\n3. Quick test run (fewer steps):")
        print("  uv run iconclass-vlm-sft.py \\")
        print("    --base-model Qwen/Qwen3-VL-8B-Instruct \\")
        print("    --dataset davanstrien/iconclass-vlm-sft \\")
        print("    --output-model your-username/iconclass-vlm-test \\")
        print("    --max-steps 100")
        print("\n4. Running on HF Jobs:")
        print("  hfjobs uv run \\")
        print("    --flavor a100-large \\")
        print("    --image unsloth/unsloth:latest \\")
        print("    -e HF_TOKEN=$HF_TOKEN \\")
        print(
            "    https://huggingface.co/datasets/uv-scripts/training/raw/main/iconclass-vlm-sft.py \\"
        )
        print("    --base-model Qwen/Qwen3-VL-8B-Instruct \\")
        print("    --dataset davanstrien/iconclass-vlm-sft \\")
        print("    --output-model your-username/iconclass-vlm")
        print("\n" + "=" * 80)
        print("\nFor full help, run: uv run iconclass-vlm-sft.py --help")
        sys.exit(0)

    parser = argparse.ArgumentParser(
        description="Fine-tune VLMs for Iconclass metadata generation with Unsloth",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  # Basic training
  uv run iconclass-vlm-sft.py \\
    --base-model Qwen/Qwen3-VL-8B-Instruct \\
    --dataset davanstrien/iconclass-vlm-sft \\
    --output-model username/iconclass-vlm

  # Custom hyperparameters
  uv run iconclass-vlm-sft.py \\
    --base-model Qwen/Qwen3-VL-8B-Instruct \\
    --dataset davanstrien/iconclass-vlm-sft \\
    --output-model username/iconclass-vlm \\
    --lora-r 32 --learning-rate 1e-5 --batch-size 4

  # Quick test
  uv run iconclass-vlm-sft.py \\
    --base-model Qwen/Qwen3-VL-8B-Instruct \\
    --dataset davanstrien/iconclass-vlm-sft \\
    --output-model username/test \\
    --max-steps 50
        """,
    )

    # Required arguments
    parser.add_argument(
        "--base-model",
        required=True,
        help="Base VLM model from Hugging Face Hub (e.g., Qwen/Qwen3-VL-8B-Instruct)",
    )
    parser.add_argument(
        "--dataset",
        required=True,
        help="Training dataset ID from Hugging Face Hub",
    )
    parser.add_argument(
        "--output-model",
        required=True,
        help="Output model ID for Hugging Face Hub (e.g., username/iconclass-vlm)",
    )

    # LoRA configuration
    lora_group = parser.add_argument_group("LoRA Configuration")
    lora_group.add_argument(
        "--lora-r",
        type=int,
        default=16,
        help="LoRA rank (default: 16). Higher = more capacity but slower",
    )
    lora_group.add_argument(
        "--lora-alpha",
        type=int,
        default=32,
        help="LoRA alpha scaling (default: 32). Usually 2*r",
    )
    lora_group.add_argument(
        "--lora-dropout",
        type=float,
        default=0.1,
        help="LoRA dropout rate (default: 0.1)",
    )

    # Training configuration
    training_group = parser.add_argument_group("Training Configuration")
    training_group.add_argument(
        "--learning-rate",
        type=float,
        default=2e-5,
        help="Learning rate (default: 2e-5)",
    )
    training_group.add_argument(
        "--batch-size",
        type=int,
        default=2,
        help="Per-device batch size (default: 2)",
    )
    training_group.add_argument(
        "--gradient-accumulation",
        type=int,
        default=8,
        help="Gradient accumulation steps (default: 8)",
    )
    training_group.add_argument(
        "--max-steps",
        type=int,
        help="Maximum training steps. If not set, calculated from num-epochs",
    )
    training_group.add_argument(
        "--num-epochs",
        type=float,
        default=1.0,
        help="Number of training epochs (default: 1.0). Ignored if max-steps is set",
    )
    training_group.add_argument(
        "--warmup-ratio",
        type=float,
        default=0.1,
        help="Warmup ratio (default: 0.1)",
    )

    # Logging and checkpointing
    logging_group = parser.add_argument_group("Logging and Checkpointing")
    logging_group.add_argument(
        "--logging-steps",
        type=int,
        default=10,
        help="Log every N steps (default: 10)",
    )
    logging_group.add_argument(
        "--save-steps",
        type=int,
        default=100,
        help="Save checkpoint every N steps (default: 100)",
    )
    logging_group.add_argument(
        "--eval-steps",
        type=int,
        default=100,
        help="Evaluate every N steps (default: 100)",
    )

    # Dataset configuration
    dataset_group = parser.add_argument_group("Dataset Configuration")
    dataset_group.add_argument(
        "--dataset-split",
        default="train",
        help="Dataset split to use for training (default: train)",
    )
    dataset_group.add_argument(
        "--eval-split",
        default="valid",
        help="Dataset split to use for evaluation (default: valid)",
    )
    dataset_group.add_argument(
        "--max-seq-length",
        type=int,
        default=2048,
        help="Maximum sequence length (default: 2048)",
    )

    # Misc
    misc_group = parser.add_argument_group("Miscellaneous")
    misc_group.add_argument(
        "--hf-token",
        help="Hugging Face API token (or set HF_TOKEN env var)",
    )
    misc_group.add_argument(
        "--private",
        action="store_true",
        help="Make output model private",
    )
    misc_group.add_argument(
        "--no-push",
        action="store_true",
        help="Don't push to Hub (save locally only)",
    )

    args = parser.parse_args()

    main(
        base_model=args.base_model,
        dataset=args.dataset,
        output_model=args.output_model,
        lora_r=args.lora_r,
        lora_alpha=args.lora_alpha,
        lora_dropout=args.lora_dropout,
        learning_rate=args.learning_rate,
        batch_size=args.batch_size,
        gradient_accumulation=args.gradient_accumulation,
        max_steps=args.max_steps,
        num_epochs=args.num_epochs,
        warmup_ratio=args.warmup_ratio,
        logging_steps=args.logging_steps,
        save_steps=args.save_steps,
        eval_steps=args.eval_steps,
        max_seq_length=args.max_seq_length,
        hf_token=args.hf_token,
        dataset_split=args.dataset_split,
        eval_split=args.eval_split,
        private=args.private,
        push_to_hub=not args.no_push,
    )
submit_training_job.py ADDED
@@ -0,0 +1,187 @@
#!/usr/bin/env python
"""
Submit Unsloth VLM fine-tuning job to HF Jobs.

This script submits a training job using the Unsloth Docker image with UV script execution.
Simplifies the process of running iconclass-vlm-sft.py on cloud GPUs.
"""

import os
from huggingface_hub import HfApi
from dotenv import load_dotenv  # requires the python-dotenv package

load_dotenv()  # Load environment variables from .env file if present


# =============================================================================
# CONFIGURATION
# =============================================================================

# Model and dataset configuration
BASE_MODEL = "Qwen/Qwen3-VL-8B-Instruct"
DATASET = "davanstrien/iconclass-vlm-sft"
OUTPUT_MODEL = "davanstrien/Qwen3-VL-8B-iconclass-vlm"

# Training hyperparameters
BATCH_SIZE = 2
GRADIENT_ACCUMULATION = 8
MAX_STEPS = None  # Set to None to use full dataset (1 epoch)
NUM_EPOCHS = 1.0  # Only used if MAX_STEPS is None
LEARNING_RATE = 2e-5

# LoRA configuration
LORA_R = 16
LORA_ALPHA = 32
LORA_DROPOUT = 0.1

# Training infrastructure
GPU_FLAVOR = "a100-large"  # Options: a100-large, a100, a10g-large
TIMEOUT = "12h"  # Adjust based on dataset size
DOCKER_IMAGE = "unsloth/unsloth:latest"  # Pre-configured Unsloth environment

# Script location
SCRIPT_URL = "https://huggingface.co/datasets/uv-scripts/training/raw/main/iconclass-vlm-sft.py"
# For local testing, you can also use a local path:
# SCRIPT_PATH = "/path/to/iconclass-vlm-sft.py"

# Optional: Calculate max_steps for full dataset
if MAX_STEPS is None:
    from datasets import load_dataset

    print("Calculating max_steps for full dataset...")
    dataset = load_dataset(DATASET, split="train")
    steps_per_epoch = len(dataset) // (BATCH_SIZE * GRADIENT_ACCUMULATION)
    MAX_STEPS = int(steps_per_epoch * NUM_EPOCHS)
    print(f"Dataset size: {len(dataset):,} samples")
    print(f"Steps per epoch: {steps_per_epoch:,}")
    print(f"Total steps ({NUM_EPOCHS} epoch(s)): {MAX_STEPS:,}")
    print()


# =============================================================================
# SUBMISSION FUNCTION
# =============================================================================


def submit_training_job():
    """Submit VLM training job using HF Jobs with Unsloth Docker image."""

    # Verify HF token is available
    HF_TOKEN = os.environ.get("HF_TOKEN")
    if not HF_TOKEN:
        print("⚠️ HF_TOKEN not found in environment")
        print("Please set: export HF_TOKEN=your_token_here")
        print("Or add it to a .env file in this directory")
        return

    api = HfApi(token=HF_TOKEN)

    # Build the script arguments
    script_args = [
        "--base-model",
        BASE_MODEL,
        "--dataset",
        DATASET,
        "--output-model",
        OUTPUT_MODEL,
        "--lora-r",
        str(LORA_R),
        "--lora-alpha",
        str(LORA_ALPHA),
        "--lora-dropout",
        str(LORA_DROPOUT),
        "--learning-rate",
        str(LEARNING_RATE),
        "--batch-size",
        str(BATCH_SIZE),
        "--gradient-accumulation",
        str(GRADIENT_ACCUMULATION),
        "--max-steps",
        str(MAX_STEPS),
        "--logging-steps",
        "10",
        "--save-steps",
        "100",
        "--eval-steps",
        "100",
    ]

    print("=" * 80)
    print("Submitting Unsloth VLM Fine-tuning Job to HF Jobs")
    print("=" * 80)
    print(f"\n📦 Configuration:")
    print(f"  Base Model: {BASE_MODEL}")
    print(f"  Dataset: {DATASET}")
    print(f"  Output: {OUTPUT_MODEL}")
    print(f"\n🎛️ Training Settings:")
    print(f"  Max Steps: {MAX_STEPS:,}")
    print(f"  Batch Size: {BATCH_SIZE}")
    print(f"  Grad Accum: {GRADIENT_ACCUMULATION}")
    print(f"  Effective BS: {BATCH_SIZE * GRADIENT_ACCUMULATION}")
    print(f"  Learning Rate: {LEARNING_RATE}")
    print(f"\n🔧 LoRA Settings:")
    print(f"  Rank (r): {LORA_R}")
    print(f"  Alpha: {LORA_ALPHA}")
    print(f"  Dropout: {LORA_DROPOUT}")
    print(f"\n💻 Infrastructure:")
    print(f"  GPU: {GPU_FLAVOR}")
    print(f"  Docker Image: {DOCKER_IMAGE}")
    print(f"  Timeout: {TIMEOUT}")
    print(f"\n🚀 Submitting job...")

    # Submit the job using run_uv_job with Unsloth Docker image
    job = api.run_uv_job(
        script=SCRIPT_URL,  # Can also be a local path
        script_args=script_args,
        dependencies=[],  # Unsloth image + UV handles all dependencies
        flavor=GPU_FLAVOR,
        image=DOCKER_IMAGE,  # Use Unsloth's pre-configured Docker image
        timeout=TIMEOUT,
        env={
            "HF_HUB_ENABLE_HF_TRANSFER": "1",  # Fast downloads
        },
        secrets={
            "HF_TOKEN": HF_TOKEN,
        },
    )

    print("\n✅ Job submitted successfully!")
    print("\n📊 Job Details:")
    print(f"  Job ID: {job.id}")
    print(f"  Status: {job.status}")
    print(f"  URL: https://huggingface.co/jobs/{job.id}")
    print("\n💡 Monitor your job:")
    print(f"  • Web: https://huggingface.co/jobs/{job.id}")
    print(f"  • CLI: hfjobs status {job.id}")
    print(f"  • Logs: hfjobs logs {job.id} --follow")
    print("\n🎯 Your model will be available at:")
    print(f"  https://huggingface.co/{OUTPUT_MODEL}")
    print("\n" + "=" * 80)

    return job


# =============================================================================
# MAIN
# =============================================================================


def main():
    """Main entry point."""
    job = submit_training_job()

    if job:
        # Optional: Show Python code to monitor the job
        print("\n📝 To monitor this job programmatically:")
        print("""
from huggingface_hub import HfApi

api = HfApi()
job = api.get_job("{}")
print(job.status)  # Check status
print(job.logs())  # View logs
        """.format(job.id))


if __name__ == "__main__":
    main()