# Troubleshooting TRL Training Jobs

Common issues and solutions when training with TRL on Hugging Face Jobs.
## Training Hangs at "Starting training..." Step

**Problem:** Job starts but hangs at the training step: it never progresses, never times out, just sits there.

**Root cause:** Using `eval_strategy="steps"` or `eval_strategy="epoch"` without providing an `eval_dataset` to the trainer.
**Solution:**

**Option A: Provide `eval_dataset` (recommended)**

```python
# Create train/eval split
dataset_split = dataset.train_test_split(test_size=0.1, seed=42)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset_split["train"],
    eval_dataset=dataset_split["test"],  # ← MUST provide when eval_strategy is enabled
    args=SFTConfig(
        eval_strategy="steps",
        eval_steps=50,
        ...
    ),
)
```
**Option B: Disable evaluation**

```python
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,
    # No eval_dataset
    args=SFTConfig(
        eval_strategy="no",  # ← Explicitly disable
        ...
    ),
)
```
**Prevention:**

- Always create a train/eval split for better monitoring
- Use `dataset.train_test_split(test_size=0.1, seed=42)`
- Check example scripts: `scripts/train_sft_example.py` includes a proper eval setup
## Job Times Out

**Problem:** Job terminates before training completes and all progress is lost.

**Solutions:**

- Increase the `timeout` parameter (e.g., `"timeout": "4h"`); see the sketch after this list
- Reduce `num_train_epochs` or use a smaller dataset slice
- Use a smaller model or enable LoRA/PEFT to speed up training
- Add a 20-30% buffer to the estimated time for loading/saving overhead
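A minimal sketch of a resubmission with a longer timeout, reusing the `hf_jobs` call pattern shown elsewhere in this guide (the script URL is a placeholder and the `flavor` key is an assumed parameter; match it to your Jobs configuration):

```python
hf_jobs("uv", {
    "script": "https://huggingface.co/your-username/your-repo/raw/main/train.py",  # placeholder URL
    "flavor": "a10g-large",                # assumed hardware key
    "timeout": "4h",                       # give the run more headroom than the estimate
    "secrets": {"HF_TOKEN": "$HF_TOKEN"},
})
```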
**Prevention:**

- Always start with a quick demo run to estimate timing (see the sketch below)
- Use `scripts/estimate_cost.py` to get time estimates
- Monitor first runs closely via Trackio or logs
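For example, a quick timing run can cap `max_steps` and use a small dataset slice; the dataset name below is a placeholder and the numbers are illustrative:

```python
from datasets import load_dataset
from trl import SFTTrainer, SFTConfig

# Small slice of the data and a hard cap on steps: enough to measure throughput
dataset = load_dataset("your-username/your-dataset", split="train[:500]")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="timing-test",
        max_steps=20,        # stop early; extrapolate total time from steps/sec
        logging_steps=5,
        push_to_hub=False,
    ),
)
trainer.train()
```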
## Model Not Saved to Hub

**Problem:** Training completes but the model doesn't appear on the Hub and all work is lost.

**Check:**

- `push_to_hub=True` in the training config
- `hub_model_id` specified with username (e.g., `"username/model-name"`)
- `secrets={"HF_TOKEN": "$HF_TOKEN"}` in the job submission
- User has write access to the target repo
- Token has write permissions (check at https://huggingface.co/settings/tokens)
- Training script calls `trainer.push_to_hub()` at the end
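A minimal sketch combining these settings (the repo name is a placeholder, and `dataset` is assumed to be prepared earlier in the script):

```python
from trl import SFTTrainer, SFTConfig

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,  # assumed to be loaded earlier
    args=SFTConfig(
        output_dir="my-model",
        push_to_hub=True,                  # enable Hub upload
        hub_model_id="username/my-model",  # full repo id including username
    ),
)
trainer.train()
trainer.push_to_hub()  # explicit final push at the end of the script
```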
See `references/hub_saving.md` for detailed Hub authentication troubleshooting.
## Out of Memory (OOM)

**Problem:** Job fails with a CUDA out-of-memory error.

**Solutions (in order of preference):**

- **Reduce batch size:** Lower `per_device_train_batch_size` (try 4 → 2 → 1)
- **Increase gradient accumulation:** Raise `gradient_accumulation_steps` to maintain the effective batch size
- **Enable LoRA/PEFT:** Use `peft_config=LoraConfig(r=16, lora_alpha=32)` to train adapters only
- **Use a larger GPU:** Switch from `t4-medium` → `a10g-large` → `a100-large`
- **Enable gradient checkpointing:** Set `gradient_checkpointing=True` in the config (slower but saves memory)
- **Use a smaller model:** Try a smaller variant (e.g., 0.5B instead of 3B)
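A sketch combining several of these options; the effective batch size stays at 16, the dataset is only an example, and the values should be adapted to your task:

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTTrainer, SFTConfig

dataset = load_dataset("trl-lib/Capybara", split="train")  # example dataset

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,
    peft_config=LoraConfig(r=16, lora_alpha=32),  # train adapters only
    args=SFTConfig(
        output_dir="my-model",
        per_device_train_batch_size=1,    # smaller per-device batch
        gradient_accumulation_steps=16,   # keep the effective batch size at 16
        gradient_checkpointing=True,      # trade compute for memory
    ),
)
trainer.train()
```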
**Memory guidelines:**

- T4 (16GB): <1B models with LoRA
- A10G (24GB): 1-3B models with LoRA, <1B full fine-tune
- A100 (40GB/80GB): 7B+ models with LoRA, 3B full fine-tune
## Dataset Format Error

**Problem:** Training fails with dataset format errors or missing fields.

**Solutions:**

**Check the format documentation:**

```python
hf_doc_fetch("https://huggingface.co/docs/trl/dataset_formats")
```

**Validate the dataset before training:**

```bash
uv run https://huggingface.co/datasets/mcp-tools/skills/raw/main/dataset_inspector.py \
  --dataset <dataset-name> --split train
```

Or via `hf_jobs`:

```python
hf_jobs("uv", {
    "script": "https://huggingface.co/datasets/mcp-tools/skills/raw/main/dataset_inspector.py",
    "script_args": ["--dataset", "dataset-name", "--split", "train"]
})
```

**Verify field names:**

- SFT: Needs a "messages" field (conversational), OR a "text" field, OR "prompt"/"completion" fields
- DPO: Needs "chosen" and "rejected" fields
- GRPO: Needs a prompt-only format
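For reference, minimal records in each format look roughly like this (field names follow the TRL dataset format docs; the values are purely illustrative):

```python
# SFT, conversational format ("messages" field)
sft_conversational = {
    "messages": [
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "Paris."},
    ]
}

# SFT, plain-text format ("text" field)
sft_text = {"text": "Question: What is the capital of France?\nAnswer: Paris."}

# DPO: a prompt with a preferred and a rejected response
dpo_example = {
    "prompt": "What is the capital of France?",
    "chosen": "Paris.",
    "rejected": "I'm not sure.",
}

# GRPO: prompt-only
grpo_example = {"prompt": "What is the capital of France?"}
```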
**Check the dataset split:**

- Ensure the split exists (e.g., `split="train"`)
- Preview the dataset: `load_dataset("name", split="train[:5]")`
## Import/Module Errors

**Problem:** Job fails with "ModuleNotFoundError" or import errors.

**Solutions:**

**Add a PEP 723 header with dependencies:**

```python
# /// script
# dependencies = [
#     "trl>=0.12.0",
#     "peft>=0.7.0",
#     "transformers>=4.36.0",
# ]
# ///
```

**Verify the exact format:**

- Must have `# ///` delimiters (with a space after `#`)
- Dependencies must be valid PyPI package names
- Check spelling and version constraints

**Test locally first:**

```bash
uv run train.py  # Tests whether dependencies are correct
```
## Authentication Errors

**Problem:** Job fails with authentication or permission errors when pushing to the Hub.

**Solutions:**

**Verify authentication:**

```python
mcp__huggingface__hf_whoami()  # Check who's authenticated
```

**Check token permissions:**

- Go to https://huggingface.co/settings/tokens
- Ensure the token has "write" permission
- The token must not be read-only

**Verify the token is passed to the job:**

```python
"secrets": {"HF_TOKEN": "$HF_TOKEN"}  # Must be in the job config
```

**Check repo permissions:**

- User must have write access to the target repo
- For an org repo, the user must be a member with write access
- The repo must exist, or the user must have permission to create it
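One way to sanity-check the token and write access before launching a job (the repo id is a placeholder):

```python
from huggingface_hub import HfApi

api = HfApi()                # picks up HF_TOKEN from the environment
print(api.whoami()["name"])  # confirm which account the token belongs to

# Fails with a permission error if the token cannot write to this namespace
api.create_repo("username/model-name", exist_ok=True)
```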
## Job Stuck or Not Starting

**Problem:** Job shows "pending" or "starting" for an extended period.

**Solutions:**

- Check the Jobs dashboard for status: https://huggingface.co/jobs
- Verify hardware availability (some GPU types may have queues)
- Try a different hardware flavor if one is heavily utilized
- Check for account billing issues (Jobs requires a paid plan)
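To check a job's state from code instead of the dashboard, the same `hf_jobs` tool used elsewhere in this guide can inspect it:

```python
hf_jobs("inspect", {"job_id": "your-job-id"})  # reports the job's current status
```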
**Typical startup times:**

- CPU jobs: 10-30 seconds
- GPU jobs: 30-90 seconds
- If >3 minutes: likely queued or stuck
## Training Loss Not Decreasing

**Problem:** Training runs but loss stays flat or doesn't improve.

**Solutions:**

- **Check the learning rate:** It may be too low (try 2e-5 to 5e-5) or too high (try closer to 1e-6)
- **Verify dataset quality:** Inspect examples to ensure they're reasonable
- **Check model size:** Very small models may not have the capacity for the task
- **Increase training steps:** The run may need more epochs or a larger dataset
- **Verify the dataset format:** A wrong format can silently degrade training
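As a starting point, the relevant knobs live in the training config; the values below are illustrative, not universal recommendations:

```python
from trl import SFTConfig

args = SFTConfig(
    output_dir="my-model",
    learning_rate=2e-5,    # raise toward 5e-5 if loss is flat; lower toward 1e-6 if it diverges
    num_train_epochs=3,    # more passes if the dataset is small
    logging_steps=10,      # log often enough to see the trend early
)
```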
## Logs Not Appearing

**Problem:** Cannot see training logs or progress.

**Solutions:**

- **Wait 30-60 seconds:** Initial logs can be delayed
- **Check logs via the MCP tool:** `hf_jobs("logs", {"job_id": "your-job-id"})`
- **Use Trackio for real-time monitoring:** See `references/trackio_guide.md`
- **Verify the job is actually running:** `hf_jobs("inspect", {"job_id": "your-job-id"})`
## Checkpoint/Resume Issues

**Problem:** Cannot resume from a checkpoint, or checkpoints are not saved.

**Solutions:**

**Enable checkpoint saving:**

```python
SFTConfig(
    save_strategy="steps",
    save_steps=100,
    hub_strategy="every_save",  # Push each checkpoint
)
```

**Verify checkpoints are pushed to the Hub:** Check the model repo for checkpoint folders.

**Resume from a checkpoint** (pass `resume_from_checkpoint` to `trainer.train()`, pointing at the saved checkpoint directory):

```python
trainer = SFTTrainer(
    model="username/model-name",  # Can be a checkpoint path
    train_dataset=dataset,
    args=SFTConfig(...),
)
trainer.train(resume_from_checkpoint="username/model-name/checkpoint-1000")
```
## Getting Help

If issues persist:

**Check the TRL documentation:**

```python
hf_doc_search("your issue", product="trl")
```

**Check the Jobs documentation:**

```python
hf_doc_fetch("https://huggingface.co/docs/huggingface_hub/guides/jobs")
```

**Review related guides:**

- `references/hub_saving.md` - Hub authentication issues
- `references/hardware_guide.md` - Hardware selection and specs
- `references/training_patterns.md` - Eval dataset requirements
- SKILL.md "Working with Scripts" section - Script format and URL issues

**Ask in the HF forums:** https://discuss.huggingface.co/