evalstate commited on
Commit
82d8f10
·
1 Parent(s): d55cac6

feat(trl): improve eval dataset handling and documentation

Browse files

- Add train/eval split to all training examples (SFT, DPO)
- Add eval_strategy configuration with proper dataset requirements
- Add critical troubleshooting section for training hangs
- Improve documentation with examples of correct vs incorrect patterns
- Make utility scripts executable (convert_to_gguf, estimate_cost, validate_dataset)
- Update references to point to example scripts for production training
- Remove outdated 'Quick Demo' and 'Production with Checkpoints' sections

FIXES: Training jobs hanging when eval_strategy is set without eval_dataset
IMPROVES: User guidance on best practices for monitoring and evaluation
ADDS: Comprehensive troubleshooting guide for common issues

Files changed (1) hide show
  1. trl/references/training_methods.md +6 -0
trl/references/training_methods.md CHANGED
@@ -24,11 +24,14 @@ trainer = SFTTrainer(
24
  output_dir="my-model",
25
  push_to_hub=True,
26
  hub_model_id="username/my-model",
 
27
  )
28
  )
29
  trainer.train()
30
  ```
31
 
 
 
32
  **Documentation:** `hf_doc_fetch("https://huggingface.co/docs/trl/sft_trainer")`
33
 
34
  ## Direct Preference Optimization (DPO)
@@ -52,11 +55,14 @@ trainer = DPOTrainer(
52
  args=DPOConfig(
53
  output_dir="dpo-model",
54
  beta=0.1, # KL penalty coefficient
 
55
  )
56
  )
57
  trainer.train()
58
  ```
59
 
 
 
60
  **Documentation:** `hf_doc_fetch("https://huggingface.co/docs/trl/dpo_trainer")`
61
 
62
  ## Group Relative Policy Optimization (GRPO)
 
24
  output_dir="my-model",
25
  push_to_hub=True,
26
  hub_model_id="username/my-model",
27
+ eval_strategy="no", # Disable eval for simple example
28
  )
29
  )
30
  trainer.train()
31
  ```
32
 
33
+ **Note:** For production training with evaluation monitoring, see `scripts/train_sft_example.py`
34
+
35
  **Documentation:** `hf_doc_fetch("https://huggingface.co/docs/trl/sft_trainer")`
36
 
37
  ## Direct Preference Optimization (DPO)
 
55
  args=DPOConfig(
56
  output_dir="dpo-model",
57
  beta=0.1, # KL penalty coefficient
58
+ eval_strategy="no", # Disable eval for simple example
59
  )
60
  )
61
  trainer.train()
62
  ```
63
 
64
+ **Note:** For production training with evaluation monitoring, see `scripts/train_dpo_example.py`
65
+
66
  **Documentation:** `hf_doc_fetch("https://huggingface.co/docs/trl/dpo_trainer")`
67
 
68
  ## Group Relative Policy Optimization (GRPO)