DUAL-GPO/phi-2-gpo-v25-i1
Tags: PEFT · TensorBoard · Safetensors · HuggingFaceH4/ultrafeedback_binarized · phi · alignment-handbook · Generated from Trainer · trl · dpo · custom_code
License: mit
1 contributor · History: 11 commits
Latest commit 123afc1 (verified) by lole25: "Model save", 3 months ago
File                        Size       Last commit                         Age
runs/                       —          Model save                          3 months ago
.gitattributes              1.52 kB    initial commit                      3 months ago
README.md                   1.28 kB    Model save                          3 months ago
adapter_config.json         607 Bytes  Training in progress, step 500      3 months ago
adapter_model.safetensors   168 MB     Model save (LFS)                    3 months ago
added_tokens.json           1.08 kB    Training in progress, step 100      3 months ago
all_results.json            195 Bytes  Model save                          3 months ago
config.json                 928 Bytes  End of training                     3 months ago
merges.txt                  456 kB     Training in progress, step 100      3 months ago
special_tokens_map.json     587 Bytes  Training in progress, step 100      3 months ago
tokenizer.json              2.11 MB    Training in progress, step 100      3 months ago
tokenizer_config.json       7.82 kB    Training in progress, step 100      3 months ago
train_results.json          195 Bytes  Model save                          3 months ago
trainer_state.json          45.8 kB    Model save                          3 months ago
training_args.bin           5.82 kB    Training in progress, step 500 (LFS)  3 months ago
vocab.json                  798 kB     Training in progress, step 100      3 months ago