RedaAlami/zephyr-7b-dpo-qlora
Tags: PEFT · TensorBoard · Safetensors · Dataset: TII-Frontier-Team/Reasoning_DPO · llama · alignment-handbook · trl · dpo · Generated from Trainer · 4-bit precision · bitsandbytes
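The tags above (PEFT, trl, dpo, 4-bit precision, bitsandbytes) indicate this repo holds a QLoRA adapter trained with DPO rather than full model weights. Below is a minimal sketch of loading it for inference, assuming the adapter config records its base model and that a tokenizer is saved alongside the adapter; neither detail is confirmed on this page, and the generation prompt is purely illustrative.

```python
import torch
from transformers import AutoTokenizer, BitsAndBytesConfig
from peft import AutoPeftModelForCausalLM

adapter_id = "RedaAlami/zephyr-7b-dpo-qlora"

# Quantize the base model to 4-bit NF4, matching the
# "4-bit precision" / "bitsandbytes" tags on this repo.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# AutoPeftModelForCausalLM resolves the base model from the adapter's
# config and attaches the LoRA weights on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Assumes tokenizer files are stored with the adapter; if not,
# load the tokenizer from the base model instead.
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

prompt = "Explain direct preference optimization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```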
Commit history for zephyr-7b-dpo-qlora/train_results.json (at revision 0f309ac)
Model save · 6195168 · verified · RedaAlami committed on Oct 4, 2024
Model save · efa96d7 · verified · RedaAlami committed on Aug 30, 2024
Model save · 08b8142 · verified · RedaAlami committed on Aug 1, 2024