chrlu/zephyr-7b-gemma-dpo
Tags: Text Generation, Transformers, TensorBoard, Safetensors, argilla/dpo-mix-7k, gemma, alignment-handbook, trl, dpo, Generated from Trainer, conversational, text-generation-inference, Inference Endpoints
License: other
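
The tags describe a conversational text-generation checkpoint stored in Safetensors and loadable with the Transformers library. The snippet below is a minimal, unofficial sketch of loading it through the generic transformers text-generation pipeline; the prompt, dtype, and device settings are illustrative assumptions and are not taken from this repository.

# Minimal sketch (not from the model card): load the checkpoint with the
# standard transformers text-generation pipeline. Assumes torch and
# accelerate are installed and that the repo ships a usable chat template.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="chrlu/zephyr-7b-gemma-dpo",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# The prompt below is purely illustrative.
messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
out = pipe(messages, max_new_tokens=64)
print(out[0]["generated_text"])
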
Commit History: zephyr-7b-gemma-dpo / training_args.bin (branch: main)
Model save · f5a8c5e (verified) · chrlu committed on Apr 29
Model save · 792bab4 (verified) · chrlu committed on Apr 27
Model save · c3b5e59 (verified) · chrlu committed on Apr 27
Model save · 03cad8d (verified) · chrlu committed on Apr 27
Model save · 05a86a5 (verified) · chrlu committed on Apr 27