mimicheng/zephyr-7b-sft-qlora-1ep-28jan

Tags: PEFT · Safetensors · mixtral · dpo-experiment · Generated from Trainer · trl · sft · 4-bit precision · bitsandbytes
Dataset: HuggingFaceH4/ultrachat_200k
License: apache-2.0
zephyr-7b-sft-qlora-1ep-28jan / eval_results.json
Commit 0e7a173 ("Model save") by mimicheng, verified, 9 months ago · 189 Bytes
{
  "epoch": 1.0,
  "eval_loss": 3.4159839153289795,
  "eval_runtime": 3064.6681,
  "eval_samples": 23110,
  "eval_samples_per_second": 5.035,
  "eval_steps_per_second": 0.629
}
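Assuming `eval_loss` here is a mean per-token cross-entropy (the usual convention for metrics emitted by the Hugging Face Trainer), an approximate evaluation perplexity can be derived by exponentiating it. A minimal sketch:

```python
import math

# Value taken from eval_results.json above.
eval_loss = 3.4159839153289795

# Assumption: eval_loss is mean per-token cross-entropy,
# so perplexity is simply its exponential.
perplexity = math.exp(eval_loss)
print(f"approx. eval perplexity: {perplexity:.2f}")  # ~30.45
```

A loss of ~3.42 (perplexity ~30) after one SFT epoch is the kind of headline number this file records; whether that is good depends on the tokenizer and evaluation set, so compare only against runs using the same setup.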