qhar0h/openhermes-mistral-dpo-gptq (Hugging Face model repository)

Tags: TensorBoard, Safetensors, trl, dpo, Generated from Trainer
License: apache-2.0
Branch: main
1 contributor, History: 5 commits
Latest commit: "Model save" by qhar0h, 7ee85e0 (verified), 11 months ago
File                        Size            Last commit     Age
runs/                                       Model save      11 months ago
.gitattributes              1.52 kB         initial commit  11 months ago
README.md                   1.2 kB          Model save      11 months ago
adapter_config.json         610 Bytes       Model save      11 months ago
adapter_model.safetensors   13.6 MB (LFS)   Model save      11 months ago
added_tokens.json           51 Bytes        Model save      11 months ago
special_tokens_map.json     630 Bytes       Model save      11 months ago
tokenizer.json              1.8 MB          Model save      11 months ago
tokenizer.model             493 kB (LFS)    Model save      11 months ago
tokenizer_config.json       1.42 kB         Model save      11 months ago
training_args.bin           4.16 kB (LFS)   Model save      11 months ago

All files are marked safe except training_args.bin, which is a Python pickle.
Detected pickle imports (8):
"accelerate.state.PartialState",
"transformers.trainer_utils.IntervalStrategy",
"accelerate.utils.dataclasses.DistributedType",
"transformers.training_args.TrainingArguments",
"transformers.trainer_utils.HubStrategy",
"transformers.trainer_utils.SchedulerType",
"torch.device",
"transformers.training_args.OptimizerNames"