Hugging Face
CHZY-1/sqlcoder-Mistral_7b_FineTuned_PEFT_QLORA_adapter_r_32_alpha_64
Tags: PEFT · TensorBoard · Safetensors · trl · sft · Generated from Trainer
License: apache-2.0
1 contributor · History: 2 commits
Latest commit: CHZY-1, abe72f2 (verified), 2 months ago: "Trained QLora Adapter with 260 data (5 epoch), R:32, Alpha:64, Dropout:0.1, added [PAD] token"
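In the commit message, R:32 and Alpha:64 are the LoRA rank and scaling hyperparameters: the adapter trains two low-rank matrices per target weight and adds their product, scaled by alpha/r, on top of the frozen base weight. A minimal numpy sketch with toy dimensions (the real Mistral-7B projection sizes are larger; all names here are illustrative, not taken from this repo):

```python
import numpy as np

# Toy illustration of what an r=32, alpha=64 LoRA adapter stores per target
# weight matrix.
d_out, d_in, r, alpha = 1024, 1024, 32, 64

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen base weight (not in the adapter)
A = rng.standard_normal((r, d_in)) * 0.01  # trainable low-rank factor A
B = np.zeros((d_out, r))                   # trainable factor B (zero-initialized,
                                           # so training starts from the base model)

# Effective weight at inference time: W + (alpha / r) * B @ A
scaling = alpha / r                        # 64 / 32 = 2.0
W_eff = W + scaling * (B @ A)

# The adapter stores only A and B: r * (d_in + d_out) numbers
# instead of d_out * d_in for a full fine-tuned matrix.
adapter_params = A.size + B.size           # 32 * 2048 = 65536
full_params = W.size                       # 1024 * 1024 = 1048576
```

With B zero-initialized, the adapted model is exactly the base model at step 0, and training only ever touches the small A and B factors.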
All entries below were last modified 2 months ago; every file except .gitattributes was added by the commit "Trained QLora Adapter with 260 data (5 epoch), R:32, Alpha:64, Dropout:0.1, added [PAD] token".

    runs/                       (folder)
    .gitattributes              Safe            1.52 kB    initial commit
    README.md                   Safe            1.48 kB
    adapter_config.json         Safe            770 Bytes
    adapter_model.safetensors   Safe    LFS     1.38 GB
    added_tokens.json           Safe            21 Bytes
    special_tokens_map.json     Safe            552 Bytes
    tokenizer.json              Safe            1.8 MB
    tokenizer.model             Safe    LFS     493 kB
    tokenizer_config.json       Safe            1.66 kB
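The repo name and commit message pin down r=32, lora_alpha=64, and lora_dropout=0.1, which peft records in the small adapter_config.json listed above. A sketch of the typical shape of that file; the base model path and target modules are illustrative placeholders, not values read from this repo:

```json
{
  "peft_type": "LORA",
  "task_type": "CAUSAL_LM",
  "base_model_name_or_path": "<base-mistral-7b-model>",
  "r": 32,
  "lora_alpha": 64,
  "lora_dropout": 0.1,
  "target_modules": ["q_proj", "v_proj"]
}
```

Loaders such as peft's `PeftModel.from_pretrained` read this file to find the base checkpoint and rebuild the adapter shapes before loading adapter_model.safetensors.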
    training_args.bin           pickle  LFS     5.56 kB    2 months ago

Detected Pickle imports (9): transformers.training_args.OptimizerNames, transformers.trainer_utils.SchedulerType, accelerate.state.PartialState, transformers.trainer_utils.IntervalStrategy, accelerate.utils.dataclasses.DistributedType, torch.device, transformers.trainer_pt_utils.AcceleratorConfig, transformers.trainer_utils.HubStrategy, trl.trainer.sft_config.SFTConfig
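training_args.bin is flagged because it is a Python pickle, and unpickling can import and invoke arbitrary callables; the scan above lists exactly which globals the stream references. A hedged sketch of how such references can be inspected safely with only the stdlib disassembler, demonstrated on a locally built pickle rather than the actual file:

```python
import io
import pickle
import pickletools
from collections import OrderedDict

# Build a small pickle locally; pickling any class instance embeds a global
# reference to its class, just as training_args.bin references SFTConfig,
# OptimizerNames, and the other names in the scan.
payload = pickle.dumps(OrderedDict(epochs=5))

# pickletools.dis decodes the opcode stream WITHOUT executing it, unlike
# pickle.load, so it is safe to run on untrusted bytes.
buf = io.StringIO()
pickletools.dis(payload, out=buf)
listing = buf.getvalue()

# GLOBAL / STACK_GLOBAL opcodes are where imports happen during unpickling;
# scanners report the names these opcodes resolve.
referenced = "STACK_GLOBAL" in listing or "GLOBAL" in listing
```

Disassembling and reviewing the referenced globals before ever calling `pickle.load` (or loading with `torch.load` only from sources you trust) is the usual triage for files like this.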