hugger111/BlenderBot_400M_distill_falcon_redefined_first10krows
Tags: Text2Text Generation · Transformers · PyTorch · TensorBoard · Safetensors · blenderbot · Generated from Trainer · Inference Endpoints · 8-bit precision
License: apache-2.0
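A minimal usage sketch for this checkpoint, assuming the standard transformers seq2seq API implied by the blenderbot and Text2Text Generation tags above (the repo id is the only value taken from this page; the prompt is made up):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "hugger111/BlenderBot_400M_distill_falcon_redefined_first10krows"

# The tokenizer files (vocab.json, merges.txt, tokenizer_config.json, ...)
# and config.json are read from the repo files listed below.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

inputs = tokenizer("Hello, how are you today?", return_tensors="pt")
reply_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(reply_ids[0], skip_special_tokens=True))
```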
Branch: main · 3 contributors · History: 27 commits
Latest commit: b68ee2e · Librarian Bot: Add base_model information to model (#2) · by hugger111 and librarian-bot · about 1 year ago
Files:

runs/ · End of training · over 1 year ago
.gitattributes · 1.52 kB · initial commit · over 1 year ago
.gitignore · 13 Bytes · Training in progress, step 500 · over 1 year ago
README.md · 1.95 kB · Librarian Bot: Add base_model information to model (#2) · about 1 year ago
added_tokens.json · 21 Bytes · Upload tokenizer · over 1 year ago
config.json · 1.98 kB · Training in progress, step 500 · over 1 year ago
generation_config.json · 342 Bytes · End of training · over 1 year ago
merges.txt · 62.9 kB · Upload tokenizer · over 1 year ago
model.safetensors · 377 MB · LFS · Adding `safetensors` variant of this model (#1) · over 1 year ago
pytorch_model.bin · 377 MB · LFS · pickle · Training in progress, step 10000 · over 1 year ago
    Detected pickle imports (5): collections.OrderedDict, torch._utils._rebuild_tensor_v2, torch.BFloat16Storage, torch.CharStorage, torch.FloatStorage
    (see the safetensors loading sketch after this listing)
special_tokens_map.json · 843 Bytes · Upload tokenizer · over 1 year ago
tokenizer_config.json · 1.34 kB · Upload tokenizer · over 1 year ago
training_args.bin · 4.03 kB · LFS · pickle · Training in progress, step 500 · over 1 year ago
    Detected pickle imports (8): accelerate.utils.dataclasses.DistributedType, transformers.training_args.OptimizerNames, accelerate.state.PartialState, transformers.trainer_utils.HubStrategy, transformers.trainer_utils.SchedulerType, transformers.training_args.TrainingArguments, torch.device, transformers.trainer_utils.IntervalStrategy
    (see the inspection sketch after this listing)
vocab.json · 143 kB · Upload tokenizer · over 1 year ago
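Because pytorch_model.bin is a pickle file (the detected imports above are what the Hub's scanner found inside it), a common precaution is to load the safetensors variant instead. A minimal sketch, assuming the standard use_safetensors flag of from_pretrained:

```python
from transformers import AutoModelForSeq2SeqLM

# Load weights from model.safetensors and skip the pickled
# pytorch_model.bin entirely.
model = AutoModelForSeq2SeqLM.from_pretrained(
    "hugger111/BlenderBot_400M_distill_falcon_redefined_first10krows",
    use_safetensors=True,
)
```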
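training_args.bin is a pickled transformers TrainingArguments object, which is where the eight detected imports above come from. A sketch for inspecting it locally, assuming huggingface_hub for the download; unpickling can execute arbitrary code, so only do this for repos you trust:

```python
import torch
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="hugger111/BlenderBot_400M_distill_falcon_redefined_first10krows",
    filename="training_args.bin",
)

# Full unpickling is required here: recent torch versions default to
# weights_only=True, which would reject a TrainingArguments object.
args = torch.load(path, weights_only=False)
print(args.learning_rate, args.num_train_epochs, args.per_device_train_batch_size)
```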