sahilnagaralu/movie-script
Tags: Text Generation · Transformers · PyTorch · gptj · Generated from Trainer · Inference Endpoints
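Given the tags above (Text Generation, Transformers, PyTorch, gptj), the checkpoint should load through the standard transformers causal-LM API. A minimal sketch under that assumption; the fp16/device_map settings and the prompt are illustrative, not from the repo:

```python
# Sketch: load the checkpoint for text generation with transformers.
# Assumes the gptj-tagged weights work with the AutoModel* classes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "sahilnagaralu/movie-script"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# from_pretrained reads pytorch_model.bin.index.json and stitches the
# three weight shards listed below back into one model automatically.
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # assumption: fp16 to shrink the ~24 GB of fp32 weights
    device_map="auto",          # requires accelerate; places layers across available devices
)

# Illustrative prompt only (screenplay-style, matching the repo name).
inputs = tokenizer("INT. COFFEE SHOP - NIGHT", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```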
Files and versions
1 contributor · History: 33 commits. Latest commit 4ef5867: "Model save" by sahilnagaralu, about 1 year ago.
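To fetch the repository exactly as listed here, a sketch pinning the commit hash above with huggingface_hub's snapshot_download, so later pushes to main cannot change what you get:

```python
# Sketch: download this repo pinned to the commit shown above.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="sahilnagaralu/movie-script",
    revision="4ef5867",  # the latest commit in this listing
)
print(local_dir)  # local cache path containing the files below
```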
| File | Size | Last commit | Updated |
|------|------|-------------|---------|
| .gitattributes | 1.52 kB | initial commit | about 1 year ago |
| README.md | 1.38 kB | Model save | about 1 year ago |
| added_tokens.json | 4.33 kB | Upload tokenizer | about 1 year ago |
| config.json | 995 Bytes | Model save | about 1 year ago |
| generation_config.json | 141 Bytes | Model save | about 1 year ago |
| merges.txt | 456 kB | Upload tokenizer | about 1 year ago |
| pytorch_model-00001-of-00003.bin (LFS, pickle) | 9.95 GB | Model save | about 1 year ago |
| pytorch_model-00002-of-00003.bin (LFS, pickle) | 9.93 GB | Model save | about 1 year ago |
| pytorch_model-00003-of-00003.bin (LFS, pickle) | 4.32 GB | Model save | about 1 year ago |
| pytorch_model.bin.index.json | 21.7 kB | Model save | about 1 year ago |
| special_tokens_map.json | 470 Bytes | Upload tokenizer | about 1 year ago |
| tokenizer.json | 2.14 MB | Upload tokenizer | about 1 year ago |
| tokenizer_config.json | 26.7 kB | Upload tokenizer | about 1 year ago |
| training_args.bin (LFS, pickle) | 4.03 kB | Model save | about 1 year ago |
| vocab.json | 798 kB | Upload tokenizer | about 1 year ago |

Pickle scan results:
- pytorch_model-00001-of-00003.bin, pytorch_model-00002-of-00003.bin, pytorch_model-00003-of-00003.bin: 3 detected imports each: collections.OrderedDict, torch.FloatStorage, torch._utils._rebuild_tensor_v2
- training_args.bin: 8 detected imports: accelerate.utils.dataclasses.DistributedType, transformers.trainer_utils.SchedulerType, torch.device, transformers.trainer_utils.HubStrategy, transformers.training_args.TrainingArguments, transformers.training_args.OptimizerNames, accelerate.state.PartialState, transformers.trainer_utils.IntervalStrategy
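The "detected imports" above come from the Hub's pickle scanner, which lists the globals a pickle file would import when unpickled. A rough local equivalent using only the standard library, assuming PyTorch's zip-based checkpoint format (PyTorch >= 1.6) and pickle protocol 2 or lower (torch.save's default), whose GLOBAL opcode carries the module and name inline; STACK_GLOBAL from newer protocols is not handled in this sketch:

```python
# Sketch: list the module.name globals a PyTorch checkpoint would import on
# load, roughly what the Hub's "Detected Pickle imports" scan reports.
# Assumes the zip-based format (PyTorch >= 1.6) and pickle protocol <= 2.
import pickletools
import zipfile

def detected_pickle_imports(checkpoint_path: str) -> list[str]:
    with zipfile.ZipFile(checkpoint_path) as zf:
        # torch.save stores the pickled object graph in a data.pkl member.
        pkl_member = next(n for n in zf.namelist() if n.endswith("data.pkl"))
        payload = zf.read(pkl_member)
    imports = set()
    for opcode, arg, _pos in pickletools.genops(payload):
        if opcode.name == "GLOBAL":
            # pickletools yields GLOBAL args as "module name" with a space,
            # e.g. "torch FloatStorage" -> "torch.FloatStorage".
            imports.add(arg.replace(" ", "."))
    return sorted(imports)

print(detected_pickle_imports("pytorch_model-00001-of-00003.bin"))
# Expected (per the scan above): collections.OrderedDict,
# torch.FloatStorage, torch._utils._rebuild_tensor_v2
```

Because these .bin files are pickles, loading them can execute whatever code those globals resolve to; the safetensors format is the usual pickle-free alternative for distributing weights.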