HamidRezaAttar/gpt2-product-description-generator (14 likes)

Tags: Text Generation, Transformers, PyTorch, English, gpt2, text-generation-inference, Inference Endpoints
Paper: arxiv:1706.03762
License: apache-2.0
Files and versions: 1 contributor, 7 commits
Warning: this model has 1 file scanned as unsafe (see dls_eng_Batch4.pkl below).
Latest commit: HamidRezaAttar, "add tokenizer.json" (691a62f), over 2 years ago
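The tags above (Text Generation, Transformers, PyTorch) mean the model loads through the standard transformers API. A minimal usage sketch, assuming a recent transformers install; the prompt and generation parameters are illustrative, not taken from the model card:

```python
from transformers import pipeline

# Load the model by its Hub id; pinning revision to the commit listed
# above makes the download reproducible.
generator = pipeline(
    "text-generation",
    model="HamidRezaAttar/gpt2-product-description-generator",
    revision="691a62f",
)

# Seed with the opening of a product description and let GPT-2 continue.
out = generator("This comfortable sofa", max_length=50, num_return_sequences=1)
print(out[0]["generated_text"])
```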
.gitattributes (Safe, 1.26 kB) - First model version, over 2 years ago
.gitignore (Safe, 13 Bytes) - UPDATE .gitignore, over 2 years ago
README.md (Safe, 31 Bytes) - Update README.md, over 2 years ago
config.json (Safe, 907 Bytes) - First model version, over 2 years ago
dls_eng_Batch4.pkl (Unsafe: pickle, 138 MB, LFS) - First model version, over 2 years ago
Detected pickle imports (28): fastai.text.data.LMDataLoader, __builtin__.unicode, tokenizers.models.Model, __builtin__.getattr, fastcore.transform.Pipeline, numpy.core.multiarray.scalar, fastai.data.core.TfmdLists, fastcore.imports.noop, random.Random, torch.Tensor, fastai.torch_core.Chunks, tokenizers.Tokenizer, fastcore.xtras.ReindexCollection, numpy.core.multiarray._reconstruct, fastai.text.data._maybe_first, _codecs.encode, __builtin__.tuple, fastai.text.data.LMTensorText, fastai.data.load._wif, fastai.data.core.DataLoaders, transformers.models.gpt2.tokenization_gpt2_fast.GPT2TokenizerFast, numpy.ndarray, fastcore.foundation.L, torch.device, __main__.TransformersTokenizer, fastai.data.load._FakeLoader, pathlib.PosixPath, numpy.dtype
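Because this file is a raw pickle, loading it with pickle.load or torch.load executes whatever those 28 imports allow, so it should only be opened in a sandbox, if at all. A scan like the one above can be approximated locally without executing anything, using only the standard library. A sketch, assuming the file has been downloaded to the working directory; the STACK_GLOBAL handling is a heuristic, not the Hub's exact scanner:

```python
import pickletools
import zipfile

def pickle_bytes(path):
    # torch.save (and fastai on top of it) writes a zip archive whose
    # pickle stream lives in */data.pkl; a plain pickle.dump writes a
    # raw pickle. Handle both without executing anything.
    if zipfile.is_zipfile(path):
        with zipfile.ZipFile(path) as zf:
            inner = next(n for n in zf.namelist() if n.endswith("data.pkl"))
            return zf.read(inner)
    with open(path, "rb") as f:
        return f.read()

imports = set()
recent_strings = []  # string pushes, so STACK_GLOBAL args can be recovered
for opcode, arg, pos in pickletools.genops(pickle_bytes("dls_eng_Batch4.pkl")):
    if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
        recent_strings.append(arg)
    elif opcode.name == "GLOBAL":  # arg is "module name", space-separated
        imports.add(arg.replace(" ", ".", 1))
    elif opcode.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
        imports.add(f"{recent_strings[-2]}.{recent_strings[-1]}")

print(f"{len(imports)} detected imports:", *sorted(imports), sep="\n  ")
```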
history_epoch1.csv (Safe, 382 Bytes, LFS) - First model version, over 2 years ago
pytorch_model.bin (Safe: pickle, 510 MB, LFS) - First model version, over 2 years ago
Detected pickle imports (4): torch.ByteStorage, torch.FloatStorage, torch._utils._rebuild_tensor_v2, collections.OrderedDict
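pytorch_model.bin is also a pickle, but its only imports are the standard torch storage and tensor-rebuild hooks plus collections.OrderedDict, which is why the scanner marks it Safe. Recent PyTorch versions (roughly 1.13 onward) can enforce exactly that restriction at load time; a minimal sketch:

```python
import torch

# weights_only=True restricts unpickling to tensor/storage primitives
# (the four imports listed above fall within the allowlist) and raises
# on anything else, so a tampered checkpoint fails instead of executing.
state_dict = torch.load("pytorch_model.bin", map_location="cpu", weights_only=True)
print(f"loaded {len(state_dict)} tensors")
```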
tokenizer.json (Safe, 1.36 MB) - add tokenizer.json, over 2 years ago