memray/opennmt-kpg
Files at commit 18f3af4. 1 contributor, 6 commits. Latest commit by memray: "Upload bartFT_presabs_kp20k.checkpoint_step_45000.pt with git-lfs" (18f3af4, almost 3 years ago).
.gitattributes (1.22 kB, scanned safe)
  "initial commit", almost 3 years ago
bartFT_presabs_kp20k.checkpoint_step_45000.pt (5.29 GB, LFS, pickle)
  Detected pickle imports (15): typing.Any, omegaconf.nodes.AnyNode, argparse.Namespace, omegaconf.listconfig.ListConfig, omegaconf.dictconfig.DictConfig, omegaconf.base.ContainerMetadata, __builtin__.list, torch._utils._rebuild_tensor_v2, collections.defaultdict, torch.FloatStorage, collections.OrderedDict, __builtin__.long, torch.DoubleStorage, omegaconf.base.Metadata, __builtin__.dict
  "Upload bartFT_presabs_kp20k.checkpoint_step_45000.pt with git-lfs", almost 3 years ago
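The import lists above come from Hugging Face's pickle scanner, which reads the pickle opcode stream without executing it. A minimal sketch of the same idea, using only the standard library's pickletools (the helper name detected_pickle_imports is ours, not Hugging Face's, and the memo handling is simplified compared to a real scanner):

```python
import pickle
import pickletools
from collections import OrderedDict

def detected_pickle_imports(data: bytes) -> set:
    """Collect module.name references from GLOBAL/STACK_GLOBAL opcodes."""
    imports = set()
    recent = []  # last string pushes, so STACK_GLOBAL (protocol 4+) can be resolved
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            # Protocols <= 3: arg is "module name" in one string.
            module, name = arg.split(" ", 1)
            imports.add(f"{module}.{name}")
        elif opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            recent = (recent + [arg])[-2:]
        elif opcode.name == "STACK_GLOBAL" and len(recent) == 2:
            # Protocols 4+: module and name were pushed as two strings.
            imports.add(f"{recent[0]}.{recent[1]}")
    return imports

# Pickling an OrderedDict references its class, so the scan reports it.
blob = pickle.dumps(OrderedDict(a=1), protocol=4)
print(detected_pickle_imports(blob))  # includes 'collections.OrderedDict'
```

Nothing in the blob is executed here; genops only walks the opcode stream, which is what makes scanning untrusted checkpoints safe.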
transformer_presabs_kp20k.checkpoint_step_95000.pt (1.17 GB, LFS, pickle)
  Detected pickle imports (19): torchtext.data.field.Field, functools.partial, collections.Counter, torch._utils._rebuild_tensor_v2, tokenizers.AddedToken, _codecs.encode, onmt.inputters.text_dataset._feature_tokenize, tokenizers.models.Model, collections.OrderedDict, onmt.inputters.text_dataset.TextMultiField, transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast, argparse.Namespace, onmt.inputters.inputter.make_src, torch.FloatStorage, tokenizers.Tokenizer, onmt.inputters.inputter.make_tgt, torchtext.data.utils._split_tokenizer, torchtext.vocab.Vocab, torchtext.data.field.RawField
  "Upload transformer_presabs_kp20k.checkpoint_step_95000.pt with git-lfs", almost 3 years ago
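These import lists matter because unpickling can invoke any importable callable: an object's __reduce__ may name a function that runs during load. A minimal, harmless illustration of the mechanism (the Payload class is ours, for demonstration only):

```python
import pickle

class Payload:
    """Any unpickler that loads this blob will call eval("2 + 2")."""
    def __reduce__(self):
        # (callable, args): the unpickler imports the callable and calls it.
        return (eval, ("2 + 2",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # eval runs here, during loading
print(result)  # → 4
```

This is why the scanner surfaces every import: expected entries like torch.FloatStorage are routine, while an unexpected callable would be a red flag. It is also why loading checkpoints from untrusted sources with plain torch.load is risky, and why torch.load offers a weights_only=True mode that restricts unpickling to tensor data.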
transformer_presabs_kptimes.checkpoint_step_65000.pt (1.17 GB, LFS, pickle)
  Detected pickle imports (19): tokenizers.Tokenizer, onmt.inputters.text_dataset.TextMultiField, torch._utils._rebuild_tensor_v2, torchtext.vocab.Vocab, torchtext.data.field.Field, transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast, collections.OrderedDict, onmt.inputters.text_dataset._feature_tokenize, torchtext.data.utils._split_tokenizer, onmt.inputters.inputter.make_tgt, torchtext.data.field.RawField, argparse.Namespace, collections.Counter, torch.FloatStorage, functools.partial, onmt.inputters.inputter.make_src, tokenizers.AddedToken, _codecs.encode, tokenizers.models.Model
  "Upload transformer_presabs_kptimes.checkpoint_step_65000.pt with git-lfs", almost 3 years ago
transformer_presabs_openkp.checkpoint_step_55000.pt (1.17 GB, LFS, pickle)
  Detected pickle imports (19): torchtext.data.utils._split_tokenizer, _codecs.encode, collections.OrderedDict, tokenizers.models.Model, transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast, collections.Counter, torch._utils._rebuild_tensor_v2, onmt.inputters.inputter.make_tgt, argparse.Namespace, torch.FloatStorage, torchtext.vocab.Vocab, torchtext.data.field.Field, onmt.inputters.text_dataset.TextMultiField, onmt.inputters.inputter.make_src, torchtext.data.field.RawField, tokenizers.AddedToken, onmt.inputters.text_dataset._feature_tokenize, functools.partial, tokenizers.Tokenizer
  "Upload transformer_presabs_openkp.checkpoint_step_55000.pt with git-lfs", almost 3 years ago
transformer_presabs_stackex.checkpoint_step_45000.pt (1.17 GB, LFS, pickle)
  Detected pickle imports (19): tokenizers.AddedToken, onmt.inputters.inputter.make_src, collections.OrderedDict, argparse.Namespace, onmt.inputters.inputter.make_tgt, torchtext.data.field.Field, collections.Counter, torchtext.data.utils._split_tokenizer, tokenizers.models.Model, torchtext.vocab.Vocab, functools.partial, torch._utils._rebuild_tensor_v2, onmt.inputters.text_dataset._feature_tokenize, _codecs.encode, onmt.inputters.text_dataset.TextMultiField, torchtext.data.field.RawField, torch.FloatStorage, tokenizers.Tokenizer, transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast
  "Upload transformer_presabs_stackex.checkpoint_step_45000.pt with git-lfs", almost 3 years ago