Latest commit: Create README.md (a12725c)

1.18 kB    initial commit
1.29 kB    Create README.md
2.49 kB    Upload get_predictions.py
73.5 kB    Upload morph_labels.txt
pytorch_model.bin
Detected Pickle imports (30):
- torch.float32
- torch.FloatStorage
- torch.QInt8Storage
- transformers.models.roberta.modeling_roberta.RobertaIntermediate
- transformers.models.roberta.modeling_roberta.RobertaSelfOutput
- transformers.models.roberta.modeling_roberta.RobertaAttention
- torch.nn.quantized.dynamic.modules.linear.Linear
- transformers.models.roberta.modeling_roberta.RobertaOutput
- torch.qint8
- torch._utils._rebuild_qtensor
- torch.nn.modules.container.ModuleList
- torch.LongStorage
- transformers.models.roberta.modeling_roberta.RobertaSelfAttention
- collections.OrderedDict
- torch._utils._rebuild_parameter
- torch.nn.modules.normalization.LayerNorm
- __builtin__.set
- torch._utils._rebuild_tensor_v2
- transformers.models.roberta.modeling_roberta.RobertaEncoder
- transformers.models.roberta.modeling_roberta.RobertaLayer
- torch.nn.modules.dropout.Dropout
- transformers.activations.GELUActivation
- torch._C._nn.gelu
- transformers.models.roberta.modeling_roberta.RobertaModel
- transformers.models.xlm_roberta.modeling_xlm_roberta.XLMRobertaForTokenClassification
- torch.nn.modules.sparse.Embedding
- transformers.models.xlm_roberta.configuration_xlm_roberta.XLMRobertaConfig
- torch.per_tensor_affine
- torch.nn.quantized.modules.linear.LinearPackedParams
- transformers.models.roberta.modeling_roberta.RobertaEmbeddings
184 MB    Upload pytorch_model.bin with git-lfs
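A `pytorch_model.bin` checkpoint is serialized with Python's pickle protocol, which is why a static scan can report the class and function names it would import, as in the list above. A minimal sketch of such a scanner using only the standard library's `pickletools` follows; the function name `pickle_imports` and the `OrderedDict` demo are illustrative and not part of this repository, and the sketch handles only the common `GLOBAL`/`STACK_GLOBAL` opcodes:

```python
import pickle
import pickletools
from collections import OrderedDict

def pickle_imports(data):
    """Statically list the module.name globals a pickle would import.

    Simplified sketch: GLOBAL carries "module name" in its argument,
    while STACK_GLOBAL consumes the two most recently pushed strings.
    """
    imports = set()
    strings = []  # recent string arguments, consumed by STACK_GLOBAL
    for op, arg, _pos in pickletools.genops(data):
        if op.name == "GLOBAL":
            module, name = arg.split(" ", 1)
            imports.add(f"{module}.{name}")
        elif op.name == "STACK_GLOBAL":
            imports.add(f"{strings[-2]}.{strings[-1]}")
        if isinstance(arg, str):
            strings.append(arg)
    return imports

# A PyTorch state dict is an OrderedDict, so even a tiny example
# reproduces one entry from the scan above:
demo = pickle.dumps(OrderedDict(weight=1.0))
print(pickle_imports(demo))  # prints {'collections.OrderedDict'}
```

Because the scan never calls `pickle.load`, it is safe to run on untrusted files; it only reads opcodes, whereas actually loading the file would execute whatever those globals resolve to.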
238 Bytes    Upload special_tokens_map.json
2.37 MB    Upload tokenizer.json
459 Bytes    Upload tokenizer_config.json