Latest commit: Update tokenizer (685bb05)

| File | Size | Last commit message |
| --- | --- | --- |
|  | 386 Bytes | Update readme |
|  | 4.06 kB | Update tokenizer |
|  | 1.32 kB | Added tokenizer, improved performance |
|  | 892 MB | Added tokenizer, improved performance |
| pytorch_model_quantized.bin | 322 MB | Quantized version of model |
|  | 792 kB | Added sp model for tokenisation |
|  | 1.39 MB | Added tokenizer, improved performance |

Detected Pickle imports (24) in pytorch_model_quantized.bin:

- "torch.QInt8Storage"
- "transformers.models.t5.configuration_t5.T5Config"
- "torch._utils._rebuild_parameter"
- "torch.FloatStorage"
- "transformers.models.t5.modeling_t5.T5LayerCrossAttention"
- "transformers.models.t5.modeling_t5.T5Stack"
- "collections.OrderedDict"
- "torch.nn.quantized.dynamic.modules.linear.Linear"
- "torch.per_tensor_affine"
- "__builtin__.set"
- "transformers.models.t5.modeling_t5.T5LayerSelfAttention"
- "transformers.models.t5.modeling_t5.T5Block"
- "torch._utils._rebuild_tensor_v2"
- "torch.nn.quantized.modules.linear.LinearPackedParams"
- "transformers.models.t5.modeling_t5.T5DenseReluDense"
- "transformers.models.t5.modeling_t5.T5Attention"
- "torch.nn.modules.sparse.Embedding"
- "torch.nn.modules.container.ModuleList"
- "torch.qint8"
- "transformers.models.t5.modeling_t5.T5LayerNorm"
- "transformers.models.t5.modeling_t5.T5LayerFF"
- "transformers.models.t5.modeling_t5.T5ForConditionalGeneration"
- "torch.nn.modules.dropout.Dropout"
- "torch._utils._rebuild_qtensor"
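
The import list is consistent with a T5ForConditionalGeneration that was dynamically quantized to qint8 and then saved as a whole pickled module rather than a plain state_dict, which would also explain why the quantized file (322 MB) is much smaller than the full checkpoint (892 MB). Below is a minimal sketch of how such a checkpoint is typically produced; the model id "t5-base" and the file paths are illustrative assumptions, not taken from this repo.

```python
# Hypothetical sketch of producing a file like pytorch_model_quantized.bin.
# "t5-base" and the output path are placeholder assumptions.
import torch
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-base")
model.eval()

# Dynamic quantization swaps every nn.Linear for a qint8 dynamic quantized
# Linear; those replacement classes (plus torch.qint8 and QInt8Storage) are
# exactly what the pickle scan above reports.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Saving the whole module (rather than a plain state_dict) pickles the T5
# module classes (T5Block, T5Attention, ...) alongside the tensors, which is
# why they show up as pickle imports.
torch.save(quantized, "pytorch_model_quantized.bin")

# Loading unpickles those same classes; recent PyTorch versions may need
# weights_only=False because this is a full pickled module, not just weights.
restored = torch.load("pytorch_model_quantized.bin")
```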
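
The "Added sp model for tokenisation" entry presumably refers to a SentencePiece model (commonly spiece.model for T5), and the 1.39 MB tokenizer file to the fast tokenizer's serialized JSON. A hedged sketch of how both would typically be loaded; the repo id below is a placeholder.

```python
# Hypothetical loading sketch; "user/quantized-t5" is a placeholder repo id.
from transformers import T5Tokenizer, T5TokenizerFast

# The slow tokenizer reads the SentencePiece model file directly.
slow_tok = T5Tokenizer.from_pretrained("user/quantized-t5")

# The fast tokenizer prefers the serialized tokenizer JSON when present.
fast_tok = T5TokenizerFast.from_pretrained("user/quantized-t5")

ids = fast_tok("translate English to German: How are you?", return_tensors="pt").input_ids
```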