Latest commit: allow flax (3e981a3)

| File | Size | Last commit |
|------|------|-------------|
| - | 391 Bytes | allow flax |
| - | 304 Bytes | model + tokenizer |
| - | 764 Bytes | model + tokenizer |
| - | 456 kB | model + tokenizer |
| pytorch_model_quantized.bin | 1.08 GB | model + tokenizer |
| - | 1.36 MB | model + tokenizer |
| - | 1.04 MB | model + tokenizer |

Detected Pickle imports (23) in pytorch_model_quantized.bin:

- `torch.FloatStorage`
- `torch.per_tensor_affine`
- `transformers.models.gpt2.modeling_gpt2.GPT2LMHeadModel`
- `torch.nn.modules.container.ModuleList`
- `transformers.models.gpt2.modeling_gpt2.Attention`
- `torch.QInt8Storage`
- `torch.ByteStorage`
- `transformers.models.gpt2.modeling_gpt2.MLP`
- `transformers.models.gpt2.modeling_gpt2.Block`
- `torch._utils._rebuild_parameter`
- `torch.nn.modules.normalization.LayerNorm`
- `torch.qint8`
- `torch.nn.modules.dropout.Dropout`
- `transformers.activations.gelu_new`
- `torch.nn.quantized.modules.linear.LinearPackedParams`
- `__builtin__.set`
- `transformers.models.gpt2.modeling_gpt2.GPT2Model`
- `torch._utils._rebuild_qtensor`
- `torch.nn.quantized.dynamic.modules.linear.Linear`
- `torch._utils._rebuild_tensor_v2`
- `transformers.models.gpt2.configuration_gpt2.GPT2Config`
- `torch.nn.modules.sparse.Embedding`
- `collections.OrderedDict`
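The "Detected Pickle imports" list above comes from statically scanning the pickle opcode stream rather than loading the file, so nothing in the untrusted bytes is executed. A minimal stdlib sketch of that idea follows; `pickle_imports` is an illustrative name, and the real Hub scanner is considerably more thorough (this version ignores memo lookups, for example):

```python
import pickle
import pickletools

def pickle_imports(data: bytes) -> set:
    """Statically collect the module.name globals referenced by a pickle.

    Nothing is deserialized, so untrusted bytes are never executed.
    Simplified sketch: memoized (BINGET) string reuse is not handled.
    """
    imports = set()
    strings = []  # string constants pushed so far (feed STACK_GLOBAL)
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            # Protocols <= 3 encode "module name" in a single argument.
            module, name = arg.split(" ", 1)
            imports.add(f"{module}.{name}")
        elif opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
        elif opcode.name == "STACK_GLOBAL":
            # Protocols >= 4 push module and name as two string constants.
            imports.add(f"{strings[-2]}.{strings[-1]}")
    return imports

if __name__ == "__main__":
    from collections import OrderedDict
    print(sorted(pickle_imports(pickle.dumps(OrderedDict(a=1)))))
```

This is also why entries such as `collections.OrderedDict` and `torch._utils._rebuild_tensor_v2` appear in the panel: they are exactly the globals the pickle asks the loader to import.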
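When such a pickle must be loaded at all, an import list like the one above can double as an allowlist: overriding `pickle.Unpickler.find_class` lets the loader refuse any global that has not been explicitly vetted. A minimal sketch, where the `ALLOWED` set and the `safe_loads` helper are illustrative assumptions rather than an official API:

```python
import io
import pickle

# Globals we are willing to resolve; everything else is refused.
# Illustrative allowlist: extend it with the imports you have vetted.
ALLOWED = {
    ("collections", "OrderedDict"),
}

class AllowlistUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked pickle import: {module}.{name}")

def safe_loads(data: bytes):
    """Like pickle.loads, but only allowlisted globals can be resolved."""
    return AllowlistUnpickler(io.BytesIO(data)).load()
```

Note that this guards global lookups only; even a vetted allowlist is weaker than avoiding pickle entirely in favor of a non-executable weight format such as safetensors.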