Latest commit: Use torch save. (b43a417)
-               1.52 kB     initial commit
-               245 Bytes   initial commit
-               5.31 kB     Use torch save.
-               126 kB      Initial commit
model_v4.pkl
Detected Pickle imports (15)
- "__main__.AttentionBlock",
- "torch.nn.modules.container.Sequential",
- "torch._utils._rebuild_parameter",
- "torch.nn.modules.linear.NonDynamicallyQuantizableLinear",
- "torch._utils._rebuild_tensor_v2",
- "torch.nn.modules.normalization.LayerNorm",
- "torch.nn.modules.sparse.Embedding",
- "torch.nn.modules.container.ModuleList",
- "torch.nn.modules.activation.GELU",
- "torch.storage._load_from_bytes",
- "torch.nn.modules.activation.MultiheadAttention",
- "torch.nn.modules.linear.Linear",
- "collections.OrderedDict",
- "__main__.Model",
- "torch.nn.modules.dropout.Dropout"
4.13 MB     Initial commit
model_v4t.pkl
Detected Pickle imports (16)
- "__main__.AttentionBlock",
- "torch.nn.modules.container.Sequential",
- "torch._utils._rebuild_parameter",
- "torch.nn.modules.linear.NonDynamicallyQuantizableLinear",
- "torch._utils._rebuild_tensor_v2",
- "__builtin__.set",
- "torch.nn.modules.sparse.Embedding",
- "torch.nn.modules.normalization.LayerNorm",
- "torch.nn.modules.container.ModuleList",
- "torch.nn.modules.activation.GELU",
- "torch.FloatStorage",
- "torch.nn.modules.activation.MultiheadAttention",
- "torch.nn.modules.linear.Linear",
- "collections.OrderedDict",
- "__main__.Model",
- "torch.nn.modules.dropout.Dropout"
4.13 MB     Use torch save.
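The "Use torch save." commit message suggests the fix the pickle scanner is hinting at: save only the `state_dict` (plain tensors) rather than pickling the whole `Model` object, so loading no longer has to import `__main__.Model` or `__main__.AttentionBlock`. A minimal sketch of that pattern, using a stand-in `nn.Sequential` model rather than the repo's actual architecture:

```python
import os
import tempfile

import torch
from torch import nn

# Stand-in for the repo's model; the real Model/AttentionBlock classes
# are not reproduced here.
model = nn.Sequential(nn.Linear(4, 4), nn.GELU())

# Save only the state_dict: a dict of tensors, no custom classes pickled.
path = os.path.join(tempfile.mkdtemp(), "model_v4.pt")
torch.save(model.state_dict(), path)

# weights_only=True makes torch.load reject arbitrary pickle imports,
# so a tampered checkpoint cannot execute code on load.
state = torch.load(path, weights_only=True)

# Loading requires reconstructing the architecture in code first.
restored = nn.Sequential(nn.Linear(4, 4), nn.GELU())
restored.load_state_dict(state)
```

Saving the full module object instead (`torch.save(model, path)`) is what produces the `__main__.*` entries in the scan above, and it breaks whenever the defining module moves or is renamed.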
-               5 Bytes     update requirement
-               11.3 kB     Initial commit