model.pt
Detected Pickle imports (39):
- "torch.nn.modules.activation.SiLU",
- "transformers_modules.microsoft.Phi-3-vision-128k-instruct.7b92b8c62807f5a98a9fa47cdfd4144f11fbd112.modeling_phi3_v.Phi3RMSNorm",
- "transformers.models.clip.modeling_clip.CLIPVisionTransformer",
- "transformers.models.clip.configuration_clip.CLIPVisionConfig",
- "transformers.activations.QuickGELUActivation",
- "torch.BFloat16Storage",
- "quanto.tensor.qtype.qtype",
- "torch._utils._rebuild_parameter",
- "transformers.models.clip.modeling_clip.CLIPEncoder",
- "torch.bfloat16",
- "transformers_modules.microsoft.Phi-3-vision-128k-instruct.7b92b8c62807f5a98a9fa47cdfd4144f11fbd112.modeling_phi3_v.Phi3MLP",
- "transformers_modules.microsoft.Phi-3-vision-128k-instruct.7b92b8c62807f5a98a9fa47cdfd4144f11fbd112.modeling_phi3_v.Phi3VForCausalLM",
- "transformers_modules.microsoft.Phi-3-vision-128k-instruct.7b92b8c62807f5a98a9fa47cdfd4144f11fbd112.modeling_phi3_v.Phi3Attention",
- "collections.OrderedDict",
- "torch.int8",
- "torch.nn.modules.sparse.Embedding",
- "torch.device",
- "torch._utils._rebuild_tensor_v2",
- "torch.FloatStorage",
- "transformers_modules.microsoft.Phi-3-vision-128k-instruct.7b92b8c62807f5a98a9fa47cdfd4144f11fbd112.image_embedding_phi3_v.Phi3ImageEmbedding",
- "transformers.models.clip.modeling_clip.CLIPMLP",
- "transformers.models.clip.modeling_clip.CLIPEncoderLayer",
- "quanto.nn.qlinear.QLinear",
- "__builtin__.set",
- "torch.nn.modules.dropout.Dropout",
- "torch.nn.modules.normalization.LayerNorm",
- "torch.nn.modules.container.ModuleList",
- "torch.nn.modules.activation.GELU",
- "transformers_modules.microsoft.Phi-3-vision-128k-instruct.7b92b8c62807f5a98a9fa47cdfd4144f11fbd112.modeling_phi3_v.Phi3VModel",
- "transformers.models.clip.modeling_clip.CLIPVisionModel",
- "quanto.nn.qconv2d.QConv2d",
- "transformers_modules.microsoft.Phi-3-vision-128k-instruct.7b92b8c62807f5a98a9fa47cdfd4144f11fbd112.modeling_phi3_v.Phi3SuScaledRotaryEmbedding",
- "transformers.models.clip.modeling_clip.CLIPAttention",
- "transformers_modules.microsoft.Phi-3-vision-128k-instruct.7b92b8c62807f5a98a9fa47cdfd4144f11fbd112.modeling_phi3_v.Phi3DecoderLayer",
- "transformers.models.clip.modeling_clip.CLIPVisionEmbeddings",
- "transformers_modules.microsoft.Phi-3-vision-128k-instruct.7b92b8c62807f5a98a9fa47cdfd4144f11fbd112.configuration_phi3_v.Phi3VConfig",
- "torch.LongStorage",
- "torch.nn.modules.container.Sequential",
- "transformers.generation.configuration_utils.GenerationConfig"
model.pt size: 8.29 GB
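Each entry above is a `module.attribute` global that unpickling `model.pt` will import, which is why pickled checkpoints from untrusted sources can execute arbitrary code. As an illustrative sketch (not part of this repository), the standard-library `pickletools` can enumerate a pickle's imports without ever loading it; the `pickle_imports` helper below is a hypothetical name, and the `STACK_GLOBAL` handling is a heuristic:

```python
import pickle
import pickletools
from collections import OrderedDict

def pickle_imports(data: bytes) -> set[str]:
    """Collect the module.name globals a pickle references, without loading it."""
    ops = list(pickletools.genops(data))
    found = set()
    for i, (op, arg, _pos) in enumerate(ops):
        if op.name == "GLOBAL":
            # Protocols 0-3 encode the reference as one "module name" string.
            module, name = arg.split(" ", 1)
            found.add(f"{module}.{name}")
        elif op.name == "STACK_GLOBAL":
            # Protocol 4+ pushes module and name as the two most recent
            # string opcodes (heuristic: MEMOIZE ops may sit between them).
            strings = [a for o, a, _ in ops[:i]
                       if o.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE")]
            if len(strings) >= 2:
                found.add(f"{strings[-2]}.{strings[-1]}")
    return found

# Demo on a small pickle rather than the 8.29 GB checkpoint.
data = pickle.dumps(OrderedDict(a=1))
print(pickle_imports(data))  # includes 'collections.OrderedDict'
```

On recent PyTorch versions, `torch.load(path, weights_only=True)` similarly refuses to resolve globals outside a small allowlist, so a checkpoint importing custom classes like the ones listed above would be rejected rather than executed.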