Latest commit: Update README.md (38da96c, verified)

| File | Size | Last commit message |
| --- | --- | --- |
| - | 1.52 kB | initial commit |
| - | 5.38 kB | Upload folder using huggingface_hub (#1) |
| - | 80 Bytes | Upload folder using huggingface_hub (#1) |
| - | 0 Bytes | Update README.md |
| - | 1.67 MB | Upload folder using huggingface_hub (#1) |
| model.pt | 10.9 GB | Upload folder using huggingface_hub (#1) |
| - | 1.03 kB | Upload folder using huggingface_hub (#1) |
| - | 370 Bytes | Upload folder using huggingface_hub (#1) |
| - | 7.03 MB | Upload folder using huggingface_hub (#1) |
| - | 1.4 kB | Upload folder using huggingface_hub (#1) |
| - | 2.78 MB | Upload folder using huggingface_hub (#1) |

Detected Pickle imports (22) in model.pt:
- "torch.FloatStorage",
- "collections.OrderedDict",
- "quanto.tensor.qtype.qtype",
- "torch._utils._rebuild_parameter",
- "torch.device",
- "transformers_modules.Alibaba-NLP.gte-Qwen2-1.5B-instruct.5652710542966fa2414b1cf39b675fdc67d7eec4.modeling_qwen.Qwen2ForCausalLM",
- "transformers_modules.Alibaba-NLP.gte-Qwen2-1.5B-instruct.5652710542966fa2414b1cf39b675fdc67d7eec4.modeling_qwen.Qwen2SdpaAttention",
- "quanto.nn.qlinear.QLinear",
- "torch._utils._rebuild_tensor_v2",
- "transformers_modules.Alibaba-NLP.gte-Qwen2-1.5B-instruct.5652710542966fa2414b1cf39b675fdc67d7eec4.modeling_qwen.Qwen2RotaryEmbedding",
- "transformers_modules.Alibaba-NLP.gte-Qwen2-1.5B-instruct.5652710542966fa2414b1cf39b675fdc67d7eec4.modeling_qwen.Qwen2RMSNorm",
- "torch.nn.modules.activation.SiLU",
- "transformers_modules.Alibaba-NLP.gte-Qwen2-1.5B-instruct.5652710542966fa2414b1cf39b675fdc67d7eec4.modeling_qwen.Qwen2DecoderLayer",
- "torch.nn.modules.sparse.Embedding",
- "transformers.models.qwen2.configuration_qwen2.Qwen2Config",
- "torch.nn.modules.container.ModuleList",
- "transformers.generation.configuration_utils.GenerationConfig",
- "transformers_modules.Alibaba-NLP.gte-Qwen2-1.5B-instruct.5652710542966fa2414b1cf39b675fdc67d7eec4.modeling_qwen.Qwen2Model",
- "torch.float32",
- "transformers_modules.Alibaba-NLP.gte-Qwen2-1.5B-instruct.5652710542966fa2414b1cf39b675fdc67d7eec4.modeling_qwen.Qwen2MLP",
- "torch.int8",
- "__builtin__.set"
How to fix it? The usual remedy is to re-save the weights in safetensors, which stores raw tensors only and cannot execute code on load.
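
A hedged conversion sketch follows; it assumes `model` is the module loaded in the previous snippet and that this quanto version exposes packed weights and scales as plain tensors in state_dict() (tensor subclasses or tied weights may need extra handling).

```python
import torch
from safetensors.torch import save_file

# Detach and force contiguous storage before serializing; save_file
# refuses tensors that share memory, so tied weights may need .clone().
state = {k: v.detach().contiguous() for k, v in model.state_dict().items()}
save_file(state, "model.safetensors")
```

Reloading a file produced this way still requires rebuilding the quantized modules on the quanto side before the state dict can be applied, so check the helpers available in the quanto version in use.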