Latest commit: Upload tokenizer (bf25405)

Files in this revision (size and last commit message):

- 1.52 kB · initial commit
- 5.16 kB · Update README.md
- 676 Bytes · First model version
- 2.45 GB · First model version
- 1.16 kB · Upload MixtralForCausalLM
- 116 Bytes · Upload MixtralForCausalLM
- 4.99 GB · Upload MixtralForCausalLM
- 4.98 GB · Upload MixtralForCausalLM
- 5 GB · Upload MixtralForCausalLM
- 5 GB · Upload MixtralForCausalLM
- 4.52 GB · Upload MixtralForCausalLM
- 599 kB · Upload MixtralForCausalLM
- 967 MB · First model version
- rng_state.pth · 14.2 kB · First model version
  - Detected pickle imports (7): numpy.core.multiarray._reconstruct, _codecs.encode, numpy.ndarray, collections.OrderedDict, torch._utils._rebuild_tensor_v2, torch.ByteStorage, numpy.dtype (a sketch for reproducing this scan follows the file listing)
- 1.06 kB · First model version
- 414 Bytes · Upload tokenizer
- 1.8 MB · Upload tokenizer
- 967 Bytes · Upload tokenizer
- 8.2 kB · First model version
- training_args.bin · 4.73 kB · First model version
  - Detected pickle imports (8): accelerate.state.PartialState, transformers.trainer_utils.HubStrategy, transformers.training_args.TrainingArguments, transformers.trainer_utils.IntervalStrategy, torch.device, transformers.trainer_utils.SchedulerType, transformers.training_args.OptimizerNames, accelerate.utils.dataclasses.DistributedType
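
The "Detected pickle imports" entries appear to be what the Hub's pickle scanner reports: the module.attribute globals that the pickle stream asks the unpickler to import while reconstructing the saved object. The sketch below reproduces that kind of scan locally with only the standard library; `pickle_imports` is a hypothetical helper name, and it assumes the files are either bare pickle streams or PyTorch's zip-based checkpoint format (pickle data stored in `*.pkl` archive members), which is how `torch.save` normally writes files like `rng_state.pth` and `training_args.bin`.

```python
import pickletools
import zipfile


def pickle_imports(path: str) -> list[str]:
    """List the module.name globals referenced by the pickle data in `path`."""
    found = set()

    def scan(raw: bytes) -> None:
        # Walk the opcode stream; nothing is ever unpickled, so nothing executes.
        ops = list(pickletools.genops(raw))
        for i, (opcode, arg, _pos) in enumerate(ops):
            if opcode.name == "GLOBAL":
                # pickletools reports the argument as "module qualname".
                module, name = arg.split(" ", 1)
                found.add(f"{module}.{name}")
            elif opcode.name == "STACK_GLOBAL":
                # Heuristic: the two most recent string pushes are module and name.
                strings = [a for op, a, _ in ops[:i]
                           if op.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE")]
                if len(strings) >= 2:
                    found.add(f"{strings[-2]}.{strings[-1]}")

    if zipfile.is_zipfile(path):
        # torch.save() writes a zip archive whose pickle lives in *.pkl members.
        with zipfile.ZipFile(path) as archive:
            for member in archive.namelist():
                if member.endswith(".pkl"):
                    scan(archive.read(member))
    else:
        with open(path, "rb") as handle:
            scan(handle.read())

    return sorted(found)


print(pickle_imports("rng_state.pth"))      # expect the 7 entries flagged above
print(pickle_imports("training_args.bin"))  # expect the 8 entries flagged above
```

Because this only walks opcodes and never calls `pickle.load`, it is safe to run on files you do not trust. The STACK_GLOBAL handling is a simplification that can miss memoized strings, but the default `torch.save` pickle protocol (2) emits plain GLOBAL opcodes, so it does not matter for these files.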
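The flagged files are ordinary training artifacts (an RNG-state dump and the serialized TrainingArguments), but as pickles they can in principle run code when loaded. A hedged alternative to an unrestricted `torch.load`, assuming a PyTorch recent enough to ship `torch.serialization.add_safe_globals` (2.4 or newer) plus the transformers/accelerate packages that define the flagged classes: load with `weights_only=True`, which uses a restricted unpickler, and allowlist exactly the classes the scan reported. The import paths below are copied from the scan; treat the rest as a sketch, not the repository's documented loading procedure.

```python
import torch

# The restricted unpickler behind torch.load(weights_only=True) only rebuilds
# allowlisted types, so globals referenced by the pickle cannot run arbitrarily.
# The classes reported by the scan are not all on the default allowlist, so we
# register them explicitly (requires PyTorch 2.4+; paths copied from the scan).
from transformers.training_args import TrainingArguments, OptimizerNames
from transformers.trainer_utils import HubStrategy, IntervalStrategy, SchedulerType
from accelerate.state import PartialState
from accelerate.utils.dataclasses import DistributedType

torch.serialization.add_safe_globals([
    TrainingArguments, OptimizerNames,
    HubStrategy, IntervalStrategy, SchedulerType,
    PartialState, DistributedType,
])

# torch.device is typically on the default allowlist already, so it is not added.
args = torch.load("training_args.bin", map_location="cpu", weights_only=True)
print(type(args).__name__, args.output_dir)
```

The same approach should apply to rng_state.pth by allowlisting the numpy helpers it references (numpy.core.multiarray._reconstruct, numpy.ndarray, numpy.dtype, _codecs.encode), although the exact default allowlist varies between PyTorch releases.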