Latest commit: Update README.md (5734eec, verified)
File                Size        Last commit message
-                   1.52 kB     initial commit
-                   26 kB       Update README.md
-                   746 Bytes   Add HuggingFaceH4/mistral-7b-dpo-v21.0cai.0.2 checkpoint
-                   645 Bytes   Add HuggingFaceH4/mistral-7b-dpo-v21.0cai.3.2 checkpoint
-                   570 Bytes   Add HuggingFaceH4/mistral-7b-dpo-v21.0cai.0.2 checkpoint
-                   111 Bytes   Add HuggingFaceH4/mistral-7b-dpo-v21.0cai.3.2 checkpoint
-                   4.94 GB     Add HuggingFaceH4/mistral-7b-dpo-v21.0cai.0.2 checkpoint
-                   5 GB        Add HuggingFaceH4/mistral-7b-dpo-v21.0cai.0.2 checkpoint
-                   4.54 GB     Add HuggingFaceH4/mistral-7b-dpo-v21.0cai.0.2 checkpoint
-                   24 kB       Add HuggingFaceH4/mistral-7b-dpo-v21.0cai.3.2 checkpoint
-                   551 Bytes   Add HuggingFaceH4/mistral-7b-dpo-v21.0cai.3.2 checkpoint
-                   1.8 MB      Add HuggingFaceH4/mistral-7b-dpo-v21.0cai.3.2 checkpoint
-                   493 kB      Add HuggingFaceH4/mistral-7b-dpo-v21.0cai.3.2 checkpoint
-                   1.39 kB     Add HuggingFaceH4/mistral-7b-dpo-v21.0cai.3.2 checkpoint
-                   196 Bytes   Add HuggingFaceH4/mistral-7b-dpo-v21.0cai.0.2 checkpoint
-                   668 kB      Add HuggingFaceH4/mistral-7b-dpo-v21.0cai.0.2 checkpoint
training_args.bin   6.26 kB     Add HuggingFaceH4/mistral-7b-dpo-v21.0cai.0.2 checkpoint

Detected Pickle imports in training_args.bin (12):
- accelerate.state.PartialState
- transformers.trainer_utils.SchedulerType
- torch.bfloat16
- transformers.trainer_utils.HubStrategy
- transformers.integrations.deepspeed.HfDeepSpeedConfig
- torch.device
- h4.training.config.DPOTrainingArguments
- transformers.integrations.deepspeed.HfTrainerDeepSpeedConfig
- accelerate.utils.dataclasses.DeepSpeedPlugin
- transformers.training_args.OptimizerNames
- transformers.trainer_utils.IntervalStrategy
- accelerate.utils.dataclasses.DistributedType
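The list above is what the Hub's pickle scan reports for training_args.bin without executing any of the pickled code. A similar inspection can be reproduced locally with only the standard library; the sketch below assumes a downloaded copy at ./training_args.bin and that the file is a regular torch.save artifact (a zip archive containing data.pkl on recent torch, or a bare pickle stream on older versions). The helper name list_pickle_globals is illustrative, not part of any library, and the opcode walk is a best-effort check rather than a full security audit.

```python
import pickletools
import zipfile

def list_pickle_globals(data):
    """Collect module.attribute names referenced by GLOBAL/STACK_GLOBAL opcodes."""
    found = set()
    last_strings = []  # most recent string constants seen in the opcode stream
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            # protocols <= 3 encode the import target as "module qualname"
            found.add(arg.replace(" ", "."))
        elif opcode.name == "STACK_GLOBAL":
            # protocol 4+ pushes module and qualname as the two most recent strings
            if len(last_strings) >= 2:
                found.add(f"{last_strings[-2]}.{last_strings[-1]}")
        elif isinstance(arg, str):
            last_strings.append(arg)
    return found

path = "training_args.bin"  # assumed: a local download of the file listed above
if zipfile.is_zipfile(path):
    # torch >= 1.6 writes a zip archive whose pickle payload is data.pkl
    with zipfile.ZipFile(path) as zf:
        name = next(n for n in zf.namelist() if n.endswith("data.pkl"))
        payload = zf.read(name)
else:
    # older torch versions write a bare pickle stream
    with open(path, "rb") as f:
        payload = f.read()

for item in sorted(list_pickle_globals(payload)):
    print(item)
```

Actually unpickling the file (for example via torch.load) requires every referenced module to be importable, so entries outside the transformers/accelerate/torch namespaces, such as h4.training.config.DPOTrainingArguments here, would need the matching package installed first.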