Latest commit: upload tokenizer.json (beea53d, verified)
Size       Last commit
1.52 kB    initial commit
9.35 kB    Update README.md
1.08 kB    upload added_tokens.json
1.43 kB    upload config.json
9.26 kB    upload configuration_phi.py
74 Bytes   upload generation_config.json
9.71 kB    upload generation_utils.py
23.2 kB    upload llava_arch.py
456 kB     upload merges.txt
5 GB       upload model-00001-of-00002.safetensors
1.19 GB    upload model-00002-of-00002.safetensors
84.9 kB    upload model.safetensors.index.json
9.3 kB     upload modeling_llava_phi.py
62.9 kB    upload modeling_phi.py
473 Bytes  upload special_tokens_map.json
2.11 MB    upload tokenizer.json
7.37 kB    upload tokenizer_config.json
1.44 MB    upload trainer_state.json
training_args.bin — Detected Pickle imports (11):
- transformers.training_args.OptimizerNames
- accelerate.state.PartialState
- transformers.trainer_utils.SchedulerType
- torch.device
- torch.bfloat16
- llava.train.train.TrainingArguments
- transformers.trainer_utils.IntervalStrategy
- accelerate.utils.dataclasses.DistributedType
- transformers.trainer_utils.HubStrategy
- accelerate.utils.dataclasses.DeepSpeedPlugin
- transformers.integrations.deepspeed.HfTrainerDeepSpeedConfig
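The pickle-imports warning above is a static scan of training_args.bin: each listed global would be imported the moment the file is unpickled, which is why unpickling untrusted files is a code-execution risk. A minimal sketch of the same kind of static scan using only the standard-library pickletools, without ever executing the pickle (note that files written by torch.save are zip archives, so in practice one would extract the inner data.pkl member with zipfile and scan that; the datetime example below just demonstrates the technique):

```python
import datetime
import pickle
import pickletools


def pickle_imports(data: bytes) -> set[str]:
    """Enumerate the module.attr globals a pickle would import, without executing it."""
    found: set[str] = set()
    strings: list[str] = []  # string arguments seen so far, in stream order
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            # Protocols <= 3: module and name arrive as one space-separated argument.
            module, name = arg.split(" ", 1)
            found.add(f"{module}.{name}")
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            # Protocols >= 4: module and name are the two most recently pushed strings.
            found.add(f"{strings[-2]}.{strings[-1]}")
        if isinstance(arg, str):
            strings.append(arg)
    return found


# Demonstration on a harmless pickle built in-process.
data = pickle.dumps(datetime.datetime(2024, 1, 1))
print(pickle_imports(data))
```

This is the same idea behind the Hub's scanner: walking the opcode stream reveals every GLOBAL/STACK_GLOBAL reference up front, so a suspicious import (e.g. anything outside the expected transformers/accelerate/torch namespaces) can be flagged before any load is attempted.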
6.78 kB    upload training_args.bin
97.5 kB    upload training_datasets_by_stage.jpg
798 kB     upload vocab.json
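The two model-*.safetensors files are shards of one checkpoint, tied together by model.safetensors.index.json, which in the Transformers sharded-checkpoint format maps each tensor name to the shard file that stores it (plus a total byte size under "metadata"). A minimal sketch of resolving a tensor to its shard, using a toy index (the tensor names and total_size here are hypothetical stand-ins, not taken from this repo's actual index):

```python
import json

# Toy stand-in for model.safetensors.index.json; the real weight_map
# lists every tensor in the checkpoint.
index_json = """
{
  "metadata": {"total_size": 6190000000},
  "weight_map": {
    "model.embed_tokens.weight": "model-00001-of-00002.safetensors",
    "lm_head.weight": "model-00002-of-00002.safetensors"
  }
}
"""

index = json.loads(index_json)


def shard_for(tensor_name: str) -> str:
    """Return the shard file that stores the given tensor."""
    return index["weight_map"][tensor_name]


print(shard_for("lm_head.weight"))
```

Loaders such as transformers resolve each weight through this map, so only the shards actually needed have to be opened.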