|
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
	- Avoid using `tokenizers` before the fork if possible
	- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
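The second option the warning suggests can be applied by exporting the variable before anything forks (for example, before DataLoader workers spawn). A minimal sketch:

```shell
# Must be set before tokenizers is used in a process that later forks.
# "false" disables the Rust-level thread pool; "true" keeps parallelism
# and only suppresses the warning.
export TOKENIZERS_PARALLELISM=false
```

In a notebook, the equivalent is `os.environ["TOKENIZERS_PARALLELISM"] = "false"` at the top of the first cell, before importing anything that pulls in `tokenizers`.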
|
/bin/bash: nvdia-smi: command not found |
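The `command not found` above is a misspelling, not a missing driver: the NVIDIA utility is spelled `nvidia-smi`. A guarded sketch that also runs cleanly on CPU-only machines:

```shell
# "nvdia-smi" is a typo; the correct binary name is nvidia-smi.
# Guard the call so the cell still succeeds when no driver is installed:
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi
else
  echo "nvidia-smi not on PATH (no NVIDIA driver installed?)"
fi
```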
|
adding: kaggle/working/ (stored 0%)
adding: kaggle/working/test.csv (deflated 81%)
adding: kaggle/working/trainer/ (stored 0%)
adding: kaggle/working/trainer/README.md (deflated 48%)
adding: kaggle/working/trainer/adapter_config.json (deflated 52%)
adding: kaggle/working/trainer/checkpoint-118/ (stored 0%)
adding: kaggle/working/trainer/checkpoint-118/rng_state.pth (deflated 25%)
adding: kaggle/working/trainer/checkpoint-118/optimizer.pt (deflated 16%)
adding: kaggle/working/trainer/checkpoint-118/README.md (deflated 66%)
adding: kaggle/working/trainer/checkpoint-118/scheduler.pt (deflated 56%)
adding: kaggle/working/trainer/checkpoint-118/adapter_config.json (deflated 52%)
adding: kaggle/working/trainer/checkpoint-118/training_args.bin (deflated 51%)
adding: kaggle/working/trainer/checkpoint-118/trainer_state.json (deflated 55%)
adding: kaggle/working/trainer/checkpoint-118/adapter_model.safetensors (deflated 8%)
adding: kaggle/working/trainer/checkpoint-472/ (stored 0%)
adding: kaggle/working/trainer/checkpoint-472/rng_state.pth (deflated 25%)
adding: kaggle/working/trainer/checkpoint-472/optimizer.pt (deflated 16%)
adding: kaggle/working/trainer/checkpoint-472/README.md (deflated 66%)
adding: kaggle/working/trainer/checkpoint-472/scheduler.pt (deflated 55%)
adding: kaggle/working/trainer/checkpoint-472/adapter_config.json (deflated 52%)
adding: kaggle/working/trainer/checkpoint-472/training_args.bin (deflated 51%)
adding: kaggle/working/trainer/checkpoint-472/trainer_state.json (deflated 71%)
adding: kaggle/working/trainer/checkpoint-472/adapter_model.safetensors (deflated 7%)
adding: kaggle/working/trainer/checkpoint-236/ (stored 0%)
adding: kaggle/working/trainer/checkpoint-236/rng_state.pth (deflated 25%)
adding: kaggle/working/trainer/checkpoint-236/optimizer.pt (deflated 16%)
adding: kaggle/working/trainer/checkpoint-236/README.md (deflated 66%)
adding: kaggle/working/trainer/checkpoint-236/scheduler.pt (deflated 56%)
adding: kaggle/working/trainer/checkpoint-236/adapter_config.json (deflated 52%)
adding: kaggle/working/trainer/checkpoint-236/training_args.bin (deflated 51%)
adding: kaggle/working/trainer/checkpoint-236/trainer_state.json (deflated 63%)
adding: kaggle/working/trainer/checkpoint-236/adapter_model.safetensors (deflated 7%)
adding: kaggle/working/trainer/training_args.bin (deflated 51%)
adding: kaggle/working/trainer/checkpoint-354/ (stored 0%)
adding: kaggle/working/trainer/checkpoint-354/rng_state.pth (deflated 25%)
adding: kaggle/working/trainer/checkpoint-354/optimizer.pt (deflated 16%)
adding: kaggle/working/trainer/checkpoint-354/README.md (deflated 66%)
adding: kaggle/working/trainer/checkpoint-354/scheduler.pt (deflated 55%)
adding: kaggle/working/trainer/checkpoint-354/adapter_config.json (deflated 52%)
adding: kaggle/working/trainer/checkpoint-354/training_args.bin (deflated 51%)
adding: kaggle/working/trainer/checkpoint-354/trainer_state.json (deflated 68%)
adding: kaggle/working/trainer/checkpoint-354/adapter_model.safetensors (deflated 7%)
|
|
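The listing above is typical `zip -r` output archiving the trainer output directory, intermediate checkpoints included. The `checkpoint-*/` directories (notably `optimizer.pt` and `rng_state.pth`) are only needed to resume training, so excluding them shrinks the download considerably. A sketch under assumed names (`trainer.zip` and the dummy files are illustrations, not the original command):

```shell
# Recreate a similar layout, then archive it while skipping the heavy
# checkpoint-*/ subdirectories (optimizer state dominates their size).
mkdir -p kaggle/working/trainer/checkpoint-118
echo "dummy" > kaggle/working/trainer/adapter_config.json
echo "dummy" > kaggle/working/trainer/checkpoint-118/optimizer.pt
if command -v zip >/dev/null 2>&1; then
  zip -r trainer.zip kaggle/working/trainer \
    -x "*/checkpoint-*/*" "*/checkpoint-*/"
else
  # Fallback when zip is not installed: tar is near-universal.
  tar --exclude="checkpoint-*" -czf trainer.tar.gz kaggle/working/trainer
fi
```

The final adapter weights (`adapter_model.safetensors` plus `adapter_config.json` in the top-level `trainer/` directory, saved there if training finished with a final save) are what downstream inference actually loads.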