huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
	- Avoid using `tokenizers` before the fork if possible
	- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
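The warning above suggests its own fix: set `TOKENIZERS_PARALLELISM` before the process forks (for example, before `DataLoader` workers spawn). A minimal sketch of doing that from Python rather than the shell:

```python
import os

# Must run before tokenizers is used in a process that later forks;
# "false" disables the Rust-side thread pool, silencing the warning.
os.environ["TOKENIZERS_PARALLELISM"] = "false"
```

Setting it to `"true"` also silences the warning but keeps parallelism enabled, which can deadlock after a fork, so `"false"` is the safer default in notebook environments.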
/bin/bash: nvdia-smi: command not found
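The "command not found" error is almost certainly a typo: the NVIDIA utility is spelled `nvidia-smi`, not `nvdia-smi`. A hedged sketch that checks for the binary before invoking it, so a missing driver fails gracefully instead of erroring (the `gpu_status` helper is illustrative, not part of any library):

```python
import shutil
import subprocess

def gpu_status() -> str:
    """Return nvidia-smi output, or a short note if the binary is absent."""
    if shutil.which("nvidia-smi") is None:  # note the spelling: nvidia, not nvdia
        return "nvidia-smi not found (no NVIDIA driver on this machine?)"
    result = subprocess.run(["nvidia-smi"], capture_output=True, text=True)
    return result.stdout

print(gpu_status())
```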
  adding: kaggle/working/ (stored 0%)
  adding: kaggle/working/test.csv (deflated 81%)
  adding: kaggle/working/trainer/ (stored 0%)
  adding: kaggle/working/trainer/README.md (deflated 48%)
  adding: kaggle/working/trainer/adapter_config.json (deflated 52%)
  adding: kaggle/working/trainer/checkpoint-118/ (stored 0%)
  adding: kaggle/working/trainer/checkpoint-118/rng_state.pth (deflated 25%)
  adding: kaggle/working/trainer/checkpoint-118/optimizer.pt (deflated 16%)
  adding: kaggle/working/trainer/checkpoint-118/README.md (deflated 66%)
  adding: kaggle/working/trainer/checkpoint-118/scheduler.pt (deflated 56%)
  adding: kaggle/working/trainer/checkpoint-118/adapter_config.json (deflated 52%)
  adding: kaggle/working/trainer/checkpoint-118/training_args.bin (deflated 51%)
  adding: kaggle/working/trainer/checkpoint-118/trainer_state.json (deflated 55%)
  adding: kaggle/working/trainer/checkpoint-118/adapter_model.safetensors (deflated 8%)
  adding: kaggle/working/trainer/checkpoint-472/ (stored 0%)
  adding: kaggle/working/trainer/checkpoint-472/rng_state.pth (deflated 25%)
  adding: kaggle/working/trainer/checkpoint-472/optimizer.pt (deflated 16%)
  adding: kaggle/working/trainer/checkpoint-472/README.md (deflated 66%)
  adding: kaggle/working/trainer/checkpoint-472/scheduler.pt (deflated 55%)
  adding: kaggle/working/trainer/checkpoint-472/adapter_config.json (deflated 52%)
  adding: kaggle/working/trainer/checkpoint-472/training_args.bin (deflated 51%)
  adding: kaggle/working/trainer/checkpoint-472/trainer_state.json (deflated 71%)
  adding: kaggle/working/trainer/checkpoint-472/adapter_model.safetensors (deflated 7%)
  adding: kaggle/working/trainer/checkpoint-236/ (stored 0%)
  adding: kaggle/working/trainer/checkpoint-236/rng_state.pth (deflated 25%)
  adding: kaggle/working/trainer/checkpoint-236/optimizer.pt (deflated 16%)
  adding: kaggle/working/trainer/checkpoint-236/README.md (deflated 66%)
  adding: kaggle/working/trainer/checkpoint-236/scheduler.pt (deflated 56%)
  adding: kaggle/working/trainer/checkpoint-236/adapter_config.json (deflated 52%)
  adding: kaggle/working/trainer/checkpoint-236/training_args.bin (deflated 51%)
  adding: kaggle/working/trainer/checkpoint-236/trainer_state.json (deflated 63%)
  adding: kaggle/working/trainer/checkpoint-236/adapter_model.safetensors (deflated 7%)
  adding: kaggle/working/trainer/training_args.bin (deflated 51%)
  adding: kaggle/working/trainer/checkpoint-354/ (stored 0%)
  adding: kaggle/working/trainer/checkpoint-354/rng_state.pth (deflated 25%)
  adding: kaggle/working/trainer/checkpoint-354/optimizer.pt (deflated 16%)
  adding: kaggle/working/trainer/checkpoint-354/README.md (deflated 66%)
  adding: kaggle/working/trainer/checkpoint-354/scheduler.pt (deflated 55%)
  adding: kaggle/working/trainer/checkpoint-354/adapter_config.json (deflated 52%)
  adding: kaggle/working/trainer/checkpoint-354/training_args.bin (deflated 51%)
  adding: kaggle/working/trainer/checkpoint-354/trainer_state.json (deflated 68%)
  adding: kaggle/working/trainer/checkpoint-354/adapter_model.safetensors (deflated 7%)
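The listing above matches the output of `zip -r` over `kaggle/working`, archiving the trainer checkpoints for download. A portable way to produce an equivalent archive from Python, guarded so it only runs when the directory exists (the `output` archive name is an assumption, not taken from the log):

```python
import os
import shutil

src = "kaggle/working"  # directory shown in the zip listing above
if os.path.isdir(src):
    # make_archive walks the tree recursively and appends ".zip" itself,
    # mirroring `zip -r output.zip kaggle/working`
    print(shutil.make_archive("output", "zip", root_dir=src))
```

`shutil.make_archive` avoids depending on the `zip` CLI being installed, which matters in minimal container images.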