2022-04-04 18:09:43 INFO Running runs: []
2022-04-04 18:09:43 INFO Agent received command: run
2022-04-04 18:09:43 INFO Agent starting run with config:
	dataset_cache_dir: /home/sanchitgandhi/cache/huggingface/datasets
	dataset_config_name: clean
	dataset_name: librispeech_asr
	eval_split_name: validation
	generation_max_length: 40
	generation_num_beams: 1
	gradient_accumulation_steps: 1
	learning_rate: 2.565346074198426e-05
	length_column_name: input_length
	logging_steps: 1
	matmul_precision: highest
	max_duration_in_seconds: 15
	max_target_length: 64
	min_duration_in_seconds: 15
	model_name_or_path: ./
	num_train_epochs: 5
	output_dir: ./
	per_device_eval_batch_size: 2
	per_device_train_batch_size: 2
	preprocessing_num_workers: 16
	text_column_name: text
	train_split_name: train.100
	wandb_project: flax-wav2vec2-2-bart-large-cnn
	warmup_steps: 500
2022-04-04 18:09:43 INFO About to run command: python3 run_flax_speech_recognition_seq2seq.py --overwrite_output_dir --freeze_feature_encoder --predict_with_generate --do_lower_case --do_train --do_eval --dataset_cache_dir=/home/sanchitgandhi/cache/huggingface/datasets --dataset_config_name=clean --dataset_name=librispeech_asr --eval_split_name=validation --generation_max_length=40 --generation_num_beams=1 --gradient_accumulation_steps=1 --learning_rate=2.565346074198426e-05 --length_column_name=input_length --logging_steps=1 --matmul_precision=highest --max_duration_in_seconds=15 --max_target_length=64 --min_duration_in_seconds=15 --model_name_or_path=./ --num_train_epochs=5 --output_dir=./ --per_device_eval_batch_size=2 --per_device_train_batch_size=2 --preprocessing_num_workers=16 --text_column_name=text --train_split_name=train.100 --wandb_project=flax-wav2vec2-2-bart-large-cnn --warmup_steps=500
2022-04-04 18:09:48 INFO Running runs: ['p4wqexfj']
2022-04-04 18:10:23 INFO Cleaning up finished run: p4wqexfj
2022-04-04 18:10:24 INFO Agent received command: run
2022-04-04 18:10:24 INFO Agent starting run with config:
	dataset_cache_dir: /home/sanchitgandhi/cache/huggingface/datasets
	dataset_config_name: clean
	dataset_name: librispeech_asr
	eval_split_name: validation
	generation_max_length: 40
	generation_num_beams: 1
	gradient_accumulation_steps: 1
	learning_rate: 0.0006871268347239357
	length_column_name: input_length
	logging_steps: 1
	matmul_precision: highest
	max_duration_in_seconds: 15
	max_target_length: 64
	min_duration_in_seconds: 15
	model_name_or_path: ./
	num_train_epochs: 5
	output_dir: ./
	per_device_eval_batch_size: 2
	per_device_train_batch_size: 2
	preprocessing_num_workers: 16
	text_column_name: text
	train_split_name: train.100
	wandb_project: flax-wav2vec2-2-bart-large-cnn
	warmup_steps: 500
2022-04-04 18:10:24 INFO About to run command: python3 run_flax_speech_recognition_seq2seq.py --overwrite_output_dir --freeze_feature_encoder --predict_with_generate --do_lower_case --do_train --do_eval --dataset_cache_dir=/home/sanchitgandhi/cache/huggingface/datasets --dataset_config_name=clean --dataset_name=librispeech_asr --eval_split_name=validation --generation_max_length=40 --generation_num_beams=1 --gradient_accumulation_steps=1 --learning_rate=0.0006871268347239357 --length_column_name=input_length --logging_steps=1 --matmul_precision=highest --max_duration_in_seconds=15 --max_target_length=64 --min_duration_in_seconds=15 --model_name_or_path=./ --num_train_epochs=5 --output_dir=./ --per_device_eval_batch_size=2 --per_device_train_batch_size=2 --preprocessing_num_workers=16 --text_column_name=text --train_split_name=train.100 --wandb_project=flax-wav2vec2-2-bart-large-cnn --warmup_steps=500
2022-04-04 18:10:29 INFO Running runs: ['mgg9caus']
2022-04-04 18:10:59 INFO Cleaning up finished run: mgg9caus
2022-04-04 18:10:59 INFO Agent received command: run
2022-04-04 18:10:59 INFO Agent starting run with config:
	dataset_cache_dir: /home/sanchitgandhi/cache/huggingface/datasets
	dataset_config_name: clean
	dataset_name: librispeech_asr
	eval_split_name: validation
	generation_max_length: 40
	generation_num_beams: 1
	gradient_accumulation_steps: 1
	learning_rate: 9.383495031304748e-05
	length_column_name: input_length
	logging_steps: 1
	matmul_precision: highest
	max_duration_in_seconds: 15
	max_target_length: 64
	min_duration_in_seconds: 15
	model_name_or_path: ./
	num_train_epochs: 5
	output_dir: ./
	per_device_eval_batch_size: 2
	per_device_train_batch_size: 2
	preprocessing_num_workers: 16
	text_column_name: text
	train_split_name: train.100
	wandb_project: flax-wav2vec2-2-bart-large-cnn
	warmup_steps: 500
2022-04-04 18:10:59 INFO About to run command: python3 run_flax_speech_recognition_seq2seq.py --overwrite_output_dir --freeze_feature_encoder --predict_with_generate --do_lower_case --do_train --do_eval --dataset_cache_dir=/home/sanchitgandhi/cache/huggingface/datasets --dataset_config_name=clean --dataset_name=librispeech_asr --eval_split_name=validation --generation_max_length=40 --generation_num_beams=1 --gradient_accumulation_steps=1 --learning_rate=9.383495031304748e-05 --length_column_name=input_length --logging_steps=1 --matmul_precision=highest --max_duration_in_seconds=15 --max_target_length=64 --min_duration_in_seconds=15 --model_name_or_path=./ --num_train_epochs=5 --output_dir=./ --per_device_eval_batch_size=2 --per_device_train_batch_size=2 --preprocessing_num_workers=16 --text_column_name=text --train_split_name=train.100 --wandb_project=flax-wav2vec2-2-bart-large-cnn --warmup_steps=500
2022-04-04 18:11:04 INFO Running runs: ['88xgr1fg']
2022-04-04 18:11:35 INFO Cleaning up finished run: 88xgr1fg
2022-04-04 18:11:35 INFO Agent received command: run
2022-04-04 18:11:35 INFO Agent starting run with config:
	dataset_cache_dir: /home/sanchitgandhi/cache/huggingface/datasets
	dataset_config_name: clean
	dataset_name: librispeech_asr
	eval_split_name: validation
	generation_max_length: 40
	generation_num_beams: 1
	gradient_accumulation_steps: 1
	learning_rate: 7.331199736432637e-05
	length_column_name: input_length
	logging_steps: 1
	matmul_precision: highest
	max_duration_in_seconds: 15
	max_target_length: 64
	min_duration_in_seconds: 15
	model_name_or_path: ./
	num_train_epochs: 5
	output_dir: ./
	per_device_eval_batch_size: 2
	per_device_train_batch_size: 2
	preprocessing_num_workers: 16
	text_column_name: text
	train_split_name: train.100
	wandb_project: flax-wav2vec2-2-bart-large-cnn
	warmup_steps: 500
2022-04-04 18:11:35 INFO About to run command: python3 run_flax_speech_recognition_seq2seq.py --overwrite_output_dir --freeze_feature_encoder --predict_with_generate --do_lower_case --do_train --do_eval --dataset_cache_dir=/home/sanchitgandhi/cache/huggingface/datasets --dataset_config_name=clean --dataset_name=librispeech_asr --eval_split_name=validation --generation_max_length=40 --generation_num_beams=1 --gradient_accumulation_steps=1 --learning_rate=7.331199736432637e-05 --length_column_name=input_length --logging_steps=1 --matmul_precision=highest --max_duration_in_seconds=15 --max_target_length=64 --min_duration_in_seconds=15 --model_name_or_path=./ --num_train_epochs=5 --output_dir=./ --per_device_eval_batch_size=2 --per_device_train_batch_size=2 --preprocessing_num_workers=16 --text_column_name=text --train_split_name=train.100 --wandb_project=flax-wav2vec2-2-bart-large-cnn --warmup_steps=500
2022-04-04 18:11:40 INFO Running runs: ['xmgtui21']
2022-04-04 18:12:15 INFO Cleaning up finished run: xmgtui21
2022-04-04 18:12:15 INFO Agent received command: run
2022-04-04 18:12:15 INFO Agent starting run with config:
	dataset_cache_dir: /home/sanchitgandhi/cache/huggingface/datasets
	dataset_config_name: clean
	dataset_name: librispeech_asr
	eval_split_name: validation
	generation_max_length: 40
	generation_num_beams: 1
	gradient_accumulation_steps: 1
	learning_rate: 0.0007642424770238645
	length_column_name: input_length
	logging_steps: 1
	matmul_precision: highest
	max_duration_in_seconds: 15
	max_target_length: 64
	min_duration_in_seconds: 15
	model_name_or_path: ./
	num_train_epochs: 5
	output_dir: ./
	per_device_eval_batch_size: 2
	per_device_train_batch_size: 2
	preprocessing_num_workers: 16
	text_column_name: text
	train_split_name: train.100
	wandb_project: flax-wav2vec2-2-bart-large-cnn
	warmup_steps: 500
2022-04-04 18:12:15 INFO About to run command: python3 run_flax_speech_recognition_seq2seq.py --overwrite_output_dir --freeze_feature_encoder --predict_with_generate --do_lower_case --do_train --do_eval --dataset_cache_dir=/home/sanchitgandhi/cache/huggingface/datasets --dataset_config_name=clean --dataset_name=librispeech_asr --eval_split_name=validation --generation_max_length=40 --generation_num_beams=1 --gradient_accumulation_steps=1 --learning_rate=0.0007642424770238645 --length_column_name=input_length --logging_steps=1 --matmul_precision=highest --max_duration_in_seconds=15 --max_target_length=64 --min_duration_in_seconds=15 --model_name_or_path=./ --num_train_epochs=5 --output_dir=./ --per_device_eval_batch_size=2 --per_device_train_batch_size=2 --preprocessing_num_workers=16 --text_column_name=text --train_split_name=train.100 --wandb_project=flax-wav2vec2-2-bart-large-cnn --warmup_steps=500
2022-04-04 18:12:20 INFO Running runs: ['4s004g1k']
2022-04-04 18:12:51 ERROR Detected 5 failed runs in a row, shutting down.
2022-04-04 18:12:51 INFO To change this value set WANDB_AGENT_MAX_INITIAL_FAILURES=val
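The agent above shut down after five consecutive failed runs, which is the behaviour the last log line says is tunable via the `WANDB_AGENT_MAX_INITIAL_FAILURES` environment variable. A minimal sketch of raising the threshold before relaunching the agent (the threshold value 20 and the sweep path are illustrative placeholders, not taken from this log):

```shell
# Raise the agent's consecutive-initial-failure limit before relaunching.
# The value 20 is an arbitrary example; pick what suits the sweep.
export WANDB_AGENT_MAX_INITIAL_FAILURES=20
echo "$WANDB_AGENT_MAX_INITIAL_FAILURES"

# Then restart the sweep agent, e.g. (entity and sweep ID are placeholders):
#   wandb agent <entity>/flax-wav2vec2-2-bart-large-cnn/<sweep_id>
```

Raising the limit is mainly useful while debugging a sweep whose first few configurations are expected to crash; once runs complete reliably, the default guard against systematically broken sweeps is worth keeping.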