sanchit-gandhi (HF staff) committed
Commit 2c4c266
Parent: 0323715

Training in progress, step 7000

pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3bf80a6ec1365db3b34e87993510e058554856decc7dc472686864b3400dda1b
+oid sha256:35e5bd64525504f987bff198c66c6674ce4fae90478689b6996b5c0ca9edaa62
 size 2353867057
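Both sides of this diff are Git LFS pointer files: the 2.35 GB checkpoint itself lives in LFS storage, and the repository only tracks its SHA-256 object ID and byte size, so the commit simply swaps one pointer for another. As a rough illustration (not part of this repo; the local path below is a placeholder), a downloaded checkpoint could be checked against the new pointer like this:

import hashlib
import os

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file from disk and return its hex SHA-256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Values copied from the new LFS pointer in this commit.
expected_oid = "35e5bd64525504f987bff198c66c6674ce4fae90478689b6996b5c0ca9edaa62"
expected_size = 2353867057

local_path = "pytorch_model.bin"  # hypothetical path to a local download
assert os.path.getsize(local_path) == expected_size, "size does not match the pointer"
assert sha256_of_file(local_path) == expected_oid, "sha256 does not match the pointer"
print("local checkpoint matches the LFS pointer from commit 2c4c266")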
runs/May04_13-30-49_sanchit--v100/events.out.tfevents.1651674089.sanchit--v100.50430.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ef28970dc86d514fd49af76f1a0cbc8aa26982ed36afe0787d0089976b034401
-size 1034119
+oid sha256:8b06624f1c35dcce61d3d0ea9ae9565b8a340cefb3e60f9c36c63fd2f617b6c9
+size 1112938
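The updated events.out.tfevents file is the TensorBoard event log for this run; it grows as new training steps are logged (here from 1,034,119 to 1,112,938 bytes). As a rough, hedged sketch (assuming the file has been pulled locally; the path below is just the repo-relative one), its logged scalars could be inspected with TensorBoard's EventAccumulator:

from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Repo-relative path to the event file updated in this commit.
event_path = "runs/May04_13-30-49_sanchit--v100/events.out.tfevents.1651674089.sanchit--v100.50430.0"

acc = EventAccumulator(event_path)
acc.Reload()  # parse the event file into memory

# List the scalar tags that were logged, then print the latest value of each.
for tag in acc.Tags().get("scalars", []):
    last = acc.Scalars(tag)[-1]
    print(f"{tag}: step={last.step} value={last.value:.4f}")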
wandb/debug-cli.log CHANGED
@@ -24,3 +24,7 @@
 warmup_steps: 500
 2022-05-04 13:30:45 INFO About to run command: python3 run_xtreme_s.py --overwrite_output_dir --freeze_feature_encoder --gradient_checkpointing --predict_with_generate --fp16 --group_by_length --do_train --do_eval --load_best_model_at_end --push_to_hub --use_auth_token --eval_split_name=test --eval_steps=500 --evaluation_strategy=steps --generation_max_length=40 --generation_num_beams=1 --gradient_accumulation_steps=8 --greater_is_better=True --hidden_dropout=0.17305159310134854 --language=fr.en --learning_rate=0.00012335092351490598 --logging_steps=1 --max_duration_in_seconds=20 --metric_for_best_model=bleu --model_name_or_path=./ --num_train_epochs=3 --output_dir=./ --per_device_eval_batch_size=8 --per_device_train_batch_size=8 --save_steps=500 --task=covost2 --warmup_steps=500
 2022-05-04 13:30:50 INFO Running runs: ['w4rlzz90']
+2022-05-05 09:24:36 ERROR 500 response executing GraphQL.
+2022-05-05 09:24:36 ERROR {"errors":[{"message":"context deadline exceeded"}]}
+2022-05-05 09:25:32 ERROR 500 response executing GraphQL.
+2022-05-05 09:25:32 ERROR {"errors":[{"message":"context deadline exceeded"}]}
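The logged command records the exact hyperparameters behind this checkpoint. As a rough, hedged sketch only (run_xtreme_s.py parses these flags itself, and script-specific options such as --task, --language, --hidden_dropout, --freeze_feature_encoder, --max_duration_in_seconds and --eval_split_name are handled inside the script and omitted here), the generic Trainer flags correspond approximately to a transformers Seq2SeqTrainingArguments like this:

from transformers import Seq2SeqTrainingArguments

# Approximation of the Trainer-level flags from the logged command; this is
# a sketch, not the exact object run_xtreme_s.py builds internally.
training_args = Seq2SeqTrainingArguments(
    output_dir="./",
    overwrite_output_dir=True,
    do_train=True,
    do_eval=True,
    evaluation_strategy="steps",
    eval_steps=500,
    save_steps=500,
    logging_steps=1,
    learning_rate=0.00012335092351490598,
    warmup_steps=500,
    num_train_epochs=3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,
    gradient_checkpointing=True,
    fp16=True,
    group_by_length=True,
    predict_with_generate=True,
    generation_max_length=40,
    generation_num_beams=1,
    load_best_model_at_end=True,
    metric_for_best_model="bleu",
    greater_is_better=True,
    push_to_hub=True,
)

The later ERROR lines are transient 500 / "context deadline exceeded" responses from the W&B GraphQL backend logged the following morning; they do not appear to have stopped run w4rlzz90 itself.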
wandb/run-20220504_142129-w4rlzz90/files/output.log CHANGED
The diff for this file is too large to render.
 
wandb/run-20220504_142129-w4rlzz90/files/wandb-summary.json CHANGED
The diff for this file is too large to render.
 
wandb/run-20220504_142129-w4rlzz90/logs/debug-internal.log CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5cdb327e598ac64e7cae39dd5893ee55f23e9c1b75a61b06dab03340b1b4f428
-size 15370139
+oid sha256:3ea0a01617a7f2eb83ebb75d6ae3d8f74fde57ef3d04708db9b2a7fed6f09209
+size 16339117
wandb/run-20220504_142129-w4rlzz90/run-w4rlzz90.wandb CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:14dcb30483696fe38a819a7b5e9ce6780618b8e452ecadf98358925f0cab8cd5
-size 690433983
+oid sha256:2fa087fc845449e6ec3b1f4a2594beb537798985e3c3711320ca4a897a9dfb19
+size 741708519