Because `predownload` was not specified, it will default to 8*batch_size if batch_size is not None, otherwise 64. Prior to Streaming v0.7.0, `predownload` defaulted to max(batch_size, 256 * batch_size // num_canonical_nodes).
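The rule this warning describes can be spelled out as a short sketch. The function name and version handling below are illustrative only, not streaming's actual internals:

```python
# A minimal sketch of the predownload default described in the warning above.
# `default_predownload` and the version tuple are hypothetical, for illustration.
def default_predownload(batch_size=None, num_canonical_nodes=1,
                        streaming_version=(0, 7, 0)):
    if streaming_version >= (0, 7, 0):
        # Streaming >= 0.7.0: 8 * batch_size if batch_size is given, else 64.
        return 8 * batch_size if batch_size is not None else 64
    # Pre-0.7.0 behavior: max(batch_size, 256 * batch_size // num_canonical_nodes).
    return max(batch_size, 256 * batch_size // num_canonical_nodes)
```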
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
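This is the standard Hugging Face transformers warning after new tokens are added to a tokenizer's vocabulary: the matching embedding rows do not exist in the pretrained checkpoint, so they are randomly initialized and must be trained. A minimal sketch of the usual remedy, assuming a transformers-style model; the `bert-base-uncased` checkpoint is a placeholder, while the tokenizer is the one used in this run:

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Tokenizer from the run below; the model checkpoint is a placeholder.
tokenizer = AutoTokenizer.from_pretrained("bclavie/olmo_bert_template")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Grow the embedding matrix to cover the added tokens; the new rows
# are randomly initialized and need fine-tuning or training.
model.resize_token_embeddings(len(tokenizer))
```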
Using local dataset /home/oweller/bert24-data/data/ProLong_text/code_repos/10-2...
Using tokenizer model bclavie/olmo_bert_template...
Using 1 process for tokenization, with 1000 items per process.
Processing samples:   0%|          | 0/11330 [00:00<?, ?it/s]
Processing samples:   9%|█         | 1000/11330 [00:35<06:06, 28.18it/s]
Processing samples:  18%|██        | 2000/11330 [00:55<04:04, 38.21it/s]
Processing samples:  26%|███       | 3000/11330 [01:10<02:58, 46.79it/s]
Processing samples:  35%|████      | 4000/11330 [01:32<02:37, 46.55it/s]
Processing samples:  44%|█████     | 5000/11330 [01:46<02:00, 52.69it/s]
Processing samples:  53%|██████    | 6000/11330 [01:59<01:29, 59.67it/s]
Processing samples:  62%|███████   | 7000/11330 [02:29<01:31, 47.39it/s]
Processing samples:  71%|███████   | 8000/11330 [02:51<01:11, 46.75it/s]
Processing samples:  79%|████████  | 9000/11330 [03:05<00:44, 52.03it/s]
Processing samples:  88%|█████████ | 10000/11330 [05:19<01:12, 18.35it/s]
Processing samples:  97%|██████████| 11000/11330 [05:40<00:14, 22.65it/s]
Processing samples: 100%|██████████| 11330/11330 [05:48<00:00, 32.50it/s]
Finished writing with a total of 152976561 train tokens.
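For scale, 152,976,561 tokens over 11,330 samples works out to roughly 13,502 tokens per sample on average.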