pnawrot committed
Commit: 8e952f4
Parent: f6edf67

Update README.md

Files changed (1): README.md (+1 -2)
README.md CHANGED

@@ -10,9 +10,8 @@ datasets:
   - allenai/c4
 ---
 
-[Google's T5-v1.1-base](https://huggingface.co/google/t5_v1_1-base) pre-trained for 24 hours (80k steps / 256 batch size) in the [nanoT5](https://github.com/PiotrNawrot/nanoT5) library for efficient pre-training.
+[Google's T5-v1.1-base](https://huggingface.co/google/t5_v1_1-base) pre-trained for 24 hours (80k steps / 256 batch size) on a single GPU in the [nanoT5](https://github.com/PiotrNawrot/nanoT5) library for efficient pre-training.
 
 For more details about the model, refer to the original [paper](https://arxiv.org/pdf/2002.05202.pdf) and the original [model weights](https://huggingface.co/google/t5_v1_1-base).
 
-This checkpoint was pre-trained on a single GPU for 20 hours.
 It can be further fine-tuned on the SuperNatural-Instructions dataset to achieve performance comparable to that of the same model pre-trained on 150x more data through "a combination of model and data parallelism [...] on slices of Cloud TPU Pods", each with 1024 TPUs.
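
Since the README above describes a standard T5 checkpoint hosted on the Hub, here is a minimal sketch of loading it with the Hugging Face `transformers` API. The repo id `pnawrot/nanoT5-base` is an assumption for illustration only (the card does not state the checkpoint's id); substitute the actual model id.

```python
# Minimal sketch of loading this checkpoint with Hugging Face transformers.
# NOTE: the repo id below is a hypothetical placeholder, not confirmed by
# the model card; substitute the actual checkpoint id.
from transformers import AutoTokenizer, T5ForConditionalGeneration

repo_id = "pnawrot/nanoT5-base"  # assumed id for illustration
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = T5ForConditionalGeneration.from_pretrained(repo_id)

# T5 v1.1 is pre-trained with span corruption only (no supervised mixing),
# so, as the card notes, it should be fine-tuned (e.g. on
# SuperNatural-Instructions) before use on downstream tasks. The span-
# corruption sentinel tokens below just exercise the raw pre-trained model.
inputs = tokenizer(
    "The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt"
)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```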