Tasks: Image-to-Text
Modalities: Text
Formats: webdataset
Languages: English
Size: 1K - 10K
License:
Update README.md
README.md CHANGED
@@ -260,9 +260,12 @@ Estimating the number of tokens is done using a `LlamaTokenizer` from `tokenizer
 ### Data Splits
 
 #### Train
-* `pdfa-eng-wds
+* `pdfa-eng-wds-{0000..1799}.tar`
 * Downloaded on 2024/01/22
-* 1800 shards
+* 1800 shards (approx 1200 docs/shard)
+* 2,159,432 samples
+* 18M pages
+* 9.7 billion tokens (around 5 billion words)
 
 ## Additional Information
 
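The new shard name is a brace-expansion pattern, so all 1800 shards can be streamed directly with the `webdataset` library. A minimal sketch, assuming the shards sit under a placeholder `base_url` (not given in this diff) and leaving the per-sample field names to inspection:

```python
import webdataset as wds

# Placeholder for wherever the .tar shards are hosted; not part of this diff.
base_url = "https://example.com/pdfa-eng-wds"

# The {0000..1799} brace range expands to all 1800 shard files from the README.
urls = f"{base_url}/pdfa-eng-wds-{{0000..1799}}.tar"

dataset = wds.WebDataset(urls)

for sample in dataset:
    # Field names vary by dataset; inspect the keys rather than assuming them.
    print(sample["__key__"], sorted(sample.keys()))
    break
```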
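The hunk context above also notes that token counts are estimated with a `LlamaTokenizer`. A hedged sketch of that kind of estimate, using a hypothetical tokenizer checkpoint since the diff does not show which files are actually loaded:

```python
from transformers import LlamaTokenizerFast

# Hypothetical checkpoint: the README says a LlamaTokenizer is used,
# but the exact tokenizer files are not visible in this diff.
tokenizer = LlamaTokenizerFast.from_pretrained("hf-internal-testing/llama-tokenizer")

text = "Example text extracted from one PDF page."
num_tokens = len(tokenizer(text)["input_ids"])
print(num_tokens)
```

The quoted totals are self-consistent at this rate: 9.7 billion tokens over roughly 5 billion words is about 2 tokens per word, a typical ratio for Llama-style tokenization of English text.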