---
license: other
license_name: pdfa-eng-wds
license_link: LICENSE
task_categories:
- image-to-text
The PDFA dataset is a document dataset filtered from the SafeDocs corpus, aka CC-MAIN-2021-31-PDF-UNTRUNCATED. The original purpose of that corpus is comprehensive analysis of pdf documents. The purpose of this subset differs in that regard: the focus has been on making the dataset machine-learning-ready for vision-language models.

<center>
<img src="https://huggingface.co/datasets/pixparse/pdfa-eng-wds/resolve/main/doc_images/Nexsen_pruet.png" alt="A brochure with visible bounding boxes for lines and words" width="600" height="300">
<p><em>An example page of one pdf document, with added bounding boxes around words (red), lines (blue) and embedded images (green).</em></p>
</center>

This instance of PDFA is in [webdataset](https://github.com/webdataset/webdataset/) .tar format and can be used with the `webdataset` library and derived forms of it.
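For illustration, here is a minimal sketch of reading downloaded shards with `webdataset` directly; the local path and the `{0000..1799}` shard numbering are assumptions to adjust to your own download:

```python
import json
import webdataset as wds

# Hypothetical local shard pattern (1800 train shards, see Data Splits below).
dataset = wds.WebDataset("/my_data/pdfa-eng-wds-{0000..1799}.tar")

for sample in dataset:
    pdf_bytes = sample["pdf"]          # raw pdf bytes of one document
    meta = json.loads(sample["json"])  # per-document annotation
    break
```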
### Usage with `chug`

Check out [chug](https://github.com/huggingface/chug), our optimized library for sharded dataset loading!
```python
import chug

task_cfg = chug.DataTaskDocReadCfg(
    page_sampling='all',
)
data_cfg = chug.DataCfg(
    source='pixparse/pdfa-eng-wds',
    split='train',
    batch_size=None,
    format='hfids',
    num_workers=0,
)
data_loader = chug.create_loader(
    data_cfg,
    task_cfg,
)
sample = next(iter(data_loader))
```
### Usage with `datasets`

This dataset can also be used with the `webdataset` library or current releases of Hugging Face `datasets`. Here is an example using the `streaming` parameter. We do recommend downloading the dataset to save bandwidth.
```python
from datasets import load_dataset

dataset = load_dataset('pixparse/pdfa-eng-wds', streaming=True)
print(next(iter(dataset['train'])).keys())
>> dict_keys(['__key__', '__url__', 'json', 'ocr', 'pdf', 'tif'])
```
For faster download, you can use the `huggingface_hub` library directly.
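A minimal sketch, assuming a plain `snapshot_download` call fits your setup; the destination directory is hypothetical:

```python
from huggingface_hub import snapshot_download

# Sketch only: fetch all shards of the dataset repo to a local directory.
snapshot_download(
    repo_id="pixparse/pdfa-eng-wds",
    repo_type="dataset",                # dataset (not model) repository
    local_dir="/my_data/pdfa-eng-wds",  # hypothetical destination path
)
```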
In a typical setting, the 1.5TB can be downloaded in approximately 4 hours.
Further, a metadata file `_pdfa-english-train-info-minimal.json` contains the list of samples per shard, with the same basename and `.json` or `.pdf` extension, as well as the count of files per shard.
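Since that file is plain JSON, it can be inspected directly; a small sketch (the exact schema is not reproduced here):

```python
import json

# Assumes only that the metadata file is valid JSON; the layout
# (samples per shard, per-shard file counts) is described above.
with open("_pdfa-english-train-info-minimal.json") as f:
    shard_info = json.load(f)
```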
```python
def get_columnar_separators(page, min_prominence=0.3, num_bins=10, kernel_width=1):
    ...
```

<center>
<img src="https://huggingface.co/datasets/pixparse/pdfa-eng-wds/resolve/main/doc_images/columnar_detection.png" alt="A graph of leftmost x positions in a 2-column document" width="600" height="300">
<p><em>A graph of leftmost x-positions of bounding boxes on a 2-column (arxiv) document. Peaks are visibly detected.</em></p>
</center>
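For intuition, here is a hedged sketch of this kind of peak detection; the helper below is illustrative (it assumes word boxes in `[x0, y0, x1, y1]` format) and is not the actual implementation:

```python
import numpy as np
from scipy.signal import find_peaks

def columnar_separators_sketch(word_boxes, min_prominence=0.3, num_bins=10, kernel_width=1):
    """Illustrative only: histogram leftmost x-positions, smooth, detect peaks."""
    x_left = np.array([box[0] for box in word_boxes])
    counts, edges = np.histogram(x_left, bins=num_bins)
    kernel = np.ones(kernel_width) / kernel_width
    smoothed = np.convolve(counts / counts.max(), kernel, mode="same")
    peaks, _ = find_peaks(smoothed, prominence=min_prominence)
    # bin centers of detected peaks approximate the columns' left edges
    return [(edges[i] + edges[i + 1]) / 2 for i in peaks]
```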
```python
from pdf2image import convert_from_bytes

pdf_first_page = convert_from_bytes(sample['pdf'], dpi=300, first_page=1, last_page=1)
```

<center>
<img src="https://huggingface.co/datasets/pixparse/pdfa-eng-wds/resolve/main/doc_images/pdf_first_page.png" alt="Rendering of an image for a Grade 8 lesson plan" width="400" height="600">
</center>

The metadata for each document has been formatted in this way. Each `pdf` is paired with a `json` file with the following structure. Entries have been shortened for readability.
Estimating the number of tokens is done using a `LlamaTokenizer` from `tokenizers`. There is a clear power-law distribution with respect to data length.

<center>
<img src="https://huggingface.co/datasets/pixparse/pdfa-eng-wds/resolve/main/doc_images/token_count_distribution.png" alt="A histogram of token count distribution per page" width="600" height="300">
<p><em>A histogram of token count distribution per page, taken from a subset of the dataset. There is a visible power law.</em></p>
</center>
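As a sketch of how such counts can be reproduced (the tokenizer checkpoint below is an assumption, not specified by the card):

```python
from transformers import LlamaTokenizer

# Assumed checkpoint; any Llama tokenizer should give comparable counts.
tokenizer = LlamaTokenizer.from_pretrained("huggyllama/llama-7b")

def count_tokens(page_text: str) -> int:
    # count only the page content, without special tokens
    return len(tokenizer(page_text, add_special_tokens=False)["input_ids"])
```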
259 |
|
260 |
### Data Splits
|
261 |
|
262 |
#### Train
|
263 |
+
* `pdfa-eng-wds-*.tar`
|
264 |
* Downloaded on 2024/01/22
|
265 |
* 1800 shards, 2,159,432 samples, 18M pages, 9.7 billion tokens (around 5 billion words)
|
266 |
|
### Disclaimer and note to researchers

This dataset is intended as an OCR-heavy pretraining basis for vision-language models. As a corpus, it does not represent the intent and purpose of CC-MAIN-2021-31-PDF-UNTRUNCATED. The original is made to represent extant pdf data in its diversity and complexity. In particular, common issues related to misuse of pdfs, such as mojibake (garbled text due to decoding errors), are yet to be addressed systematically, and this dataset presents simplifications that can hide such issues found in the wild. In order to address these biases, we recommend examining carefully both the simplified annotation and the original `pdf` data, beyond a simple rendering.

Further, the annotation is limited to what can be extracted and is readily available - text drawn in images and only present as a bitmap rendition might be missed entirely by said annotation.