The PDFA dataset is a document dataset filtered from the SafeDocs corpus, aka CC-MAIN-2021-31-PDF-UNTRUNCATED. The original purpose of that corpus is comprehensive file format analysis; this subset differs in that regard, as the focus has been on making the dataset machine-learning-ready.

<center>
<img src="https://huggingface.co/datasets/pixparse/pdfa-english-train/resolve/main/doc_images/Nexsen_pruet.png" alt="A brochure with visible bounding boxes for lines and words" width="600" height="300">
<p><em>An example page of one pdf document, with added bounding boxes per word/line plotted from the annotation.</em></p>
</center>

### Usage

This instance of PDFA is in [webdataset](https://github.com/webdataset/webdataset/commits/main) .tar format.
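As a minimal sketch of streaming it with the `webdataset` library (the shard URL pattern below is a placeholder, not the dataset's actual file naming):

```python
import webdataset as wds

# Placeholder shard pattern: check the repository file list for the real shard names.
urls = "https://huggingface.co/datasets/pixparse/pdfa-english-train/resolve/main/pdfa-eng-train-{000000..000009}.tar"

# Each sample is a dict keyed by file extension: raw pdf bytes under 'pdf',
# and the decoded annotation dict under 'json'.
dataset = wds.WebDataset(urls).decode()

for sample in dataset:
    print(sample.keys())
    break
```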
From the digital pdf files, we extracted the words, word bounding boxes, and image bounding boxes available in each pdf. This information is then reshaped into lines organized in reading order, stored under the key `lines`. We keep the non-reshaped word and bounding box information under the `word` key, should users want to apply their own heuristic.
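Purely as an illustration of accessing the two keys (the `pages` nesting here is an assumption; the real layout is shown in the metadata example at the end of this card):

```python
# Hypothetical nesting, for illustration only.
page_annotation = sample["json"]["pages"][0]  # assumes per-page annotations
lines = page_annotation["lines"]              # text reshaped into reading order
words = page_annotation["word"]               # raw words + bboxes, unreshaped
```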

We obtain an approximate reading order simply by looking at the frequency peaks of the leftmost word x-coordinate: a frequency peak means that a large number of lines start from the same point. We then keep track of the x-coordinate of each identified column. If no peaks are found, the document is assumed to be readable in plain, single-column format.
The code to detect columns is given below.
```python
def get_columnar_separators(page, min_prominence=0.3, num_bins=10, kernel_width=1):
    """
    ...
    """
    # ...
    return separators
```

<center>
<img src="https://huggingface.co/datasets/pixparse/pdfa-english-train/resolve/main/doc_images/columnar_detection.png" alt="A graph of leftmost x positions in a 2-columns document" width="600" height="300">
<p><em>A graph of leftmost x-positions of bounding boxes on a 2-column (arxiv) document. Peaks are visibly detected.</em></p>
</center>

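To make the abridged function concrete, here is a minimal sketch of the heuristic described above. The input format (a flat list of word boxes as `[xmin, ymin, xmax, ymax]`) and the use of scipy's smoothing and peak detection are assumptions for illustration, not the dataset's actual implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

def sketch_columnar_separators(word_bboxes, min_prominence=0.3, num_bins=10, kernel_width=1):
    # Columns show up as peaks in the histogram of leftmost x-coordinates.
    left_xs = [bbox[0] for bbox in word_bboxes]
    hist, bin_edges = np.histogram(left_xs, bins=num_bins)
    hist = gaussian_filter1d(hist.astype(float), sigma=kernel_width)
    # A peak must stand out by a fraction of the tallest peak to count as a column.
    peaks, _ = find_peaks(hist, prominence=min_prominence * hist.max())
    if len(peaks) < 2:
        return []  # no clear columns: treat the page as plain reading order
    # Place one separator midway between each pair of adjacent column peaks.
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    return [(centers[i] + centers[j]) / 2 for i, j in zip(peaks[:-1], peaks[1:])]
```

The returned x-coordinates can then be used to assign words to columns before ordering lines top-to-bottom within each column.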
For each pdf document, we store statistics on the file size, the number of words (counted as whitespace-separated tokens), the number of pages, as well as the rendering time of each page at a given dpi.

#### Filtering process

```python
from pdf2image import convert_from_bytes

# Render only the first page of the sample's pdf, at 300 dpi.
pdf_first_page = convert_from_bytes(sample['pdf'], dpi=300, first_page=1, last_page=1)
```
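Note that `convert_from_bytes` is the `pdf2image` helper; it returns a list of PIL images, one per page in the requested range, so `pdf_first_page` above is a one-element list holding the rendered first page.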

<center>
<img src="https://huggingface.co/datasets/pixparse/pdfa-english-train/resolve/main/doc_images/pdf_first_page.png" alt="Rendering of an image for a Grade 8 lesson plan" width="400" height="600">
</center>

The metadata for each document is formatted as follows: each `pdf` is paired with a `json` file with the structure below. Entries have been shortened for readability.