Molbap (HF staff) committed
Commit d0f62d5
1 parent: 4ee14df

Update README.md

Files changed (1): README.md (+10 -9)

README.md CHANGED
@@ -102,19 +102,16 @@ For each pdf document, we store statistics on the file size, number of words (as

File size and page rendering time are used to set thresholds in the final dataset: the goal is to remove files that are larger than 100 MB, or that take more than 500 ms to render on a modern machine, to optimize dataloading at scale. "Too large" or "too slow" files would add a burden to large-scale training pipelines, and we choose to filter them out in the current release. Finally, a full pass is made over the dataset, trying to open and decode a bytestream from each raw object and discarding any object (pdf/json pair) that fails to open, to remove corrupted data.

We get to 48 million pages kept as valid samples.
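The size and render-time thresholds above can be sketched as a simple predicate over per-file statistics. This is a minimal illustration, assuming hypothetical field names (`file_size_bytes`, `render_time_ms`); the dataset's actual metadata schema may differ.

```python
# Hedged sketch of the filtering step described above.
# Field names below are illustrative, not the dataset's actual schema.
MAX_BYTES = 100 * 1024 * 1024   # 100 MB file-size threshold
MAX_RENDER_MS = 500             # 500 ms page-rendering threshold

def keep_sample(stats: dict) -> bool:
    """Return True if a pdf/json pair passes both thresholds."""
    return (stats["file_size_bytes"] <= MAX_BYTES
            and stats["render_time_ms"] <= MAX_RENDER_MS)

samples = [
    {"file_size_bytes": 2_000_000, "render_time_ms": 120},   # kept
    {"file_size_bytes": 150_000_000, "render_time_ms": 90},  # too large
    {"file_size_bytes": 1_000_000, "render_time_ms": 900},   # too slow
]
kept = [s for s in samples if keep_sample(s)]
```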
As a last step, we use XLM-RoBERTa, specifically `papluca/xlm-roberta-base-language-detection`, to restrict the dataset to an English subset, classifying the first 512 words of the first page of each document.
Be aware that some documents may have several languages embedded in them, or that some predictions might be inaccurate. A majority of documents from the original corpus are in English.
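The language-restriction step can be sketched as follows: truncate to the first 512 words, then classify. The truncation helper is pure Python; the classifier call is shown commented because it requires downloading the `transformers` checkpoint, and its exact output format should be checked against the model card.

```python
# Hedged sketch of the language-detection step described above.
def first_n_words(text: str, n: int = 512) -> str:
    """Keep only the first n whitespace-separated words of a page."""
    return " ".join(text.split()[:n])

first_page_text = "word " * 1000  # stand-in for extracted first-page text
snippet = first_n_words(first_page_text)

# The actual classification (requires the `transformers` library):
# from transformers import pipeline
# detector = pipeline("text-classification",
#                     model="papluca/xlm-roberta-base-language-detection")
# prediction = detector(snippet, top_k=1)  # e.g. label "en" for English
```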
<center>
<img src="https://huggingface.co/datasets/pixparse/pdfa-english-train/resolve/main/doc_images/languages_pdfa_xlmroberta.png" alt="A histogram of languages count in the PDFA dataset." width="600" height="300">
<p><em>A histogram of the language distribution, taken on a fraction of the original (not filtered on language) PDFA dataset.</em></p>
</center>

At the end, each document exists as a pairing of a pdf and a json file containing extensive OCR annotations as well as metadata about rendering times. The filtering and packaging in webdataset format are tailored towards multimodal machine learning at scale, specifically image-to-text tasks.
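In webdataset-style shards, the pdf and json of one document share a common key and sit next to each other in a tar archive. Below is an illustrative grouping of such pairs by shared key (member and shard names are hypothetical); the usual `webdataset` loader call is shown commented since it requires the library and the actual shard URLs.

```python
# Illustrative sketch: pdf/json pairs grouped by their shared sample key,
# as in a webdataset tar shard. Names here are hypothetical.
from itertools import groupby

members = ["doc-000.pdf", "doc-000.json", "doc-001.pdf", "doc-001.json"]
pairs = {
    key: sorted(group)
    for key, group in groupby(sorted(members),
                              key=lambda name: name.rsplit(".", 1)[0])
}

# Typical consumption with the `webdataset` library (shard pattern illustrative):
# import webdataset as wds
# ds = wds.WebDataset("pdfa-train-{000000..000999}.tar").to_tuple("pdf", "json")
```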
 
  ### Data, metadata and statistics.
 
@@ -203,14 +200,18 @@ The top-level key, `pages`, is a list of every page in the document. The above e

For each page, `images_bbox` gives the bounding boxes of the images embedded in the page. `images_bbox_no_text_overlap` gives a reduced list of bounding boxes that have no overlap with the text found in the pdf. Text might still be present as a drawing or in another representation, however.
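The no-text-overlap reduction can be sketched as a rectangle-intersection test. This is a minimal illustration assuming `[x0, y0, x1, y1]` boxes; the dataset's actual box convention and helper names may differ.

```python
# Hedged sketch of reducing image boxes to those with no text overlap.
def boxes_overlap(a, b):
    """True if two [x0, y0, x1, y1] axis-aligned boxes intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def images_bbox_no_text_overlap(image_boxes, text_boxes):
    """Keep only image boxes that intersect no text box."""
    return [img for img in image_boxes
            if not any(boxes_overlap(img, txt) for txt in text_boxes)]

image_boxes = [[0, 0, 100, 100], [200, 200, 300, 300]]
text_boxes = [[50, 50, 150, 80]]  # overlaps the first image only
clean = images_bbox_no_text_overlap(image_boxes, text_boxes)
```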
 
`score` is a placeholder of value 1.0 for the entire dataset.
Such a formatting follows the multimodal dataset from the Industry Documents Library, `https://huggingface.co/datasets/pixparse/IDL-wds`.
Estimating the number of tokens is done using a `LlamaTokenizer` from `tokenizers`. There is a clear power-law distribution with respect to data length.

<center>
<img src="https://huggingface.co/datasets/pixparse/pdfa-english-train/resolve/main/doc_images/token_count_distribution.png" alt="A histogram of token count distribution per page" width="600" height="300">
<p><em>A histogram of the token count distribution per page, taken from a subset of the dataset. A power law is visible.</em></p>
</center>
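Per-page token counting can be sketched as below. The actual estimate uses a Llama tokenizer, shown commented since it requires downloading tokenizer files; a whitespace split stands in as a crude proxy here, and the checkpoint name in the comment is only one commonly used option.

```python
# Hedged sketch of per-page token counting as described above.
def count_tokens(text: str) -> int:
    # With the real tokenizer (requires `transformers` and tokenizer files):
    # from transformers import LlamaTokenizerFast
    # tok = LlamaTokenizerFast.from_pretrained("hf-internal-testing/llama-tokenizer")
    # return len(tok(text)["input_ids"])
    return len(text.split())  # crude whitespace-based stand-in

pages = ["short page", "a somewhat longer page of text here"]
token_counts = [count_tokens(p) for p in pages]
```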
  ### Data Splits
 
@@ -227,7 +228,7 @@ Pablo Montalvo, Ross Wightman

### Disclaimer and note to researchers

This dataset is intended as an OCR-heavy pretraining basis for vision-language models. As a corpus, it does not represent the intent and purpose of CC-MAIN-2021-31-PDF-UNTRUNCATED. The original is made to represent extant pdf data in its diversity and complexity. In particular, common issues related to misuse of pdfs, such as mojibake (garbled text due to decoding errors), are yet to be addressed systematically, and this dataset presents simplifications that can hide such issues found in the wild. To address these biases, we recommend examining carefully both the simplified annotation and the original `pdf` data, beyond a simple rendering.
 
Further, the annotation is limited to what can be extracted and is readily available - text drawn in images and only present as a bitmap rendition might be missed entirely by said annotation.
 
 