---
license: other
license_name: pdfa-eng-train
license_link: LICENSE
task_categories:
- image-to-text
size_categories:
- 10M<n<100M
---
# Dataset Card for PDF Association dataset (PDFA)

## Dataset Description

- **Point of Contact from curators:** [Peter Wyatt, PDF Association CTO](mailto:peter.wyatt@pdfa.org)
- **Point of Contact Hugging Face:** [Pablo Montalvo](mailto:pablo@huggingface.co)

### Dataset Summary

The PDFA dataset is a document dataset filtered from the SafeDocs corpus (aka CC-MAIN-2021-31-PDF-UNTRUNCATED), with 48 million pages kept as valid samples.
Each document exists as a pairing of a PDF and a JSON file containing extensive OCR annotations as well as metadata about rendering times. The filtering and packaging in
webdataset format are tailored towards multimodal machine learning at scale, specifically image-to-text tasks.

In this dataset, an additional filtering step restricts documents to the English language, yielding 18.6 million pages over 2.16 million documents.
Further, the metadata for each document has been formatted in the same way as [pixparse/IDL-wds](https://huggingface.co/datasets/pixparse/IDL-wds).
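
As a rough illustration of the PDF/JSON pairing, the sketch below renders the first page of a sample's PDF bytes and parses the accompanying annotations. This is a minimal sketch, assuming `pypdfium2` is installed for rendering; depending on the loader, the `json` field may arrive as raw bytes or as an already-decoded dict, so both cases are handled.

```python
import json

import pypdfium2 as pdfium  # assumption: pypdfium2 is used here purely for illustration


def decode_sample(sample):
    """Render page 1 of a sample's PDF and return it with the OCR/metadata annotations."""
    pdf = pdfium.PdfDocument(sample["pdf"])    # raw PDF bytes from the shard
    image = pdf[0].render(scale=2).to_pil()    # first page as a PIL image
    annotations = sample["json"]
    if isinstance(annotations, (bytes, str)):  # some loaders hand back raw bytes
        annotations = json.loads(annotations)
    return image, annotations
```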

### Usage

This instance of PDFA is in [webdataset](https://github.com/webdataset/webdataset/commits/main) .tar format.
It can be used with the webdataset library or with current releases of the Hugging Face `datasets` library, and can be streamed directly from the Hub.

```python
from datasets import load_dataset

pdfa_english = load_dataset('pixparse/pdfa-eng-train', streaming=True)

print(next(iter(pdfa_english['train'])).keys())
# dict_keys(['__key__', '__url__', 'json', 'pdf'])
```
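
For completeness, the same shards can also be streamed with the webdataset library directly. This is a minimal sketch: the brace-expanded shard URL, including the zero-padding of shard indices, is an assumption and should be checked against the repository's file listing.

```python
import webdataset as wds

# Assumption: 1800 shards named pdfa-eng-train-{index}.tar; verify the padding on the Hub.
url = ("https://huggingface.co/datasets/pixparse/pdfa-eng-train/resolve/main/"
       "pdfa-eng-train-{0000..1799}.tar")

dataset = wds.WebDataset(url)  # streams samples as dicts of raw bytes
sample = next(iter(dataset))
print(sample.keys())  # expected: __key__, __url__, json, pdf
```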

Further, a metadata file `_pdfa-english-train-info-minimal.json` contains the list of samples per shard (each sample sharing a basename across its `.json` and `.pdf` files),
as well as the count of files per shard.
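
The sketch below downloads that metadata file and inspects its top-level structure. Its exact schema is not documented here, so the printed structure should be treated as the source of truth before relying on any particular field.

```python
import json

from huggingface_hub import hf_hub_download  # assumption: huggingface_hub is installed

path = hf_hub_download(
    repo_id="pixparse/pdfa-eng-train",
    filename="_pdfa-english-train-info-minimal.json",
    repo_type="dataset",
)
with open(path) as f:
    info = json.load(f)

# Inspect the top-level type and size before assuming a schema.
print(type(info), len(info))
```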

### Data Splits


#### Train
* `pdfa-eng-train-*.tar`
* Downloaded on 2024/01/22
* 1800 shards, 2,159,433 samples, 18,686,346 pages, 5,997,818,991 words

## Additional Information

### Dataset Curators

Pablo Montalvo, Ross Wightman

### Licensing Information

Data has been filtered from the original corpus. As a consequence, users should note [Common Crawl's license and terms of use](https://commoncrawl.org/terms-of-use) and the [Digital Corpora project's Terms of Use](https://digitalcorpora.org/about-digitalcorpora/terms-of-use/).

### Citation Information
??