Molbap (HF staff) committed on
Commit a6f6692
Parent(s): 1c32f70

Update README.md

Files changed (1): README.md (+30 −56)

README.md CHANGED
 
</center>

This instance of IDL is in [webdataset](https://github.com/webdataset/webdataset/commits/main) .tar format.

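To get a sense of the shard layout, you can list the `.tar` files directly on the Hub with `huggingface_hub`'s `HfFileSystem`. A minimal sketch, assuming the shards sit at the top level of the repository (adjust the glob otherwise):

```python
from huggingface_hub import HfFileSystem

# Sketch: list the .tar shards of this dataset on the Hub.
# The "*.tar" glob assumes shards sit at the repo root.
fs = HfFileSystem()
shards = fs.glob("datasets/pixparse/IDL-wds/*.tar")
print(len(shards), shards[:2])
```
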
### Usage with `chug`
Check out [chug](https://github.com/huggingface/chug), our optimized library for sharded dataset loading!

```python
import chug

task_cfg = chug.DataTaskDocReadCfg(page_sampling='all')
data_cfg = chug.DataCfg(
    source='pixparse/IDL-wds',
    split='train',
    batch_size=None,
    format='hfids',
    num_workers=0,
)
data_loader = chug.create_loader(
    data_cfg,
    task_cfg,
)
sample = next(iter(data_loader))
```

### Usage with datasets

This dataset can also be used with the webdataset library (see the sketch after the next example) or with current releases of Hugging Face `datasets`.
Here is an example using the `streaming` parameter. We do recommend downloading the dataset to save bandwidth.

```python
from datasets import load_dataset

dataset = load_dataset('pixparse/IDL-wds', streaming=True)
print(next(iter(dataset['train'])).keys())
# >> dict_keys(['__key__', '__url__', 'json', 'ocr', 'pdf', 'tif'])
```

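For a plain webdataset pipeline over locally downloaded shards, a minimal sketch could look as follows; the shard filename pattern is hypothetical, so adjust it to the actual files:

```python
import json

import webdataset as wds

# Sketch: iterate raw samples from local shards with the webdataset library.
# The brace pattern below is illustrative, not the actual shard naming.
dataset = wds.WebDataset("idl-train-{000000..000009}.tar").to_tuple("pdf", "json")
pdf_bytes, json_bytes = next(iter(dataset))
info = json.loads(json_bytes)  # without an explicit .decode(...), fields arrive as raw bytes
```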
 
For faster download, you can directly use the `huggingface_hub` library. Make sure `hf_transfer` is installed prior to downloading, and mind that you have enough space locally.

```python
import os

os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import HfApi, logging

# logging.set_verbosity_debug()
hf = HfApi()
hf.snapshot_download("pixparse/IDL-wds", repo_type="dataset", local_dir_use_symlinks=False)
```

Further, a metadata file `_pdfa-english-train-info-minimal.json` contains the list of samples per shard, with the same basename and `.json` or `.pdf` extension, as well as the count of files per shard.

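As an illustration, the per-shard entries can be inspected with plain `json`; this sketch assumes the file maps shard names to their sample lists and counts, so check the file itself for the exact layout:

```python
import json

# Sketch: peek at the first few entries of the shard metadata file.
# The exact layout (shard name -> samples/counts) is an assumption.
with open("_pdfa-english-train-info-minimal.json") as f:
    info = json.load(f)

for shard, entry in list(info.items())[:3]:
    print(shard, entry)
```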
 
 
For each pdf document, we store statistics on the number of pages per shard and the number of valid samples per shard. A valid sample is one that can be encoded and then decoded, which we verified for each sample.

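A comparable round-trip check can be sketched with `pypdf`; this is an illustration of the idea, not the exact validation used to build the dataset:

```python
from io import BytesIO

from pypdf import PdfReader

def is_valid_pdf(pdf_bytes: bytes) -> bool:
    # Sketch of an encode-then-decode validity check: a sample counts as
    # valid if its PDF bytes parse back into at least one page.
    try:
        return len(PdfReader(BytesIO(pdf_bytes)).pages) > 0
    except Exception:
        return False
```
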
  ### Data, metadata and statistics.
  <center>
  <img src="https://huggingface.co/datasets/pixparse/IDL-wds/resolve/main/doc_images/idl_page_example.png" alt="An addendum from an internal legal document" width="600" height="300">