davanstrien committed
Commit 3a03045
1 Parent(s): 67c5a1d

Update README.md

Files changed (1)
  1. README.md +20 -2
README.md CHANGED
@@ -173,8 +173,26 @@ size_categories:
 
 ### Citation Information
 
-[More Information Needed]
+```bibtex
+@inproceedings{10.1145/3340531.3412767,
+author = {Lee, Benjamin Charles Germain and Mears, Jaime and Jakeway, Eileen and Ferriter, Meghan and Adams, Chris and Yarasavage, Nathan and Thomas, Deborah and Zwaard, Kate and Weld, Daniel S.},
+title = {The Newspaper Navigator Dataset: Extracting Headlines and Visual Content from 16 Million Historic Newspaper Pages in Chronicling America},
+year = {2020},
+isbn = {9781450368599},
+publisher = {Association for Computing Machinery},
+address = {New York, NY, USA},
+url = {https://doi.org/10.1145/3340531.3412767},
+doi = {10.1145/3340531.3412767},
+abstract = {Chronicling America is a product of the National Digital Newspaper Program, a partnership between the Library of Congress and the National Endowment for the Humanities to digitize historic American newspapers. Over 16 million pages have been digitized to date, complete with high-resolution images and machine-readable METS/ALTO OCR. Of considerable interest to Chronicling America users is a semantified corpus, complete with extracted visual content and headlines. To accomplish this, we introduce a visual content recognition model trained on bounding box annotations collected as part of the Library of Congress's Beyond Words crowdsourcing initiative and augmented with additional annotations including those of headlines and advertisements. We describe our pipeline that utilizes this deep learning model to extract 7 classes of visual content: headlines, photographs, illustrations, maps, comics, editorial cartoons, and advertisements, complete with textual content such as captions derived from the METS/ALTO OCR, as well as image embeddings. We report the results of running the pipeline on 16.3 million pages from the Chronicling America corpus and describe the resulting Newspaper Navigator dataset, the largest dataset of extracted visual content from historic newspapers ever produced. The Newspaper Navigator dataset, finetuned visual content recognition model, and all source code are placed in the public domain for unrestricted re-use.},
+booktitle = {Proceedings of the 29th ACM International Conference on Information \& Knowledge Management},
+pages = {3055–3062},
+numpages = {8},
+keywords = {digital humanities, dataset, chronicling america, newspaper navigator, document analysis, information retrieval, digital libraries and archives, public domain, historic newspapers},
+location = {Virtual Event, Ireland},
+series = {CIKM '20}
+}
+```
 
 ### Contributions
 
-Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
+Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.