{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:07:25.833520Z" }, "title": "Word-Level Alignment of Paper Documents with their Electronic Full-Text Counterparts", "authors": [ { "first": "Mark-Christoph", "middle": [], "last": "M\u00fcller", "suffix": "", "affiliation": { "laboratory": "", "institution": "Heidelberg Institute for Theoretical Studies gGmbH", "location": { "settlement": "Heidelberg", "country": "Germany" } }, "email": "mark-christoph.mueller@h-its.org" }, { "first": "Sucheta", "middle": [], "last": "Ghosh", "suffix": "", "affiliation": { "laboratory": "", "institution": "Heidelberg Institute for Theoretical Studies gGmbH", "location": { "settlement": "Heidelberg", "country": "Germany" } }, "email": "sucheta.ghosh@h-its.org" }, { "first": "Ulrike", "middle": [], "last": "Wittig", "suffix": "", "affiliation": { "laboratory": "", "institution": "Heidelberg Institute for Theoretical Studies gGmbH", "location": { "settlement": "Heidelberg", "country": "Germany" } }, "email": "ulrike.wittig@h-its.org" }, { "first": "Maja", "middle": [], "last": "Rey", "suffix": "", "affiliation": { "laboratory": "", "institution": "Heidelberg Institute for Theoretical Studies gGmbH", "location": { "settlement": "Heidelberg", "country": "Germany" } }, "email": "maja.rey@h-its.org" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We describe a simple procedure for the automatic creation of word-level alignments between printed documents and their respective full-text versions. The procedure is unsupervised, uses standard, off-the-shelf components only, and reaches an F-score of 85.01 in the basic setup and up to 86.63 when using pre-and post-processing. Potential areas of application are manual database curation (incl. document triage) and biomedical expression OCR.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We describe a simple procedure for the automatic creation of word-level alignments between printed documents and their respective full-text versions. The procedure is unsupervised, uses standard, off-the-shelf components only, and reaches an F-score of 85.01 in the basic setup and up to 86.63 when using pre-and post-processing. Potential areas of application are manual database curation (incl. document triage) and biomedical expression OCR.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Even though most research literature in the life sciences is born-digital nowadays, manual data curation (International Society for Biocuration, 2018) from these documents still often involves paper. For curation steps that require close reading and markup of relevant sections, curators frequently rely on paper printouts and highlighter pens (Venkatesan et al., 2019) . Figure 1a shows a page of a typical document used for manual curation. The potential reasons for this can be as varied as merely sticking to a habit, ergonomic issues related to reading from and interacting with a device, and functional limitations of that device (Buchanan and Loizides, 2007; K\u00f6pper et al., 2016; Clinton, 2019) . Whatever the reason, the consequence is a two-fold media break in many manual curation workflows: first from electronic format (either PDF or full-text XML) to paper, and then back from paper to the electronic format of the curation database. 
Given the above arguments in favor of paper-based curation, removing the first media break from the curation workflow does not seem feasible. Instead, we propose to bridge the gap between paper and electronic media by automatically creating an alignment between the words on the printed document pages and their counterparts in an electronic full-text version of the same document.", "cite_spans": [ { "start": 344, "end": 369, "text": "(Venkatesan et al., 2019)", "ref_id": "BIBREF19" }, { "start": 636, "end": 665, "text": "(Buchanan and Loizides, 2007;", "ref_id": "BIBREF2" }, { "start": 666, "end": 686, "text": "K\u00f6pper et al., 2016;", "ref_id": "BIBREF11" }, { "start": 687, "end": 701, "text": "Clinton, 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 372, "end": 381, "text": "Figure 1a", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our approach works as follows: We automatically create machine-readable versions of printed paper documents (which might or might not contain markup) by scanning them, applying optical character recognition (OCR), and converting the resulting semi-structured OCR output text into a flexible XML format for further processing. For this, we use the multilevel XML format of the annotation tool MMAX2 1 (M\u00fcller and Strube, 2006). We retrieve electronic full-text counterparts of the scanned paper documents from PubMedCentral \u00ae in .nxml format 2, and also convert them into MMAX2 format. By using a shared XML format for the two heterogeneous text sources, we can capture their content and structural information in a way that provides a compatible, though often not identical, word-level tokenization. Finally, using a sequence alignment algorithm from bioinformatics and some pre- and post-processing, we create a word-level alignment of both documents (a minimal sketch of this step is given below). Aligning words from OCR and full-text documents is challenging for several reasons. The OCR output contains various types of recognition errors, many of which involve special symbols, Greek letters like \u00b5 or sub- and superscript characters and numbers, which are particularly frequent in chemical names, formulae, and measurement units, and which are notoriously difficult for OCR (Ohyama et al., 2019). If the printed paper document is based on PDF, it usually has an explicit page layout, which is different from the way the corresponding full-text XML document is displayed in a web browser. Differences include double- vs. single-column layout, but also the way in which tables and figures are rendered and positioned. Finally, printed papers might contain additional content in headers or footers (e.g. download timestamps). Also, while the references/bibliography section is an integral part of a printed paper and will be covered by OCR, in XML documents it is often structurally kept apart from the actual document text. Given these challenges, attempting data extraction from document images may seem unreasonable if the documents are already available in PDF or even full-text format. We see, however, the following useful applications: 1. Manual Database Curation: As mentioned above, manual database curation requires the extraction, normalization, and database insertion of scientific content, often from paper documents. 
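To make the core alignment step concrete before returning to the use cases, the following is a minimal sketch of a Needleman-Wunsch-style global alignment over word tokens. The scoring scheme and function names are ours, chosen for illustration only; the actual procedure, including its pre- and post-processing, is described in Section 3.

```python
# Minimal global alignment of two word-token sequences in the style of
# Needleman-Wunsch. A simplified sketch: scores and exact string
# equality are illustrative assumptions, not the paper's implementation.

def align(a, b, match=2, mismatch=-1, gap=-1):
    n, m = len(a), len(b)
    # score[i][j] = best score for aligning a[:i] with b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + sub,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    # Traceback; None marks a token without a counterpart (a gap).
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        sub = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + sub:
            pairs.append((a[i - 1], b[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            pairs.append((a[i - 1], None))
            i -= 1
        else:
            pairs.append((None, b[j - 1]))
            j -= 1
    return pairs[::-1]

# OCR tokens vs. full-text tokens; 'rnM' simulates a misrecognized 'mM'.
print(align("50 rnM Tris buffer".split(), "50 mM Tris buffer".split()))
```

In the example call, the OCR token 'rnM' (a plausible misrecognition of 'mM') is aligned to its full-text counterpart as a substitution rather than as two gaps.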
Given a paper document in which a human expert curator has manually marked a word or sequence of words for insertion into the database, having a link from these words to their electronic counterparts can eliminate or at least reduce error-prone and time-consuming steps like manual re-keying. Also, already existing annotations of the electronic full-text 3 would be accessible and could be used to inform the curation decision or to supplement the database entry.", "cite_spans": [ { "start": 400, "end": 425, "text": "(M\u00fcller and Strube, 2006)", "ref_id": "BIBREF12" }, { "start": 1334, "end": 1355, "text": "(Ohyama et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Database curation candidate papers are identified by a process called document triage (Buchanan and Loizides, 2007; Hirschman et al., 2012) which, despite some attempts towards automation (e.g. Wang et al. (2020)), remains a mostly manual process. In a nutshell, triage normally involves querying a literature database (like PubMed 4) for specific terms, skimming the list of search results, selecting and skim-reading some papers, and finally downloading and printing the PDF versions of the most promising ones for curation (Venkatesan et al., 2019). Here, the switch from searching in the electronic full-text (or abstract) to printing the PDF brings about a loss of information, because the terms that caused the paper to be retrieved will have to be located again in the print-out. A word-level alignment between the full-text and the PDF version would make it possible to create an enhanced version of the PDF with highlighted search term occurrences before printing. 3. Biomedical Expression OCR: Current state-of-the-art OCR systems are very accurate at recognizing standard text using Latin script and baseline typography, but, as already mentioned, they are less reliable for more typographically complex expressions like chemical formulae. In order to develop specialized OCR systems for these types of expressions, ground-truth data is required in which image regions containing these expressions are labelled with the correct characters and their positional information (see also Section 5). If aligned documents are available, this type of data can easily be created at a large scale.", "cite_spans": [ { "start": 93, "end": 122, "text": "(Buchanan and Loizides, 2007;", "ref_id": "BIBREF2" }, { "start": 123, "end": 146, "text": "Hirschman et al., 2012)", "ref_id": "BIBREF8" }, { "start": 201, "end": 219, "text": "Wang et al. (2020)", "ref_id": "BIBREF20" }, { "start": 536, "end": 561, "text": "(Venkatesan et al., 2019)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Automatic PDF Highlighting for Manual Triage", "sec_num": "2." }, { "text": "The remainder of this paper is structured as follows. In Section 2, we describe our data set and how it was converted into the shared XML format. Section 3 deals with the actual alignment procedure, including a description of the optional pre- and post-processing measures. In Section 4, we present experiments in which we evaluate the performance of the implemented procedure, including an ablation of the effects of the individual pre- and post-processing measures. Quantitative evaluation alone, however, does not convey a realistic idea of the actual usefulness of the procedure, which ultimately needs to be evaluated in the context of real applications including, but not limited to, database curation. 
Section 4.2, therefore, briefly presents examples of the alignment and highlighting detection functionality and the biomedical expression OCR use case mentioned above. Section 5 discusses relevant related work, and Section 6 summarizes and concludes the paper with some future work. All the tools and libraries we use are freely available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic PDF Highlighting for Manual Triage", "sec_num": "2." }, { "text": "In addition, our implementation can be found at https://github.com/nlpAThits/BioNLP2021.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic PDF Highlighting for Manual Triage", "sec_num": "2." }, { "text": "For the alignment of a paper document with its electronic full-text counterpart, what is minimally required is an image of every page of the document and a full-text XML file of the same document. The document images can either be created by scanning or by directly converting the corresponding PDF into an image. The latter method will probably yield images of a better quality, because it completely avoids the physical printing and subsequent scanning step, while the output of the former method will be more realistic. We experiment with both types of images (see Section 2.1). We identify a document by its DOI, and refer to the different versions as DOI xml (from the full-text XML), DOI conv, and DOI scan. Whenever a distinction between DOI conv and DOI scan is not required, we refer to these versions collectively as DOI ocr. Printable PDF documents and their associated .nxml files are readily available at PMC-OAI. 5 In our case, however, printed paper versions were already available, as we have access to a collection of more than 6,000 printed scientific papers (approx. 30,000 pages in total) that were created in the SABIO-RK 6 Biochemical Reaction Kinetics Database project (Wittig et al., 2017, 2018). These papers contain manual highlighter markup at different levels of granularity, including the word, line, and section level. Transferring this type of markup from printed paper to the electronic medium is one of the key applications of our alignment procedure. Our paper collection spans many publication years and venues. For our experiments, however, it was required that each document was freely available both as PubMedCentral \u00ae full-text XML and as PDF. While this leaves only a small fraction of the collection (currently 68 papers), it is still sufficient to demonstrate the feasibility of our procedure. Even more importantly, the procedure is unsupervised, i.e. it does not involve learning and does not require any training data.", "cite_spans": [ { "start": 931, "end": 932, "text": "5", "ref_id": null }, { "start": 1196, "end": 1216, "text": "(Wittig et al., 2017", "ref_id": "BIBREF22" }, { "start": 1217, "end": 1239, "text": "(Wittig et al., , 2018", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "Since we want to compare downstream effects of input images of different quality, we created both a converted and a scanned image version for every document in our data set. For the DOI conv version, we used pdftocairo to create a high-resolution (600 DPI) PNG file for every PDF page. Figure 1c shows an example. The DOI scan versions, on the other hand, were extracted from 'sandwich' PDFs which had been created earlier by a professional scanning service provider. 
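To make these conversion steps concrete, the following sketch shows plausible invocations of the two tools via Python's subprocess module. File names are hypothetical; the tesseract call anticipates the recognition settings and hOCR output format described in the next paragraphs, and the hocr_char_boxes parameter is our assumption about how character-level bounding boxes can be obtained.

```python
# Sketch of the page-image and OCR steps (Section 2.1). File names are
# hypothetical; pdftocairo (Poppler) and tesseract 4.x are assumed to
# be installed and on the PATH.
import subprocess

# DOI_conv: one high-resolution (600 DPI) PNG per PDF page
# (produces page-1.png, page-2.png, ...).
subprocess.run(["pdftocairo", "-png", "-r", "600", "paper.pdf", "page"],
               check=True)

# OCR with default recognition settings and hOCR output. Requesting
# character-level bounding boxes via hocr_char_boxes is an assumption
# about the exact configuration used.
subprocess.run(["tesseract", "page-1.png", "page-1",
                "--oem", "3", "--psm", "3",
                "-c", "hocr_char_boxes=1", "hocr"],
               check=True)
```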
The choice of a service provider for this task was motivated only by the large number of pages to process, not by expected quality or other considerations. A sandwich PDF contains, among other data, the document plain text (as recognized by the provider's OCR software) and a background image for each page. This background image is a by-product of the OCR process in which pixels that were recognized as parts of a character are inpainted, i.e. removed by being overwritten with colors of neighbouring regions. Figure 1b shows the background image corresponding to the page in Figure 1a. Note how the image retains the highlighting. We used pdfimages to extract the background images (72 DPI) from the sandwich PDF for use in highlighting extraction (see Section 2.1.1 below). We refer to these versions as DOI scan_bg. For the actual DOI scan versions, we again used pdftocairo to create a high-resolution (600 DPI) PNG file for every scanned page. OCR was then performed on the DOI conv and the DOI scan versions with tesseract 4.1.1 7, using default recognition settings (-oem 3 -psm 3) and specifying hOCR 8 with character-level bounding boxes as output format. In order to maximize recognition accuracy (at the expense of processing speed), the default language models for English were replaced with optimized LSTM models 9. No other modification or re-training of tesseract was performed. In a final step, the hOCR output from both image versions was converted into the MMAX2 (M\u00fcller and Strube, 2006) multilevel XML annotation format, using words as tokenization granularity, and storing word- and character-level confidence scores and bounding boxes as MMAX2 attributes. 10", "cite_spans": [ { "start": 1959, "end": 1984, "text": "(M\u00fcller and Strube, 2006)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 286, "end": 295, "text": "Figure 1c", "ref_id": "FIGREF0" }, { "start": 984, "end": 993, "text": "Figure 1b", "ref_id": "FIGREF0" }, { "start": 1050, "end": 1059, "text": "Figure 1a", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Document Image to Multilevel XML", "sec_num": "2.1" }, { "text": "Highlighting detection and subsequent extraction can be performed if the scanned paper documents contain manual markup. In its current state, the detection procedure described in the following requires inpainted OCR background images which, in our case, were produced by the third-party OCR software used by the scanning service provider. tesseract, on the other hand, does not produce these images. While it would be desirable to employ free software only, this fact does not severely limit the usefulness of our procedure, because 1) other software (either free or commercial) with the same functionality might exist, and 2) even for document collections of medium size, employing an external service provider might be the most economical solution, even in academic/research settings. What is more, inpainted backgrounds are only required if highlighting detection is desired: for text-only alignment, plain scans are sufficient. 7 https://github.com/tesseract-ocr/tesseract 8 http://kba.cloud/hocr-spec/1.2/ 9 https://github.com/tesseract-ocr/tessdata_best", "cite_spans": [ { "start": 941, "end": 942, "text": "7", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Highlighting Detection", "sec_num": "2.1.1" }, { "text": "10 See the lower part of Figure A.1 in the Appendix. The actual highlighting extraction works as follows (see M\u00fcller et al. 
(2020) for details): Since document highlighting comes mostly in strong colors, which are characterized by large differences among their three component values in the RGB color model, we create a binarized version of each page by going over all pixels in the background image and setting each pixel to 1 if the pairwise differences between the R, G, and B components are above a certain threshold (50), and to 0 otherwise. This yields an image with regions of higher and lower density of black pixels. In the final step, we iterate over the word-level tokens created from the hOCR output and converted into MMAX2 format earlier, compute for each word its degree of highlighting as the percentage of black pixels in the word's bounding box, and store that percentage value as another MMAX2 attribute if it is at least 50%. An example result will be presented in Section 4.2.", "cite_spans": [ { "start": 111, "end": 131, "text": "M\u00fcller et al. (2020)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 25, "end": 33, "text": "Figure A", "ref_id": null } ], "eq_spans": [], "section": "Highlighting Detection", "sec_num": "2.1.1" }, { "text": "The .nxml format employed for PubMedCentral \u00ae full-text documents uses the JATS scheme 11 which supports a rich metadata model, only a fraction of which is of interest for the current task. In principle, however, all information contained in JATS-conformant documents can also be represented in the multilevel XML format of MMAX2. The .nxml data provides precise information about both the textual content (including correctly encoded special characters) and its word- and section-level layout. At present, we only extract content from the <body> section and from the <back> section. 11 https://jats.nlm.nih.gov/archiving/
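As an illustration of this kind of access, the following minimal sketch extracts word tokens from the <body> of a JATS .nxml file. The file name is hypothetical, and the whitespace tokenization is a simplification of the actual conversion.

```python
# Minimal sketch of reading textual content from a JATS .nxml file.
# <body> and <p> are standard JATS elements; "paper.nxml" is hypothetical.
import xml.etree.ElementTree as ET

root = ET.parse("paper.nxml").getroot()
body = root.find(".//body")
for paragraph in body.iter("p"):
    # itertext() preserves correctly encoded special characters
    text = "".join(paragraph.itertext())
    tokens = text.split()  # word-level tokenization (simplified)
    print(tokens[:10])
```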

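Finally, the binarization and per-word highlighting computation of Section 2.1.1 can be sketched as follows. This is a sketch under one plausible reading of the "pairwise differences" rule (the largest pairwise difference is tested against the threshold); the file name, word token, and bounding box are hypothetical, and in practice the 600 DPI hOCR coordinates must be scaled to the 72 DPI background images.

```python
# Sketch of highlighting detection (Section 2.1.1): binarize the
# inpainted background image, then compute each word's highlighting
# degree as the percentage of marked pixels in its bounding box.
import numpy as np
from PIL import Image

THRESHOLD = 50     # RGB component-difference threshold (Section 2.1.1)
MIN_DEGREE = 50.0  # minimum highlighting percentage per word

def binarize(background_png, threshold=THRESHOLD):
    rgb = np.asarray(Image.open(background_png).convert("RGB")).astype(np.int16)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # One plausible reading: a pixel is "strongly colored" if the largest
    # pairwise difference between R, G, and B exceeds the threshold.
    diff = np.maximum(np.abs(r - g), np.maximum(np.abs(r - b), np.abs(g - b)))
    return (diff > threshold).astype(np.uint8)

def highlighting_degree(mask, bbox):
    """Percentage of 1-pixels inside a word's (x0, y0, x1, y1) box."""
    x0, y0, x1, y1 = bbox
    region = mask[y0:y1, x0:x1]
    return 100.0 * region.mean() if region.size else 0.0

mask = binarize("page-1_bg.png")  # hypothetical inpainted background image
for word, bbox in [("glucose", (120, 340, 210, 365))]:  # word boxes from hOCR
    degree = highlighting_degree(mask, bbox)
    if degree >= MIN_DEGREE:  # would be stored as an MMAX2 attribute
        print(word, round(degree, 1))
```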