{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:12:53.808414Z"
},
"title": "Layout-Aware Text Representations Harm Clustering Documents by Type",
"authors": [
{
"first": "Catherine",
"middle": [],
"last": "Finegan-Dollak",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research",
"location": {
"addrLine": "1101 Kitchawan Rd",
"postCode": "10598",
"settlement": "Yorktown Heights",
"region": "NY",
"country": "USA"
}
},
"email": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Verma",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research",
"location": {
"addrLine": "1101 Kitchawan Rd",
"postCode": "10598",
"settlement": "Yorktown Heights",
"region": "NY",
"country": "USA"
}
},
"email": "ashish.verma1@ibm.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Clustering documents by type-grouping invoices with invoices and articles with articles-is a desirable first step for organizing large collections of document scans. Humans approaching this task use both the semantics of the text and the document layout to assist in grouping like documents. Lay-outLM (Xu et al., 2019), a layout-aware transformer built on top of BERT with state-of-theart performance on document-type classification, could reasonably be expected to outperform regular BERT (Devlin et al., 2018) for document-type clustering. However, we find experimentally that BERT significantly outperforms LayoutLM on this task (p < 0.001). We analyze clusters to show where layout awareness is an asset and where it is a liability.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Clustering documents by type-grouping invoices with invoices and articles with articles-is a desirable first step for organizing large collections of document scans. Humans approaching this task use both the semantics of the text and the document layout to assist in grouping like documents. Lay-outLM (Xu et al., 2019), a layout-aware transformer built on top of BERT with state-of-theart performance on document-type classification, could reasonably be expected to outperform regular BERT (Devlin et al., 2018) for document-type clustering. However, we find experimentally that BERT significantly outperforms LayoutLM on this task (p < 0.001). We analyze clusters to show where layout awareness is an asset and where it is a liability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Organizations are inundated by paperwork, often in the form of PDFs. Automated processing can help to organize and extract information from these documents, but the right process for a given document depends on its type: invoices are handled differently than contracts, for example. Document classification by type enables such a system; however, it requires training data for all of the desired classes, and finding such data to fit a given business's needs is difficult. There is no one-size-fits-all ontology of document types. While some types, such as invoices, may be common across industries, others, such as loan applications or home-inspection reports, are domain-specific. Users wishing to define their own classes will benefit from a system that enables them to group their own documents. To help with this, the present work addresses the task of clustering documents by type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Humans grouping documents by type can use both the text and the appearance of documents. For example, we can distinguish a gas bill from an article at a glance, but we need to read at least a few words to determine whether a dense, two-column document is an article or a warranty. We therefore expect that a hybrid document representation that combines layout and text information should outperform a text-only representation when clustering documents by type. LayoutLM (Xu et al., 2019) is such a hybrid system and achieves state-of-theart performance for document-type classification, outperforming text-only baselines. We therefore hypothesized that LayoutLM would also outperform these baselines for document-type clustering.",
"cite_spans": [
{
"start": 470,
"end": 487,
"text": "(Xu et al., 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Sections 3 and 4 describe the systems we compared and the experiments we used to try to confirm this hypothesis. However, the main contribution of this work is experimental evidence of the opposite: LayoutLM performed significantly worse than a simple BERT baseline on this task (Section 5). Analysis of output clusters (Section 5.1) helps to explain this unexpected result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Hybrid layout/text representations Recent work combines layout with text for information extraction. Chargrid (Katti et al., 2018) assigns each pixel on a page a vector. For pixels inside the bounding box of a character, the vector is a one-hot encoding for that character; otherwise, it is a vector of zeros. This generates a vocabsize \u00d7 height \u00d7 width tensor representation of the page for input to a CNN encoder-decoder model. BERTgrid (Denk and Reisswig, 2019) is nearly identical, but it replaces the one-hot character encoding with the word's BERT encoding. Liu et al. (2019) represent a document as a fully-connected graph where text boxes are nodes. The edge embedding between two nodes incorporates the distance between them, the text boxes' aspect ratios, and their relative sizes. Similarly, ZeroShotCeres (Lockard et al., 2020) represents semi-structured web pages as graphs, with text-field nodes connected by edges for vertically or horizontally adjacent text fields and siblings or cousins in the DOM tree. Both systems then use graph neural networks over the document graphs. Document-type classification Classification of documents by type has frequently been treated as an image classification problem. Many works have used varying CNN architectures (Kang et al., 2014; Afzal et al., 2015; Harley et al., 2015; Afzal et al., 2017; Tensmeyer and Martinez, 2017; Das et al., 2018) or other vision-based techniques Sarkhel and Nandi, 2019) .",
"cite_spans": [
{
"start": 101,
"end": 130,
"text": "Chargrid (Katti et al., 2018)",
"ref_id": null
},
{
"start": 564,
"end": 581,
"text": "Liu et al. (2019)",
"ref_id": "BIBREF17"
},
{
"start": 817,
"end": 839,
"text": "(Lockard et al., 2020)",
"ref_id": "BIBREF18"
},
{
"start": 1268,
"end": 1287,
"text": "(Kang et al., 2014;",
"ref_id": "BIBREF14"
},
{
"start": 1288,
"end": 1307,
"text": "Afzal et al., 2015;",
"ref_id": "BIBREF1"
},
{
"start": 1308,
"end": 1328,
"text": "Harley et al., 2015;",
"ref_id": "BIBREF13"
},
{
"start": 1329,
"end": 1348,
"text": "Afzal et al., 2017;",
"ref_id": "BIBREF2"
},
{
"start": 1349,
"end": 1378,
"text": "Tensmeyer and Martinez, 2017;",
"ref_id": "BIBREF24"
},
{
"start": 1379,
"end": 1396,
"text": "Das et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 1430,
"end": 1454,
"text": "Sarkhel and Nandi, 2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
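As a concrete illustration of the Chargrid encoding described above, here is a minimal sketch, assuming a toy character vocabulary and precomputed character bounding boxes; the function name, grid size, and example boxes are illustrative, not from Katti et al. (2018).

```python
import numpy as np

def chargrid(boxes, vocab, height, width):
    """Build a vocab-size x height x width one-hot tensor, Chargrid-style.

    boxes: iterable of (char, x0, y0, x1, y1) character bounding boxes.
    Pixels that fall outside every box remain all-zero vectors.
    """
    grid = np.zeros((len(vocab), height, width), dtype=np.float32)
    for char, x0, y0, x1, y1 in boxes:
        grid[vocab[char], y0:y1, x0:x1] = 1.0  # one-hot channel for this character
    return grid

vocab = {c: i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz")}
page = chargrid([("i", 10, 5, 14, 12), ("n", 15, 5, 20, 12)], vocab, height=64, width=64)
print(page.shape)  # (26, 64, 64), ready for a CNN encoder-decoder
```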
{
"text": "Some works have combined vision and NLP for document-type classification, using OCR for text extraction. Noce et al. (2016) assigned the most relevant words unique colors, then filled the bounding boxes of those words with the corresponding color, enabling the CNN processing the image to \"see\" the word. Asim et al. 2019provided the most important words as features to a CNN, later combining the output with an image stream that used an InceptionV3 CNN architecture. Dauphinee et al. 2019concatenated the output of a CNN image classifier with a multilayer perceptron bagof-words classifier, then fed the concatenation to a meta-classifier. Ferrando et al. (2020) used an ensemble of a BERT classifier and EfficientNets CNNs. Audebert et al. (2020) concatenated image features (from a MobileNet v2 CNN) with text features (generated by passing FastText embeddings for the text through a 1D CNN) to form the input to a multilayer perceptron. Cosma et al. (2020) used text to help pretrain part of their classifier: they performed LDA to determine documents' topics, then trained their CNN to try to predict those topics using only the document image. They ultimately used the CNN as part of a model to predict document type using the image only. All of these systems are supervised, whereas this work addresses unsupervised clustering. Document-type clustering Csurka et al. (2016) trained models on RVL-CDIP, then used those models to generate representations for clustering other document-type datasets. Abuelwafa et al. (2019) used unsupervised feature learning to improve their representations of document images for clustering. They applied transformations to document images to generate surrogate classes, then trained a CNN to classify them. They used that trained CNN to generate representations of document images for clustering. There is, to our knowledge, no previous work clustering RVL-CDIP.",
"cite_spans": [
{
"start": 105,
"end": 123,
"text": "Noce et al. (2016)",
"ref_id": "BIBREF20"
},
{
"start": 641,
"end": 663,
"text": "Ferrando et al. (2020)",
"ref_id": "BIBREF12"
},
{
"start": 941,
"end": 960,
"text": "Cosma et al. (2020)",
"ref_id": null
},
{
"start": 1360,
"end": 1380,
"text": "Csurka et al. (2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We compare LayoutLM and BERT, as well as a TF-IDF baseline (sklearn's 1 (Pedregosa et al., 2011) implementation with default hyperparameters). In each case, we use the specified system to generate one vector representation for each document image, then cluster using sklearn's k-means, with k set to the number of gold classes plus one.",
"cite_spans": [
{
"start": 72,
"end": 96,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Systems",
"sec_num": "3"
},
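A minimal sketch of this pipeline for the TF-IDF baseline, assuming OCR text is already available for each page; the toy documents and class count are placeholders (the paper's runs use the RVL-CDIP classes).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "invoice total amount due net 30 days",   # OCR text of page 1
    "dear sir I am writing to inform you",    # OCR text of page 2
    "abstract we present a new method for",   # OCR text of page 3
]
n_gold_classes = 2  # placeholder; set to the number of gold classes

vectors = TfidfVectorizer().fit_transform(docs)            # one vector per document
kmeans = KMeans(n_clusters=n_gold_classes + 1, n_init=10)  # k = gold classes + 1
labels = kmeans.fit_predict(vectors)
print(labels)  # cluster id per document
```

For the BERT and LayoutLM systems, only the vectorization step changes; the k-means step is identical.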
{
"text": "BERT (Devlin et al., 2018 ) is a transformerbased bidirectional model that generates contextualized word embeddings for a sequence of words. The input to a BERT model for the i-th token in the sequence is a sum of (a) its token embedding; (b) a position embedding for position i; and (c) a segment embedding indicating whether the token is in the first or second segment of the input sequence.",
"cite_spans": [
{
"start": 5,
"end": 25,
"text": "(Devlin et al., 2018",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Systems",
"sec_num": "3"
},
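A schematic sketch of this input construction, with dimensions following bert-base; this illustrates the sum of the three embeddings, not the internals of any particular implementation.

```python
import torch
import torch.nn as nn

vocab_size, max_len, hidden = 30522, 512, 768
token_emb = nn.Embedding(vocab_size, hidden)  # (a) token embedding
pos_emb = nn.Embedding(max_len, hidden)       # (b) position embedding
seg_emb = nn.Embedding(2, hidden)             # (c) segment A or B

token_ids = torch.tensor([[101, 2023, 2003, 102]])        # toy [CLS] ... [SEP] ids
positions = torch.arange(token_ids.size(1)).unsqueeze(0)  # 0, 1, ..., n-1
segments = torch.zeros_like(token_ids)                    # all segment A

inputs = token_emb(token_ids) + pos_emb(positions) + seg_emb(segments)
print(inputs.shape)  # (1, 4, 768): one input vector per token
```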
{
"text": "LayoutLM (Xu et al., 2019 ) is a BERT-like transformer model modified to generate layoutaware contextualized word embeddings. In place of BERT's single positional embedding, LayoutLM adds positional embeddings for the x-and ycoordinates of a bounding box around the token. The token's embedding thus incorporates its twodimensional location on the page and its size. This architecture achieves state-of-the-art performance for supervised classification by document type.",
"cite_spans": [
{
"start": 9,
"end": 25,
"text": "(Xu et al., 2019",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Systems",
"sec_num": "3"
},
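Extending the previous sketch with 2D position embeddings might look like the following; the shared x/y embedding tables for both box corners follow the paper's description, but the coordinate range and normalization details are simplified assumptions.

```python
import torch
import torch.nn as nn

hidden, coord_range = 768, 1024  # assumes coordinates bucketed to a fixed range
token_emb = nn.Embedding(30522, hidden)
pos_emb = nn.Embedding(512, hidden)
x_emb = nn.Embedding(coord_range, hidden)  # shared for x0 and x1
y_emb = nn.Embedding(coord_range, hidden)  # shared for y0 and y1

token_ids = torch.tensor([[101, 2023, 102]])
# one (x0, y0, x1, y1) box per token; special tokens get the zero box
boxes = torch.tensor([[[0, 0, 0, 0], [48, 84, 112, 96], [0, 0, 0, 0]]])
positions = torch.arange(token_ids.size(1)).unsqueeze(0)

inputs = (token_emb(token_ids) + pos_emb(positions)
          + x_emb(boxes[..., 0]) + y_emb(boxes[..., 1])   # top-left corner
          + x_emb(boxes[..., 2]) + y_emb(boxes[..., 3]))  # bottom-right corner
print(inputs.shape)  # (1, 3, 768)
```

Because both corners are embedded, the token's representation encodes its size as well as its location.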
{
"text": "Both BERT and LayoutLM output a vector for each token in the input sequence plus the special [CLS] token. However, k-means, like most clustering algorithms, requires a single vector representation of each example. Classifiers use the [CLS] embedding as a single-vector representation for the entire sequence. However, prior work (Reimers and Gurevych, 2019; Wang and Kuo, 2020) has shown that, for BERT without fine-tuning, this is not a good representation of the semantics of the entire sequence. Other options include combining all of the vectors in the output sequence by either averaging or max pooling-set the i-th value in the output vector equal to the max i-th value over all of the sequence vectors. For BERT, we use the average as our representation, since Reimers and Gurevych (2019) showed it captured semantic similarity better than the [CLS] token. For LayoutLM, we try all three methods. 0.20* 0.003 0.14* 0.003 LayoutLM (max pooled) 0.19* 0.001 0.13* 0.000 Table 1 : Mean F 1 and ARI over five runs, with standard error of the mean (subscript). Items marked with * are significantly different from BERT average, p < 0.001 based on a two-tailed t-test.",
"cite_spans": [
{
"start": 329,
"end": 357,
"text": "(Reimers and Gurevych, 2019;",
"ref_id": "BIBREF22"
},
{
"start": 358,
"end": 377,
"text": "Wang and Kuo, 2020)",
"ref_id": "BIBREF25"
},
{
"start": 768,
"end": 795,
"text": "Reimers and Gurevych (2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 974,
"end": 981,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Systems",
"sec_num": "3"
},
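The three single-vector representations compared here reduce a (batch, sequence, hidden) tensor of transformer outputs as follows; this is a sketch, and real runs would also mask padding positions before averaging or max pooling.

```python
import torch

def cls_vector(hidden_states):
    # [CLS] is the first token in the sequence
    return hidden_states[:, 0, :]

def mean_vector(hidden_states):
    # average over the sequence dimension
    return hidden_states.mean(dim=1)

def max_vector(hidden_states):
    # i-th output value = max i-th value over all sequence positions
    return hidden_states.max(dim=1).values

outputs = torch.randn(2, 128, 768)  # toy (batch, seq_len, hidden) outputs
for pool in (cls_vector, mean_vector, max_vector):
    print(pool.__name__, pool(outputs).shape)  # each -> (2, 768)
```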
{
"text": "We evaluate on RVL-CDIP 2 (Harley et al., 2015) , scanned tobacco-litigation documents from the Illinois Institute of Technology Complex Document Information Processing (IIT-CDIP) collection, labeled with type, such as letter or invoice. The complete class list appears in Table 3 . We clustered the validation set (40K pages). Like LayoutLM, we used Tesseract 3 for OCR.",
"cite_spans": [
{
"start": 26,
"end": 47,
"text": "(Harley et al., 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 273,
"end": 280,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
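One way to obtain the words and bounding boxes that a layout-aware model consumes is pytesseract's image_to_data wrapper around Tesseract; the paper only states that Tesseract 4.1.1 was used, so the wrapper and the file name here are assumptions.

```python
import pytesseract
from PIL import Image

image = Image.open("page.png")  # hypothetical scanned page
ocr = pytesseract.image_to_data(image, output_type=pytesseract.Output.DICT)

words, boxes = [], []
for i, word in enumerate(ocr["text"]):
    if word.strip():  # skip empty detections
        x, y, w, h = ocr["left"][i], ocr["top"][i], ocr["width"][i], ocr["height"][i]
        words.append(word)
        boxes.append((x, y, x + w, y + h))  # (x0, y0, x1, y1)
```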
{
"text": "We use LayoutLM's publicly-released code and base model for experiments. 4 This model was pretrained on IIT-CDIP, excluding documents in RVL-CDIP. For BERT, we use the Transformers package 5 with the bert-base-uncased model, pretrained on books and Wikipedia. Because LayoutLM's masked language model pretrained on documents from the same domain, while BERT's did not, the dataset could favor LayoutLM.",
"cite_spans": [
{
"start": 73,
"end": 74,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
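A sketch of producing the BERT document vectors with the Transformers package, using average pooling as chosen in Section 3; the example sentence is a placeholder for a page's OCR text.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").eval()

enc = tokenizer("gas bill: amount due this month", return_tensors="pt",
                truncation=True, max_length=512)
with torch.no_grad():
    hidden = model(**enc).last_hidden_state  # (1, seq_len, 768)
doc_vector = hidden.mean(dim=1).squeeze(0)   # average pooling (Section 3)
print(doc_vector.shape)  # torch.Size([768])
```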
{
"text": "We calculate F 1 and adjusted Rand index (ARI) for each system, using Manning et al. (2008) 's definitions of true and false positives and negatives. We use sklearn (Pedregosa et al., 2011) 's implementation of ARI. We report the mean over 5 runs and use a two-tailed t-test to determine whether systems differ significantly from the BERT baseline.",
"cite_spans": [
{
"start": 70,
"end": 91,
"text": "Manning et al. (2008)",
"ref_id": "BIBREF19"
},
{
"start": 165,
"end": 189,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
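A sketch of this evaluation, under the common reading of Manning et al.'s (2008) pairwise definitions: a true positive is a same-class pair placed in the same cluster, a false positive a different-class pair placed together, and a false negative a same-class pair split apart; ARI comes from sklearn.

```python
from itertools import combinations
from sklearn.metrics import adjusted_rand_score

def pairwise_f1(gold, pred):
    tp = fp = fn = 0
    for i, j in combinations(range(len(gold)), 2):
        same_cluster = pred[i] == pred[j]
        same_class = gold[i] == gold[j]
        tp += same_cluster and same_class
        fp += same_cluster and not same_class
        fn += same_class and not same_cluster
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

gold = [0, 0, 1, 1, 2]  # gold class per document
pred = [1, 1, 0, 0, 0]  # cluster id per document
print(pairwise_f1(gold, pred), adjusted_rand_score(gold, pred))
```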
{
"text": "Results are shown in Table 1 and Figure 1 . Our experiments show that the performance of a system using LayoutLM vectors is significantly worse (p < 0.001) at clustering RVL-CDIP documents by type than a simple BERT baseline. There was no significant difference between the TF-IDF and 2 https://www.cs.cmu.edu/\u02dcaharley/ rvl-cdip/ 3 https://github.com/tesseract-ocr/ tesseract; we used version 4.1.1.",
"cite_spans": [],
"ref_spans": [
{
"start": 21,
"end": 28,
"text": "Table 1",
"ref_id": null
},
{
"start": 33,
"end": 41,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "4 https://github.com/microsoft/unilm/ tree/master/layoutlm. The version as of this writing does not include the optional image embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "5 https://github.com/huggingface/ transformers BERT systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In contrast to prior work on BERT, where the [CLS] token was a worse representation than averaging (Reimers and Gurevych, 2019; Wang and Kuo, 2020) , the best-performing LayoutLM system used the [CLS] token embedding. We suspect this is because averaging or max-pooling Lay-outLM vectors blends together bounding box information for all tokens, erasing the benefits of a layout-sensitive transformer. In light of these results, we also tested [CLS] token and max-pooling for BERT on this task. Consistent with prior work, averaging outperformed both; see Table 2 .",
"cite_spans": [
{
"start": 99,
"end": 127,
"text": "(Reimers and Gurevych, 2019;",
"ref_id": "BIBREF22"
},
{
"start": 128,
"end": 147,
"text": "Wang and Kuo, 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 555,
"end": 562,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "All of these scores are low, especially in comparison to classification results. The comparison is misleading, of course, since classification requires training data, and clustering addresses the case where such data is not available. Neverthe- less, much improvement will be required before document-type clustering is useful for practical applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "To understand this unexpected result, we reviewed example clusters from one run of the BERT system and one of LayoutLM ([CLS] ). Documents in LayoutLM's best clusters had consistent layouts, illustrated in Figure 2 . Specifications in the highest-purity cluster seem to have been generated from a few templates. For such documents, the layouts are so consistent that no learning is required to identify which aspects of layout to emphasize in grouping the documents. Not all specifications conform to these templates, though. Figure 3 shows some with different formats, which LayoutLM placed in a different cluster. Document layouts that are common across multiple document types also caused problems for LayoutLM. Figure 4 shows an invoice and resume with similar formats from the cluster with the lowest purity. Table 3 lists class precision 6 for the sample clustering runs. From this, we see that LayoutLM performed well on scientific publications. A substantial fraction of this class contains two-column documents, like those in Figure 6 , which LayoutLM can recognize. In contrast, BERT far outperformed LayoutLM for resumes, where page layout may be misleading. BERT correctly clustered the two resume images in Figure 5 together, despite their obvious layout differences. LayoutLM understandably placed them in different clusters.",
"cite_spans": [
{
"start": 119,
"end": 125,
"text": "([CLS]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 206,
"end": 214,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 526,
"end": 534,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 715,
"end": 724,
"text": "Figure 4",
"ref_id": "FIGREF4"
},
{
"start": 815,
"end": 822,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 1036,
"end": 1044,
"text": "Figure 6",
"ref_id": "FIGREF5"
},
{
"start": 1221,
"end": 1229,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.1"
},
{
"text": "LayoutLM captures textual and layout information about documents. When training data is available, Figure 5 : BERT correctly clustered these two resume pages together despite their very different layouts; Lay-outLM put them in different clusters. a model can learn when to leverage each. Thus, LayoutLM performed quite well at classifying documents by type. But when clustering, there is no model to indicate how to weight features in determining document similarities. In this context, layout information significantly harms performance. Future work should explore ways to incorporate benefits of layout information into a representation while limiting its harm, as well as how layout information affects tasks that fall between classification and clustering, such as semi-supervised learning. Such questions must be answered for documenttype clustering to become practical.",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 107,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://scikit-learn.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Precision of pairs of examples where at least one has the specified gold label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
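A sketch of the per-class pair precision as we read this footnote: restrict attention to same-cluster pairs in which at least one member carries the gold label, then compute precision over those pairs. The helper name is ours, not from the paper.

```python
from itertools import combinations

def class_pair_precision(gold, pred, label):
    """Precision over same-cluster pairs where at least one example
    has the specified gold label (the footnote's definition, as we read it)."""
    tp = fp = 0
    for i, j in combinations(range(len(gold)), 2):
        if pred[i] != pred[j]:
            continue  # only pairs the clustering placed together
        if label not in (gold[i], gold[j]):
            continue  # at least one member must carry the label
        if gold[i] == gold[j]:
            tp += 1
        else:
            fp += 1
    return tp / (tp + fp) if tp + fp else 0.0

print(class_pair_precision([0, 0, 1], [5, 5, 5], label=0))  # 1 TP, 2 FP -> 0.33...
```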
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their helpful comments, as well as Anik Saha for many discussions on LayoutLM's strengths and weaknesses for supervised tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Unsupervised Exemplar-Based Learning for Improved Document Image Classification",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Sherif Abuelwafa",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Pedersoli",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cheriet",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Access",
"volume": "7",
"issue": "",
"pages": "133738--133748",
"other_ids": {
"DOI": [
"10.1109/ACCESS.2019.2940884"
]
},
"num": null,
"urls": [],
"raw_text": "Sherif Abuelwafa, Marco Pedersoli, and Mohamed Cheriet. 2019. Unsupervised Exemplar-Based Learning for Improved Document Image Classifica- tion. IEEE Access, 7:133738-133748.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "DeepDocClassifier: Document classification with deep convolutional neural network",
"authors": [
{
"first": "Samuele",
"middle": [],
"last": "Muhammad Zeshan Afzal",
"suffix": ""
},
{
"first": "Muhammad",
"middle": [
"Imran"
],
"last": "Capobianco",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Malik",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"M"
],
"last": "Marinai",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Breuel",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Dengel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Liwicki",
"suffix": ""
}
],
"year": 2015,
"venue": "2015 13th International Conference on Document Analysis and Recognition (ICDAR)",
"volume": "",
"issue": "",
"pages": "1111--1115",
"other_ids": {
"DOI": [
"10.1109/ICDAR.2015.7333933"
]
},
"num": null,
"urls": [],
"raw_text": "Muhammad Zeshan Afzal, Samuele Capobianco, Muhammad Imran Malik, Simone Marinai, Thomas M. Breuel, Andreas Dengel, and Marcus Liwicki. 2015. DeepDocClassifier: Document classification with deep convolutional neural net- work. In 2015 13th International Conference on Document Analysis and Recognition (ICDAR), pages 1111-1115.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Cutting the error by half: Investigation of very deep cnn and advanced training strategies for document image classification",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Muhammad Zeshan Afzal",
"suffix": ""
},
{
"first": "Sheraz",
"middle": [],
"last": "Kolsch",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Liwicki",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 14th IAPR International Conference on Document Analysis and Recognition (IC-DAR)",
"volume": "",
"issue": "",
"pages": "883--888",
"other_ids": {
"DOI": [
"10.1109/ICDAR.2017.149"
]
},
"num": null,
"urls": [],
"raw_text": "Muhammad Zeshan Afzal, Andreas Kolsch, Sheraz Ahmed, and Marcus Liwicki. 2017. Cutting the er- ror by half: Investigation of very deep cnn and ad- vanced training strategies for document image clas- sification. In 2017 14th IAPR International Con- ference on Document Analysis and Recognition (IC- DAR), pages 883-888.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Two Stream Deep Network for Document Image Classification",
"authors": [
{
"first": "Muhammad Usman Ghani",
"middle": [],
"last": "Muhammad Nabeel Asim",
"suffix": ""
},
{
"first": "Muhammad",
"middle": [
"Imran"
],
"last": "Khan",
"suffix": ""
},
{
"first": "Khizar",
"middle": [],
"last": "Malik",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Razzaque",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 International Conference on Document Analysis and Recognition (ICDAR)",
"volume": "",
"issue": "",
"pages": "1410--1416",
"other_ids": {
"DOI": [
"10.1109/ICDAR.2019.00227"
]
},
"num": null,
"urls": [],
"raw_text": "Muhammad Nabeel Asim, Muhammad Usman Ghani Khan, Muhammad Imran Malik, Khizar Razzaque, Andreas Dengel, and Sheraz Ahmed. 2019. Two Stream Deep Network for Document Image Classifi- cation. In 2019 International Conference on Doc- ument Analysis and Recognition (ICDAR), pages 1410-1416.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Multimodal Deep Networks for Text and Image-based Document Classification",
"authors": [
{
"first": "Nicolas",
"middle": [],
"last": "Audebert",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Herold",
"suffix": ""
},
{
"first": "Kuider",
"middle": [],
"last": "Slimani",
"suffix": ""
},
{
"first": "C\u00e9dric",
"middle": [],
"last": "Vidal",
"suffix": ""
}
],
"year": 2020,
"venue": "Communications in Computer and Information Science",
"volume": "1167",
"issue": "",
"pages": "427--443",
"other_ids": {
"DOI": [
"10.1007/978-3-030-43823-4_35"
]
},
"num": null,
"urls": [],
"raw_text": "Nicolas Audebert, Catherine Herold, Kuider Slimani, and C\u00e9dric Vidal. 2020. Multimodal Deep Networks for Text and Image-based Document Classification. Communications in Computer and Information Sci- ence, 1167:427-443.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Michael Panaitescu-Liess, and Marius Popescu. 2020. Self-Supervised Representation Learning on Document Images",
"authors": [
{
"first": "Adrian",
"middle": [],
"last": "Cosma",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Ghidoveanu",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adrian Cosma, Mihai Ghidoveanu, Michael Panaitescu-Liess, and Marius Popescu. 2020. Self-Supervised Representation Learning on Document Images.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "What is the right way to represent document images?",
"authors": [
{
"first": "Gabriela",
"middle": [],
"last": "Csurka",
"suffix": ""
},
{
"first": "Diane",
"middle": [],
"last": "Larlus",
"suffix": ""
},
{
"first": "Albert",
"middle": [],
"last": "Gordo",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Almaz\u00e1n",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriela Csurka, Diane Larlus, Albert Gordo, and Jon Almaz\u00e1n. 2016. What is the right way to represent document images?",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Document Image Classification with Intra-Domain Transfer Learning and Stacked Generalization of Deep Convolutional Neural Networks",
"authors": [
{
"first": "Arindam",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Saikat",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "Ujjwal",
"middle": [],
"last": "Bhattacharya",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arindam Das, Saikat Roy, and Ujjwal Bhattacharya. 2018. Document Image Classification with Intra- Domain Transfer Learning and Stacked Generaliza- tion of Deep Convolutional Neural Networks.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Modular Multimodal Architecture for Document Classification",
"authors": [
{
"first": "Tyler",
"middle": [],
"last": "Dauphinee",
"suffix": ""
},
{
"first": "Nikunj",
"middle": [],
"last": "Patel",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Rashidi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tyler Dauphinee, Nikunj Patel, and Mohammad Rashidi. 2019. Modular Multimodal Architecture for Document Classification.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "BERTgrid: Contextualized Embedding for 2D Document Representation and Understanding",
"authors": [
{
"first": "I",
"middle": [],
"last": "Timo",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Denk",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Reisswig",
"suffix": ""
}
],
"year": 2019,
"venue": "Workshop on Document Intelligence at NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timo I. Denk and Christian Reisswig. 2019. BERT- grid: Contextualized Embedding for 2D Document Representation and Understanding. In Workshop on Document Intelligence at NeurIPS 2019.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Deep Bidirectional Transformers for Language Understanding",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Deep Bidirectional Transformers for Language Un- derstanding. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improving Accuracy and Speeding Up Document Image Classification Through Parallel Systems",
"authors": [
{
"first": "Javier",
"middle": [],
"last": "Ferrando",
"suffix": ""
},
{
"first": "Juan",
"middle": [
"Luis"
],
"last": "Dom\u00ednguez",
"suffix": ""
},
{
"first": "Jordi",
"middle": [],
"last": "Torres",
"suffix": ""
},
{
"first": "Ra\u00fal",
"middle": [],
"last": "Garc\u00eda",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Garc\u00eda",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Garrido",
"suffix": ""
},
{
"first": "Jordi",
"middle": [],
"last": "Cortada",
"suffix": ""
},
{
"first": "Mateo",
"middle": [],
"last": "Valero",
"suffix": ""
}
],
"year": 2020,
"venue": "Computational Science -ICCS 2020",
"volume": "",
"issue": "",
"pages": "387--400",
"other_ids": {
"DOI": [
"10.1007/978-3-030-50417-5"
]
},
"num": null,
"urls": [],
"raw_text": "Javier Ferrando, Juan Luis Dom\u00ednguez, Jordi Torres, Ra\u00fal Garc\u00eda, David Garc\u00eda, Daniel Garrido, Jordi Cortada, and Mateo Valero. 2020. Improving Ac- curacy and Speeding Up Document Image Classi- fication Through Parallel Systems. In Computa- tional Science -ICCS 2020, pages 387-400, Cham. Springer International Publishing.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Evaluation of deep convolutional nets for document image classification and retrieval",
"authors": [
{
"first": "Adam",
"middle": [
"W"
],
"last": "Harley",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Ufkes",
"suffix": ""
},
{
"first": "Konstantinos",
"middle": [
"G"
],
"last": "Derpanis",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Conference on Document Analysis and Recognition, ICDAR",
"volume": "",
"issue": "",
"pages": "991--995",
"other_ids": {
"DOI": [
"10.1109/ICDAR.2015.7333910"
]
},
"num": null,
"urls": [],
"raw_text": "Adam W. Harley, Alex Ufkes, and Konstantinos G. Derpanis. 2015. Evaluation of deep convolutional nets for document image classification and retrieval. In Proceedings of the International Conference on Document Analysis and Recognition, ICDAR, pages 991-995.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Convolutional Neural Networks for Document Image Classification",
"authors": [
{
"first": "Le",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Jayant",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Doermann",
"suffix": ""
}
],
"year": 2014,
"venue": "2014 22nd International Conference on Pattern Recognition",
"volume": "",
"issue": "",
"pages": "3168--3172",
"other_ids": {
"DOI": [
"10.1109/ICPR.2014.546"
]
},
"num": null,
"urls": [],
"raw_text": "Le Kang, Jayant Kumar, Peng Ye, Yi Li, and David Do- ermann. 2014. Convolutional Neural Networks for Document Image Classification. In 2014 22nd Inter- national Conference on Pattern Recognition, pages 3168-3172.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Chargrid: Towards Understanding 2D Documents",
"authors": [
{
"first": "R",
"middle": [],
"last": "Anoop",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Katti",
"suffix": ""
},
{
"first": "Cordula",
"middle": [],
"last": "Reisswig",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Guder",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Brarda",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Bickel",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "H\u00f6hne",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Baptiste Faddoul",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4459--4469",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1476"
]
},
"num": null,
"urls": [],
"raw_text": "Anoop R. Katti, Christian Reisswig, Cordula Guder, Sebastian Brarda, Steffen Bickel, Johannes H\u00f6hne, and Jean Baptiste Faddoul. 2018. Chargrid: To- wards Understanding 2D Documents. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4459-4469, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Structural similarity for document image classification and retrieval",
"authors": [
{
"first": "Jayant",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Doermann",
"suffix": ""
}
],
"year": 2014,
"venue": "Pattern Recognition Letters",
"volume": "43",
"issue": "",
"pages": "119--126",
"other_ids": {
"DOI": [
"10.1016/j.patrec.2013.10.030"
]
},
"num": null,
"urls": [],
"raw_text": "Jayant Kumar, Peng Ye, and David Doermann. 2014. Structural similarity for document image classifi- cation and retrieval. Pattern Recognition Letters, 43:119-126.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Graph Convolution for Multimodal Information Extraction from Visually Rich Documents",
"authors": [
{
"first": "Xiaojing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Feiyu",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Qiong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Huasha",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "32--39",
"other_ids": {
"DOI": [
"10.18653/v1/N19-2005"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaojing Liu, Feiyu Gao, Qiong Zhang, and Huasha Zhao. 2019. Graph Convolution for Multimodal Information Extraction from Visually Rich Docu- ments. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 2 (Industry Papers), pages 32-39, Minneapolis, Minnesota. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Ze-roShotCeres: Zero-Shot Relation Extraction from Semi-Structured Webpages",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Lockard",
"suffix": ""
},
{
"first": "Prashant",
"middle": [],
"last": "Shiralkar",
"suffix": ""
},
{
"first": "Xin",
"middle": [
"Luna"
],
"last": "Dong",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8105--8117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Lockard, Prashant Shiralkar, Xin Luna Dong, and Hannaneh Hajishirzi. 2020. Ze- roShotCeres: Zero-Shot Relation Extraction from Semi-Structured Webpages. In Proceedings of the 58th Annual Meeting of the Association for Com- putational Linguistics, pages 8105-8117, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Introduction to Information Retrieval",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Prabhakar",
"middle": [],
"last": "Raghavan",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Prabhakar Raghavan, and Hinrich Sch\u00fctze. 2008. Introduction to Information Retrieval. Cambridge University Press, USA.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Embedded Textual Content for Document Image Classification with Convolutional Neural Networks",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Noce",
"suffix": ""
},
{
"first": "Ignazio",
"middle": [],
"last": "Gallo",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Zamberletti",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Calefati",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 ACM Symposium on Document Engineering, DocEng '16",
"volume": "",
"issue": "",
"pages": "165--173",
"other_ids": {
"DOI": [
"10.1145/2960811.2960814"
]
},
"num": null,
"urls": [],
"raw_text": "Lucia Noce, Ignazio Gallo, Alessandro Zamberletti, and Alessandro Calefati. 2016. Embedded Textual Content for Document Image Classification with Convolutional Neural Networks. In Proceedings of the 2016 ACM Symposium on Document Engineer- ing, DocEng '16, pages 165-173, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Scikit-learn: Machine Learning in Python",
"authors": [
{
"first": "Fabian",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Bertrand",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Dubourg",
"suffix": ""
},
{
"first": "Jake",
"middle": [],
"last": "Vanderplas",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Cournapeau",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Brucher",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Perrot",
"suffix": ""
},
{
"first": "Duchesnay",
"middle": [],
"last": "And\u00e9douard",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gram- fort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexan- dre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and\u00c9douard Duchesnay. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12:2825-2830.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3982--3992",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1410"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Deterministic routing between layout abstractions for multi-scale classification of visually rich documents",
"authors": [
{
"first": "Ritesh",
"middle": [],
"last": "Sarkhel",
"suffix": ""
},
{
"first": "Arnab",
"middle": [],
"last": "Nandi",
"suffix": ""
}
],
"year": 2019,
"venue": "IJCAI International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "3360--3366",
"other_ids": {
"DOI": [
"10.24963/ijcai.2019/466"
]
},
"num": null,
"urls": [],
"raw_text": "Ritesh Sarkhel and Arnab Nandi. 2019. Deterministic routing between layout abstractions for multi-scale classification of visually rich documents. In IJCAI International Joint Conference on Artificial Intelli- gence, pages 3360-3366.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Analysis of Convolutional Neural Networks for Document Image Classification",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Tensmeyer",
"suffix": ""
},
{
"first": "Tony",
"middle": [],
"last": "Martinez",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference on Document Analysis and Recognition, ICDAR",
"volume": "1",
"issue": "",
"pages": "388--393",
"other_ids": {
"DOI": [
"10.1109/ICDAR.2017.71"
]
},
"num": null,
"urls": [],
"raw_text": "Chris Tensmeyer and Tony Martinez. 2017. Analy- sis of Convolutional Neural Networks for Document Image Classification. In Proceedings of the Interna- tional Conference on Document Analysis and Recog- nition, ICDAR, volume 1, pages 388-393.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "SBERT-WK: A Sentence Embedding Method by Dissecting BERTbased Word Models",
"authors": [
{
"first": "Bin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "C.-C. Jay",
"middle": [],
"last": "Kuo",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bin Wang and C.-C. Jay Kuo. 2020. SBERT-WK: A Sentence Embedding Method by Dissecting BERT- based Word Models.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "LayoutLM: Pretraining of Text and Layout for Document Image Understanding",
"authors": [
{
"first": "Yiheng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Minghao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Shaohan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. 2019. LayoutLM: Pre- training of Text and Layout for Document Image Un- derstanding.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Boxplots of F 1 and ARI over five runs.",
"uris": null,
"num": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Specifications from a cluster with 0.97 purity.",
"uris": null,
"num": null
},
"FIGREF3": {
"type_str": "figure",
"text": "Specifications with different formats, which did not appear in the high-purity specification cluster.",
"uris": null,
"num": null
},
"FIGREF4": {
"type_str": "figure",
"text": "Samples from the lowest-purity cluster.",
"uris": null,
"num": null
},
"FIGREF5": {
"type_str": "figure",
"text": "LayoutLM correctly clustered these two scientific documents together.",
"uris": null,
"num": null
},
"TABREF1": {
"type_str": "table",
"text": "Comparison of different techniques of combining BERT vectors (mean F 1 and ARI over five runs)",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF3": {
"type_str": "table",
"text": "Class precisions for the sample clustering.",
"content": "<table/>",
"html": null,
"num": null
}
}
}
}