diff --git a/.gitattributes b/.gitattributes index f58013b8d4a90e8b0f4e3ebabbee1c653e464063..b6044f9c244e8509e924be0ab19c140beb325988 100644 --- a/.gitattributes +++ b/.gitattributes @@ -5545,3 +5545,51 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text 2024/“You[[:space:]]Gotta[[:space:]]be[[:space:]]a[[:space:]]Doctor,[[:space:]]Lin”[[:space:]]_[[:space:]]An[[:space:]]Investigation[[:space:]]of[[:space:]]Name-Based[[:space:]]Bias[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]in[[:space:]]Employment[[:space:]]Recommendations/d7845467-94ef-458f-9d7d-f43f9afdcb6b_origin.pdf filter=lfs diff=lfs merge=lfs -text 2024/Fast[[:space:]]Forwarding[[:space:]]Low-Rank[[:space:]]Training/9609da61-f7eb-4e43-a005-289ec2b63ea6_origin.pdf filter=lfs diff=lfs merge=lfs -text 2024/Fewer[[:space:]]is[[:space:]]More_[[:space:]]Boosting[[:space:]]Math[[:space:]]Reasoning[[:space:]]with[[:space:]]Reinforced[[:space:]]Context[[:space:]]Pruning/aaf75814-e779-4ac6-b074-387d29a3853b_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/1-PAGER_[[:space:]]One[[:space:]]Pass[[:space:]]Answer[[:space:]]Generation[[:space:]]and[[:space:]]Evidence[[:space:]]Retrieval/ed6283a9-c47a-45be-a3e5-a228ad5db48e_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/2INER_[[:space:]]Instructive[[:space:]]and[[:space:]]In-Context[[:space:]]Learning[[:space:]]on[[:space:]]Few-Shot[[:space:]]Named[[:space:]]Entity[[:space:]]Recognition/3de52cb5-1c81-4fb7-8fab-f06b43c089a4_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Benchmark[[:space:]]for[[:space:]]Semi-Inductive[[:space:]]Link[[:space:]]Prediction[[:space:]]in[[:space:]]Knowledge[[:space:]]Graphs/472ef1b2-669d-4b07-ae82-dafb730e88d4_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Black-Box[[:space:]]Attack[[:space:]]on[[:space:]]Code[[:space:]]Models[[:space:]]via[[:space:]]Representation[[:space:]]Nearest[[:space:]]Neighbor[[:space:]]Search/21af0af7-6fd8-4600-8ad6-54b767b85a85_origin.pdf 
filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Boundary[[:space:]]Offset[[:space:]]Prediction[[:space:]]Network[[:space:]]for[[:space:]]Named[[:space:]]Entity[[:space:]]Recognition/5304d5a7-baa1-46a8-bf30-4a5c29036879_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Causal[[:space:]]View[[:space:]]of[[:space:]]Entity[[:space:]]Bias[[:space:]]in[[:space:]](Large)[[:space:]]Language[[:space:]]Models/729caa97-c496-4007-b8f4-5ff70bb6b7ae_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Closer[[:space:]]Look[[:space:]]into[[:space:]]Using[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]for[[:space:]]Automatic[[:space:]]Evaluation/dbe97091-acd7-407e-a8d4-8552f4605855_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Comprehensive[[:space:]]Evaluation[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]on[[:space:]]Legal[[:space:]]Judgment[[:space:]]Prediction/03295168-adb4-4f17-ac96-deb081e11468_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Comprehensive[[:space:]]Evaluation[[:space:]]of[[:space:]]Tool-Assisted[[:space:]]Generation[[:space:]]Strategies/6b3b9095-80bb-4832-9b1c-9b30dcb51c14_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Computational[[:space:]]Interface[[:space:]]to[[:space:]]Translate[[:space:]]Strategic[[:space:]]Intent[[:space:]]from[[:space:]]Unstructured[[:space:]]Language[[:space:]]in[[:space:]]a[[:space:]]Low-Data[[:space:]]Setting/0ecff77c-66e5-47c5-93be-49d90731c30d_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Confederacy[[:space:]]of[[:space:]]Models_[[:space:]]a[[:space:]]Comprehensive[[:space:]]Evaluation[[:space:]]of[[:space:]]LLMs[[:space:]]on[[:space:]]Creative[[:space:]]Writing/ddc1ecbc-a8cc-40b2-84dd-398deba4a5c3_origin.pdf filter=lfs diff=lfs merge=lfs -text 
+2023/A[[:space:]]Critical[[:space:]]Analysis[[:space:]]of[[:space:]]Document[[:space:]]Out-of-Distribution[[:space:]]Detection/dc0b7121-5749-4d5e-b65f-34d4dd4df565_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Dataset[[:space:]]for[[:space:]]Investigating[[:space:]]the[[:space:]]Impact[[:space:]]of[[:space:]]Context[[:space:]]for[[:space:]]Offensive[[:space:]]Language[[:space:]]Detection[[:space:]]in[[:space:]]Tweets/1ce470ab-e396-4125-bc86-502c385ac36b_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Framework[[:space:]]for[[:space:]]Bidirectional[[:space:]]Decoding_[[:space:]]Case[[:space:]]Study[[:space:]]in[[:space:]]Morphological[[:space:]]Inflection/911ebe92-f987-4f30-ba8a-6ececcac30ec_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Framework[[:space:]]for[[:space:]]Exploring[[:space:]]Player[[:space:]]Perceptions[[:space:]]of[[:space:]]LLM-Generated[[:space:]]Dialogue[[:space:]]in[[:space:]]Commercial[[:space:]]Video[[:space:]]Games/d5ddb1fa-ae0e-4f20-bc10-0953a4168524_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Frustratingly[[:space:]]Easy[[:space:]]Plug-and-Play[[:space:]]Detection-and-Reasoning[[:space:]]Module[[:space:]]for[[:space:]]Chinese[[:space:]]Spelling[[:space:]]Check/79cd98e0-e8e2-4f21-9a02-28f4c98a27c1_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Hierarchical[[:space:]]Encoding-Decoding[[:space:]]Scheme[[:space:]]for[[:space:]]Abstractive[[:space:]]Multi-document[[:space:]]Summarization/c4cdb542-18c4-4689-81ae-62f4201e9b72_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Joint[[:space:]]Matrix[[:space:]]Factorization[[:space:]]Analysis[[:space:]]of[[:space:]]Multilingual[[:space:]]Representations/7f6fc390-7960-4621-b5b9-b109c8392c16_origin.pdf filter=lfs diff=lfs merge=lfs -text 
+2023/A[[:space:]]Language[[:space:]]Model[[:space:]]with[[:space:]]Limited[[:space:]]Memory[[:space:]]Capacity[[:space:]]Captures[[:space:]]Interference[[:space:]]in[[:space:]]Human[[:space:]]Sentence[[:space:]]Processing/92d813cc-3614-40df-b622-17b80c3b6926_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Lightweight[[:space:]]Method[[:space:]]to[[:space:]]Generate[[:space:]]Unanswerable[[:space:]]Questions[[:space:]]in[[:space:]]English/a4ca28aa-7501-4c72-aa7e-6dcc59b17cd9_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Multi-Modal[[:space:]]Multilingual[[:space:]]Benchmark[[:space:]]for[[:space:]]Document[[:space:]]Image[[:space:]]Classification/99651770-2f3f-4559-bee6-0108e0d63ab8_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]New[[:space:]]Benchmark[[:space:]]and[[:space:]]Reverse[[:space:]]Validation[[:space:]]Method[[:space:]]for[[:space:]]Passage-level[[:space:]]Hallucination[[:space:]]Detection/7f0f41c2-eeab-448d-b1cb-149a639ca0c9_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Novel[[:space:]]Contrastive[[:space:]]Learning[[:space:]]Method[[:space:]]for[[:space:]]Clickbait[[:space:]]Detection[[:space:]]on[[:space:]]RoCliCo_[[:space:]]A[[:space:]]Romanian[[:space:]]Clickbait[[:space:]]Corpus[[:space:]]of[[:space:]]News[[:space:]]Articles/335fe8e5-0d90-4371-8a9c-3d49533b1a2a_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Parallel[[:space:]]Corpus[[:space:]]for[[:space:]]Vietnamese[[:space:]]Central-Northern[[:space:]]Dialect[[:space:]]Text[[:space:]]Transfer/2837564c-4387-482a-9089-34aa15fd7e0e_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Query-Parallel[[:space:]]Machine[[:space:]]Reading[[:space:]]Comprehension[[:space:]]Framework[[:space:]]for[[:space:]]Low-resource[[:space:]]NER/74fbd22a-dac6-4ba8-8eab-10941bff64c1_origin.pdf filter=lfs diff=lfs merge=lfs -text 
+2023/A[[:space:]]Read-and-Select[[:space:]]Framework[[:space:]]for[[:space:]]Zero-shot[[:space:]]Entity[[:space:]]Linking/030b6fb6-a8fa-4919-bdfc-db2b37ea1051_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Reference-free[[:space:]]Segmentation[[:space:]]Quality[[:space:]]Index[[:space:]](SegReFree)/ba88a6ce-2298-483f-951e-99f75155a986_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Rewriting[[:space:]]Approach[[:space:]]for[[:space:]]Gender[[:space:]]Inclusivity[[:space:]]in[[:space:]]Portuguese/df7e43be-a55c-4f96-b469-7bbe13e64823_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Sequence-to-Structure[[:space:]]Approach[[:space:]]to[[:space:]]Document-level[[:space:]]Targeted[[:space:]]Sentiment[[:space:]]Analysis/76c3d563-4f8f-4984-b3fb-b69d157d8234_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Spectral[[:space:]]Viewpoint[[:space:]]on[[:space:]]Continual[[:space:]]Relation[[:space:]]Extraction/4b8581b0-8a1e-48ad-ae8d-aedefab1e935_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Structure-Aware[[:space:]]Generative[[:space:]]Adversarial[[:space:]]Network[[:space:]]for[[:space:]]Bilingual[[:space:]]Lexicon[[:space:]]Induction/96069ab8-fad1-4366-a1c1-d7fc4e850f49_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Table-to-Text[[:space:]]Framework[[:space:]]with[[:space:]]Heterogeneous[[:space:]]Multidominance[[:space:]]Attention[[:space:]]and[[:space:]]Self-Evaluated[[:space:]]Multi-Pass[[:space:]]Deliberation/a47fb9bb-8f0e-45f7-8219-a8faab1cf449_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Thorough[[:space:]]Examination[[:space:]]on[[:space:]]Zero-shot[[:space:]]Dense[[:space:]]Retrieval/183885c0-48c6-4842-97b7-5dfbcdd03624_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/A[[:space:]]Unified[[:space:]]Framework[[:space:]]for[[:space:]]Synaesthesia[[:space:]]Analysis/dcba89c9-0794-4ccb-b6ef-0aea93f140db_origin.pdf filter=lfs diff=lfs merge=lfs 
-text +2023/A[[:space:]]Zero-Shot[[:space:]]Language[[:space:]]Agent[[:space:]]for[[:space:]]Computer[[:space:]]Control[[:space:]]with[[:space:]]Structured[[:space:]]Reflection/a0a6e275-4726-404f-a447-0adafd25c638_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/ACT-SQL_[[:space:]]In-Context[[:space:]]Learning[[:space:]]for[[:space:]]Text-to-SQL[[:space:]]with[[:space:]]Automatically-Generated[[:space:]]Chain-of-Thought/e5a64e92-4796-4a1a-8132-76dddf1f0f67_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/APP_[[:space:]]Adaptive[[:space:]]Prototypical[[:space:]]Pseudo-Labeling[[:space:]]for[[:space:]]Few-shot[[:space:]]OOD[[:space:]]Detection/3af0b555-df5b-4b18-a998-77ba61a4930c_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/ARKitSceneRefer_[[:space:]]Text-based[[:space:]]Localization[[:space:]]of[[:space:]]Small[[:space:]]Objects[[:space:]]in[[:space:]]Diverse[[:space:]]Real-World[[:space:]]3D[[:space:]]Indoor[[:space:]]Scenes/9d2d43eb-57ab-4c45-85e3-3b1baced78e6_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/ASPIRO_[[:space:]]Any-shot[[:space:]]Structured[[:space:]]Parsing-error-Induced[[:space:]]ReprOmpting[[:space:]]for[[:space:]]Consistent[[:space:]]Data-to-Text[[:space:]]Generation/e52a8117-87c0-4cd3-84ab-fcf1cc33cdf9_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/ASSERT_[[:space:]]Automated[[:space:]]Safety[[:space:]]Scenario[[:space:]]Red[[:space:]]Teaming[[:space:]]for[[:space:]]Evaluating[[:space:]]the[[:space:]]Robustness[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Models/8fe0a4b0-6ae1-4e8a-a11f-345dc122b949_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/Accelerating[[:space:]]Multiple[[:space:]]Intent[[:space:]]Detection[[:space:]]and[[:space:]]Slot[[:space:]]Filling[[:space:]]via[[:space:]]Targeted[[:space:]]Knowledge[[:space:]]Distillation/f16b73f7-5f29-464a-8a54-5c1e1dad9dad_origin.pdf filter=lfs diff=lfs merge=lfs -text 
+2023/Accuracy[[:space:]]is[[:space:]]not[[:space:]]enough_[[:space:]]Evaluating[[:space:]]Personalization[[:space:]]in[[:space:]]Summarizers/b5c480b9-c0cf-42cb-8d63-001436d251fb_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/Active[[:space:]]Learning[[:space:]]Principles[[:space:]]for[[:space:]]In-Context[[:space:]]Learning[[:space:]]with[[:space:]]Large[[:space:]]Language[[:space:]]Models/c9f86532-1018-45a4-a80e-747592411e6d_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/AdaTranS_[[:space:]]Adapting[[:space:]]with[[:space:]]Boundary-based[[:space:]]Shrinking[[:space:]]for[[:space:]]End-to-End[[:space:]]Speech[[:space:]]Translation/2910c3af-b7a9-4cab-b640-ffd3aa82e0c6_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/Adaptation[[:space:]]with[[:space:]]Self-Evaluation[[:space:]]to[[:space:]]Improve[[:space:]]Selective[[:space:]]Prediction[[:space:]]in[[:space:]]LLMs/f7d2433b-15b6-4b85-939e-afd16b6d78ec_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/Adapter[[:space:]]Pruning[[:space:]]using[[:space:]]Tropical[[:space:]]Characterization/42a77f92-f51b-4a4c-822f-e51497eb78bc_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/Adapter-TST_[[:space:]]A[[:space:]]Parameter[[:space:]]Efficient[[:space:]]Method[[:space:]]for[[:space:]]Multiple-Attribute[[:space:]]Text[[:space:]]Style[[:space:]]Transfer/4a33baff-242c-4747-a6f9-e0cf8517f0aa_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/Adapting[[:space:]]Pretrained[[:space:]]Text-to-Text[[:space:]]Models[[:space:]]for[[:space:]]Long[[:space:]]Text[[:space:]]Sequences/1b1c2a00-dc3f-4735-b793-a47c561f3d44_origin.pdf filter=lfs diff=lfs merge=lfs -text diff --git a/2023/1-PAGER_ One Pass Answer Generation and Evidence Retrieval/ed6283a9-c47a-45be-a3e5-a228ad5db48e_content_list.json b/2023/1-PAGER_ One Pass Answer Generation and Evidence Retrieval/ed6283a9-c47a-45be-a3e5-a228ad5db48e_content_list.json new file mode 100644 index 
0000000000000000000000000000000000000000..f2a42294417ac774d3c2d7610ada01783db75d7f --- /dev/null +++ b/2023/1-PAGER_ One Pass Answer Generation and Evidence Retrieval/ed6283a9-c47a-45be-a3e5-a228ad5db48e_content_list.json @@ -0,0 +1,2126 @@ +[ + { + "type": "text", + "text": "1-PAGER: One Pass Answer Generation and Evidence Retrieval", + "text_level": 1, + "bbox": [ + 164, + 90, + 831, + 109 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Palak Jain1 Livio Baldini Soares2 Tom Kwiatkowski2", + "bbox": [ + 228, + 140, + 771, + 156 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{1}$ Google Research $^{2}$ Google Deepmind", + "bbox": [ + 319, + 158, + 680, + 175 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "{palakj, liviobs, tomkwiat}@google.com", + "bbox": [ + 319, + 175, + 685, + 192 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 260, + 252, + 339, + 267 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "We present 1-PAGER the first system that answers a question and retrieves evidence using a single Transformer-based model and decoding process. 1-PAGER incrementally partitions the retrieval corpus using constrained decoding to select a document and answer string, and we show that this is competitive with comparable retrieve-and-read alternatives according to both retrieval and answer accuracy metrics. 1-PAGER also outperforms the equivalent 'closed-book' question answering model, by grounding predictions in an evidence corpus. While 1-PAGER is not yet on-par with more expensive systems that read many more documents before generating an answer, we argue that it provides an important step toward attributed generation by folding retrieval into the sequence-to-sequence paradigm that is currently dominant in NLP. 
We also show that the search paths used to partition the corpus are easy to read and understand, paving a way forward for interpretable neural retrieval.", + "bbox": [ + 141, + 279, + 460, + 592 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 114, + 604, + 260, + 619 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "In recent times, there has been a push to reformulate a wide variety of tasks from NLP and other domains into the sequence-to-sequence paradigm, to make use of large pre-trained Transformer networks (Vaswani et al., 2017). However, despite evidence that large language models can often answer questions (Roberts et al., 2020), predict identifiers of documents that support those answers (Tay et al., 2022), or generate text that contains and explains an answer (Yu et al., 2022) the dominant paradigm in question answering is still the retrieve-and-read approach that pipelines separate retrieval and answer generation modules. This approach has the benefit that it can provide direct and targeted paragraph-level attribution for the generated answers (Bohnet et al., 2022). However, it also relies on a heterogenous mix of models that are hard to train in concert (Metzler et al., 2021).", + "bbox": [ + 112, + 629, + 489, + 917 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/64c5c707b66c4d979c376511847a2f6ef5534751d30fb0db42801b4d3d7ac625.jpg", + "image_caption": [ + "Figure 1: Example 1P output that iteratively partitions the corpus into sub-sets containing the generated n-grams. The last n-gram is taken as the answer." 
+ ], + "image_footnote": [], + "bbox": [ + 515, + 255, + 884, + 437 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Motivated by the observation that language model decoders already perform search over possible sequences (Graves, 2012), and that evidence documents themselves are simply sequences of tokens, we present an alternative approach that relies on a single Transformer model. In this approach, which we name 1-PAGER (One Pass Answer Generation and Evidence Retrieval) or simply 1P, the decoder iteratively partitions a corpus of evidence documents by generating a search path consisting of a set of keywords that identify relevant documents and an answer string that is contained in at least one of these documents. With 1P, we aim to explore the spectrum between CBQA, where the answer is generated without reference to an evidence corpus, and pipelined approaches that feed retrieved documents into the transformer.", + "bbox": [ + 507, + 516, + 884, + 788 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Figure 1 illustrates an example in which the corpus is iteratively partitioned into documents that contain the string 'Economy of India', then those that also contain the string 'Agriculture', and finally those that also contain the answer string '23%'.", + "bbox": [ + 507, + 790, + 884, + 869 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1P output sequences are guaranteed to match at least one document in the evidence corpus. 
This is enforced via a constrained decoder that has ac", + "bbox": [ + 507, + 871, + 884, + 917 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "14529", + "bbox": [ + 475, + 927, + 524, + 940 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 14529-14543", + "bbox": [ + 208, + 945, + 786, + 958 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "December 6-10, 2023 ©2023 Association for Computational Linguistics", + "bbox": [ + 277, + 958, + 719, + 972 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "cess to an FM-index representation of the evidence corpus contents (Ferragina and Manzini, 2000) and we evaluate 1P's ability to correctly answer open-domain questions while also retrieving passages that provide support for those answers (Bohnet et al., 2022). Since 1P is the first model that can do both of these tasks, we compare to pipelined systems that first retrieve a single passage and then generate an answer based on this evidence passage. 1P is competitive as a passage retriever, performing similarly to a widely used dense retriever (Karpukhin et al., 2020) and outperforming the SEAL system which independently generates keywords rather than a search path (Bevilacqua et al., 2022). 1P also outperforms an equivalent closed-book question answering (CBQA) model (Roberts et al., 2020) according to answer accuracy. 
Part of this improvement comes from the prediction of search paths themselves, reminiscent of chain-of-thought reasoning (Wei et al., 2022), and part is from 1P's constrained decoder, which forces the model to generate answers from passages that contain the keywords.", + "bbox": [ + 115, + 85, + 490, + 453 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "While 1P does not yet perform as well as the very best retrieval or open-domain question answering systems in terms of accuracy, the fact that it is competitive with pipelined systems that are trained with the same data and which use similar amounts of inference-time compute suggests a promising path ahead. Unlike those systems, 1P can be trained end-to-end along with any other task that fits into the sequence-to-sequence paradigm. Additionally, 1P search paths are inherently interpretable, unlike embeddings used in dense retrieval.", + "bbox": [ + 115, + 456, + 489, + 632 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2 Related Work", + "text_level": 1, + "bbox": [ + 115, + 649, + 267, + 664 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "\"Retrieve-and-read\" Question Answering Question answering approaches in NLP are dominated by the \"retrieve-and-read\" paradigm where a retriever first fetches hundreds of relevant documents from a corpus, followed by a language model that reranks and extracts the answer (Harabagiu et al., 2003; Chen et al., 2017; Zhu et al., 2021). Sparse retrievers such as BM25 (Robertson et al., 2009) build a high-dimensional lexical index over text corpus. Dense retrievers (Karpukhin et al., 2020) use a dual encoder architecture to embed the query and document and perform an approximate nearest neighbor search. 
Various modifications to dense retrieval have been proposed over the years includ", + "bbox": [ + 115, + 677, + 485, + 917 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "ing hard negative training (Xiong et al., 2020), late interaction (Khattab and Zaharia, 2020; Santhanam et al., 2022), few-shot learning (Izacard et al., 2022), joint retriever and reader training (Jiang et al., 2022).", + "bbox": [ + 512, + 84, + 880, + 162 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "A particular variant of interest is the Iterative Retrieval process where the query is reformulated incrementally (Das et al., 2019; Lee et al., 2022) leading to an interactive search process (Jiang et al., 2023; Adolphs et al., 2021). This query augmentation scheme has similarities with our use of search paths. However, we use the paths to iteratively partition the corpus while prior works have used it for refining the query.", + "bbox": [ + 512, + 166, + 880, + 310 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To perform well, retrieve-and-read systems will typically retrieve 10s to 100s of passages that must be processed by a language model. In constraint, 1P retrieves and extracts an answer in a single pass of language model generation.", + "bbox": [ + 512, + 313, + 880, + 391 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Closed Book Question Answering With data and parameter scale, LLMs in a closed-book setting (CBQA) have shown competitive performance (OpenAI, 2023; Anil et al., 2023; Yu et al., 2023) to retrieve pipelines (ODQA), however without producing any attributed passages (Rashkin et al., 2021; Bohnet et al., 2022). An extension of CBQA is post-hoc retrieval where a large language model LLM) is first used to generate an answer and then evidence for the question-answer pair is fetched by a retriever (Gao et al., 2023a; Bohnet et al., 2022). 
While post-hoc retrieval serves the same goal as 1P, it still uses a pipeline of LLM and retriever to do so.", + "bbox": [ + 512, + 407, + 880, + 629 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Generative Retrieval Recently, generative retrieval has emerged as an alternative to the conventional \"retrieve-and-read\" pipeline (Metzler et al., 2021). Genre (De Cao et al., 2021) performed generative entity linking by constraining the model's decoding to a set of entities. DSI (Tay et al., 2022) showed one of the first proofs of an LLM's ability to memorize docids in the corpus. However, atomic ids or hierarchical clusters, as used in DSI, are opaque identifiers and capture limited information. Works such as SEAL (Bevilacqua et al., 2022) and Ultron (Zhou et al., 2022) use a semantically richer representation: keywords in the document. In particular, SEAL constrains the generation to only keywords in the corpus using the FM-index (Ferragina and Manzini, 2000), a key data structure we borrow in this work.", + "bbox": [ + 512, + 646, + 880, + 916 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "14530", + "bbox": [ + 478, + 928, + 524, + 940 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/8dcfa0acd2a903129c9e6dbe6563a933e6f4602c2c9e926949dec9381190afc9.jpg", + "image_caption": [ + "Figure 2: System illustration of different QA systems. From left to right: CBQA, 1-PAGER, SEAL, Retrieve-and-Read system. $C$ denotes the retrieval corpus, $P$ a retrieved passage, $Q$ the input question and $A$ , the generated answer. 1P is closest to CBQA (only single model used) but it also outputs a passage retrieved from $C$ ."
+ ], + "image_footnote": [], + "bbox": [ + 122, + 80, + 877, + 171 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "1P represents docids as keyword paths, which are arguably more interpretable, and learns a soft partition over the corpus instead of the hard partition imposed by DSI's clustering.", + "bbox": [ + 112, + 249, + 487, + 313 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Another crucial distinction is 1P's ability to both retrieve and generate an answer while prior works rely on a external re-ranker/reader for the same. A high-level view of various question-answering systems is presented in Figure 2.", + "bbox": [ + 112, + 313, + 487, + 394 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Attributed Question Answering Standard metrics for open-domain question answering, such as exact match or token-based F1, have received criticism for being imprecise and/or insufficient. Several efforts have proposed augmenting answers with textual evidence, via retrieval or citations (Bohnet et al., 2022; Menick et al., 2022; Gao et al., 2023b). While this work does not directly evaluate the quality of retrieved answer evidence, our proposed model inherently produces a passage to support the final answer, along with a search path of keywords, which could be used to provide users with answer evidence.", + "bbox": [ + 112, + 403, + 489, + 612 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3 Iterative Corpus Partitioning and Answer Prediction", + "text_level": 1, + "bbox": [ + 112, + 625, + 436, + 657 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "We focus on the problem of learning a mapping $f(q, D) \\to (a, d_a)$ from a question $q$ and corpus of documents $D$ to an answer and supporting document $(a, d_a)$ . The predicted document $d_a$ is retrieved from $D$ and the answer $a$ is a sub-string of $d_a$ . 
The document $d_a$ should be relevant to the question and provide evidence for answer.", + "bbox": [ + 112, + 668, + 487, + 778 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The goal of this paper is to model the function $f$ using a single sequence-to-sequence model, rather than a pipeline which first retrieves $d_{a}$ and then feeds it into an answer generation module. To achieve our goal, we recast retrieval as an iterative corpus partitioning process illustrated in Figure 3.", + "bbox": [ + 112, + 781, + 487, + 878 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Iterative corpus partitioning adopts the LM decoder's autoregressive search process to partition", + "bbox": [ + 112, + 887, + 489, + 919 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "$D$ by predicting n-gram keywords.", + "bbox": [ + 509, + 249, + 766, + 265 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "An n-gram of tokens $k$ is said to be contained in a document $d$ , denoted by $k \\prec d$ , when $k$ is a sub-sequence of $d$ . We define a keyword corpus partitioning function", + "bbox": [ + 507, + 265, + 882, + 329 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {F} (D, k) = \\{d | k \\prec d; d \\in D \\}\n$$\n", + "text_format": "latex", + "bbox": [ + 581, + 338, + 808, + 357 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "that selects only those documents that contain $k$ . 1-PAGER iteratively partitions the corpus $D$ by generating a sequence of n-grams that we refer to as a Search Path $p_t = [k_1, k_2, \\dots, k_t]$ . 
Each prefix of this search path defines a subset of $D$ via the search path corpus partitioning function", + "bbox": [ + 507, + 366, + 882, + 463 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {P} (D, p _ {t}) = D _ {p _ {t}} = \\{\\cap_ {i \\in [ 1, t ]} \\mathcal {F} (D, k _ {i}) \\}\n$$\n", + "text_format": "latex", + "bbox": [ + 549, + 472, + 838, + 492 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "and each subsequent keyword $k_{t+1}$ narrows down $D_{p_t}$ into further sub-spaces such that $D_{p_{t+1}} \\subseteq D_{p_t}$ .", + "bbox": [ + 507, + 500, + 882, + 533 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Answer prediction is treated in exactly the same way as keyword selection and in 1P the last keyword from $p$ is taken as the answer.", + "bbox": [ + 507, + 539, + 882, + 588 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "4 Constrained Decoding and FM-Index", + "text_level": 1, + "bbox": [ + 507, + 599, + 867, + 615 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "To avoid generating empty partitions, we constrain 1-PAGER to only decode search paths that match at least one document. 
We modify the decoder's beam-search strategy to only allow keyword continuations that are contained in the current partition.", + "bbox": [ + 507, + 625, + 882, + 703 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Given a document subset $D_{p_i}$ , which could be the full corpus $D$ at the start of decoding $(i = 0)$ and a keyword prefix $k$ , which could be empty, the set of all valid continuation tokens is defined as,", + "bbox": [ + 507, + 705, + 882, + 768 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {C} (k, D _ {p _ {i}}) = \\{x | k \\| x \\prec d, d \\in D _ {p _ {i}} \\}\n$$\n", + "text_format": "latex", + "bbox": [ + 554, + 778, + 835, + 797 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $x$ is any vocabulary token and $\\| \\cdot \\|$ indicates concatenation of two token sequences. As a special case, when $k = \\phi$ and $i = 0$ , all tokens in $D$ are valid continuations. 1P separates keywords in $p_T$ with a special separator token $\\rightarrow$ and marks the end of the sequence with an EOS token. These two tokens are always valid continuations.", + "bbox": [ + 507, + 806, + 882, + 917 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "14531", + "bbox": [ + 477, + 927, + 522, + 940 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/ee9b1d9383128dc9e50e582c09f5501209e0eb4fb1f761fd7f474a4bf66331d8.jpg", + "image_caption": [ + "Figure 3: Illustration of the 1P decoding process. A keyword can only be generated from the documents matching previously generated keywords. Right panel shows a magnified view of applying constraints to a decoding step. Any keyword not present in the documents is masked out." + ], + "image_footnote": [], + "bbox": [ + 126, + 84, + 877, + 336 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Consider Figure 3. 
The three keywords correspond to the decoded token sequence [Ten, Commandments, $\\rightarrow$ , twice, in, the, Hebrew, Bible, $\\rightarrow$ , books, of, Exodus, EOS]. At the start of decoding, any token in $D$ is allowed. After decoding Ten, only those tokens that follow Ten as an n-gram in $D$ are allowed, along with the default separators. After decoding [Ten, Commandments, $\\rightarrow$ ] we are ready to start a new keyword, but only tokens from documents that contain the keyword Ten Commandments are allowed. Decoding continues in this manner until EOS is generated.", + "bbox": [ + 112, + 418, + 489, + 611 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "To efficiently implement these constraints, we need a data-structure that can quickly determine both $\\mathcal{C}(k,D_p)$ , the continuation tokens given a document set and $\\mathcal{P}(D_p,k)$ , the subset of documents that contain a given path.", + "bbox": [ + 112, + 613, + 489, + 692 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "For this, we extend the usage of an FM-index (Ferragina and Manzini, 2000) as described in (Bevilacqua et al., 2022). The FM-index is a compressed token-based index over a corpus $D_0$ with a few important properties for our usage: (1) it can efficiently list possible token continuations for a sequence prefix that occur in $D_0$ i.e., $\\mathcal{C}(k,D_0)$ , (2) it can list the set of documents in the corpus that match an n-gram i.e., $\\mathcal{F}(D_0,k)$ , and (3) it supports search over arbitrary n-grams that occur within documents. Note that the FM-index operations are optimized for $D_0$ , the original corpus it is built over. 
We extend these to an arbitrary $D_p \\subset D_0$ at additional cost described in Appendix A.1.", + "bbox": [ + 112, + 694, + 489, + 919 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "5 Training data generation", + "text_level": 1, + "bbox": [ + 509, + 418, + 759, + 434 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "For training 1P, we produce a dataset with examples of queries and search paths as described above. At a high-level, we generate search paths by iteratively selecting n-grams from an answer passage, and simulating, using the FM-Index of the retrieval corpus, the partitioning of the corpus after selecting each keyword, until only a few documents remain. Finally, the answer span $a$ is appended to the search path. Each example produced can be serialized as sequence-to-sequence pair of inputs and targets as:", + "bbox": [ + 507, + 444, + 884, + 605 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "inputs: Generate keywords for: $$ ?", + "bbox": [ + 509, + 612, + 836, + 627 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "targets: K_SEP $k_{0}$ K_SEP $k_{1}$ ... K_SEP A_SEP a EOS", + "bbox": [ + 509, + 630, + 880, + 643 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "5.1 Keyword Selection", + "text_level": 1, + "bbox": [ + 507, + 656, + 702, + 671 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "A good keyword should have a) high relevance to the query and b) effectively narrow down the search space. To identify relevant keywords, we restrict to only the gold document $g$ . All ngrams in $g$ of length up to five are extracted. Irrelevant keywords are filtered out such as those starting or ending with stop words. Similarly, keywords that are too rare in the corpus, e.g., \"Philippines at Luzon\" or too frequent, e.g., \"part\" are excluded based on a threshold on their count in corpus. 
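The n-gram extraction and filtering steps just described can be sketched as follows. The stop-word list, the count thresholds, and the `corpus_count` callback are illustrative assumptions, not the paper's exact configuration:

```python
import re

STOP_WORDS = {"the", "a", "an", "of", "at", "in", "on", "and", "to"}  # illustrative subset

def candidate_keywords(gold_doc, corpus_count, max_n=5, min_count=5, max_count=10000):
    """Extract n-grams (n <= max_n) from the gold document, dropping those
    that start/end with a stop word or are too rare/frequent in the corpus."""
    tokens = re.findall(r"\w+", gold_doc.lower())
    candidates = set()
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            ngram = tokens[i:i + n]
            if ngram[0] in STOP_WORDS or ngram[-1] in STOP_WORDS:
                continue  # irrelevant keywords with stop-word boundaries
            phrase = " ".join(ngram)
            if min_count <= corpus_count(phrase) <= max_count:
                candidates.add(phrase)
    return candidates
```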
The remaining keywords are scored with a combination of heuristics, mainly Rouge-1 similarity with the query (Lin, 2004), along with a small bonus for keywords containing entities and a penalty for keywords that are highly frequent in the corpus.", + "bbox": [ + 507, + 677, + 884, + 919 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "14532", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "This scoring mechanism often misses out on keywords that are semantically relevant, but do not lexically overlap with the query. To boost the relevance of our keyword set, we re-score the top hundred keywords using a language model. A T5-XXL model is finetuned with the input as the query $q$ and target as either the title or a heuristically sampled keyword in a similar fashion to Bevilacqua et al. (2022). The heuristically sampled keywords are re-scored using this model to obtain a refined LM-scored set. Two other special types of keywords are awarded high scores: the title of the gold passage and the keyword containing the answer string $a$ .", + "bbox": [ + 112, + 84, + 492, + 294 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "5.2 Search Paths", + "text_level": 1, + "bbox": [ + 112, + 307, + 265, + 322 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The first keyword in a search path needs to effectively partition the corpus. We experiment with either the title or the highest scored keyword from the gold passage as the first keyword in the path. The next keywords are sampled based on their score, provided they do not overlap with any of the existing keywords in the path. We continue augmenting a path $p$ with keywords until fewer than ten passages in the corpus match, i.e., $|D_p| < 10$ . The answer keyword is then appended to the path. 
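The sampling loop just described can be sketched as a simple procedure. Here `docs_matching`, the scored keyword list, and the word-overlap test are stand-ins for the actual pipeline, not its released implementation:

```python
def sample_search_path(scored_keywords, docs_matching, corpus_ids, answer,
                       first_keyword, max_docs=10):
    """Grow a search path with non-overlapping keywords until fewer than
    `max_docs` corpus passages match, then append the answer keyword."""
    path = [first_keyword]
    candidates = docs_matching(first_keyword) & corpus_ids
    for kw, _score in scored_keywords:  # assumed sorted by descending score
        if len(candidates) < max_docs:
            break
        if any(set(kw.split()) & set(p.split()) for p in path):
            continue  # reject keywords overlapping those already in the path
        narrowed = candidates & docs_matching(kw)
        if narrowed:  # keep only keywords that actually shrink the partition
            path.append(kw)
            candidates = narrowed
    path.append(answer)
    return path, candidates
```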
Our train paths (including the answer) contain a median of three keywords and one matching document.", + "bbox": [ + 112, + 331, + 489, + 524 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "6 Experimental Setup", + "text_level": 1, + "bbox": [ + 112, + 539, + 321, + 557 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "6.1 Datasets", + "text_level": 1, + "bbox": [ + 112, + 568, + 228, + 583 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We use Open-NQ (Kwiatkowski et al., 2019; Lee et al., 2019) as the question-answering dataset for training. For evaluation, besides Open-NQ, WebQuestions (Berant et al., 2013) and CuratedTREC (Baudiš and Šedivý, 2015) are used to measure out-of-domain performance. The FM-Index corpus for constrained decoding is built over DPR Wikipedia corpus with 100-word splits (Karpukhin et al., 2020). The positive gold passages from DPR are used for sampling training paths. This setup is chosen to mirror SEAL and also permits fair comparison against DPR.", + "bbox": [ + 112, + 590, + 489, + 785 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "6.2 Training", + "text_level": 1, + "bbox": [ + 112, + 800, + 230, + 815 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "1P's training dataset contains 310k paths corresponding to 55k queries from Open-NQ. Majority of the training paths begin with the title, with a small fraction starting with other keywords (12%). All keywords, except the title, are scored using the LM-scoring technique described above.", + "bbox": [ + 112, + 822, + 489, + 917 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "For our experiments, we use the T5X (Roberts et al., 2022) framework. A T5-XXL $1.1^{1}$ (Raffel et al., 2020) model is finetuned with a batch size of 256 and dropout of 0.1. No additional hyperparameter tuning is performed. 
We format search paths using the reserved tokens $\\mathsf{K\\_SEP} = \"$ \" and $\\mathsf{A\\_SEP} = \"$ \".", + "bbox": [ + 507, + 84, + 884, + 197 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "6.3 Inference", + "text_level": 1, + "bbox": [ + 507, + 209, + 631, + 223 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Our best model employs beam decoding with a beam of 5. Even when the beam is greater than one, only the top-beam result is used for retrieval. We discuss the effect of beam size in depth in Section 7. Given the top generated path $p$ , $D_{p}$ corresponds to the retrieved documents. In case $|D_{p}| > 1$ , a document is sampled arbitrarily for evaluation.", + "bbox": [ + 507, + 230, + 884, + 342 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "6.4Baselines", + "text_level": 1, + "bbox": [ + 507, + 354, + 628, + 368 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We compare to a closed-book question answering (CBQA) system that generates answers, but does not ground these in an evidence corpus, as well as retrieve-and-read systems that combine a variety of retrievers with a Transformer-based answerer module. Both the CBQA baseline and the answerer module are derived from the same T5-XXL 1.1 pretrained model as 1P.", + "bbox": [ + 507, + 375, + 882, + 502 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "6.4.1 T5-CBQA", + "text_level": 1, + "bbox": [ + 507, + 514, + 650, + 529 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "A T5-XXL 1.1 model is fine-tuned to predict answers from the DPR training set for 10,000 steps with a batch size of 128. Note that it is possible to achieve a higher closed-book performance on NQ using the full Open-NQ training split instead of the subset included in the DPR training set (Roberts et al., 2020). 
However, to enable meaningful comparison we restrict the CBQA baseline to the same training examples used to train 1P.", + "bbox": [ + 507, + 533, + 882, + 678 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "6.4.2 Retrieve-and-Read", + "text_level": 1, + "bbox": [ + 507, + 688, + 717, + 702 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The retrieve-and-read baselines first retrieve a single passage from the evidence corpus, and then feed this passage and the question into the answer generation module2. We report retrieval accuracy for the retrieved passage and answer accuracy for the generated answer.", + "bbox": [ + 507, + 708, + 884, + 804 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "T5-Reader We tune a T5-XXL 1.1 model to generate answers from (question, evidence passage)", + "bbox": [ + 507, + 814, + 882, + 847 + ], + "page_idx": 4 + }, + { + "type": "page_footnote", + "text": "1https://goo.gl/t5-checkpoints", + "bbox": [ + 529, + 854, + 769, + 869 + ], + "page_idx": 4 + }, + { + "type": "page_footnote", + "text": "2This differs from ODQA evaluations that do not include evidence retrieval as a first-class task, where many retrieved passages are fed into a reader that generates an answer without attribution to any single piece of text.", + "bbox": [ + 507, + 869, + 880, + 917 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "14533", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "pairs. This is the same base model used by 1P and we train on the (question, passage, answer) triples in the DPR training split to ensure fair comparison.", + "bbox": [ + 112, + 84, + 489, + 134 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "DPR-Retriever We compare against vanilla DPR finetuned on NQ without hard negatives (Karpukhin et al., 2020) using the pre-computed index available on DPR's repository3. 
We note that our ODQA setup differs from the one used by Karpukhin et al. in that we choose the highest scoring retrieval as evidence for answer generation, instead of generating from the top-100 passages without attribution.", + "bbox": [ + 112, + 143, + 487, + 288 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "BM25-Retriever We use Pyserini toolkit (Lin et al., 2021) with default configurations, retrieving the top-1 passage.", + "bbox": [ + 112, + 300, + 489, + 350 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "SEAL-Retriever SEAL (Bevilacqua et al., 2022) is a generative retrieval system that generates a set of keywords constrained on the corpus. In terms of technique, 1P borrows inspiration from SEAL's use of the FM-Index as well as keywords-as-identifiers. However, the two setups have substantial differences that we highlight in Section 8. We run SEAL with its default configuration and a beam of 5 using the publicly released checkpoint based on Bart-large (Lewis et al., 2020). All outputs from the beam are used for retrieval.", + "bbox": [ + 112, + 359, + 489, + 537 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "6.5 Evaluation", + "text_level": 1, + "bbox": [ + 112, + 550, + 247, + 564 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We evaluate in-domain performance on the OpenNQ test split and out-of-domain performance on WebQuestions (WQ) and CuratedTREC (TREC) following the setup from Karpukhin et al. (2020). 
Passage retrieval performance is measured with Hits@1 using Pyserini evaluation scripts4.", + "bbox": [ + 112, + 571, + 489, + 670 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "6.6 1P configurations", + "text_level": 1, + "bbox": [ + 112, + 682, + 302, + 697 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We experiment with three configurations: a) 1P: Our primary setup that uses both training and constrained decoding procedures described above, producing a retrieved passage as well as an answer. b) 1P-Unconstrained: Only the training technique described in Section 5 is adopted, with standard unconstrained decoding. Since generation is unconstrained, it is possible that no passage gets retrieved for a given path. c) $1\\mathrm{P} +$ Reader: Here, we take the top retrieved passage from 1P and input it to the Reader model (Section 6.4) to extract the answer.", + "bbox": [ + 112, + 703, + 489, + 881 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "7 Results", + "text_level": 1, + "bbox": [ + 509, + 83, + 608, + 99 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/373d92d883e2c3fe39aedc352fd6ec69e48eaa5838f6d02af015818d49684aef.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<tr><th rowspan=2>Retriever</th><th rowspan=2>Answerer</th><th rowspan=2>Retrieval Hits@1</th><th colspan=2>Answer</th></tr>
<tr><th>EM</th><th>F1</th></tr>
<tr><td>-</td><td>T5-CBQA</td><td>-</td><td>26.8</td><td>34.0</td></tr>
<tr><td>BM25</td><td>T5-Reader</td><td>23.6</td><td>17.9</td><td>24.0</td></tr>
<tr><td>SEAL</td><td>T5-Reader</td><td>37.9</td><td>29.4</td><td>35.8</td></tr>
<tr><td>DPR</td><td>T5-Reader</td><td>46.5</td><td>35.6</td><td>42.4</td></tr>
<tr><td>1P</td><td>T5-Reader</td><td>46.3</td><td>34.2</td><td>41.4</td></tr>
<tr><td>1P-Unconstrained</td><td>-</td><td>29.3</td><td>29.3</td><td>36.1</td></tr>
<tr><td>1P</td><td>-</td><td>46.3</td><td>31.7</td><td>38.0</td></tr>
", + "bbox": [ + 510, + 118, + 885, + 288 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/1bfeba8d3638f54c80776ec86b6211ca901b7b47015f1c3e6b40c0416d68f5a7.jpg", + "table_caption": [ + "Table 1: Comparison of different Retriever and Answerer combinations on the NQ-Open test set. In retrieve-and-read setups, answers are generated from the top-1 retrieved passage. 1P combines passage retrieval and answer generation in a single prediction." + ], + "table_footnote": [], + "table_body": "
<tr><th rowspan=2>System</th><th colspan=2>WebQuestions</th><th colspan=2>TREC</th></tr>
<tr><th>Hits@1</th><th>EM</th><th>Hits@1</th><th>EM</th></tr>
<tr><td>BM25 + Rdr</td><td>19.7</td><td>14.2</td><td>35.2</td><td>29.1</td></tr>
<tr><td>DPR + Rdr</td><td>32.0</td><td>17.3</td><td>51.6</td><td>35.0</td></tr>
<tr><td>1P + Rdr</td><td>38.0</td><td>20.4</td><td>63.8</td><td>38.5</td></tr>
<tr><td>1P</td><td>38.0</td><td>20.5</td><td>63.8</td><td>36.4</td></tr>
", + "bbox": [ + 510, + 403, + 895, + 524 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Table 2: Comparison of different Retriever and Answerer combinations on Out-of-domain datasets. Both the Retriever and Answerer (Rdr) are trained on only Open-NQ. In retrieve-and-read setups, answers are generated from the top-1 retrieved passage.", + "bbox": [ + 507, + 533, + 885, + 606 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We compare to the baselines described in Section 6.4 on Open-NQ using both retrieval and answer accuracy metrics in Table 1. Answers are generated based on the top retrieved document in systems that separate retrieval from answer generation, to provide a clean comparison between systems that return (answer, evidence passage) pairs. Table 2 reports the out-of-domain performance of various systems on WQ and TREC.", + "bbox": [ + 505, + 629, + 884, + 772 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "1P outperforms CBQA in question answering and beats the retrieve-and-read systems, BM25 and SEAL. On the passage retrieval task, it significantly improves over BM25 and SEAL. For indomain setting, 1P is competitive with DPR on retrieval task, but lags behind the QA pipeline that uses DPR. However, this appears to be more due to the reader rather than the retriever as discussed in Section 8. 
It is worth noting that 1P generalizes", + "bbox": [ + 505, + 774, + 885, + 919 + ], + "page_idx": 5 + }, + { + "type": "page_footnote", + "text": "3https://github.com/facebookresearch/DPR", + "bbox": [ + 134, + 890, + 440, + 904 + ], + "page_idx": 5 + }, + { + "type": "page_footnote", + "text": "4https://github.com/castorini/pyserini", + "bbox": [ + 134, + 904, + 423, + 917 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "14534", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "significantly better out-of-domain compared to other systems.", + "bbox": [ + 112, + 84, + 487, + 116 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Utility of Search Paths 1P-Unconstrained can be viewed as an extended version of CBQA that generates a search path before predicting the answer. Thus, the improvement of 1P-Unconstrained over CBQA can be attributed to this path-conditioned answer generation process, analogous to chain-of-thought reasoning (Wei et al., 2022; Lampinen et al., 2022).", + "bbox": [ + 112, + 126, + 489, + 255 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/fb7a9f435f7af5f644d5e644283f75948883f6355bcb0aee300fc20d6edc77a6.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<tr><th rowspan=2>System</th><th rowspan=2>Constrained Decoding</th><th colspan=2>Beam</th></tr>
<tr><th>1</th><th>5</th></tr>
<tr><td>CBQA</td><td>No</td><td>26.7</td><td>26.8</td></tr>
<tr><td>1P Unconst.</td><td>No</td><td>29.0</td><td>29.3</td></tr>
<tr><td>SEAL + Reader</td><td>Yes</td><td>28.5</td><td>29.4</td></tr>
<tr><td>1P</td><td>Yes</td><td>28.7</td><td>31.7</td></tr>
", + "bbox": [ + 122, + 277, + 480, + 397 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Effect of Constrained Decoding The purpose of constrained decoding is to ground the answer in an evidence retrieved from the corpus. As expected, the constrained setup enables 1P to achieve a higher Hits@1 than 1P-unconstrained. Surprisingly, when decoding with a beam of one, we observe a small drop in answer accuracy for 1P compared to 1P-Unconstrained (Table 3). Inspecting the losses, two dominant reasons surface. Firstly, As DPR passages are chunked into 100-words (Karpukhin et al., 2020), some queries may become unanswerable given a single passage due to missing context. This is disadvantageous when the model has memorized the answer but there is no single passage to attribute it to.", + "bbox": [ + 112, + 516, + 489, + 757 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Secondly, during constrained decoding, after generating the initial keywords, the search space may soon become sparse with no good candidates to pick from. Could a larger room for planning its actions help the model here? Indeed, increasing the beam size to 5 improves performance by $3\\%$ (Table 3), even when only the top-beam is used for retrieval. We refer to this as Planning, since the larger beam only enables the model to plan better and the remaining beam outputs are otherwise dis", + "bbox": [ + 112, + 758, + 489, + 919 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "carded. Note that unconstrained decoding does not gain from planning. In the final setup in Table 1, we use a beam of 5 for both 1P and SEAL. 
Unlike 1P, SEAL uses all the outputs from the larger beam for retrieval.", + "bbox": [ + 507, + 84, + 884, + 164 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "8 Discussion and Ablations", + "text_level": 1, + "bbox": [ + 507, + 178, + 761, + 193 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Generating Answers While 1P is capable of generating answers, Table 1 highlights that it falls behind the 1P+Reader. The reason seems to be clear: the Reader has visibility into the full passage context while 1P is limited to the decoded search path and the constrained index which only ensures that generations are grounded in the corpus. Since 1P does retrieve passages, it would be possible to pull in the corresponding text as input for answer generation. We leave this as future work.", + "bbox": [ + 507, + 204, + 884, + 365 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Comparison to SEAL While 1P takes inspiration from SEAL, in practice, there are a few key differences between the two systems aside from 1P's answer generation.", + "bbox": [ + 507, + 376, + 882, + 439 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "SEAL generates a large set of keywords (Table 4) using many separate decodes and heuristic guidance (Appendix A.3). In contrast, 1P decodes a single sequence of about three keywords.", + "bbox": [ + 507, + 441, + 882, + 505 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/bd58d3bc2f48fa3e4ce0e928bbafbdfc664beeb698e29c15813aa140e2b04146.jpg", + "table_caption": [ + "Table 3: EM for various decoding setups with different beam sizes on Open-NQ. Only top-beam result is used for evaluation, except in SEAL which uses all beam outputs. 1P constrained decoding benefits the most from a large beam whereas Unconstrained setups have only a slight effect." + ], + "table_footnote": [], + "table_body": "
<tr><th></th><th>SEAL</th><th>1P</th></tr>
<tr><td>Median keywords</td><td>32</td><td>3</td></tr>
<tr><td>Median docs retrieved</td><td>500</td><td>1</td></tr>
<tr><td>Generates answer</td><td>✗</td><td>✓</td></tr>
", + "bbox": [ + 547, + 517, + 847, + 599 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Table 4: Key differences between SEAL and 1P measured over Open-NQ test split with a beam of 1.", + "bbox": [ + 507, + 609, + 882, + 638 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "The SEAL keywords are a set, decoded independently of each other and re-scored using sophisticated techniques to retrieve a large number of documents. For instance, the default configuration in SEAL retrieves up to 500 documents. This makes SEAL suitable to be employed in conjunction with a re-ranker. In contrast, 1P search path's map directly to a single (or few) relevant documents (Appendix A.6).", + "bbox": [ + 507, + 661, + 884, + 804 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We acknowledge the model-size variation between SEAL and 1P in the reported experiments, however we preferred using the publicly available SEAL checkpoint. Given the discrepancies with larger beam-size, multiple decodes and use of Reader model, it is difficult to have an apples to apples comparison between the two systems.", + "bbox": [ + 507, + 806, + 882, + 919 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "14535", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Path vs Keyword set We qualitatively observe that keywords in a 1P path, owing to sequential generation, are distinct and add new information as compared to the SEAL output set where overlapping keywords are common (Appendix A.3). Thus, paths are advantageous for precisely narrowing down to a single relevant document while keyword sets are effective for retrieving a large number of documents that can later be reranked. 
This is corroborated by the fact that 1P is better at Hits@1 while SEAL is better at Hits@5 (Appendix A.4).", + "bbox": [ + 112, + 84, + 489, + 261 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Qualitative Analysis Table 5 illustrates patterns of Search Paths generated by 1P. We note some of the common path patterns here:", + "bbox": [ + 112, + 271, + 487, + 318 + ], + "page_idx": 7 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1) First keywords are entities in the query, followed by query predicates that iteratively narrow down towards an answer. This is the most common type of path observed and can be attributed to the dominant presence of title in the training data.", + "2) Rewrites of the original query or related predicates such as \"seasons consists of\", \"appeared on ...\". Such paths are more prevalent where there is no canonical entity in the query or no entity can be determined with high confidence.", + "3) Answer is directly generated followed by supporting keywords that guide towards an attributed passage. This happens in a small fraction of cases, likely where the pretrained model has memorized an answer with high confidence." + ], + "bbox": [ + 112, + 319, + 487, + 560 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Overall, we find the generated search paths to be fairly meaningful and interpretable.", + "bbox": [ + 112, + 562, + 487, + 594 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Sampling Search Paths for Training Table 6 highlights that high quality keywords are crucial to performance. The LM re-scored set of keywords result in significant accuracy gain over heuristically sampled keywords. Paths with first keyword as Title boost performance further. 
Mixing in a small fraction of paths starting with non-title keywords encourages the model to generate predicates where no entity can be determined, giving us the best results.", + "bbox": [ + 112, + 604, + 489, + 763 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Sensitivity to tokenization We find that constrained decoding is highly sensitive to rare tokenization or punctuation formatting in the corpus. Consider the query \"who sang i ran all the way home\" with the gold document title \"Sorry (I Ran All the Way Home)\". In the unconstrained setup, the model's top prediction starts with \"I Ran All the Way Home\". However, \"(I\" is tokenized differently from \"I\" and searching over the FM-Index", + "bbox": [ + 112, + 774, + 489, + 917 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "returns no match. As a result, constrained decoding drops the predicted keyword altogether, resorting to lower ranked keywords in the beam. We partially fix the issue by modifying the answer in a fraction of the training data to include surrounding punctuation tokens based on how they appear in the FM-index. For instance, the keyword \"I Ran ...\" would update to \"(I Ran ...)\". This simple change leads to a jump in answer accuracy from $26.4\\%$ to $28.7\\%$ . 
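A minimal sketch of this punctuation-augmentation step over raw strings follows; the helper name and the punctuation character sets are assumptions, and the actual system operates on FM-index tokenizations rather than plain text:

```python
def surface_form(keyword, corpus_text):
    """Expand a keyword to include adjacent punctuation as it appears in the
    corpus, e.g. "I Ran ..." -> "(I Ran ...)" when the corpus only contains
    the parenthesized form."""
    i = corpus_text.find(keyword)
    if i == -1:
        return keyword  # keyword absent; leave it unchanged
    start, end = i, i + len(keyword)
    # absorb punctuation immediately surrounding the match
    while start > 0 and corpus_text[start - 1] in "([\"'":
        start -= 1
    while end < len(corpus_text) and corpus_text[end] in ")]\"'":
        end += 1
    return corpus_text[start:end]
```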
However, much more work is needed to make 1P robust to variations in tokenization.", + "bbox": [ + 507, + 84, + 884, + 260 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "See Appendix A.2 for analysis of training data size and Appendix A.5 for masking logits vs log-probs.", + "bbox": [ + 507, + 262, + 884, + 309 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Conclusion", + "text_level": 1, + "bbox": [ + 509, + 323, + 611, + 338 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We introduce 1-PAGER, the first system to perform question answering and passage retrieval in one pass with a single language model, using a constrained decoder to iteratively partition the retrieval corpus and then generate an answer. We show competitive or improved performance over a variety of comparable baselines and carefully analyze the results, ablating both training strategies and decoding style. We also provide a qualitative analysis of predictions to illustrate the system's capabilities. Challenges with constrained decoding are surfaced including poor search spaces and sensitivity to tokenization and mitigation strategies are presented.", + "bbox": [ + 507, + 349, + 884, + 557 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We hope that 1P adds value in demonstrating how a single transformer model can be harnessed to do both retrieval and answering and pave the path for further progress in the generative retrieval domain.", + "bbox": [ + 507, + 558, + 882, + 637 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Limitations", + "text_level": 1, + "bbox": [ + 509, + 651, + 615, + 665 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "1P is geared towards identifying a concise, small set of documents and generating answer in a single go. While this makes the architecture simpler, it also adds certain weaknesses. 
1P is not effective for retrieving a large number of documents and falls behind pipelined systems that combine retrieval with re-ranking. Even for a single passage, it lags behind state-of-the-art dense-retrieval techniques. 1P's method of answer generation is also not competitive with the use of a reader, due to lack of passage context.", + "bbox": [ + 507, + 677, + 884, + 853 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Our training strategy relies heavily on titles or entities and it's generalization on corpora without rich structure or on queries without central entities, remains to be studied.", + "bbox": [ + 507, + 854, + 882, + 917 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "14536", + "bbox": [ + 477, + 928, + 524, + 941 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/6615d0859b5cffb3239259a2a7b25b02962c66e39f3ea5a0456551d6295b932b.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<tr><th>Query (Q) and Generated Search Path (SP)</th><th>Comment</th></tr>
<tr><td colspan=2>Correctly attributed passages and answers</td></tr>
<tr><td>Q: how many episodes of greys anatomy season 14\nSP: Grey's Anatomy (season 14) » season consists of 24 episodes » 24</td><td>Query entity resolved first, followed by query predicates</td></tr>
<tr><td>Q: when did they start adding zinc to pennies\nSP: Penny (United States coin) » zinc » Lincoln cent » 1943</td><td>Query entity resolved iteratively</td></tr>
<tr><td>Q: who was executed for being an american spy during the revolutionary war\nSP: Nathan Hale » Army during the American Revolutionary » Nathan Hale</td><td>Answer generated first</td></tr>
<tr><td>Q: who was the grandfather on the Cosby show\nSP: appeared on "The Cosby » Earle Hyman</td><td>Query rewrites</td></tr>
<tr><td colspan=2>Incorrect Passage or Answer</td></tr>
<tr><td>Q: who decides the number of judges in the high court\nSP: judge is appointed » High Court » Chief Justice of India\nA: President of India</td><td>Path correctly resolved,\nfailed on answer</td></tr>
<tr><td>Q: when did the isle of wight become an island\nSP: Isle of Wight » 1890 » 1890\nA: During the last Ice Age</td><td>Query entity resolved,\nfailed on supporting keywords</td></tr>
<tr><td>Q: love yourself by justin bieber is about who\nSP: Love Yourself: Her » music video » Her\nA: Rihana</td><td>Failed to resolve\nquery entity</td></tr>
", + "bbox": [ + 115, + 80, + 884, + 361 + ], + "page_idx": 8 + }, + { + "type": "table", + "img_path": "images/760f38de5fe54c729d356498f7cec5a4a204ee9a0a6106e83a20b90ae1fec2f1.jpg", + "table_caption": [ + "Table 5: Example 1P Search Paths (SP) on Open-NQ test set. The last keyword in SP is the predicted answer. Gold answers are indicated by A." + ], + "table_footnote": [], + "table_body": "
<tr><th>Search Path</th><th>Hits@1</th><th>EM</th></tr>
<tr><td>Heuristic</td><td>34.5</td><td>22.6</td></tr>
<tr><td>LM-scored</td><td>40.0</td><td>27.2</td></tr>
<tr><td>Title » LM-scored</td><td>41.9</td><td>28.0</td></tr>
<tr><td>Title » LM-scored + LM-scored (7+1)</td><td>42.9</td><td>28.7</td></tr>
", + "bbox": [ + 154, + 428, + 450, + 542 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Table 6: Comparison of Training Search Paths on OpenNQ. Here LM-scored denotes re-scoring by LM on a heuristic set. All results are with a beam of one. \"»\" indicates keyword separator and \"+\" mixture of path types in the give ratio.", + "bbox": [ + 112, + 551, + 489, + 625 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Constrained decoding also comes with its own challenges. Constrained beam outputs often lack diversity, so that even with a larger beam one may still end up in poor search spaces. Computing document-level constraints across the corpus is expensive as it may require scanning a large number of rows in the index. Further, communication between FM-Index and Transformer model slows down inference.", + "bbox": [ + 112, + 659, + 489, + 803 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Acknowledgement", + "text_level": 1, + "bbox": [ + 114, + 824, + 278, + 840 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "We thank Don Metzler, Nicholas FitzGerald, Partha Talukdar, Srini Narayanan, as well as our anonymous reviewers, for their thoughtful comments and valuable feedback", + "bbox": [ + 112, + 854, + 489, + 917 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Ethical Considerations", + "text_level": 1, + "bbox": [ + 509, + 430, + 712, + 445 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "While Large Language Models can solve a wide range of tasks effectively, they also suffer from biases across axis such as gender, race, region (Chan, 2023). LLMs are also prone to generating toxic content, especially when probed about it. 
Although our task grounds the model's generations in a corpus, some of the biases present in pre-trained LLMs may seep into 1-PAGER.", + "bbox": [ + 507, + 457, + 884, + 585 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Building the FM-index and constrained decoding is a compute-intensive affair. We have experimented over a single dataset, Natural Questions, involving only knowledge-seeking queries, and a single model family, T5. It is possible that some of our findings may not hold over other datasets or model families. Finally, our experiments are limited to an English corpus and queries. The proposed approaches are resource-intensive and may not be accessible or valid for several low-resourced languages.", + "bbox": [ + 507, + 587, + 884, + 747 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 510, + 776, + 608, + 791 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Leonard Adolphs, Benjamin Boerschinger, Christian Buck, Michelle Chen Huebscher, Massimiliano Ciaramita, Lasse Espeholt, Thomas Hofmann, Yannic Kilcher, Sascha Rothe, Pier Giuseppe Sessa, et al. 2021. Boosting search engines with interactive agents. arXiv preprint arXiv:2109.00527.", + "bbox": [ + 509, + 800, + 884, + 879 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak", + "bbox": [ + 509, + 891, + 882, + 917 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "14537", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. 2023. PaLM 2 technical report. arXiv preprint arXiv:2305.10403.", + "Petr Baudiš and Jan Šedivý. 2015. Modeling of the question answering task in the YodaQA system. 
In Experimental IR Meets Multilinguality, Multimodality, and Interaction: 6th International Conference of the CLEF Association, CLEF'15, Toulouse, France, September 8-11, 2015, Proceedings 6, pages 222-228. Springer.", + "Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1533-1544.", + "Michele Bevilacqua, Giuseppe Ottaviano, Patrick Lewis, Wen tau Yih, Sebastian Riedel, and Fabio Petroni. 2022. Autoregressive search engines: Generating substrings as document identifiers. In arXiv pre-print 2204.10628.", + "Bernd Bohnet, Vinh Q Tran, Pat Verga, Roee Aharoni, Daniel Andor, Livio Baldini Soares, Jacob Eisenstein, Kuzman Ganchev, Jonathan Herzig, Kai Hui, et al. 2022. Attributed question answering: Evaluation and modeling for attributed large language models. arXiv preprint arXiv:2212.08037.", + "Anastasia Chan. 2023. Gpt-3 and instructgpt: technological dystopianism, utopianism, and \"contextual\" perspectives in ai ethics and industry. AI and Ethics, 3(1):53-64.", + "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open-domain questions. arXiv preprint arXiv:1704.00051.", + "Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, and Andrew McCallum. 2019. Multi-step retriever-reader interaction for scalable open-domain question answering. In International Conference on Learning Representations.", + "Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2021. Autoregressive entity retrieval. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.", + "P. Ferragina and G. Manzini. 2000. Opportunistic data structures with applications. 
In Proceedings 41st Annual Symposium on Foundations of Computer Science, pages 390-398.", + "Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Y. Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, and Kelvin Guu. 2023a. Rarr: Researching and revising what language models say, using language models." + ], + "bbox": [ + 115, + 85, + 487, + 917 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Tianyu Gao, Howard Yen, Jiatong Yu, and Danqi Chen. 2023b. Enabling large language models to generate text with citations.", + "Alex Graves. 2012. Sequence transduction with recurrent neural networks.", + "Sanda M Harabagiu, Steven J Maiorano, and Marius A Pasca. 2003. Open-domain textual question answering techniques. Natural Language Engineering, 9(3):231-267.", + "Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Few-shot Learning with Retrieval Augmented Language Models.", + "Zhengbao Jiang, Luyu Gao, Jun Araki, Haibo Ding, Zhiruo Wang, Jamie Callan, and Graham Neubig. 2022. Retrieval as attention: End-to-end learning of retrieval and reading within a single transformer. arXiv preprint arXiv:2212.02027.", + "Zhengbao Jiang, Frank F Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Active retrieval augmented generation. arXiv preprint arXiv:2305.06983.", + "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781, Online. Association for Computational Linguistics.", + "Omar Khattab and Matei Zaharia. 2020. 
Colbert: Efficient and effective passage search via contextualized late interaction over bert. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, pages 39-48.", + "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics.", + "Andrew K Lampinen, Ishita Dasgupta, Stephanie CY Chan, Kory Matthewson, Michael Henry Tessler, Antonia Creswell, James L McClelland, Jane X Wang, and Felix Hill. 2022. Can language models learn from explanations in context? arXiv preprint arXiv:2204.02329." + ], + "bbox": [ + 510, + 85, + 880, + 917 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "14538", + "bbox": [ + 477, + 928, + 524, + 940 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Hyunjii Lee, Sohee Yang, Hanseok Oh, and Minjoon Seo. 2022. Generative multi-hop retrieval. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 1417-1436.", + "Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086–6096, Florence, Italy. Association for Computational Linguistics.", + "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.", + "Patrick Lewis, Yuxiang Wu, Linqing Liu, Pasquale Minervini, Heinrich Kuttler, Aleksandra Piktus, Pontus Stenetorp, and Sebastian Riedel. 2021. Paq: 65 million probably-asked questions and what you can do with them.", + "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.", + "Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng-Hong Yang, Ronak Pradeep, and Rodrigo Nogueira. 2021. Pyserini: An easy-to-use python toolkit to support replicable IR research with sparse and dense representations. arXiv preprint arXiv:2102.10073.", + "Jacob Menick, Maja Trebacz, Vladimir Mikulik, John Aslanides, Francis Song, Martin Chadwick, Mia Glaese, Susannah Young, Lucy Campbell-Gillingham, Geoffrey Irving, and Nat McAleese. 2022. Teaching language models to support answers with verified quotes.", + "Donald Metzler, Yi Tay, Dara Bahri, and Marc Najork. 2021. Rethinking search: making domain experts out of dilettantes. ACM SIGIR Forum, 55(1):1-27.", + "OpenAI. 2023. Gpt-4 technical report.", + "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140):1-67.", + "Hannah Rashkin, Vitaly Nikolaev, Matthew Lamm, Lora Aroyo, Michael Collins, Dipanjan Das, Slav Petrov, Gaurav Singh Tomar, Iulia Turc, and David Reitter. 2021. Measuring attribution in natural language generation models. arXiv preprint arXiv:2112.12870." 
+ ], + "bbox": [ + 115, + 85, + 489, + 917 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Adam Roberts, Hyung Won Chung, Anselm Levskaya, Gaurav Mishra, James Bradbury, Daniel Andor, Sharan Narang, Brian Lester, Colin Gaffney, Afroz Mohiuddin, Curtis Hawthorne, Aitor Lewkowycz, Alex Salcianu, Marc van Zee, Jacob Austin, Sebastian Goodman, Livio Baldini Soares, Haitang Hu, Sasha Tsvyashchenko, Aankanksha Chowdhery, Jasmijn Bastings, Jannis Bulian, Xavier Garcia, Jianmo Ni, Andrew Chen, Kathleen Kenealy, Jonathan H. Clark, Stephan Lee, Dan Garrette, James Lee-Thorp, Colin Raffel, Noam Shazeer, Marvin Ritter, Maarten Bosma, Alexandre Passos, Jeremy Maitin-Shepard, Noah Fiedel, Mark Omernick, Brennan Saeta, Ryan Sepassi, Alexander Spiridonov, Joshua Newlan, and Andrea Gesmundo. 2022. Scaling up models and data with t5x and seqio.", + "Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? arXiv preprint arXiv:2002.08910.", + "Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends® in Information Retrieval, 3(4):333-389.", + "Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, and Matei Zaharia. 2022. Colbertv2: Effective and efficient retrieval via lightweight late interaction.", + "Yi Tay, Vinh Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, et al. 2022. Transformer memory as a differentiable search index. Advances in Neural Information Processing Systems, 35:21831-21843.", + "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.", + "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. 
Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903.", + "Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, and Arnold Overwijk. 2020. Approximate nearest neighbor negative contrastive learning for dense text retrieval. arXiv preprint arXiv:2007.00808.", + "Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, and Meng Jiang. 2022. Generate rather than retrieve: Large language models are strong context generators. arXiv preprint arXiv:2209.10063.", + "Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, and Meng Jiang. 2023. Generate" + ], + "bbox": [ + 510, + 85, + 882, + 917 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "14539", + "bbox": [ + 477, + 928, + 524, + 940 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "rather than retrieve: Large language models are strong context generators. In The Eleventh International Conference on Learning Representations.", + "Yujia Zhou, Jing Yao, Zhicheng Dou, Ledell Wu, Peitan Zhang, and Ji-Rong Wen. 2022. Ultron: An ultimate retriever on corpus with a model-based indexer. arXiv preprint arXiv:2208.09257.", + "Fengbin Zhu, Wenqiang Lei, Chao Wang, Jianming Zheng, Soujanya Poria, and Tat-Seng Chua. 2021. Retrieving and reading: A comprehensive survey on open-domain question answering." 
+ ], + "bbox": [ + 115, + 85, + 489, + 252 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "14540", + "bbox": [ + 477, + 928, + 526, + 940 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "A Appendix", + "text_level": 1, + "bbox": [ + 114, + 84, + 238, + 99 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "A.1 Constrain Computation", + "text_level": 1, + "bbox": [ + 114, + 109, + 352, + 124 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "1P relies on two key operations for constrain computation:", + "bbox": [ + 112, + 130, + 489, + 162 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "a) $\\mathcal{F}(D,k)$ : Documents that contain keyword $k$", + "b) $\\mathcal{C}(k,D)$ : Next tokens for keyword $k$ in arbitrary document set $D$" + ], + "bbox": [ + 126, + 172, + 487, + 230 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "$\\mathcal{F}(D,k)$ is preprocessed and cached to allow for quick computation. $\\mathcal{C}(k,D)$ is trickier to compute. When $D$ represents the full corpus, FM-index can fetch the next tokens in $O(|V| \\log(|V|))$ , where $V$ is the token vocabulary and independent of $|D|$ . However, arbitrary $D$ requires a traversal over all documents and can be very expensive. In practise, the LLM training guides it to generate effective keywords such that $|D|$ is small.", + "bbox": [ + 112, + 241, + 487, + 385 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "We also apply certain other optimizations to reduce the compute cost:", + "bbox": [ + 112, + 387, + 489, + 419 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Constrains are computed lazily over a decoding pass.", + "- Several computations are cached, eg: keyword to document id mapping", + "- To cap the cost of constraints at each decoding step, we allow for unconstrained generation in rare scenarios, when the estimated cost is too high. 
If the generated path is absent in the corpus ( $< 1\\%$ of examples), these can be filtered out later." + ], + "bbox": [ + 136, + 429, + 485, + 609 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Despite these optimizations, inference continues to be expensive, and we may need a specialized data structure for next-token look-up.", + "bbox": [ + 112, + 621, + 487, + 671 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "A.2 Training data size", + "text_level": 1, + "bbox": [ + 114, + 680, + 305, + 696 + ], + "page_idx": 12 + }, + { + "type": "table", + "img_path": "images/71439e0cbf64e4a85772a56a9624e0e9ba13d010d9008d119d31af6dbf235983.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
DatasetQueriesPathsHits@1EM
Open-NQ55k55k41.928.1
Open-NQ55k310k42.928.7
Open-NQ + PAQ55k + 9M310k + 9M43.629.5
", + "bbox": [ + 119, + 712, + 482, + 810 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Table 7: Comparison of different dataset sizes for queries and paths", + "bbox": [ + 112, + 819, + 487, + 850 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "In Table 7, we observe the effect of dataset size on performance. Increasing the numbers of paths sampled per query improves performance, perhaps", + "bbox": [ + 112, + 871, + 487, + 919 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "due to higher diversity in training. However, this method of dataset expansion is limited by the number of relevant paths we could extract for a query.", + "bbox": [ + 507, + 84, + 882, + 131 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "We also experiment with increasing the query set manifold by mixing in unsupervised datasets. A total of 9M QA pairs are sampled from PAQ (Lewis et al., 2021), a synthetic QA dataset, and search paths extracted with heuristic scoring described in Section 5. The original 1P training dataset is mixed in 1:1 ratio. This further boosts performance, but not proportionally to the amount of data added, indicating diminishing returns from silver datasets.", + "bbox": [ + 507, + 133, + 882, + 277 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "A.3 SEAL keywords", + "text_level": 1, + "bbox": [ + 507, + 288, + 687, + 304 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "SEAL generates a set of document substrings constrained on the corpus, that are combined to form document identifiers. Besides using a LM to generate keywords, SEAL utilizes several other mechanisms for extracting keywords. This includes partial beam sequences, heuristically adding query n-grams, sampling the top-k tokens from the logprobs of the first decoding step, force decoding title etc. The keywords are re-scored using the LM as well as FM-index count and all keyword combinations are retrieved. 
Table 8 illustrates keywords generated by both systems. Note that SEAL keywords can be repetitive, and therefore a large number of keywords is required to narrow down to meaningful documents. This also makes SEAL suitable for retrieving a much larger set of documents that can be re-ranked later. The maximum number of retrieved documents for SEAL is capped by a hyperparameter with a default value of 500. In contrast, 1P is geared towards retrieving only the top document.", + "bbox": [ + 507, + 309, + 882, + 630 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "A.4 Hits@5", + "text_level": 1, + "bbox": [ + 509, + 640, + 618, + 655 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "SEAL does significantly better than 1P for Hits@5 (Table 9). We attribute this to the large set of keywords generated by SEAL, as explained in Appendix A.3.", + "bbox": [ + 507, + 662, + 882, + 726 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "A.5 Normalizing sequence likelihood over constrained space", + "text_level": 1, + "bbox": [ + 507, + 737, + 853, + 769 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "During constrained decoding of a sequence $X$ , we need to choose the next token from $\\mathcal{C}(X, D)$ and not the entire vocabulary space $V$ . Should the sequence likelihood be re-normalized over this constrained space? 
We find that re-normalizing the probabilities results in inflated likelihoods, making it hard for the model to back-track.", + "bbox": [ + 507, + 774, + 882, + 885 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Consider the query, \"where did the butchers in the slaughterhouse cases live\" to which our model", + "bbox": [ + 507, + 887, + 880, + 917 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "14541", + "bbox": [ + 477, + 927, + 522, + 940 + ], + "page_idx": 12 + }, + { + "type": "table", + "img_path": "images/8734388e27db242bfa05e7766526c2e6c225c95060ead1e3a67a54fb141fe56e.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
SystemQuestion or Search PathAnswer
1P SEALwho has the most catches in nfl history2,000-yard club » Barry SandersJerry RiceBarry SandersT.J. Houshmandzadeh
</s> Michael Irvin @ @, yards per catch, caught his, touchdown, record
1P SEALwhen was harry potter and the philosophers stone publishedHarry Potter and the Philosopher's Stone » first published in the United » 199719971997
</s> Harry Potter and the Philosopher's Stone @ @, "Harry Potter, Potter and thePhilosopher's Stone is, Potter and the Philosopher's Stone Harry, novel1999
1P SEALwhat is the meaning of the harp in irelandHarp » national symbol of Ireland » national symbol of Irelandthe arms of Irelandnational symbol of Ireland
</s> Harp @ @, Irish harp., harp is, harp was, harparistocracy
1P SEALwho was the president of pakistan during 1971 warIndo-Pakistani War of 1971 » Prime Minister of Pakistan » Zulfikar Ali BhuttoYahya KhanZulfikar Ali Bhutto
</s> Indo-Pakistani War of 1971 @ @, East Pakistan, Pakistani, Pakistan Army,Pakistan'sMuhammad Yaqub Khan
1P SEALwhen do you declare honors in contract bridgeContract bridge » declaring » end of the handany time after the auctionend of the hand
</s> Contract bridge @ @, declarer, bidding, honors, handsbidding
", + "bbox": [ + 119, + 95, + 878, + 390 + ], + "page_idx": 13 + }, + { + "type": "table", + "img_path": "images/58293c4b5818600e487ba00304d9f3dedfe3d565b91feed1a7d030d82bf1e5b2.jpg", + "table_caption": [ + "Table 8: Comparison of keywords generated by SEAL and 1P for randomly sampled examples from Open-NQ test set. For 1P, we show the full search path separated by \"»\" with the last keyword as the answer. For SEAL, we illustrate the top-5 keywords along with the answer from Reader model. \"\" and \"@@\" are special tokens used by SEAL for identifying start of passage and title marker respectively. The Answer next to the question is the gold answer while others are predictions from corresponding systems." + ], + "table_footnote": [], + "table_body": "
SystemBeamHits@5
SEAL159.7
SEAL562.8
1P146.5
1P550.8
", + "bbox": [ + 191, + 499, + 410, + 596 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Table 9: Hits@5 on Open-NQ test. SEAL achieves a much higher score than 1P owning to the larger number of documents matched and re-scored. Note that only top-beam result is used for 1P while SEAL uses all beam outputs.", + "bbox": [ + 112, + 606, + 487, + 678 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "predicts an irrelevant search path [Slaughterhouse Five, but, EoS]. What's going on under the hood? The first keyword is incorrect lending the model into a poor search space. With the second keyword, the model is possibly looking to generate \"butcher\" but there's no such keyword in the constrained set. Ideally, the model should backtrack at this point to other candidates in the beam. However, since the set of continuations is small, renormalizing inflates the probabilities of all tokens in $\\mathcal{C}$ including $EoS$ , even though the true likelihood of such a sequence is very low. Indeed, using the language model's scores directly without any re", + "bbox": [ + 112, + 709, + 489, + 919 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/654724f0998c6062f18ca04f65c75b48e7ea83283c2b0243c15547b26b82e17d.jpg", + "image_caption": [ + "Figure 4: Number of matching documents in the corpus for 1P generated path in the test set. About half the examples match only a single path." + ], + "image_footnote": [], + "bbox": [ + 515, + 502, + 878, + 671 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "normalization cures this issue yielding [Slaughterhouse cases, Butcher, EoS]. 
and this is the strategy we opt for in all our experiments.", + "bbox": [ + 507, + 753, + 882, + 802 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "A.6 Number of matching documents", + "text_level": 1, + "bbox": [ + 507, + 816, + 811, + 832 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "1P generated paths effectively narrow down the corpus, generally matching only a few documents in the corpus as illustrated in Figure 4. Note that a small fraction of paths match 0 documents due to pruning optimizations applied during inference", + "bbox": [ + 507, + 838, + 880, + 919 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "14542", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "time detailed in Appendix A.1.", + "bbox": [ + 114, + 85, + 346, + 99 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "14543", + "bbox": [ + 477, + 928, + 524, + 940 + ], + "page_idx": 14 + } +] \ No newline at end of file diff --git a/2023/1-PAGER_ One Pass Answer Generation and Evidence Retrieval/ed6283a9-c47a-45be-a3e5-a228ad5db48e_model.json b/2023/1-PAGER_ One Pass Answer Generation and Evidence Retrieval/ed6283a9-c47a-45be-a3e5-a228ad5db48e_model.json new file mode 100644 index 0000000000000000000000000000000000000000..73b819b7ca38d63e3f0a7cd15668064ee423842d --- /dev/null +++ b/2023/1-PAGER_ One Pass Answer Generation and Evidence Retrieval/ed6283a9-c47a-45be-a3e5-a228ad5db48e_model.json @@ -0,0 +1,2694 @@ +[ + [ + { + "type": "title", + "bbox": [ + 0.166, + 0.091, + 0.833, + 0.11 + ], + "angle": 0, + "content": "1-PAGER: One Pass Answer Generation and Evidence Retrieval" + }, + { + "type": "text", + "bbox": [ + 0.229, + 0.141, + 0.773, + 0.158 + ], + "angle": 0, + "content": "Palak Jain1 Livio Baldini Soares2 Tom Kwiatkowski2" + }, + { + "type": "text", + "bbox": [ + 0.32, + 0.159, + 0.682, + 0.177 + ], + "angle": 0, + "content": "\\(^{1}\\) Google Research \\(^{2}\\) Google 
Deepmind" + }, + { + "type": "text", + "bbox": [ + 0.32, + 0.177, + 0.686, + 0.193 + ], + "angle": 0, + "content": "{palakj, liviobs, tomkwiat}@google.com" + }, + { + "type": "title", + "bbox": [ + 0.261, + 0.253, + 0.341, + 0.268 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.142, + 0.28, + 0.462, + 0.593 + ], + "angle": 0, + "content": "We present 1-PAGER the first system that answers a question and retrieves evidence using a single Transformer-based model and decoding process. 1-PAGER incrementally partitions the retrieval corpus using constrained decoding to select a document and answer string, and we show that this is competitive with comparable retrieve-and-read alternatives according to both retrieval and answer accuracy metrics. 1-PAGER also outperforms the equivalent 'closed-book' question answering model, by grounding predictions in an evidence corpus. While 1-PAGER is not yet on-par with more expensive systems that read many more documents before generating an answer, we argue that it provides an important step toward attributed generation by folding retrieval into the sequence-to-sequence paradigm that is currently dominant in NLP. We also show that the search paths used to partition the corpus are easy to read and understand, paving a way forward for interpretable neural retrieval." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.605, + 0.262, + 0.62 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.63, + 0.49, + 0.919 + ], + "angle": 0, + "content": "In recent times, there has been a push to reformulate a wide variety of tasks from NLP and other domains into the sequence-to-sequence paradigm, to make use of large pre-trained Transformer networks (Vaswani et al., 2017). 
However, despite evidence that large language models can often answer questions (Roberts et al., 2020), predict identifiers of documents that support those answers (Tay et al., 2022), or generate text that contains and explains an answer (Yu et al., 2022), the dominant paradigm in question answering is still the retrieve-and-read approach that pipelines separate retrieval and answer generation modules. This approach has the benefit that it can provide direct and targeted paragraph-level attribution for the generated answers (Bohnet et al., 2022). However, it also relies on a heterogeneous mix of models that are hard to train in concert (Metzler et al., 2021)." + }, + { + "type": "image", + "bbox": [ + 0.516, + 0.256, + 0.885, + 0.438 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.509, + 0.447, + 0.886, + 0.492 + ], + "angle": 0, + "content": "Figure 1: Example 1P output that iteratively partitions the corpus into sub-sets containing the generated n-grams. The last n-gram is taken as the answer." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.517, + 0.885, + 0.789 + ], + "angle": 0, + "content": "Motivated by the observation that language model decoders already perform search over possible sequences (Graves, 2012), and that evidence documents themselves are simply sequences of tokens, we present an alternative approach that relies on a single Transformer model. In this approach, which we name 1-PAGER (One Pass Answer Generation and Evidence Retrieval) or simply 1P, the decoder iteratively partitions a corpus of evidence documents by generating a search path consisting of a set of keywords that identify relevant documents and an answer string that is contained in at least one of these documents. With 1P, we aim to explore the spectrum between CBQA, where the answer is generated without reference to an evidence corpus, and pipelined approaches that feed retrieved documents into the transformer."
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.791, + 0.885, + 0.87 + ], + "angle": 0, + "content": "Figure 1 illustrates an example in which the corpus is iteratively partitioned into documents that contain the string 'Economy of India', then those that also contain the string 'Agriculture', and finally those that also contain the answer string '23%'." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.872, + 0.885, + 0.919 + ], + "angle": 0, + "content": "1P output sequences are guaranteed to match at least one document in the evidence corpus. This is enforced via a constrained decoder that has ac" + }, + { + "type": "page_number", + "bbox": [ + 0.477, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14529" + }, + { + "type": "footer", + "bbox": [ + 0.21, + 0.946, + 0.788, + 0.959 + ], + "angle": 0, + "content": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 14529-14543" + }, + { + "type": "footer", + "bbox": [ + 0.278, + 0.959, + 0.721, + 0.973 + ], + "angle": 0, + "content": "December 6-10, 2023 ©2023 Association for Computational Linguistics" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.117, + 0.086, + 0.491, + 0.454 + ], + "angle": 0, + "content": "cess to an FM-index representation of the evidence corpus contents (Ferragina and Manzini, 2000) and we evaluate 1P's ability to correctly answer open-domain questions while also retrieving passages that provide support for those answers (Bohnet et al., 2022). Since 1P is the first model that can do both of these tasks, we compare to pipelined systems that first retrieve a single passage and then generate an answer based on this evidence passage. 1P is competitive as a passage retriever, performing similarly to a widely used dense retriever (Karpukhin et al., 2020) and outperforming the SEAL system which independently generates keywords rather than a search path (Bevilacqua et al., 2022). 
1P also outperforms an equivalent closed-book question answering (CBQA) model (Roberts et al., 2020) according to answer accuracy. Part of this improvement comes from the prediction of search paths themselves, reminiscent of chain-of-thought reasoning (Wei et al., 2022), and part is from 1P's constrained decoder, which forces the model to generate answers from passages that contain the keywords." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.457, + 0.49, + 0.633 + ], + "angle": 0, + "content": "While 1P does not yet perform as well as the very best retrieval or open-domain question answering systems in terms of accuracy, the fact that it is competitive with pipelined systems that are trained with the same data and which use similar amounts of inference-time compute suggests a promising path ahead. Unlike those systems, 1P can be trained end-to-end along with any other task that fits into the sequence-to-sequence paradigm. Additionally, 1P search paths are inherently interpretable, unlike embeddings used in dense retrieval." + }, + { + "type": "title", + "bbox": [ + 0.117, + 0.65, + 0.268, + 0.665 + ], + "angle": 0, + "content": "2 Related Work" + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.678, + 0.486, + 0.919 + ], + "angle": 0, + "content": "\"Retrieve-and-read\" Question Answering Question answering approaches in NLP are dominated by the \"retrieve-and-read\" paradigm where a retriever first fetches hundreds of relevant documents from a corpus, followed by a language model that reranks and extracts the answer (Harabagiu et al., 2003; Chen et al., 2017; Zhu et al., 2021). Sparse retrievers such as BM25 (Robertson et al., 2009) build a high-dimensional lexical index over text corpus. Dense retrievers (Karpukhin et al., 2020) use a dual encoder architecture to embed the query and document and perform an approximate nearest neighbor search. 
Various modifications to dense retrieval have been proposed over the years includ" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.085, + 0.882, + 0.164 + ], + "angle": 0, + "content": "ing hard negative training (Xiong et al., 2020), late interaction (Khattab and Zaharia, 2020; Santhanam et al., 2022), few-shot learning (Izacard et al., 2022), joint retriever and reader training (Jiang et al., 2022)." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.167, + 0.882, + 0.311 + ], + "angle": 0, + "content": "A particular variant of interest is the Iterative Retrieval process where the query is reformulated incrementally (Das et al., 2019; Lee et al., 2022) leading to an interactive search process (Jiang et al., 2023; Adolphs et al., 2021). This query augmentation scheme has similarities with our use of search paths. However, we use the paths to iteratively partition the corpus while prior works have used it for refining the query." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.314, + 0.882, + 0.392 + ], + "angle": 0, + "content": "To perform well, retrieve-and-read systems will typically retrieve 10s to 100s of passages that must be processed by a language model. In constraint, 1P retrieves and extracts an answer in a single pass of language model generation." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.408, + 0.882, + 0.63 + ], + "angle": 0, + "content": "Closed Book Question Answering With data and parameter scale, LLMs in a closed-book setting (CBQA) have shown competitive performance (OpenAI, 2023; Anil et al., 2023; Yu et al., 2023) to retrieve pipelines (ODQA), however without producing any attributed passages (Rashkin et al., 2021; Bohnet et al., 2022). An extension of CBQA is post-hoc retrieval where a large language model LLM) is first used to generate an answer and then evidence for the question-answer pair is fetched by a retriever (Gao et al., 2023a; Bohnet et al., 2022). 
While post-hoc retrieval serves the same goal as 1P, it still uses a pipeline of an LLM and a retriever to do so." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.647, + 0.882, + 0.917 + ], + "angle": 0, + "content": "Generative Retrieval Recently, generative retrieval has emerged as an alternative to the conventional \"retrieve-and-read\" pipeline (Metzler et al., 2021). Genre (De Cao et al., 2021) performed generative entity linking by constraining the model's decoding to a set of entities. DSI (Tay et al., 2022) provided one of the first proofs of an LLM's ability to memorize docids in a corpus. However, atomic ids or hierarchical clusters, as used in DSI, are opaque identifiers and capture limited information. Works such as SEAL (Bevilacqua et al., 2022) and Ultron (Zhou et al., 2022) use a semantically richer representation: keywords in the document. In particular, SEAL constrains the generation to keywords that occur in the corpus using the FM-index (Ferragina and Manzini, 2000), a key data structure we borrow in this work." + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.929, + 0.525, + 0.941 + ], + "angle": 0, + "content": "14530" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.123, + 0.082, + 0.878, + 0.172 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.181, + 0.885, + 0.227 + ], + "angle": 0, + "content": "Figure 2: System illustration of different QA systems. From left to right: CBQA, 1-PAGER, SEAL, Retrieve-and-Read system. \\(C\\) denotes the retrieval corpus, \\(P\\) a retrieved passage, \\(Q\\) the input question, and \\(A\\) the generated answer. 1P is closest to CBQA (only a single model is used) but it also outputs a passage retrieved from \\(C\\)." 
+ }, + { + "type": "text", + "bbox": [ + 0.113, + 0.25, + 0.489, + 0.314 + ], + "angle": 0, + "content": "1P represents docids as keyword paths, which are arguably more interpretable, and learns a soft partition over the corpus instead of the hard partition imposed by DSI's clustering." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.314, + 0.489, + 0.395 + ], + "angle": 0, + "content": "Another crucial distinction is 1P's ability to both retrieve and generate an answer, while prior works rely on an external re-ranker/reader to do so. A high-level view of various question-answering systems is presented in Figure 2." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.404, + 0.49, + 0.613 + ], + "angle": 0, + "content": "Attributed Question Answering Standard metrics for open-domain question answering, such as exact match or token-based F1, have received criticism for being imprecise and/or insufficient. Several efforts have proposed augmenting answers with textual evidence, via retrieval or citations (Bohnet et al., 2022; Menick et al., 2022; Gao et al., 2023b). While this work does not directly evaluate the quality of retrieved answer evidence, our proposed model inherently produces a passage to support the final answer, along with a search path of keywords, which could be used to provide users with answer evidence." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.626, + 0.438, + 0.658 + ], + "angle": 0, + "content": "3 Iterative Corpus Partitioning and Answer Prediction" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.669, + 0.489, + 0.78 + ], + "angle": 0, + "content": "We focus on the problem of learning a mapping \\( f(q, D) \\to (a, d_a) \\) from a question \\( q \\) and corpus of documents \\( D \\) to an answer and supporting document \\( (a, d_a) \\). The predicted document \\( d_a \\) is retrieved from \\( D \\) and the answer \\( a \\) is a sub-string of \\( d_a \\). 
The document \\( d_a \\) should be relevant to the question and provide evidence for answer." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.782, + 0.489, + 0.879 + ], + "angle": 0, + "content": "The goal of this paper is to model the function \\( f \\) using a single sequence-to-sequence model, rather than a pipeline which first retrieves \\( d_{a} \\) and then feeds it into an answer generation module. To achieve our goal, we recast retrieval as an iterative corpus partitioning process illustrated in Figure 3." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.888, + 0.49, + 0.92 + ], + "angle": 0, + "content": "Iterative corpus partitioning adopts the LM decoder's autoregressive search process to partition" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.25, + 0.768, + 0.266 + ], + "angle": 0, + "content": "\\(D\\) by predicting n-gram keywords." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.266, + 0.883, + 0.33 + ], + "angle": 0, + "content": "An n-gram of tokens \\( k \\) is said to be contained in a document \\( d \\), denoted by \\( k \\prec d \\), when \\( k \\) is a sub-sequence of \\( d \\). We define a keyword corpus partitioning function" + }, + { + "type": "equation", + "bbox": [ + 0.582, + 0.34, + 0.81, + 0.358 + ], + "angle": 0, + "content": "\\[\n\\mathcal {F} (D, k) = \\{d | k \\prec d; d \\in D \\}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.367, + 0.884, + 0.464 + ], + "angle": 0, + "content": "that selects only those documents that contain \\( k \\). 1-PAGER iteratively partitions the corpus \\( D \\) by generating a sequence of n-grams that we refer to as a Search Path \\( p_t = [k_1, k_2, \\dots, k_t] \\). 
Each prefix of this search path defines a subset of \( D \) via the search path corpus partitioning function" + }, + { + "type": "equation", + "bbox": [ + 0.55, + 0.473, + 0.84, + 0.493 + ], + "angle": 0, + "content": "\\[\n\\mathcal {P} (D, p _ {t}) = D _ {p _ {t}} = \\bigcap _ {i \\in [ 1, t ]} \\mathcal {F} (D, k _ {i})\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.501, + 0.884, + 0.535 + ], + "angle": 0, + "content": "and each subsequent keyword \\( k_{t+1} \\) narrows down \\( D_{p_t} \\) into further sub-spaces such that \\( D_{p_{t+1}} \\subseteq D_{p_t} \\)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.541, + 0.884, + 0.589 + ], + "angle": 0, + "content": "Answer prediction is treated in exactly the same way as keyword selection, and in 1P the last keyword from \\( p \\) is taken as the answer." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.6, + 0.868, + 0.617 + ], + "angle": 0, + "content": "4 Constrained Decoding and FM-Index" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.626, + 0.883, + 0.705 + ], + "angle": 0, + "content": "To avoid generating empty partitions, we constrain 1-PAGER to only decode search paths that match at least one document. We modify the decoder's beam-search strategy to only allow keyword continuations that are contained in the current partition." 
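As an illustration, the two partitioning functions above can be sketched in a few lines of Python. This is a minimal sketch of the set semantics of \( \mathcal{F} \) and \( \mathcal{P} \) over a toy whitespace-tokenized corpus, not the paper's implementation:

```python
# Sketch of the partitioning functions: F(D, k) keeps the documents that
# contain the n-gram k, and P(D, p_t) intersects F over a whole search path.
def contains(keyword, doc):
    """k < d: the keyword's token sequence occurs contiguously in doc."""
    k, d = keyword.split(), doc.split()
    return any(d[i:i + len(k)] == k for i in range(len(d) - len(k) + 1))

def partition(D, k):
    """F(D, k): the subset of documents in D containing the n-gram k."""
    return {d for d in D if contains(k, d)}

def path_partition(D, path):
    """P(D, p_t): intersection of F(D, k_i) over every keyword in the path.
    Each added keyword can only shrink the current document set."""
    docs = set(D)
    for k in path:
        docs &= partition(docs, k)
    return docs
```

Each keyword narrows the set: a path like ["ten commandments", "hebrew bible"] selects only documents containing both n-grams, mirroring \( D_{p_{t+1}} \subseteq D_{p_t} \).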
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.706, + 0.883, + 0.769 + ], + "angle": 0, + "content": "Given a document subset \(D_{p_i}\), which could be the full corpus \(D\) at the start of decoding \((i = 0)\), and a keyword prefix \(k\), which could be empty, the set of all valid continuation tokens is defined as" + }, + { + "type": "equation", + "bbox": [ + 0.555, + 0.78, + 0.836, + 0.799 + ], + "angle": 0, + "content": "\\[\n\\mathcal {C} (k, D _ {p _ {i}}) = \\{x | k \\| x \\prec d, d \\in D _ {p _ {i}} \\}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.807, + 0.883, + 0.919 + ], + "angle": 0, + "content": "where \\( x \\) is any vocabulary token and \\( \\| \\) indicates concatenation of two token sequences. As a special case, when \\( k = \\phi \\) and \\( i = 0 \\), all tokens in \\( D \\) are valid continuations. 1P separates keywords in \\( p_T \\) with a special separator token \\( \\rightarrow \\) and marks the end of the sequence with an EOS token. These two tokens are always valid continuations." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.524, + 0.941 + ], + "angle": 0, + "content": "14531" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.127, + 0.085, + 0.878, + 0.337 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.351, + 0.885, + 0.396 + ], + "angle": 0, + "content": "Figure 3: Illustration of the 1P decoding process. A keyword can only be generated from the documents matching previously generated keywords. The right panel shows a magnified view of applying constraints to a decoding step. Any keyword not present in the documents is masked out." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.419, + 0.49, + 0.612 + ], + "angle": 0, + "content": "Consider Figure 3. The three keywords correspond to the decoded token sequence [Ten, Commandments, \\(\\rightarrow\\), twice, in, the, Hebrew, Bible, \\(\\rightarrow\\), books, of, Exodus, EOS]. 
At the start of decoding, any token in \(D\) is allowed. After decoding Ten, only those tokens that follow Ten as an n-gram in \(D\) are allowed, along with the default separators. After decoding [Ten, Commandments, \(\rightarrow\)] we are ready to start a new keyword, but only tokens from documents that contain the keyword Ten Commandments are allowed. Decoding continues in this manner until EOS is generated." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.614, + 0.49, + 0.693 + ], + "angle": 0, + "content": "To efficiently implement these constraints, we need a data structure that can quickly determine both \\(\\mathcal{C}(k,D_p)\\), the continuation tokens given a document set, and \\(\\mathcal{F}(D_p,k)\\), the subset of documents that contain a given keyword." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.695, + 0.49, + 0.92 + ], + "angle": 0, + "content": "For this, we extend the usage of an FM-index (Ferragina and Manzini, 2000) as described by Bevilacqua et al. (2022). The FM-index is a compressed token-based index over a corpus \\(D_0\\) with a few important properties for our usage: (1) it can efficiently list possible token continuations for a sequence prefix that occur in \\(D_0\\), i.e., \\(\\mathcal{C}(k,D_0)\\), (2) it can list the set of documents in the corpus that match an n-gram, i.e., \\(\\mathcal{F}(D_0,k)\\), and (3) it supports search over arbitrary n-grams that occur within documents. Note that the FM-index operations are optimized for \\(D_0\\), the original corpus it is built over. We extend these to an arbitrary \\(D_p \\subset D_0\\) at additional cost described in Appendix A.1." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.419, + 0.761, + 0.435 + ], + "angle": 0, + "content": "5 Training data generation" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.445, + 0.885, + 0.606 + ], + "angle": 0, + "content": "For training 1P, we produce a dataset with examples of queries and search paths as described above. 
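The two FM-index queries described above, continuation tokens and matching documents, can be mimicked with a naive n-gram dictionary. This is an illustrative stand-in only; a real FM-index answers the same queries in compressed space without enumerating every n-gram:

```python
from collections import defaultdict

# Naive stand-in for the two index queries 1P needs: C(k, D0), the tokens
# that can follow an n-gram prefix, and F(D0, k), the documents matching
# an n-gram. Built by enumerating every n-gram up to max_len per document.
def build_ngram_index(corpus, max_len=5):
    continuations = defaultdict(set)  # n-gram -> possible next tokens
    matching_docs = defaultdict(set)  # n-gram -> ids of docs containing it
    for doc_id, text in enumerate(corpus):
        toks = text.split()
        for i in range(len(toks)):
            for n in range(1, min(max_len, len(toks) - i) + 1):
                gram = " ".join(toks[i:i + n])
                matching_docs[gram].add(doc_id)
                if i + n < len(toks):
                    continuations[gram].add(toks[i + n])
    return continuations, matching_docs
```

The dictionary version costs memory linear in the number of n-grams, which is exactly the overhead the FM-index avoids.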
At a high level, we generate search paths by iteratively selecting n-grams from an answer passage, and simulating, using the FM-Index of the retrieval corpus, the partitioning of the corpus after selecting each keyword, until only a few documents remain. Finally, the answer span \(a\) is appended to the search path. Each example produced can be serialized as a sequence-to-sequence pair of inputs and targets:" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.613, + 0.838, + 0.629 + ], + "angle": 0, + "content": "inputs: Generate keywords for: \(q\)?" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.631, + 0.882, + 0.644 + ], + "angle": 0, + "content": "targets: K_SEP \(k_{0}\) K_SEP \(k_{1}\) ... K_SEP A_SEP a EOS" + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.657, + 0.704, + 0.672 + ], + "angle": 0, + "content": "5.1 Keyword Selection" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.678, + 0.885, + 0.92 + ], + "angle": 0, + "content": "A good keyword should a) be highly relevant to the query and b) effectively narrow down the search space. To identify relevant keywords, we restrict selection to the gold document \( g \). All n-grams in \( g \) of length up to five are extracted. Irrelevant keywords, such as those starting or ending with stop words, are filtered out. Similarly, keywords that are too rare in the corpus, e.g., \"Philippines at Luzon\", or too frequent, e.g., \"part\", are excluded based on a threshold on their corpus count. The remaining keywords are scored with a combination of heuristics: mainly Rouge-1 similarity with the query (Lin, 2004), along with a small reward for keywords containing entities and a penalty for keywords that are highly frequent in the corpus." 
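The extraction and filtering steps above can be sketched as follows. The stop-word list, frequency thresholds, and the unigram-overlap proxy for Rouge-1 are illustrative assumptions, not the paper's exact values:

```python
# Heuristic sketch of candidate-keyword selection: extract n-grams up to
# length five from the gold passage, drop those bounded by stop words or
# outside a corpus-frequency band, and score survivors by query overlap.
STOP_WORDS = {"the", "a", "an", "of", "in", "at", "is", "to"}

def candidate_keywords(gold_passage, corpus_freq, min_freq=2, max_freq=10000):
    toks = gold_passage.split()
    cands = set()
    for n in range(1, 6):
        for i in range(len(toks) - n + 1):
            gram = toks[i:i + n]
            if gram[0] in STOP_WORDS or gram[-1] in STOP_WORDS:
                continue  # filter keywords starting/ending with stop words
            key = " ".join(gram)
            if min_freq <= corpus_freq.get(key, 0) <= max_freq:
                cands.add(key)  # exclude too-rare and too-frequent n-grams
    return cands

def overlap_score(keyword, query):
    """Unigram overlap with the query, standing in for Rouge-1 scoring."""
    k, q = set(keyword.split()), set(query.split())
    return len(k & q) / len(k)
```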
+ }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14532" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.493, + 0.296 + ], + "angle": 0, + "content": "This scoring mechanism often misses keywords that are semantically relevant but do not lexically overlap with the query. To boost the relevance of our keyword set, we re-score the top hundred keywords using a language model. A T5-XXL model is finetuned with the input as the query \( q \) and the target as either the title or a heuristically sampled keyword, in a similar fashion to Bevilacqua et al. (2022). The heuristically sampled keywords are re-scored using this model to obtain a refined LM-scored set. Two other special types of keywords are awarded high scores: the title of the gold passage and keywords containing the answer string \( a \)." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.309, + 0.266, + 0.323 + ], + "angle": 0, + "content": "5.2 Search Paths" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.332, + 0.49, + 0.525 + ], + "angle": 0, + "content": "The first keyword in a search path needs to effectively partition the corpus. We experiment with either the title or the highest-scored keyword from the gold passage as the first keyword in the path. The next keywords are sampled based on their score, provided they do not overlap with any of the existing keywords in the path. We continue augmenting a path \( p \) with keywords until fewer than ten passages in the corpus match, i.e., \( |D_p| < 10 \). The answer keyword is then appended to the path. Our train paths (including the answer) contain a median of three keywords and one matching document." 
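The sampling loop above can be sketched as a greedy procedure. The overlap test and the `match` callback are simplifying assumptions; the paper simulates matching with the FM-index:

```python
# Sketch of search-path sampling: append the best-scoring keyword that does
# not overlap the path so far, stop once fewer than `limit` documents match,
# then append the answer keyword. `match(corpus, path)` returns the set of
# documents containing every keyword in the path.
def sample_path(scored_keywords, answer, corpus, match, limit=10):
    path = []
    for _, kw in sorted(scored_keywords, reverse=True):
        if any(set(kw.split()) & set(p.split()) for p in path):
            continue  # skip keywords sharing tokens with the path
        path.append(kw)
        if len(match(corpus, path)) < limit:
            break  # partition is small enough
    return path + [answer]
```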
+ }, + { + "type": "title", + "bbox": [ + 0.114, + 0.541, + 0.322, + 0.558 + ], + "angle": 0, + "content": "6 Experimental Setup" + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.569, + 0.229, + 0.584 + ], + "angle": 0, + "content": "6.1 Datasets" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.592, + 0.49, + 0.786 + ], + "angle": 0, + "content": "We use Open-NQ (Kwiatkowski et al., 2019; Lee et al., 2019) as the question-answering dataset for training. For evaluation, besides Open-NQ, WebQuestions (Berant et al., 2013) and CuratedTREC (Baudiš and Šedivý, 2015) are used to measure out-of-domain performance. The FM-Index corpus for constrained decoding is built over the DPR Wikipedia corpus with 100-word splits (Karpukhin et al., 2020). The positive gold passages from DPR are used for sampling training paths. This setup is chosen to mirror SEAL and also permits fair comparison against DPR." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.801, + 0.231, + 0.816 + ], + "angle": 0, + "content": "6.2 Training" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.823, + 0.49, + 0.919 + ], + "angle": 0, + "content": "1P's training dataset contains 310k paths corresponding to 55k queries from Open-NQ. The majority of training paths begin with the title, with a small fraction starting with other keywords (12%). All keywords, except the title, are scored using the LM-scoring technique described above." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.885, + 0.198 + ], + "angle": 0, + "content": "For our experiments, we use the T5X (Roberts et al., 2022) framework. A T5-XXL 1.1\(^{1}\) (Raffel et al., 2020) model is finetuned with a batch size of 256 and dropout of 0.1. No additional hyperparameter tuning is performed. We format search paths using the reserved tokens \\(\\mathsf{K\\_SEP} = \"\\) \" and \\(\\mathsf{A\\_SEP} = \"\\) \"." 
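The serialization into seq2seq inputs and targets can be sketched as below. The separator strings are placeholders of ours; the model's actual reserved tokens are not reproduced here:

```python
# Sketch of serializing one training example into a seq2seq (input, target)
# pair. K_SEP and A_SEP are placeholder separators, not the real reserved
# tokens used by the model.
K_SEP, A_SEP = "[K]", "[A]"

def serialize_example(query, keywords, answer):
    inputs = f"Generate keywords for: {query}?"
    targets = " ".join(f"{K_SEP} {k}" for k in keywords) + f" {A_SEP} {answer}"
    return inputs, targets
```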
+ }, + { + "type": "title", + "bbox": [ + 0.509, + 0.21, + 0.632, + 0.224 + ], + "angle": 0, + "content": "6.3 Inference" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.231, + 0.885, + 0.343 + ], + "angle": 0, + "content": "Our best model employs beam decoding with a beam of 5. Even when the beam is greater than one, only the top-beam result is used for retrieval. We discuss the effect of beam size in depth in Section 7. Given the top generated path \\( p \\), \\( D_{p} \\) corresponds to the retrieved documents. In case \\( |D_{p}| > 1 \\), a document is sampled arbitrarily for evaluation." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.355, + 0.63, + 0.369 + ], + "angle": 0, + "content": "6.4 Baselines" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.376, + 0.884, + 0.504 + ], + "angle": 0, + "content": "We compare to a closed-book question answering (CBQA) system that generates answers, but does not ground these in an evidence corpus, as well as retrieve-and-read systems that combine a variety of retrievers with a Transformer-based answerer module. Both the CBQA baseline and the answerer module are derived from the same T5-XXL 1.1 pretrained model as 1P." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.515, + 0.652, + 0.53 + ], + "angle": 0, + "content": "6.4.1 T5-CBQA" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.535, + 0.884, + 0.679 + ], + "angle": 0, + "content": "A T5-XXL 1.1 model is fine-tuned to predict answers from the DPR training set for 10,000 steps with a batch size of 128. Note that it is possible to achieve a higher closed-book performance on NQ using the full Open-NQ training split instead of the subset included in the DPR training set (Roberts et al., 2020). However, to enable a meaningful comparison, we restrict the CBQA baseline to the same training examples used to train 1P." 
+ }, + { + "type": "title", + "bbox": [ + 0.509, + 0.689, + 0.719, + 0.703 + ], + "angle": 0, + "content": "6.4.2 Retrieve-and-Read" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.709, + 0.885, + 0.805 + ], + "angle": 0, + "content": "The retrieve-and-read baselines first retrieve a single passage from the evidence corpus, and then feed this passage and the question into the answer generation module\(^{2}\). We report retrieval accuracy for the retrieved passage and answer accuracy for the generated answer." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.815, + 0.884, + 0.848 + ], + "angle": 0, + "content": "T5-Reader We tune a T5-XXL 1.1 model to generate answers from (question, evidence passage)" + }, + { + "type": "page_footnote", + "bbox": [ + 0.531, + 0.856, + 0.771, + 0.87 + ], + "angle": 0, + "content": "1https://goo.gl/t5-checkpoints" + }, + { + "type": "page_footnote", + "bbox": [ + 0.509, + 0.87, + 0.882, + 0.919 + ], + "angle": 0, + "content": "2This differs from ODQA evaluations that do not include evidence retrieval as a first-class task, where many retrieved passages are fed into a reader that generates an answer without attribution to any single piece of text." + }, + { + "type": "list", + "bbox": [ + 0.509, + 0.856, + 0.882, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14533" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.49, + 0.135 + ], + "angle": 0, + "content": "pairs. This is the same base model used by 1P, and we train on the (question, passage, answer) triples in the DPR training split to ensure fair comparison." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.144, + 0.489, + 0.289 + ], + "angle": 0, + "content": "DPR-Retriever We compare against vanilla DPR finetuned on NQ without hard negatives (Karpukhin et al., 2020) using the pre-computed index available on DPR's repository\(^{3}\). 
We note that our ODQA setup differs from the one used by Karpukhin et al. in that we choose the highest-scoring retrieval as evidence for answer generation, instead of generating from the top-100 passages without attribution." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.301, + 0.49, + 0.351 + ], + "angle": 0, + "content": "BM25-Retriever We use the Pyserini toolkit (Lin et al., 2021) with default configurations, retrieving the top-1 passage." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.36, + 0.49, + 0.538 + ], + "angle": 0, + "content": "SEAL-Retriever SEAL (Bevilacqua et al., 2022) is a generative retrieval system that generates a set of keywords constrained to occur in the corpus. In terms of technique, 1P borrows inspiration from SEAL's use of the FM-Index as well as keywords-as-identifiers. However, the two setups have substantial differences that we highlight in Section 8. We run SEAL with its default configuration and a beam of 5 using the publicly released checkpoint based on Bart-large (Lewis et al., 2020). All outputs from the beam are used for retrieval." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.551, + 0.248, + 0.565 + ], + "angle": 0, + "content": "6.5 Evaluation" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.573, + 0.49, + 0.671 + ], + "angle": 0, + "content": "We evaluate in-domain performance on the Open-NQ test split and out-of-domain performance on WebQuestions (WQ) and CuratedTREC (TREC) following the setup from Karpukhin et al. (2020). Passage retrieval performance is measured with Hits@1 using Pyserini evaluation scripts\(^{4}\)." 
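The Hits@1 metric used above can be sketched as follows. The case-insensitive substring match is an assumption modeled on common ODQA evaluation scripts, not the exact Pyserini logic:

```python
# Sketch of Hits@1: the fraction of questions whose single top-ranked
# passage contains at least one gold answer string.
def hits_at_1(top1_passages, gold_answers):
    hits = sum(
        any(ans.lower() in passage.lower() for ans in answers)
        for passage, answers in zip(top1_passages, gold_answers)
    )
    return hits / len(top1_passages)
```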
+ }, + { + "type": "title", + "bbox": [ + 0.114, + 0.683, + 0.303, + 0.699 + ], + "angle": 0, + "content": "6.6 1P configurations" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.704, + 0.49, + 0.882 + ], + "angle": 0, + "content": "We experiment with three configurations: a) 1P: Our primary setup that uses both the training and constrained decoding procedures described above, producing a retrieved passage as well as an answer. b) 1P-Unconstrained: Only the training technique described in Section 5 is adopted, with standard unconstrained decoding. Since generation is unconstrained, it is possible that no passage gets retrieved for a given path. c) 1P + Reader: Here, we take the top retrieved passage from 1P and input it to the Reader model (Section 6.4) to extract the answer." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.084, + 0.61, + 0.1 + ], + "angle": 0, + "content": "7 Results" + }, + { + "type": "table", + "bbox": [ + 0.512, + 0.12, + 0.887, + 0.289 + ], + "angle": 0, + "content": "
| Retriever | Answerer | Retrieval Hits@1 | Answer EM | Answer F1 |
| --- | --- | --- | --- | --- |
| - | T5-CBQA | - | 26.8 | 34.0 |
| BM25 | T5-Reader | 23.6 | 17.9 | 24.0 |
| SEAL | T5-Reader | 37.9 | 29.4 | 35.8 |
| DPR | T5-Reader | 46.5 | 35.6 | 42.4 |
| 1P | T5-Reader | 46.3 | 34.2 | 41.4 |
| 1P-Unconstrained | - | 29.3 | 29.3 | 36.1 |
| 1P | - | 46.3 | 31.7 | 38.0 |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.508, + 0.298, + 0.886, + 0.371 + ], + "angle": 0, + "content": "Table 1: Comparison of different Retriever and Answerer combinations on the NQ-Open test set. In retrieve-and-read setups, answers are generated from the top-1 retrieved passage. 1P combines passage retrieval and answer generation in a single prediction." + }, + { + "type": "table", + "bbox": [ + 0.512, + 0.404, + 0.897, + 0.525 + ], + "angle": 0, + "content": "
| System | WebQuestions Hits@1 | WebQuestions EM | TREC Hits@1 | TREC EM |
| --- | --- | --- | --- | --- |
| BM25 + Rdr | 19.7 | 14.2 | 35.2 | 29.1 |
| DPR + Rdr | 32.0 | 17.3 | 51.6 | 35.0 |
| 1P + Rdr | 38.0 | 20.4 | 63.8 | 38.5 |
| 1P | 38.0 | 20.5 | 63.8 | 36.4 |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.508, + 0.534, + 0.886, + 0.607 + ], + "angle": 0, + "content": "Table 2: Comparison of different Retriever and Answerer combinations on Out-of-domain datasets. Both the Retriever and Answerer (Rdr) are trained on only Open-NQ. In retrieve-and-read setups, answers are generated from the top-1 retrieved passage." + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.63, + 0.885, + 0.774 + ], + "angle": 0, + "content": "We compare to the baselines described in Section 6.4 on Open-NQ using both retrieval and answer accuracy metrics in Table 1. Answers are generated based on the top retrieved document in systems that separate retrieval from answer generation, to provide a clean comparison between systems that return (answer, evidence passage) pairs. Table 2 reports the out-of-domain performance of various systems on WQ and TREC." + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.775, + 0.886, + 0.92 + ], + "angle": 0, + "content": "1P outperforms CBQA in question answering and beats the retrieve-and-read systems, BM25 and SEAL. On the passage retrieval task, it significantly improves over BM25 and SEAL. For indomain setting, 1P is competitive with DPR on retrieval task, but lags behind the QA pipeline that uses DPR. However, this appears to be more due to the reader rather than the retriever as discussed in Section 8. 
It is worth noting that 1P general" + }, + { + "type": "page_footnote", + "bbox": [ + 0.136, + 0.891, + 0.442, + 0.905 + ], + "angle": 0, + "content": "3https://github.com/facebookresearch/DPR" + }, + { + "type": "page_footnote", + "bbox": [ + 0.136, + 0.905, + 0.425, + 0.919 + ], + "angle": 0, + "content": "4https://github.com/castorini/pyserini" + }, + { + "type": "list", + "bbox": [ + 0.136, + 0.891, + 0.442, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14534" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.114, + 0.085, + 0.488, + 0.117 + ], + "angle": 0, + "content": "izes significantly better out-of-domain compared to other systems." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.127, + 0.49, + 0.256 + ], + "angle": 0, + "content": "Utility of Search Paths 1P-Unconstrained can be viewed as an extended version of CBQA that generates a search path before predicting the answer. Thus, improvement of 1P-Unconstrained over CBQA can be attributed to this path-conditioned answer generation process, analogous to chain-of-thought reasoning (Wei et al., 2022; Lampinen et al., 2022)." + }, + { + "type": "table", + "bbox": [ + 0.124, + 0.278, + 0.481, + 0.398 + ], + "angle": 0, + "content": "
| System | Constrained Decoding | Beam 1 | Beam 5 |
| --- | --- | --- | --- |
| CBQA | No | 26.7 | 26.8 |
| 1P Unconst. | No | 29.0 | 29.3 |
| SEAL + Reader | Yes | 28.5 | 29.4 |
| 1P | Yes | 28.7 | 31.7 |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.408, + 0.49, + 0.495 + ], + "angle": 0, + "content": "Table 3: EM for various decoding setups with different beam sizes on Open-NQ. Only top-beam result is used for evaluation, except in SEAL which uses all beam outputs. 1P constrained decoding benefits the most from a large beam whereas Unconstrained setups have only a slight effect." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.517, + 0.49, + 0.758 + ], + "angle": 0, + "content": "Effect of Constrained Decoding The purpose of constrained decoding is to ground the answer in an evidence retrieved from the corpus. As expected, the constrained setup enables 1P to achieve a higher Hits@1 than 1P-unconstrained. Surprisingly, when decoding with a beam of one, we observe a small drop in answer accuracy for 1P compared to 1P-Unconstrained (Table 3). Inspecting the losses, two dominant reasons surface. Firstly, As DPR passages are chunked into 100-words (Karpukhin et al., 2020), some queries may become unanswerable given a single passage due to missing context. This is disadvantageous when the model has memorized the answer but there is no single passage to attribute it to." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.759, + 0.49, + 0.92 + ], + "angle": 0, + "content": "Secondly, during constrained decoding, after generating the initial keywords, the search space may soon become sparse with no good candidates to pick from. Could a larger room for planning its actions help the model here? Indeed, increasing the beam size to 5 improves performance by \\(3\\%\\) (Table 3), even when only the top-beam is used for retrieval. We refer to this as Planning, since the larger beam only enables the model to plan better and the remaining beam outputs are otherwise dis" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.885, + 0.165 + ], + "angle": 0, + "content": "carded. Note that unconstrained decoding does not gain from planning. 
In the final setup in Table 1, we use a beam of 5 for both 1P and SEAL. Unlike 1P, SEAL uses all the outputs from the larger beam for retrieval." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.179, + 0.762, + 0.194 + ], + "angle": 0, + "content": "8 Discussion and Ablations" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.205, + 0.885, + 0.366 + ], + "angle": 0, + "content": "Generating Answers While 1P is capable of generating answers, Table 1 highlights that it falls behind 1P+Reader. The reason seems clear: the Reader has visibility into the full passage context, while 1P is limited to the decoded search path and the constrained index, which only ensures that generations are grounded in the corpus. Since 1P does retrieve passages, it would be possible to pull in the corresponding text as input for answer generation. We leave this as future work." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.377, + 0.884, + 0.44 + ], + "angle": 0, + "content": "Comparison to SEAL While 1P takes inspiration from SEAL, in practice, there are a few key differences between the two systems aside from 1P's answer generation." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.442, + 0.884, + 0.506 + ], + "angle": 0, + "content": "SEAL generates a large set of keywords (Table 4) using many separate decodes and heuristic guidance (Appendix A.3). In contrast, 1P decodes a single sequence of about three keywords." + }, + { + "type": "table", + "bbox": [ + 0.548, + 0.518, + 0.848, + 0.6 + ], + "angle": 0, + "content": "
|  | SEAL | 1P |
| --- | --- | --- |
| Median keywords | 32 | 3 |
| Median docs retrieved | 500 | 1 |
| Generates answer | ✗ | ✓ |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.508, + 0.61, + 0.884, + 0.639 + ], + "angle": 0, + "content": "Table 4: Key differences between SEAL and 1P measured over Open-NQ test split with a beam of 1." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.662, + 0.885, + 0.806 + ], + "angle": 0, + "content": "The SEAL keywords are a set, decoded independently of each other and re-scored using sophisticated techniques to retrieve a large number of documents. For instance, the default configuration in SEAL retrieves up to 500 documents. This makes SEAL suitable to be employed in conjunction with a re-ranker. In contrast, 1P search path's map directly to a single (or few) relevant documents (Appendix A.6)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.807, + 0.884, + 0.92 + ], + "angle": 0, + "content": "We acknowledge the model-size variation between SEAL and 1P in the reported experiments, however we preferred using the publicly available SEAL checkpoint. Given the discrepancies with larger beam-size, multiple decodes and use of Reader model, it is difficult to have an apples to apples comparison between the two systems." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14535" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.49, + 0.262 + ], + "angle": 0, + "content": "Path vs Keyword set We qualitatively observe that keywords in a 1P path, owing to sequential generation, are distinct and add new information as compared to the SEAL output set where overlapping keywords are common (Appendix A.3). Thus, paths are advantageous for precisely narrowing down to a single relevant document while keyword sets are effective for retrieving a large number of documents that can later be reranked. This is corroborated by the fact that 1P is better at Hits@1 while SEAL is better at Hits@5 (Appendix A.4)." 
+ }, + { + "type": "text", + "bbox": [ + 0.113, + 0.272, + 0.489, + 0.319 + ], + "angle": 0, + "content": "Qualitative Analysis Table 5 illustrates patterns of Search Paths generated by 1P. We note some of the common path patterns here:" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.321, + 0.488, + 0.4 + ], + "angle": 0, + "content": "1) First keywords are entities in the query, followed by query predicates that iteratively narrow down towards an answer. This is the most common type of path observed and can be attributed to the dominant presence of titles in the training data." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.401, + 0.489, + 0.481 + ], + "angle": 0, + "content": "2) Rewrites of the original query or related predicates such as \"seasons consists of\", \"appeared on ...\". Such paths are more prevalent where there is no canonical entity in the query or no entity can be determined with high confidence." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.482, + 0.489, + 0.561 + ], + "angle": 0, + "content": "3) The answer is directly generated, followed by supporting keywords that guide towards an attributed passage. This happens in a small fraction of cases, likely where the pretrained model has memorized an answer with high confidence." + }, + { + "type": "list", + "bbox": [ + 0.114, + 0.321, + 0.489, + 0.561 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.563, + 0.488, + 0.595 + ], + "angle": 0, + "content": "Overall, we find the generated search paths to be fairly meaningful and interpretable." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.605, + 0.49, + 0.764 + ], + "angle": 0, + "content": "Sampling Search Paths for Training Table 6 highlights that high-quality keywords are crucial to performance. The LM re-scored set of keywords results in a significant accuracy gain over heuristically sampled keywords. Paths whose first keyword is the title boost performance further. 
Mixing in a small fraction of paths starting with non-title keywords encourages the model to generate predicates where no entity can be determined, giving us the best results." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.775, + 0.49, + 0.919 + ], + "angle": 0, + "content": "Sensitivity to tokenization We find that constrained decoding is highly sensitive to rare tokenization or punctuation formatting in the corpus. Consider the query \"who sang i ran all the way home\" with the gold document title \"Sorry (I Ran All the Way Home)\". In the unconstrained setup, the model's top prediction starts with \"I Ran All the Way Home\". However, \"(I\" is tokenized differently from \"I\" and searching over the FM-Index" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.885, + 0.261 + ], + "angle": 0, + "content": "returns no match. As a result, constrained decoding drops the predicted keyword altogether, resorting to lower ranked keywords in the beam. We partially fix the issue by modifying the answer in a fraction of the training data to include surrounding punctuation tokens based on how they appear in the FM-index. For instance, the keyword \"I Ran ...\" would update to \"(I Ran ...)\". This simple change leads to a jump in answer accuracy from \\(26.4\\%\\) to \\(28.7\\%\\). However, much more work is needed to make 1P robust to variations in tokenization." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.263, + 0.885, + 0.31 + ], + "angle": 0, + "content": "See Appendix A.2 for analysis of training data size and Appendix A.5 for masking logits vs log-probs." 
+ }, + { + "type": "title", + "bbox": [ + 0.51, + 0.324, + 0.612, + 0.339 + ], + "angle": 0, + "content": "Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.35, + 0.885, + 0.558 + ], + "angle": 0, + "content": "We introduce 1-PAGER, the first system to perform question answering and passage retrieval in one pass with a single language model, using a constrained decoder to iteratively partition the retrieval corpus and then generate an answer. We show competitive or improved performance over a variety of comparable baselines and carefully analyze the results, ablating both training strategies and decoding style. We also provide a qualitative analysis of predictions to illustrate the system's capabilities. Challenges with constrained decoding, including poor search spaces and sensitivity to tokenization, are surfaced, and mitigation strategies are presented." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.56, + 0.884, + 0.638 + ], + "angle": 0, + "content": "We hope that 1P adds value by demonstrating how a single transformer model can be harnessed to do both retrieval and answering, and paves the way for further progress in the generative retrieval domain." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.652, + 0.616, + 0.667 + ], + "angle": 0, + "content": "Limitations" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.678, + 0.885, + 0.854 + ], + "angle": 0, + "content": "1P is geared towards identifying a concise, small set of documents and generating an answer in a single pass. While this makes the architecture simpler, it also introduces certain weaknesses. 1P is not effective for retrieving a large number of documents and falls behind pipelined systems that combine retrieval with re-ranking. Even for a single passage, it lags behind state-of-the-art dense-retrieval techniques. 1P's method of answer generation is also not competitive with the use of a reader, due to the lack of passage context."
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.856, + 0.884, + 0.919 + ], + "angle": 0, + "content": "Our training strategy relies heavily on titles or entities, and its generalization to corpora without rich structure, or to queries without central entities, remains to be studied." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.526, + 0.942 + ], + "angle": 0, + "content": "14536" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.116, + 0.081, + 0.885, + 0.362 + ], + "angle": 0, + "content": "
Query (Q) and Generated Search Path (SP)Comment
Correctly attributed passages and answers
Q: how many episodes of greys anatomy season 14\nSP: Grey's Anatomy (season 14) » season consists of 24 episodes » 24Query entity resolved first, followed by query predicates
Q: when did they start adding zinc to pennies\nSP: Penny (United States coin) » zinc » Lincoln cent » 1943Query entity resolved iteratively
Q: who was executed for being an american spy during the revolutionary war\nSP: Nathan Hale » Army during the American Revolutionary » Nathan HaleAnswer generated first
Q: who was the grandfather on the Cosby show\nSP: appeared on "The Cosby » Earle HymanQuery rewrites
Incorrect Passage or Answer
Q: who decides the number of judges in the high court\nSP: judge is appointed » High Court » Chief Justice of IndiaA: President of IndiaPath correctly resolved,\nFailed on answer
Q: when did the isle of wight become an island\nSP: Isle of Wight » 1890 » 1890A: During the last Ice AgeQuery entity resolved,\nFailed on supporting keywords
Q: love yourself by justin bieber is about who\nSP: Love Yourself: Her » music video » HerA: RihanaFailed to resolve\nquery entity
" + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.372, + 0.884, + 0.403 + ], + "angle": 0, + "content": "Table 5: Example 1P Search Paths (SP) on Open-NQ test set. The last keyword in SP is the predicted answer. Gold answers are indicated by A." + }, + { + "type": "table", + "bbox": [ + 0.155, + 0.429, + 0.451, + 0.543 + ], + "angle": 0, + "content": "
Search PathHits@1EM
Heuristic34.522.6
LM-scored40.027.2
Title » LM-scored41.928.0
Title » LM-scored + LM-scored (7+1)42.928.7
" + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.552, + 0.49, + 0.626 + ], + "angle": 0, + "content": "Table 6: Comparison of Training Search Paths on OpenNQ. Here LM-scored denotes re-scoring by LM on a heuristic set. All results are with a beam of one. \"»\" indicates keyword separator and \"+\" mixture of path types in the give ratio." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.66, + 0.49, + 0.804 + ], + "angle": 0, + "content": "Constrained decoding also comes with its own challenges. Constrained beam outputs often lack diversity, so that even with a larger beam one may still end up in poor search spaces. Computing document-level constraints across the corpus is expensive as it may require scanning a large number of rows in the index. Further, communication between FM-Index and Transformer model slows down inference." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.825, + 0.279, + 0.841 + ], + "angle": 0, + "content": "Acknowledgement" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.855, + 0.49, + 0.919 + ], + "angle": 0, + "content": "We thank Don Metzler, Nicholas FitzGerald, Partha Talukdar, Srini Narayanan, as well as our anonymous reviewers, for their thoughtful comments and valuable feedback" + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.431, + 0.714, + 0.447 + ], + "angle": 0, + "content": "Ethical Considerations" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.458, + 0.885, + 0.586 + ], + "angle": 0, + "content": "While Large Language Models can solve a wide range of tasks effectively, they also suffer from biases across axis such as gender, race, region (Chan, 2023). LLMs are also prone to generating toxic content, especially when probed about it. Although, our task grounds the model's generations on a corpus, some of the biases in pre-trained LLMs, may seep in 1-PAGER." 
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.588, + 0.885, + 0.749 + ], + "angle": 0, + "content": "Building the FM-index and constrained decoding is a compute-intensive affair. We have experimented over a single dataset, Natural Questions, involving only knowledge-seeking queries, and a single model family, T5. It is possible that some of our findings may not hold over other datasets or model families. Finally, our experiments are limited to an English corpus and queries. The proposed approaches are resource-intensive and may not be accessible or valid for several low-resource languages." + }, + { + "type": "title", + "bbox": [ + 0.511, + 0.777, + 0.61, + 0.792 + ], + "angle": 0, + "content": "References" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.801, + 0.885, + 0.881 + ], + "angle": 0, + "content": "Leonard Adolphs, Benjamin Boerschinger, Christian Buck, Michelle Chen Huebscher, Massimiliano Ciaramita, Lasse Espeholt, Thomas Hofmann, Yannic Kilcher, Sascha Rothe, Pier Giuseppe Sessa, et al. 2021. Boosting search engines with interactive agents. arXiv preprint arXiv:2109.00527." + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.892, + 0.884, + 0.919 + ], + "angle": 0, + "content": "Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14537" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.135, + 0.086, + 0.487, + 0.126 + ], + "angle": 0, + "content": "Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. 2023. Palm 2 technical report. arXiv preprint arXiv:2305.10403." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.139, + 0.488, + 0.231 + ], + "angle": 0, + "content": "Petr Baudis and Jan Šedivý. 2015. Modeling of the question answering task in the yodaqa system.
In Experimental IR Meets Multilinguality, Multimodality, and Interaction: 6th International Conference of the CLEF Association, CLEF'15, Toulouse, France, September 8-11, 2015, Proceedings 6, pages 222-228. Springer." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.243, + 0.487, + 0.309 + ], + "angle": 0, + "content": "Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1533-1544." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.321, + 0.487, + 0.386 + ], + "angle": 0, + "content": "Michele Bevilacqua, Giuseppe Ottaviano, Patrick Lewis, Wen tau Yih, Sebastian Riedel, and Fabio Petroni. 2022. Autoregressive search engines: Generating substrings as document identifiers. In arXiv pre-print 2204.10628." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.399, + 0.487, + 0.478 + ], + "angle": 0, + "content": "Bernd Bohnet, Vinh Q Tran, Pat Verga, Roee Aharoni, Daniel Andor, Livio Baldini Soares, Jacob Eisenstein, Kuzman Ganchev, Jonathan Herzig, Kai Hui, et al. 2022. Attributed question answering: Evaluation and modeling for attributed large language models. arXiv preprint arXiv:2212.08037." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.49, + 0.487, + 0.542 + ], + "angle": 0, + "content": "Anastasia Chan. 2023. Gpt-3 and instructgpt: technological dystopianism, utopianism, and \"contextual\" perspectives in ai ethics and industry. AI and Ethics, 3(1):53-64." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.555, + 0.487, + 0.606 + ], + "angle": 0, + "content": "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open-domain questions. arXiv preprint arXiv:1704.00051." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.619, + 0.487, + 0.685 + ], + "angle": 0, + "content": "Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, and Andrew McCallum. 2019. Multi-step retriever-reader interaction for scalable open-domain question answering. In International Conference on Learning Representations." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.697, + 0.487, + 0.763 + ], + "angle": 0, + "content": "Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2021. Autoregressive entity retrieval. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.775, + 0.487, + 0.828 + ], + "angle": 0, + "content": "P. Ferragina and G. Manzini. 2000. Opportunistic data structures with applications. In Proceedings 41st Annual Symposium on Foundations of Computer Science, pages 390-398." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.84, + 0.487, + 0.918 + ], + "angle": 0, + "content": "Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Y. Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, and Kelvin Guu. 2023a. Rarr: Researching and revising what language models say, using language models." + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.488, + 0.918 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.086, + 0.882, + 0.125 + ], + "angle": 0, + "content": "Tianyu Gao, Howard Yen, Jiatong Yu, and Danqi Chen. 2023b. Enabling large language models to generate text with citations." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.141, + 0.882, + 0.167 + ], + "angle": 0, + "content": "Alex Graves. 2012. Sequence transduction with recurrent neural networks." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.183, + 0.882, + 0.235 + ], + "angle": 0, + "content": "Sanda M Harabagiu, Steven J Maiorano, and Marius A Pasca. 2003. Open-domain textual question answering techniques. Natural Language Engineering, 9(3):231-267." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.251, + 0.882, + 0.316 + ], + "angle": 0, + "content": "Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Few-shot Learning with Retrieval Augmented Language Models." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.332, + 0.882, + 0.397 + ], + "angle": 0, + "content": "Zhengbao Jiang, Luyu Gao, Jun Araki, Haibo Ding, Zhiruo Wang, Jamie Callan, and Graham Neubig. 2022. Retrieval as attention: End-to-end learning of retrieval and reading within a single transformer. arXiv preprint arXiv:2212.02027." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.413, + 0.882, + 0.476 + ], + "angle": 0, + "content": "Zhengbao Jiang, Frank F Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Active retrieval augmented generation. arXiv preprint arXiv:2305.06983." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.493, + 0.882, + 0.598 + ], + "angle": 0, + "content": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781, Online. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.613, + 0.882, + 0.692 + ], + "angle": 0, + "content": "Omar Khattab and Matei Zaharia. 2020. Colbert: Efficient and effective passage search via contextualized late interaction over bert. 
In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, pages 39-48." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.707, + 0.882, + 0.825 + ], + "angle": 0, + "content": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.84, + 0.882, + 0.918 + ], + "angle": 0, + "content": "Andrew K Lampinen, Ishita Dasgupta, Stephanie CY Chan, Kory Matthewson, Michael Henry Tessler, Antonia Creswell, James L McClelland, Jane X Wang, and Felix Hill. 2022. Can language models learn from explanations in context? arXiv preprint arXiv:2204.02329." + }, + { + "type": "list", + "bbox": [ + 0.512, + 0.086, + 0.882, + 0.918 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.525, + 0.941 + ], + "angle": 0, + "content": "14538" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.086, + 0.49, + 0.151 + ], + "angle": 0, + "content": "Hyunjii Lee, Sohee Yang, Hanseok Oh, and Minjoon Seo. 2022. Generative multi-hop retrieval. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 1417-1436." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.165, + 0.488, + 0.243 + ], + "angle": 0, + "content": "Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086–6096, Florence, Italy. Association for Computational Linguistics." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.255, + 0.488, + 0.373 + ], + "angle": 0, + "content": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.386, + 0.488, + 0.45 + ], + "angle": 0, + "content": "Patrick Lewis, Yuxiang Wu, Linqing Liu, Pasquale Minervini, Heinrich Kuttler, Aleksandra Piktus, Pontus Stenetorp, and Sebastian Riedel. 2021. PAQ: 65 million probably-asked questions and what you can do with them." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.463, + 0.486, + 0.503 + ], + "angle": 0, + "content": "Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.515, + 0.488, + 0.581 + ], + "angle": 0, + "content": "Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng-Hong Yang, Ronak Pradeep, and Rodrigo Nogueira. 2021. Pyserini: An easy-to-use Python toolkit to support replicable IR research with sparse and dense representations. arXiv preprint arXiv:2102.10073." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.593, + 0.488, + 0.672 + ], + "angle": 0, + "content": "Jacob Menick, Maja Trebacz, Vladimir Mikulik, John Aslanides, Francis Song, Martin Chadwick, Mia Glaese, Susannah Young, Lucy Campbell-Gillingham, Geoffrey Irving, and Nat McAleese. 2022. Teaching language models to support answers with verified quotes." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.684, + 0.488, + 0.723 + ], + "angle": 0, + "content": "Donald Metzler, Yi Tay, Dara Bahri, and Marc Najork. 2021.
Rethinking search: making domain experts out of dilettantes. ACM SIGIR Forum, 55(1):1-27." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.736, + 0.376, + 0.75 + ], + "angle": 0, + "content": "OpenAI. 2023. Gpt-4 technical report." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.762, + 0.488, + 0.827 + ], + "angle": 0, + "content": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140):1-67." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.84, + 0.488, + 0.918 + ], + "angle": 0, + "content": "Hannah Rashkin, Vitaly Nikolaev, Matthew Lamm, Lora Aroyo, Michael Collins, Dipanjan Das, Slav Petrov, Gaurav Singh Tomar, Iulia Turc, and David Reitter. 2021. Measuring attribution in natural language generation models. arXiv preprint arXiv:2112.12870." + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.49, + 0.918 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.513, + 0.086, + 0.883, + 0.295 + ], + "angle": 0, + "content": "Adam Roberts, Hyung Won Chung, Anselm Levskaya, Gaurav Mishra, James Bradbury, Daniel Andor, Sharan Narang, Brian Lester, Colin Gaffney, Afroz Mohiuddin, Curtis Hawthorne, Aitor Lewkowycz, Alex Salcianu, Marc van Zee, Jacob Austin, Sebastian Goodman, Livio Baldini Soares, Haitang Hu, Sasha Tsvyashchenko, Aankanksha Chowdhery, Jasmijn Bastings, Jannis Bulian, Xavier Garcia, Jianmo Ni, Andrew Chen, Kathleen Kenealy, Jonathan H. Clark, Stephan Lee, Dan Garrette, James Lee-Thorp, Colin Raffel, Noam Shazeer, Marvin Ritter, Maarten Bosma, Alexandre Passos, Jeremy Maitin-Shepard, Noah Fiedel, Mark Omernick, Brennan Saeta, Ryan Sepassi, Alexander Spiridonov, Joshua Newlan, and Andrea Gesmundo. 2022. Scaling up models and data with t5x and seqio." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.306, + 0.883, + 0.358 + ], + "angle": 0, + "content": "Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? arXiv preprint arXiv:2002.08910." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.37, + 0.883, + 0.422 + ], + "angle": 0, + "content": "Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends® in Information Retrieval, 3(4):333-389." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.433, + 0.883, + 0.486 + ], + "angle": 0, + "content": "Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, and Matei Zaharia. 2022. Colbertv2: Effective and efficient retrieval via lightweight late interaction." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.497, + 0.883, + 0.562 + ], + "angle": 0, + "content": "Yi Tay, Vinh Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, et al. 2022. Transformer memory as a differentiable search index. Advances in Neural Information Processing Systems, 35:21831-21843." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.573, + 0.883, + 0.638 + ], + "angle": 0, + "content": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.65, + 0.883, + 0.703 + ], + "angle": 0, + "content": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.713, + 0.883, + 0.779 + ], + "angle": 0, + "content": "Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, and Arnold Overwijk. 2020. Approximate nearest neighbor negative contrastive learning for dense text retrieval. arXiv preprint arXiv:2007.00808." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.79, + 0.883, + 0.867 + ], + "angle": 0, + "content": "Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, and Meng Jiang. 2022. Generate rather than retrieve: Large language models are strong context generators. arXiv preprint arXiv:2209.10063." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.879, + 0.883, + 0.918 + ], + "angle": 0, + "content": "Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, and Meng Jiang. 2023. Generate" + }, + { + "type": "list", + "bbox": [ + 0.512, + 0.086, + 0.883, + 0.918 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.525, + 0.941 + ], + "angle": 0, + "content": "14539" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.135, + 0.086, + 0.49, + 0.126 + ], + "angle": 0, + "content": "rather than retrieve: Large language models are strong context generators. In The Eleventh International Conference on Learning Representations." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.136, + 0.49, + 0.189 + ], + "angle": 0, + "content": "Yujia Zhou, Jing Yao, Zhicheng Dou, Ledell Wu, Peitan Zhang, and Ji-Rong Wen. 2022. Ultron: An ultimate retriever on corpus with a model-based indexer. arXiv preprint arXiv:2208.09257." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.199, + 0.49, + 0.253 + ], + "angle": 0, + "content": "Fengbin Zhu, Wenqiang Lei, Chao Wang, Jianming Zheng, Soujanya Poria, and Tat-Seng Chua. 2021. 
Retrieving and reading: A comprehensive survey on open-domain question answering." + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.49, + 0.253 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.527, + 0.941 + ], + "angle": 0, + "content": "14540" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.115, + 0.085, + 0.239, + 0.101 + ], + "angle": 0, + "content": "A Appendix" + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.11, + 0.353, + 0.126 + ], + "angle": 0, + "content": "A.1 Constraint Computation" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.131, + 0.49, + 0.163 + ], + "angle": 0, + "content": "1P relies on two key operations for constraint computation:" + }, + { + "type": "text", + "bbox": [ + 0.128, + 0.173, + 0.486, + 0.189 + ], + "angle": 0, + "content": "a) \\(\\mathcal{F}(D,k)\\) : Documents that contain keyword \\(k\\)" + }, + { + "type": "text", + "bbox": [ + 0.127, + 0.199, + 0.488, + 0.231 + ], + "angle": 0, + "content": "b) \\(\\mathcal{C}(k,D)\\) : Next tokens for keyword \\(k\\) in an arbitrary document set \\(D\\)" + }, + { + "type": "list", + "bbox": [ + 0.127, + 0.173, + 0.488, + 0.231 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.242, + 0.489, + 0.386 + ], + "angle": 0, + "content": "\\(\\mathcal{F}(D,k)\\) is preprocessed and cached to allow for quick computation. \\(\\mathcal{C}(k,D)\\) is trickier to compute. When \\(D\\) represents the full corpus, the FM-index can fetch the next tokens in \\(O(|V| \\log(|V|))\\), where \\(V\\) is the token vocabulary; the cost is independent of \\(|D|\\). However, an arbitrary \\(D\\) requires a traversal over all documents and can be very expensive. In practice, training guides the LLM to generate effective keywords such that \\(|D|\\) is small."
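The two operations above can be illustrated with a toy sketch over plain token lists. This is not the paper's implementation: a real system would answer both queries from the FM-index, and the names `F`, `C`, and the tiny `corpus` below are assumptions for this example.

```python
from collections import defaultdict

# Toy corpus: document id -> token list (stand-in for the FM-indexed corpus).
corpus = {
    "d1": ["grey", "anatomy", "season", "14", "consists", "of", "24"],
    "d2": ["penny", "united", "states", "coin", "zinc", "lincoln"],
}

# F(D, k): documents containing keyword k, served from a precomputed
# inverted index over all token n-grams (here D is always the full corpus).
index = defaultdict(set)
for doc_id, tokens in corpus.items():
    for i in range(len(tokens)):
        for j in range(i + 1, len(tokens) + 1):
            index[tuple(tokens[i:j])].add(doc_id)

def F(k):
    return index.get(tuple(k), set())

# C(k, D): tokens that can extend keyword prefix k within document set D.
# Over the full corpus the FM-index answers this in O(|V| log(|V|));
# for an arbitrary subset D we must scan every document, which is why
# keeping |D| small matters.
def C(k, D):
    out = set()
    for doc_id in D:
        tokens = corpus[doc_id]
        n = len(k)
        for i in range(len(tokens) - n):
            if tokens[i:i + n] == k:
                out.add(tokens[i + n])
    return out

print(F(["season"]))           # {'d1'}
print(C(["season"], {"d1"}))   # {'14'}
```

Chaining the two, `C(k, F(k))` yields the legal continuations of a keyword within exactly the documents that contain it, which is the shape of the per-step constraint during decoding.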
+ }, + { + "type": "text", + "bbox": [ + 0.114, + 0.388, + 0.49, + 0.42 + ], + "angle": 0, + "content": "We also apply certain other optimizations to reduce the compute cost:" + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.43, + 0.487, + 0.462 + ], + "angle": 0, + "content": "- Constraints are computed lazily over a decoding pass." + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.473, + 0.485, + 0.505 + ], + "angle": 0, + "content": "- Several computations are cached, e.g., the keyword-to-document-id mapping." + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.516, + 0.486, + 0.61 + ], + "angle": 0, + "content": "- To cap the cost of constraints at each decoding step, we allow for unconstrained generation in rare scenarios, when the estimated cost is too high. If the generated path is absent from the corpus (\\(< 1\\%\\) of examples), it can be filtered out later." + }, + { + "type": "list", + "bbox": [ + 0.137, + 0.43, + 0.487, + 0.61 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.623, + 0.488, + 0.672 + ], + "angle": 0, + "content": "Despite these optimizations, inference continues to be expensive, and we perhaps need a special data structure for next-token look-up." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.681, + 0.307, + 0.697 + ], + "angle": 0, + "content": "A.2 Training data size" + }, + { + "type": "table", + "bbox": [ + 0.12, + 0.713, + 0.484, + 0.811 + ], + "angle": 0, + "content": "
DatasetQueriesPathsHits@1EM
Open-NQ55k55k41.928.1
Open-NQ55k310k42.928.7
Open-NQ + PAQ55k + 9M310k + 9M43.629.5
" + }, + { + "type": "table_caption", + "bbox": [ + 0.114, + 0.82, + 0.488, + 0.851 + ], + "angle": 0, + "content": "Table 7: Comparison of different dataset sizes for queries and paths" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.872, + 0.489, + 0.92 + ], + "angle": 0, + "content": "In Table 7, we observe the effect of dataset size on performance. Increasing the numbers of paths sampled per query improves performance, perhaps" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.883, + 0.133 + ], + "angle": 0, + "content": "due to higher diversity in training. However, this method of dataset expansion is limited by the number of relevant paths we could extract for a query." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.134, + 0.884, + 0.278 + ], + "angle": 0, + "content": "We also experiment with increasing the query set manifold by mixing in unsupervised datasets. A total of 9M QA pairs are sampled from PAQ (Lewis et al., 2021), a synthetic QA dataset, and search paths extracted with heuristic scoring described in Section 5. The original 1P training dataset is mixed in 1:1 ratio. This further boosts performance, but not proportionally to the amount of data added, indicating diminishing returns from silver datasets." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.289, + 0.689, + 0.305 + ], + "angle": 0, + "content": "A.3 SEAL keywords" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.31, + 0.884, + 0.631 + ], + "angle": 0, + "content": "SEAL generates a set of document substrings constrained on the corpus, that are combined to form document identifiers. Besides using a LM to generate keywords, SEAL utilizes several other mechanisms for extracting keywords. This includes partial beam sequences, heuristically adding query n-grams, sampling the top-k tokens from the logprobs of the first decoding step, force decoding title etc. The keywords are re-scored using the LM as well as FM-index count and all keyword combinations are retrieved. 
Table 8 illustrates keywords generated by both systems. Note that SEAL keywords can be repetitive, so a large number of keywords is required to narrow down to meaningful documents. This also makes SEAL suitable for retrieving a much larger set of documents that can be re-ranked later. The maximum number of retrieved documents for SEAL is capped by a hyperparameter with a default value of 500. In contrast, 1P is geared towards retrieving only the top document." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.642, + 0.62, + 0.656 + ], + "angle": 0, + "content": "A.4 Hits@5" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.663, + 0.884, + 0.727 + ], + "angle": 0, + "content": "SEAL does significantly better than 1P for Hits@5 (Table 9). We attribute this to the large set of keywords generated by SEAL, as explained in Appendix A.3." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.738, + 0.855, + 0.77 + ], + "angle": 0, + "content": "A.5 Normalizing sequence likelihood over constrained space" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.775, + 0.883, + 0.886 + ], + "angle": 0, + "content": "During constrained decoding of a sequence \\( X \\), we need to choose the next token from \\( \\mathcal{C}(X, D) \\) and not the entire vocabulary space \\( V \\). Should the sequence likelihood be re-normalized over this constrained space? We find that re-normalizing the probabilities results in inflated likelihoods, making it hard for the model to back-track." + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.888, + 0.882, + 0.919 + ], + "angle": 0, + "content": "Consider the query \"where did the butchers in the slaughterhouse cases live\" to which our model" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.524, + 0.941 + ], + "angle": 0, + "content": "14541" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.12, + 0.096, + 0.88, + 0.391 + ], + "angle": 0, + "content": "
SystemQuestion or Search PathAnswer
1P SEALwho has the most catches in nfl history2,000-yard club » Barry SandersJerry RiceBarry SandersT.J. Houshmandzadeh
</s> Michael Irvin @ @, yards per catch, caught his, touchdown, record
1P SEALwhen was harry potter and the philosophers stone publishedHarry Potter and the Philosopher's Stone » first published in the United » 199719971997
</s> Harry Potter and the Philosopher's Stone @ @, "Harry Potter, Potter and thePhilosopher's Stone is, Potter and the Philosopher's Stone Harry, novel1999
1P SEALwhat is the meaning of the harp in irelandHarp » national symbol of Ireland » national symbol of Irelandthe arms of Irelandnational symbol of Ireland
</s> Harp @ @, Irish harp., harp is, harp was, harparistocracy
1P SEALwho was the president of pakistan during 1971 warIndo-Pakistani War of 1971 » Prime Minister of Pakistan » Zulfikar Ali BhuttoYahya KhanZulfikar Ali Bhutto
</s> Indo-Pakistani War of 1971 @ @, East Pakistan, Pakistani, Pakistan Army,Pakistan'sMuhammad Yaqub Khan
1P SEALwhen do you declare honors in contract bridgeContract bridge » declaring » end of the handany time after the auctionend of the hand
</s> Contract bridge @ @, declarer, bidding, honors, handsbidding
" + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.4, + 0.884, + 0.472 + ], + "angle": 0, + "content": "Table 8: Comparison of keywords generated by SEAL and 1P for randomly sampled examples from Open-NQ test set. For 1P, we show the full search path separated by \"»\" with the last keyword as the answer. For SEAL, we illustrate the top-5 keywords along with the answer from Reader model. \"\" and \"@@\" are special tokens used by SEAL for identifying start of passage and title marker respectively. The Answer next to the question is the gold answer while others are predictions from corresponding systems." + }, + { + "type": "table", + "bbox": [ + 0.192, + 0.5, + 0.411, + 0.597 + ], + "angle": 0, + "content": "
System | Beam | Hits@5
SEAL | 1 | 59.7
SEAL | 5 | 62.8
1P | 1 | 46.5
1P | 5 | 50.8
" + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.607, + 0.489, + 0.68 + ], + "angle": 0, + "content": "Table 9: Hits@5 on Open-NQ test. SEAL achieves a much higher score than 1P owning to the larger number of documents matched and re-scored. Note that only top-beam result is used for 1P while SEAL uses all beam outputs." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.71, + 0.49, + 0.92 + ], + "angle": 0, + "content": "predicts an irrelevant search path [Slaughterhouse Five, but, EoS]. What's going on under the hood? The first keyword is incorrect lending the model into a poor search space. With the second keyword, the model is possibly looking to generate \"butcher\" but there's no such keyword in the constrained set. Ideally, the model should backtrack at this point to other candidates in the beam. However, since the set of continuations is small, renormalizing inflates the probabilities of all tokens in \\(\\mathcal{C}\\) including \\(EoS\\), even though the true likelihood of such a sequence is very low. Indeed, using the language model's scores directly without any re" + }, + { + "type": "image", + "bbox": [ + 0.516, + 0.503, + 0.88, + 0.672 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.685, + 0.883, + 0.73 + ], + "angle": 0, + "content": "Figure 4: Number of matching documents in the corpus for 1P generated path in the test set. About half the examples match only a single path." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.755, + 0.884, + 0.803 + ], + "angle": 0, + "content": "normalization cures this issue yielding [Slaughterhouse cases, Butcher, EoS]. and this is the strategy we opt for in all our experiments." 
+ }, + { + "type": "title", + "bbox": [ + 0.509, + 0.817, + 0.813, + 0.833 + ], + "angle": 0, + "content": "A.6 Number of matching documents" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.839, + 0.882, + 0.92 + ], + "angle": 0, + "content": "1P generated paths effectively narrow down the corpus, generally matching only a few documents in the corpus as illustrated in Figure 4. Note that a small fraction of paths match 0 documents due to pruning optimizations applied during inference" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14542" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.115, + 0.086, + 0.347, + 0.101 + ], + "angle": 0, + "content": "time detailed in Appendix A.1." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.525, + 0.941 + ], + "angle": 0, + "content": "14543" + } + ] +] \ No newline at end of file diff --git a/2023/1-PAGER_ One Pass Answer Generation and Evidence Retrieval/ed6283a9-c47a-45be-a3e5-a228ad5db48e_origin.pdf b/2023/1-PAGER_ One Pass Answer Generation and Evidence Retrieval/ed6283a9-c47a-45be-a3e5-a228ad5db48e_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7b95b02f68785028018b2112d5c484fdb0150512 --- /dev/null +++ b/2023/1-PAGER_ One Pass Answer Generation and Evidence Retrieval/ed6283a9-c47a-45be-a3e5-a228ad5db48e_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d3bbda581e133bce3116c1a53df4fc4a4a19cddab81e602bf459518802b97de +size 611969 diff --git a/2023/1-PAGER_ One Pass Answer Generation and Evidence Retrieval/full.md b/2023/1-PAGER_ One Pass Answer Generation and Evidence Retrieval/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a798f9dc15448353cf7fcb83e905b315993f1023 --- /dev/null +++ b/2023/1-PAGER_ One Pass Answer Generation and Evidence Retrieval/full.md @@ -0,0 +1,388 @@ +# 1-PAGER: One Pass Answer Generation and Evidence Retrieval + 
+Palak Jain$^{1}$ Livio Baldini Soares$^{2}$ Tom Kwiatkowski$^{2}$ + +$^{1}$ Google Research $^{2}$ Google DeepMind + +{palakj, liviobs, tomkwiat}@google.com + +# Abstract + +We present 1-PAGER, the first system that answers a question and retrieves evidence using a single Transformer-based model and decoding process. 1-PAGER incrementally partitions the retrieval corpus using constrained decoding to select a document and answer string, and we show that this is competitive with comparable retrieve-and-read alternatives according to both retrieval and answer accuracy metrics. 1-PAGER also outperforms the equivalent 'closed-book' question answering model by grounding predictions in an evidence corpus. While 1-PAGER is not yet on par with more expensive systems that read many more documents before generating an answer, we argue that it provides an important step toward attributed generation by folding retrieval into the sequence-to-sequence paradigm that is currently dominant in NLP. We also show that the search paths used to partition the corpus are easy to read and understand, paving a way forward for interpretable neural retrieval. + +# 1 Introduction + +In recent times, there has been a push to reformulate a wide variety of tasks from NLP and other domains into the sequence-to-sequence paradigm, to make use of large pre-trained Transformer networks (Vaswani et al., 2017). However, despite evidence that large language models can often answer questions (Roberts et al., 2020), predict identifiers of documents that support those answers (Tay et al., 2022), or generate text that contains and explains an answer (Yu et al., 2022), the dominant paradigm in question answering is still the retrieve-and-read approach that pipelines separate retrieval and answer generation modules. This approach has the benefit that it can provide direct and targeted paragraph-level attribution for the generated answers (Bohnet et al., 2022).
However, it also relies on a heterogeneous mix of models that are hard to train in concert (Metzler et al., 2021). + +![](images/64c5c707b66c4d979c376511847a2f6ef5534751d30fb0db42801b4d3d7ac625.jpg) +Figure 1: Example 1P output that iteratively partitions the corpus into subsets containing the generated n-grams. The last n-gram is taken as the answer. + +Motivated by the observation that language model decoders already perform search over possible sequences (Graves, 2012), and that evidence documents themselves are simply sequences of tokens, we present an alternative approach that relies on a single Transformer model. In this approach, which we name 1-PAGER (One Pass Answer Generation and Evidence Retrieval) or simply 1P, the decoder iteratively partitions a corpus of evidence documents by generating a search path consisting of a set of keywords that identify relevant documents and an answer string that is contained in at least one of these documents. With 1P, we aim to explore the spectrum between CBQA, where the answer is generated without reference to an evidence corpus, and pipelined approaches that feed retrieved documents into the transformer. + +Figure 1 illustrates an example in which the corpus is iteratively partitioned into documents that contain the string 'Economy of India', then those that also contain the string 'Agriculture', and finally those that also contain the answer string '23%'. + +1P output sequences are guaranteed to match at least one document in the evidence corpus. This is enforced via a constrained decoder that has access to an FM-index representation of the evidence corpus contents (Ferragina and Manzini, 2000), and we evaluate 1P's ability to correctly answer open-domain questions while also retrieving passages that provide support for those answers (Bohnet et al., 2022).
Since 1P is the first model that can do both of these tasks, we compare to pipelined systems that first retrieve a single passage and then generate an answer based on this evidence passage. 1P is competitive as a passage retriever, performing similarly to a widely used dense retriever (Karpukhin et al., 2020) and outperforming the SEAL system, which independently generates keywords rather than a search path (Bevilacqua et al., 2022). 1P also outperforms an equivalent closed-book question answering (CBQA) model (Roberts et al., 2020) according to answer accuracy. Part of this improvement comes from the prediction of search paths themselves, reminiscent of chain-of-thought reasoning (Wei et al., 2022), and part is from 1P's constrained decoder, which forces the model to generate answers from passages that contain the keywords. + +While 1P does not yet perform as well as the very best retrieval or open-domain question answering systems in terms of accuracy, the fact that it is competitive with pipelined systems that are trained with the same data and which use similar amounts of inference-time compute suggests a promising path ahead. Unlike those systems, 1P can be trained end-to-end along with any other task that fits into the sequence-to-sequence paradigm. Additionally, 1P search paths are inherently interpretable, unlike embeddings used in dense retrieval. + +# 2 Related Work + +"Retrieve-and-read" Question Answering Question answering approaches in NLP are dominated by the "retrieve-and-read" paradigm, where a retriever first fetches hundreds of relevant documents from a corpus, followed by a language model that reranks and extracts the answer (Harabagiu et al., 2003; Chen et al., 2017; Zhu et al., 2021). Sparse retrievers such as BM25 (Robertson et al., 2009) build a high-dimensional lexical index over the text corpus.
Dense retrievers (Karpukhin et al., 2020) use a dual encoder architecture to embed the query and document and perform an approximate nearest neighbor search. Various modifications to dense retrieval have been proposed over the years, including hard negative training (Xiong et al., 2020), late interaction (Khattab and Zaharia, 2020; Santhanam et al., 2022), few-shot learning (Izacard et al., 2022), and joint retriever and reader training (Jiang et al., 2022). + +A particular variant of interest is the Iterative Retrieval process, where the query is reformulated incrementally (Das et al., 2019; Lee et al., 2022), leading to an interactive search process (Jiang et al., 2023; Adolphs et al., 2021). This query augmentation scheme has similarities with our use of search paths. However, we use the paths to iteratively partition the corpus while prior works have used them to refine the query. + +To perform well, retrieve-and-read systems will typically retrieve tens to hundreds of passages that must be processed by a language model. In contrast, 1P retrieves and extracts an answer in a single pass of language model generation. + +Closed Book Question Answering With data and parameter scale, LLMs in a closed-book setting (CBQA) have shown performance competitive with retrieval pipelines (ODQA) (OpenAI, 2023; Anil et al., 2023; Yu et al., 2023), though without producing any attributed passages (Rashkin et al., 2021; Bohnet et al., 2022). An extension of CBQA is post-hoc retrieval, where a large language model (LLM) is first used to generate an answer and then evidence for the question-answer pair is fetched by a retriever (Gao et al., 2023a; Bohnet et al., 2022). While post-hoc retrieval serves the same goal as 1P, it still uses a pipeline of LLM and retriever to do so. + +Generative Retrieval Recently, generative retrieval has emerged as an alternative to the conventional "retrieve-and-read" pipeline (Metzler et al., 2021).
Genre (De Cao et al., 2021) performed generative entity linking by constraining the model's decoding to a set of entities. DSI (Tay et al., 2022) offered one of the first proofs of an LLM's ability to memorize docids in the corpus. However, atomic ids or hierarchical clusters, as used in DSI, are opaque identifiers and capture limited information. Works such as SEAL (Bevilacqua et al., 2022) and Ultron (Zhou et al., 2022) use a semantically richer representation: keywords in the document. In particular, SEAL constrains the generation to only keywords in the corpus using the FM-index (Ferragina and Manzini, 2000), a key data structure we borrow in this work. + +![](images/8dcfa0acd2a903129c9e6dbe6563a933e6f4602c2c9e926949dec9381190afc9.jpg) +Figure 2: System illustration of different QA systems. From left to right: CBQA, 1-PAGER, SEAL, Retrieve-and-Read system. $C$ denotes the retrieval corpus, $P$ a retrieved passage, $Q$ the input question, and $A$ the generated answer. 1P is closest to CBQA (only a single model is used) but it also outputs a passage retrieved from $C$ . + +1P represents docids as keyword paths, which are arguably more interpretable, and learns a soft partition over the corpus instead of the hard partition imposed by DSI's clustering. + +Another crucial distinction is 1P's ability to both retrieve and generate an answer, while prior works rely on an external re-ranker/reader for the same. A high-level view of various question-answering systems is presented in Figure 2. + +Attributed Question Answering Standard metrics for open-domain question answering, such as exact match or token-based F1, have received criticism for being imprecise and/or insufficient. Several efforts have proposed augmenting answers with textual evidence, via retrieval or citations (Bohnet et al., 2022; Menick et al., 2022; Gao et al., 2023b).
While this work does not directly evaluate the quality of retrieved answer evidence, our proposed model inherently produces a passage to support the final answer, along with a search path of keywords, which could be used to provide users with answer evidence. + +# 3 Iterative Corpus Partitioning and Answer Prediction + +We focus on the problem of learning a mapping $f(q, D) \to (a, d_a)$ from a question $q$ and corpus of documents $D$ to an answer and supporting document $(a, d_a)$ . The predicted document $d_a$ is retrieved from $D$ and the answer $a$ is a substring of $d_a$ . The document $d_a$ should be relevant to the question and provide evidence for the answer. + +The goal of this paper is to model the function $f$ using a single sequence-to-sequence model, rather than a pipeline that first retrieves $d_{a}$ and then feeds it into an answer generation module. To achieve our goal, we recast retrieval as an iterative corpus partitioning process illustrated in Figure 3. + +Iterative corpus partitioning adopts the LM decoder's autoregressive search process to partition
+ +Answer prediction is treated in exactly the same way as keyword selection and in 1P the last keyword from $p$ is taken as the answer. + +# 4 Constrained Decoding and FM-Index + +To avoid generating empty partitions, we constrain 1-PAGER to only decode search paths that match at least one document. We modify the decoder's beam-search strategy to only allow keyword continuations that are contained in the current partition. + +Given a document subset $D_{p_i}$ , which could be the full corpus $D$ at the start of decoding $(i = 0)$ and a keyword prefix $k$ , which could be empty, the set of all valid continuation tokens is defined as, + +$$ +\mathcal {C} (k, D _ {p _ {i}}) = \{x | k \| x \prec d, d \in D _ {p _ {i}} \} +$$ + +where $x$ is any vocabulary token and $\| \cdot \|$ indicates concatenation of two token sequences. As a special case, when $k = \phi$ and $i = 0$ , all tokens in $D$ are valid continuations. 1P separates keywords in $p_T$ with a special separator token $\rightarrow$ and marks the end of the sequence with an EOS token. These two tokens are always valid continuations. + +![](images/ee9b1d9383128dc9e50e582c09f5501209e0eb4fb1f761fd7f474a4bf66331d8.jpg) +Figure 3: Illustration of the 1P decoding process. A keyword can only be generated from the documents matching previously generated keywords. Right panel shows a magnified view of applying constraints to a decoding step. Any keyword not present in the documents is masked out. + +Consider Figure 3. The three keywords correspond to the decoded token sequence [Ten, Commandments, $\rightarrow$ , twice, in, the, Hebrew, Bible, $\rightarrow$ , books, of, Exodus, EOS]. At the start of decoding, any token in $D$ is allowed. After decoding Ten, only those tokens that follow Ten as an n-gram in $D$ are allowed, along with the default separators. 
After decoding [Ten, Commandments, $\rightarrow$], we are ready to start a new keyword, but only tokens from documents that contain the keyword Ten Commandments are allowed. Decoding continues in this manner until EOS is generated. + +To efficiently implement these constraints, we need a data structure that can quickly determine both $\mathcal{C}(k, D_p)$ , the continuation tokens given a document set, and $\mathcal{P}(D_p, k)$ , the subset of documents that contain a given path. + +For this, we extend the usage of an FM-index (Ferragina and Manzini, 2000) as described in (Bevilacqua et al., 2022). The FM-index is a compressed token-based index over a corpus $D_0$ with a few important properties for our usage: (1) it can efficiently list possible token continuations for a sequence prefix that occur in $D_0$ , i.e., $\mathcal{C}(k, D_0)$ , (2) it can list the set of documents in the corpus that match an n-gram, i.e., $\mathcal{F}(D_0, k)$ , and (3) it supports search over arbitrary n-grams that occur within documents. Note that the FM-index operations are optimized for $D_0$ , the original corpus it is built over. We extend these to an arbitrary $D_p \subset D_0$ at additional cost described in Appendix A.1. + +# 5 Training Data Generation + +For training 1P, we produce a dataset with examples of queries and search paths as described above. At a high level, we generate search paths by iteratively selecting n-grams from an answer passage, and simulating, using the FM-Index of the retrieval corpus, the partitioning of the corpus after selecting each keyword, until only a few documents remain. Finally, the answer span $a$ is appended to the search path. Each example produced can be serialized as a sequence-to-sequence pair of inputs and targets as: + +inputs: Generate keywords for: $q$? + +targets: K_SEP $k_{0}$ K_SEP $k_{1}$ ...
K_SEP A_SEP a EOS + +# 5.1 Keyword Selection + +A good keyword should a) have high relevance to the query and b) effectively narrow down the search space. To identify relevant keywords, we restrict to only the gold document $g$ . All n-grams in $g$ of length up to five are extracted. Irrelevant keywords are filtered out, such as those starting or ending with stop words. Similarly, keywords that are too rare in the corpus (e.g., "Philippines at Luzon") or too frequent (e.g., "part") are excluded based on a threshold on their corpus count. The remaining keywords are scored with a combination of heuristics: mainly Rouge-1 similarity with the query (Lin, 2004), along with a small bonus for keywords containing entities and a penalty for keywords that are highly frequent in the corpus. + +This scoring mechanism often misses keywords that are semantically relevant but do not lexically overlap with the query. To boost the relevance of our keyword set, we re-score the top hundred keywords using a language model. A T5-XXL model is finetuned with the input as the query $q$ and the target as either the title or a heuristically sampled keyword in a similar fashion to Bevilacqua et al. (2022). The heuristically sampled keywords are re-scored using this model to obtain a refined LM-scored set. Two other special types of keywords are awarded high scores: the title of the gold passage and keywords containing the answer string $a$ . + +# 5.2 Search Paths + +The first keyword in a search path needs to effectively partition the corpus. We experiment with either the title or the highest-scoring keyword from the gold passage as the first keyword in the path. The next keywords are sampled based on their score, provided they do not overlap with any of the existing keywords in the path. We continue augmenting a path $p$ with keywords until at most ten passages in the corpus match, i.e., $|D_p| < 10$ . The answer keyword is then appended to the path.
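The path-sampling procedure of Sections 5.1–5.2 can be sketched end to end. Everything below is illustrative: the corpus, stop-word list, and thresholds are invented, and a plain unigram-overlap score stands in for the paper's Rouge-1/entity/frequency heuristics and LM re-scoring.

```python
# Illustrative sketch of Section 5's training-path generation: extract and
# score n-grams from the gold passage, greedily add non-overlapping keywords
# until the partition D_p is small, then append the answer span.

STOP = {"the", "a", "an", "of", "in", "is"}  # invented stop-word list

def contains(doc, ngram):
    n = len(ngram)
    return any(doc[i:i + n] == list(ngram) for i in range(len(doc) - n + 1))

def partition(docs, path):
    """P(D, p): documents containing every keyword of the path."""
    return [d for d in docs if all(contains(d, k) for k in path)]

def candidates(gold, max_len=5):
    """N-grams of the gold passage that don't start/end with a stop word."""
    return {tuple(gold[i:i + n])
            for n in range(1, max_len + 1)
            for i in range(len(gold) - n + 1)
            if gold[i] not in STOP and gold[i + n - 1] not in STOP}

def score(ngram, query):
    """Unigram overlap with the query: a crude proxy for Rouge-1."""
    return len(set(ngram) & set(query)) / len(set(query))

def sample_path(query, gold, docs, answer, max_docs=10):
    path = []
    for k in sorted(candidates(gold), key=lambda k: score(k, query), reverse=True):
        if len(partition(docs, path)) < max_docs:
            break  # few enough documents remain
        if not any(set(k) & set(prev) for prev in path):  # no keyword overlap
            path.append(k)
    path.append(tuple(answer))  # the answer is the final keyword
    return path
```

On a toy three-document corpus, the sampled path narrows the partition to the single gold document before the answer is appended, mirroring the $|D_p| < 10$ stopping rule.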
Our training paths (including the answer) contain a median of three keywords and one matching document. + +# 6 Experimental Setup + +# 6.1 Datasets + +We use Open-NQ (Kwiatkowski et al., 2019; Lee et al., 2019) as the question-answering dataset for training. For evaluation, besides Open-NQ, WebQuestions (Berant et al., 2013) and CuratedTREC (Baudiš and Šedivý, 2015) are used to measure out-of-domain performance. The FM-Index corpus for constrained decoding is built over the DPR Wikipedia corpus with 100-word splits (Karpukhin et al., 2020). The positive gold passages from DPR are used for sampling training paths. This setup is chosen to mirror SEAL and also permits fair comparison against DPR. + +# 6.2 Training + +1P's training dataset contains 310k paths corresponding to 55k queries from Open-NQ. The majority of training paths begin with the title, with a small fraction starting with other keywords (12%). All keywords, except the title, are scored using the LM-scoring technique described above. + +For our experiments, we use the T5X (Roberts et al., 2022) framework. A T5-XXL $1.1^{1}$ (Raffel et al., 2020) model is finetuned with a batch size of 256 and dropout of 0.1. No additional hyperparameter tuning is performed. We format search paths using the reserved tokens $\mathsf{K\_SEP} = "$ " and $\mathsf{A\_SEP} = "$ ". + +# 6.3 Inference + +Our best model employs beam decoding with a beam of 5. Even when the beam is greater than one, only the top-beam result is used for retrieval. We discuss the effect of beam size in depth in Section 7. Given the top generated path $p$ , $D_{p}$ corresponds to the retrieved documents. In case $|D_{p}| > 1$ , a document is sampled arbitrarily for evaluation. + +# 6.4 Baselines + +We compare to a closed-book question answering (CBQA) system that generates answers but does not ground these in an evidence corpus, as well as retrieve-and-read systems that combine a variety of retrievers with a Transformer-based answerer module.
Both the CBQA baseline and the answerer module are derived from the same T5-XXL 1.1 pretrained model as 1P. + +# 6.4.1 T5-CBQA + +A T5-XXL 1.1 model is fine-tuned to predict answers from the DPR training set for 10,000 steps with a batch size of 128. Note that it is possible to achieve a higher closed-book performance on NQ using the full Open-NQ training split instead of the subset included in the DPR training set (Roberts et al., 2020). However, to enable meaningful comparison, we restrict the CBQA baseline to the same training examples used to train 1P. + +# 6.4.2 Retrieve-and-Read + +The retrieve-and-read baselines first retrieve a single passage from the evidence corpus, and then feed this passage and the question into the answer generation module. We report retrieval accuracy for the retrieved passage and answer accuracy for the generated answer. + +T5-Reader We tune a T5-XXL 1.1 model to generate answers from (question, evidence passage) pairs. This is the same base model used by 1P and we train on the (question, passage, answer) triples in the DPR training split to ensure fair comparison. + +DPR-Retriever We compare against vanilla DPR finetuned on NQ without hard negatives (Karpukhin et al., 2020) using the pre-computed index available on DPR's repository. We note that our ODQA setup differs from the one used by Karpukhin et al. in that we choose the highest-scoring retrieval as evidence for answer generation, instead of generating from the top-100 passages without attribution. + +BM25-Retriever We use the Pyserini toolkit (Lin et al., 2021) with default configurations, retrieving the top-1 passage. + +SEAL-Retriever SEAL (Bevilacqua et al., 2022) is a generative retrieval system that generates a set of keywords constrained to the corpus. In terms of technique, 1P borrows inspiration from SEAL's use of the FM-Index as well as keywords-as-identifiers. However, the two setups have substantial differences that we highlight in Section 8.
We run SEAL with its default configuration and a beam of 5 using the publicly released checkpoint based on Bart-large (Lewis et al., 2020). All outputs from the beam are used for retrieval. + +# 6.5 Evaluation + +We evaluate in-domain performance on the Open-NQ test split and out-of-domain performance on WebQuestions (WQ) and CuratedTREC (TREC) following the setup from Karpukhin et al. (2020). Passage retrieval performance is measured with Hits@1 using Pyserini evaluation scripts. + +# 6.6 1P Configurations + +We experiment with three configurations: a) 1P: Our primary setup that uses both the training and constrained decoding procedures described above, producing a retrieved passage as well as an answer. b) 1P-Unconstrained: Only the training technique described in Section 5 is adopted, with standard unconstrained decoding. Since generation is unconstrained, it is possible that no passage gets retrieved for a given path. c) 1P + Reader: Here, we take the top retrieved passage from 1P and input it to the Reader model (Section 6.4) to extract the answer. + +# 7 Results + +
| Retriever | Answerer | Retrieval Hits@1 | Answer EM | Answer F1 |
|---|---|---|---|---|
| – | T5-CBQA | – | 26.8 | 34.0 |
| BM25 | T5-Reader | 23.6 | 17.9 | 24.0 |
| SEAL | T5-Reader | 37.9 | 29.4 | 35.8 |
| DPR | T5-Reader | 46.5 | 35.6 | 42.4 |
| 1P | T5-Reader | 46.3 | 34.2 | 41.4 |
| 1P-Unconstrained | – | 29.3 | 29.3 | 36.1 |
| 1P | – | 46.3 | 31.7 | 38.0 |
+ +Table 1: Comparison of different Retriever and Answerer combinations on the NQ-Open test set. In retrieve-and-read setups, answers are generated from the top-1 retrieved passage. 1P combines passage retrieval and answer generation in a single prediction. + +
| System | WebQuestions Hits@1 | WebQuestions EM | TREC Hits@1 | TREC EM |
|---|---|---|---|---|
| BM25 + Rdr | 19.7 | 14.2 | 35.2 | 29.1 |
| DPR + Rdr | 32.0 | 17.3 | 51.6 | 35.0 |
| 1P + Rdr | 38.0 | 20.4 | 63.8 | 38.5 |
| 1P | 38.0 | 20.5 | 63.8 | 36.4 |
+ +Table 2: Comparison of different Retriever and Answerer combinations on out-of-domain datasets. Both the Retriever and Answerer (Rdr) are trained only on Open-NQ. In retrieve-and-read setups, answers are generated from the top-1 retrieved passage. + +We compare to the baselines described in Section 6.4 on Open-NQ using both retrieval and answer accuracy metrics in Table 1. Answers are generated based on the top retrieved document in systems that separate retrieval from answer generation, to provide a clean comparison between systems that return (answer, evidence passage) pairs. Table 2 reports the out-of-domain performance of various systems on WQ and TREC. + +1P outperforms CBQA in question answering and beats the retrieve-and-read systems, BM25 and SEAL. On the passage retrieval task, it significantly improves over BM25 and SEAL. In the in-domain setting, 1P is competitive with DPR on the retrieval task, but lags behind the QA pipeline that uses DPR. However, this appears to be more due to the reader than the retriever, as discussed in Section 8. It is worth noting that 1P generalizes significantly better out-of-domain compared to other systems. + +Utility of Search Paths 1P-Unconstrained can be viewed as an extended version of CBQA that generates a search path before predicting the answer. Thus, the improvement of 1P-Unconstrained over CBQA can be attributed to this path-conditioned answer generation process, analogous to chain-of-thought reasoning (Wei et al., 2022; Lampinen et al., 2022). + +
| System | Constrained Decoding | Beam = 1 | Beam = 5 |
|---|---|---|---|
| CBQA | No | 26.7 | 26.8 |
| 1P-Unconst. | No | 29.0 | 29.3 |
| SEAL + Reader | Yes | 28.5 | 29.4 |
| 1P | Yes | 28.7 | 31.7 |
+ +Effect of Constrained Decoding The purpose of constrained decoding is to ground the answer in evidence retrieved from the corpus. As expected, the constrained setup enables 1P to achieve a higher Hits@1 than 1P-Unconstrained. Surprisingly, when decoding with a beam of one, we observe a small drop in answer accuracy for 1P compared to 1P-Unconstrained (Table 3). Inspecting the losses, two dominant reasons surface. First, as DPR passages are chunked into 100-word segments (Karpukhin et al., 2020), some queries may become unanswerable given a single passage due to missing context. This is disadvantageous when the model has memorized the answer but there is no single passage to attribute it to. + +Second, during constrained decoding, after generating the initial keywords, the search space may soon become sparse with no good candidates to pick from. Could more room to plan its actions help the model here? Indeed, increasing the beam size to 5 improves performance by $3\%$ (Table 3), even when only the top-beam is used for retrieval. We refer to this as Planning, since the larger beam only enables the model to plan better and the remaining beam outputs are otherwise discarded. Note that unconstrained decoding does not gain from planning. In the final setup in Table 1, we use a beam of 5 for both 1P and SEAL. Unlike 1P, SEAL uses all the outputs from the larger beam for retrieval. + +# 8 Discussion and Ablations + +Generating Answers While 1P is capable of generating answers, Table 1 highlights that it falls behind 1P + Reader. The reason seems to be clear: the Reader has visibility into the full passage context while 1P is limited to the decoded search path and the constrained index, which only ensures that generations are grounded in the corpus. Since 1P does retrieve passages, it would be possible to pull in the corresponding text as input for answer generation. We leave this as future work.
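The planning effect discussed above can be seen in a toy constrained beam search. Everything here — the two-path constraint set and the log-probabilities — is invented for illustration and is not the 1-PAGER implementation: with beam 1 the decoder greedily commits to a keyword that leads into a sparse, poorly scored region, while beam 2 keeps the alternative alive.

```python
# Toy illustration of "planning" in constrained beam search. The constraint
# set VALID and model scores LOGPROB are invented: "A" looks best locally,
# but its only allowed continuation is very unlikely, so a larger beam that
# retains "B" surfaces the better overall path.

VALID = {("A", "X", "EOS"), ("B", "Y", "EOS")}

LOGPROB = {
    (): {"A": -0.1, "B": -0.5},   # "A" wins the first step greedily...
    ("A",): {"X": -5.0},          # ...but its forced continuation is poor
    ("B",): {"Y": -0.1},
    ("A", "X"): {"EOS": 0.0},
    ("B", "Y"): {"EOS": 0.0},
}

def allowed(prefix):
    """Constrained continuations: tokens extending the prefix within VALID."""
    n = len(prefix)
    return {s[n] for s in VALID if s[:n] == prefix and n < len(s)}

def constrained_beam_search(beam):
    beams = [((), 0.0)]
    while any(seq[-1:] != ("EOS",) for seq, _ in beams):
        grown = []
        for seq, lp in beams:
            if seq[-1:] == ("EOS",):
                grown.append((seq, lp))  # finished hypothesis carried over
                continue
            for tok in allowed(seq):
                grown.append((seq + (tok,), lp + LOGPROB[seq][tok]))
        beams = sorted(grown, key=lambda b: b[1], reverse=True)[:beam]
    return beams[0][0]  # only the top beam is used for retrieval
```

Here `constrained_beam_search(1)` commits to the dead-end path through "A", whereas `constrained_beam_search(2)` returns the path through "B" even though only the top beam is ultimately used — the extra hypotheses serve purely as planning room.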
+ +Comparison to SEAL While 1P takes inspiration from SEAL, in practice, there are a few key differences between the two systems aside from 1P's answer generation. + +SEAL generates a large set of keywords (Table 4) using many separate decodes and heuristic guidance (Appendix A.3). In contrast, 1P decodes a single sequence of about three keywords. + +Table 3: EM for various decoding setups with different beam sizes on Open-NQ. Only the top-beam result is used for evaluation, except for SEAL, which uses all beam outputs. 1P constrained decoding benefits the most from a large beam whereas unconstrained setups show only a slight effect. + +
| | SEAL | 1P |
|---|---|---|
| Median keywords | 32 | 3 |
| Median docs retrieved | 500 | 1 |
| Generates answer | ✗ | ✓ |
Table 4: Key differences between SEAL and 1P, measured over the Open-NQ test split with a beam of 1.

The SEAL keywords form a set, decoded independently of each other and re-scored using sophisticated techniques to retrieve a large number of documents. For instance, the default configuration in SEAL retrieves up to 500 documents. This makes SEAL well suited for use in conjunction with a re-ranker. In contrast, 1P search paths map directly to a single (or a few) relevant documents (Appendix A.6).

We acknowledge the model-size variation between SEAL and 1P in the reported experiments; however, we preferred using the publicly available SEAL checkpoint. Given the discrepancies in beam size, number of decodes, and use of a Reader model, it is difficult to make an apples-to-apples comparison between the two systems.

Path vs Keyword set We qualitatively observe that keywords in a 1P path, owing to sequential generation, are distinct and add new information, whereas overlapping keywords are common in the SEAL output set (Appendix A.3). Thus, paths are advantageous for precisely narrowing down to a single relevant document, while keyword sets are effective for retrieving a large number of documents that can later be re-ranked. This is corroborated by the fact that 1P is better at Hits@1 while SEAL is better at Hits@5 (Appendix A.4).

Qualitative Analysis Table 5 illustrates patterns of search paths generated by 1P. We note some of the common path patterns here:

1) The first keywords are entities from the query, followed by query predicates that iteratively narrow down towards an answer. This is the most common type of path observed and can be attributed to the dominant presence of titles in the training data.
2) Rewrites of the original query or related predicates, such as "seasons consists of" or "appeared on ...". Such paths are more prevalent when there is no canonical entity in the query or no entity can be determined with high confidence.
3) The answer is generated directly, followed by supporting keywords that guide towards an attributed passage. This happens in a small fraction of cases, likely where the pretrained model has memorized an answer with high confidence.

Overall, we find the generated search paths to be fairly meaningful and interpretable.

Sampling Search Paths for Training Table 6 highlights that high-quality keywords are crucial to performance. The LM re-scored set of keywords results in a significant accuracy gain over heuristically sampled keywords. Paths whose first keyword is the title boost performance further. Mixing in a small fraction of paths starting with non-title keywords encourages the model to generate predicates where no entity can be determined, giving us the best results.

Sensitivity to tokenization We find that constrained decoding is highly sensitive to rare tokenization or punctuation formatting in the corpus. Consider the query "who sang i ran all the way home" with the gold document title "Sorry (I Ran All the Way Home)". In the unconstrained setup, the model's top prediction starts with "I Ran All the Way Home". However, "(I" is tokenized differently from "I", and searching over the FM-index returns no match. As a result, constrained decoding drops the predicted keyword altogether, resorting to lower-ranked keywords in the beam. We partially fix the issue by modifying the answer in a fraction of the training data to include surrounding punctuation tokens based on how they appear in the FM-index. For instance, the keyword "I Ran ..." would update to "(I Ran ...)". This simple change leads to a jump in answer accuracy from $26.4\%$ to $28.7\%$. However, much more work is needed to make 1P robust to variations in tokenization.

See Appendix A.2 for analysis of training data size and Appendix A.5 for masking logits vs. log-probs.
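The tokenization brittleness described above can be reproduced with a toy word-level tokenizer. The token ids below are invented for illustration; the actual system uses T5 subword tokenization over an FM-index, but the failure mode is the same: matching happens over token ids, so "(I" and "I" are different tokens and the keyword lookup fails.

```python
# Hypothetical token table: "(I" and "I" get distinct ids, as subword
# tokenizers typically merge punctuation into the adjacent piece.
TOKENS = {"(I": 0, "I": 1, "Ran": 2, "All": 3, "the": 4,
          "Way": 5, "Home)": 6, "Home": 7}

def tokenize(text):
    return [TOKENS[w] for w in text.split()]

indexed = tokenize("(I Ran All the Way Home)")  # as stored in the corpus index
keyword = tokenize("I Ran All the Way Home")    # as the model predicts it

def contains(haystack, needle):
    """Token-level substring match, standing in for an index lookup."""
    return any(haystack[i:i + len(needle)] == needle
               for i in range(len(haystack) - len(needle) + 1))

print(contains(indexed, keyword))                         # False: ids never align
print(contains(indexed, tokenize("(I Ran All the Way")))  # True with punctuation
```

This is why rewriting training keywords to include surrounding punctuation, as done above, recovers the match.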
# Conclusion

We introduce 1-PAGER, the first system to perform question answering and passage retrieval in one pass with a single language model, using a constrained decoder to iteratively partition the retrieval corpus and then generate an answer. We show competitive or improved performance over a variety of comparable baselines and carefully analyze the results, ablating both training strategies and decoding style. We also provide a qualitative analysis of predictions to illustrate the system's capabilities. Challenges with constrained decoding, including poor search spaces and sensitivity to tokenization, are surfaced, and mitigation strategies are presented.

We hope that 1P adds value in demonstrating how a single transformer model can be harnessed to do both retrieval and answering, and paves the way for further progress in the generative retrieval domain.

# Limitations

1P is geared towards identifying a concise, small set of documents and generating an answer in a single pass. While this makes the architecture simpler, it also adds certain weaknesses. 1P is not effective for retrieving a large number of documents and falls behind pipelined systems that combine retrieval with re-ranking. Even for a single passage, it lags behind state-of-the-art dense-retrieval techniques. 1P's method of answer generation is also not competitive with the use of a reader, due to the lack of passage context.

Our training strategy relies heavily on titles and entities, and its generalization to corpora without rich structure, or to queries without central entities, remains to be studied.
| Query (Q) and Generated Search Path (SP) | Comment |
|---|---|
| **Correctly attributed passages and answers** | |
| Q: how many episodes of greys anatomy season 14 <br> SP: Grey's Anatomy (season 14) » season consists of 24 episodes » 24 | Query entity resolved first, followed by query predicates |
| Q: when did they start adding zinc to pennies <br> SP: Penny (United States coin) » zinc » Lincoln cent » 1943 | Query entity resolved iteratively |
| Q: who was executed for being an american spy during the revolutionary war <br> SP: Nathan Hale » Army during the American Revolutionary » Nathan Hale | Answer generated first |
| Q: who was the grandfather on the Cosby show <br> SP: appeared on "The Cosby » Earle Hyman | Query rewrites |
| **Incorrect Passage or Answer** | |
| Q: who decides the number of judges in the high court <br> SP: judge is appointed » High Court » Chief Justice of India <br> A: President of India | Path correctly resolved, failed on answer |
| Q: when did the isle of wight become an island <br> SP: Isle of Wight » 1890 » 1890 <br> A: During the last Ice Age | Query entity resolved, failed on supporting keywords |
| Q: love yourself by justin bieber is about who <br> SP: Love Yourself: Her » music video » Her <br> A: Rihanna | Failed to resolve query entity |
Table 5: Example 1P search paths (SP) on the Open-NQ test set. The last keyword in an SP is the predicted answer. Gold answers are indicated by A.
| Search Path | Hits@1 | EM |
|---|---|---|
| Heuristic | 34.5 | 22.6 |
| LM-scored | 40.0 | 27.2 |
| Title » LM-scored | 41.9 | 28.0 |
| Title » LM-scored + LM-scored (7+1) | 42.9 | 28.7 |
Table 6: Comparison of training search paths on Open-NQ. Here, LM-scored denotes re-scoring of a heuristic set by the LM. All results are with a beam of one. "»" indicates the keyword separator and "+" a mixture of path types in the given ratio.

Constrained decoding also comes with its own challenges. Constrained beam outputs often lack diversity, so even with a larger beam one may still end up in poor search spaces. Computing document-level constraints across the corpus is expensive, as it may require scanning a large number of rows in the index. Further, communication between the FM-index and the Transformer model slows down inference.

# Acknowledgement

We thank Don Metzler, Nicholas FitzGerald, Partha Talukdar, Srini Narayanan, as well as our anonymous reviewers, for their thoughtful comments and valuable feedback.

# Ethical Considerations

While large language models can solve a wide range of tasks effectively, they also suffer from biases across axes such as gender, race, and region (Chan, 2023). LLMs are also prone to generating toxic content, especially when probed for it. Although our task grounds the model's generations in a corpus, some of the biases in pre-trained LLMs may seep into 1-PAGER.

Building the FM-index and constrained decoding are compute-intensive. We have experimented with a single dataset, Natural Questions, involving only knowledge-seeking queries, and a single model family, T5. It is possible that some of our findings may not hold for other datasets or model families. Finally, our experiments are limited to an English corpus and queries. The proposed approaches are resource-intensive and may not be accessible or valid for several low-resource languages.

# References

Leonard Adolphs, Benjamin Boerschinger, Christian Buck, Michelle Chen Huebscher, Massimiliano Ciaramita, Lasse Espeholt, Thomas Hofmann, Yannic Kilcher, Sascha Rothe, Pier Giuseppe Sessa, et al. 2021. Boosting search engines with interactive agents.
arXiv preprint arXiv:2109.00527.

Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. 2023. Palm 2 technical report. arXiv preprint arXiv:2305.10403.

Petr Baudiš and Jan Šedivý. 2015. Modeling of the question answering task in the yodaqa system. In Experimental IR Meets Multilinguality, Multimodality, and Interaction: 6th International Conference of the CLEF Association, CLEF'15, Toulouse, France, September 8-11, 2015, Proceedings 6, pages 222-228. Springer.

Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533-1544.

Michele Bevilacqua, Giuseppe Ottaviano, Patrick Lewis, Wen-tau Yih, Sebastian Riedel, and Fabio Petroni. 2022. Autoregressive search engines: Generating substrings as document identifiers. arXiv preprint arXiv:2204.10628.

Bernd Bohnet, Vinh Q Tran, Pat Verga, Roee Aharoni, Daniel Andor, Livio Baldini Soares, Jacob Eisenstein, Kuzman Ganchev, Jonathan Herzig, Kai Hui, et al. 2022. Attributed question answering: Evaluation and modeling for attributed large language models. arXiv preprint arXiv:2212.08037.

Anastasia Chan. 2023. Gpt-3 and instructgpt: technological dystopianism, utopianism, and "contextual" perspectives in ai ethics and industry. AI and Ethics, 3(1):53-64.

Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open-domain questions. arXiv preprint arXiv:1704.00051.

Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, and Andrew McCallum. 2019. Multi-step retriever-reader interaction for scalable open-domain question answering. In International Conference on Learning Representations.

Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2021. Autoregressive entity retrieval.
In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.

P. Ferragina and G. Manzini. 2000. Opportunistic data structures with applications. In Proceedings 41st Annual Symposium on Foundations of Computer Science, pages 390-398.

Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Y. Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, and Kelvin Guu. 2023a. Rarr: Researching and revising what language models say, using language models.

Tianyu Gao, Howard Yen, Jiatong Yu, and Danqi Chen. 2023b. Enabling large language models to generate text with citations.

Alex Graves. 2012. Sequence transduction with recurrent neural networks.

Sanda M Harabagiu, Steven J Maiorano, and Marius A Pasca. 2003. Open-domain textual question answering techniques. Natural Language Engineering, 9(3):231-267.

Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Few-shot learning with retrieval augmented language models.

Zhengbao Jiang, Luyu Gao, Jun Araki, Haibo Ding, Zhiruo Wang, Jamie Callan, and Graham Neubig. 2022. Retrieval as attention: End-to-end learning of retrieval and reading within a single transformer. arXiv preprint arXiv:2212.02027.

Zhengbao Jiang, Frank F Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Active retrieval augmented generation. arXiv preprint arXiv:2305.06983.

Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781, Online. Association for Computational Linguistics.

Omar Khattab and Matei Zaharia. 2020.
Colbert: Efficient and effective passage search via contextualized late interaction over bert. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 39-48.

Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics.

Andrew K Lampinen, Ishita Dasgupta, Stephanie CY Chan, Kory Matthewson, Michael Henry Tessler, Antonia Creswell, James L McClelland, Jane X Wang, and Felix Hill. 2022. Can language models learn from explanations in context? arXiv preprint arXiv:2204.02329.

Hyunjii Lee, Sohee Yang, Hanseok Oh, and Minjoon Seo. 2022. Generative multi-hop retrieval. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 1417-1436.

Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086-6096, Florence, Italy. Association for Computational Linguistics.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.

Patrick Lewis, Yuxiang Wu, Linqing Liu, Pasquale Minervini, Heinrich Küttler, Aleksandra Piktus, Pontus Stenetorp, and Sebastian Riedel. 2021.
Paq: 65 million probably-asked questions and what you can do with them.

Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81.

Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng-Hong Yang, Ronak Pradeep, and Rodrigo Nogueira. 2021. Pyserini: An easy-to-use python toolkit to support replicable IR research with sparse and dense representations. arXiv preprint arXiv:2102.10073.

Jacob Menick, Maja Trebacz, Vladimir Mikulik, John Aslanides, Francis Song, Martin Chadwick, Mia Glaese, Susannah Young, Lucy Campbell-Gillingham, Geoffrey Irving, and Nat McAleese. 2022. Teaching language models to support answers with verified quotes.

Donald Metzler, Yi Tay, Dara Bahri, and Marc Najork. 2021. Rethinking search: making domain experts out of dilettantes. ACM SIGIR Forum, 55(1):1-27.

OpenAI. 2023. Gpt-4 technical report.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140):1-67.

Hannah Rashkin, Vitaly Nikolaev, Matthew Lamm, Lora Aroyo, Michael Collins, Dipanjan Das, Slav Petrov, Gaurav Singh Tomar, Iulia Turc, and David Reitter. 2021. Measuring attribution in natural language generation models. arXiv preprint arXiv:2112.12870.

Adam Roberts, Hyung Won Chung, Anselm Levskaya, Gaurav Mishra, James Bradbury, Daniel Andor, Sharan Narang, Brian Lester, Colin Gaffney, Afroz Mohiuddin, Curtis Hawthorne, Aitor Lewkowycz, Alex Salcianu, Marc van Zee, Jacob Austin, Sebastian Goodman, Livio Baldini Soares, Haitang Hu, Sasha Tsvyashchenko, Aankanksha Chowdhery, Jasmijn Bastings, Jannis Bulian, Xavier Garcia, Jianmo Ni, Andrew Chen, Kathleen Kenealy, Jonathan H.
Clark, Stephan Lee, Dan Garrette, James Lee-Thorp, Colin Raffel, Noam Shazeer, Marvin Ritter, Maarten Bosma, Alexandre Passos, Jeremy Maitin-Shepard, Noah Fiedel, Mark Omernick, Brennan Saeta, Ryan Sepassi, Alexander Spiridonov, Joshua Newlan, and Andrea Gesmundo. 2022. Scaling up models and data with t5x and seqio.

Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? arXiv preprint arXiv:2002.08910.

Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends® in Information Retrieval, 3(4):333-389.

Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, and Matei Zaharia. 2022. Colbertv2: Effective and efficient retrieval via lightweight late interaction.

Yi Tay, Vinh Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, et al. 2022. Transformer memory as a differentiable search index. Advances in Neural Information Processing Systems, 35:21831-21843.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903.

Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, and Arnold Overwijk. 2020. Approximate nearest neighbor negative contrastive learning for dense text retrieval. arXiv preprint arXiv:2007.00808.

Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, and Meng Jiang. 2022. Generate rather than retrieve: Large language models are strong context generators. arXiv preprint arXiv:2209.10063.
Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, and Meng Jiang. 2023. Generate rather than retrieve: Large language models are strong context generators. In The Eleventh International Conference on Learning Representations.

Yujia Zhou, Jing Yao, Zhicheng Dou, Ledell Wu, Peitan Zhang, and Ji-Rong Wen. 2022. Ultron: An ultimate retriever on corpus with a model-based indexer. arXiv preprint arXiv:2208.09257.

Fengbin Zhu, Wenqiang Lei, Chao Wang, Jianming Zheng, Soujanya Poria, and Tat-Seng Chua. 2021. Retrieving and reading: A comprehensive survey on open-domain question answering.

# A Appendix

# A.1 Constraint Computation

1P relies on two key operations for constraint computation:

a) $\mathcal{F}(D,k)$: the set of documents that contain keyword $k$
b) $\mathcal{C}(k,D)$: the next tokens for keyword $k$ within an arbitrary document set $D$

$\mathcal{F}(D,k)$ is preprocessed and cached to allow for quick computation. $\mathcal{C}(k,D)$ is trickier to compute. When $D$ represents the full corpus, the FM-index can fetch the next tokens in $O(|V| \log(|V|))$ time, where $V$ is the token vocabulary, independent of $|D|$. However, an arbitrary $D$ requires a traversal over all its documents, which can be very expensive. In practice, training guides the LM to generate effective keywords such that $|D|$ is small.

We also apply certain other optimizations to reduce the compute cost:

- Constraints are computed lazily over a decoding pass.
- Several computations are cached, e.g., the keyword-to-document-id mapping.
- To cap the cost of constraints at each decoding step, we allow for unconstrained generation in rare scenarios where the estimated cost is too high. If the generated path is absent from the corpus ($< 1\%$ of examples), it can be filtered out later.

Despite these optimizations, inference remains expensive, and a special-purpose data structure for next-token look-up may be needed.

# A.2 Training data size
| Dataset | Queries | Paths | Hits@1 | EM |
|---|---|---|---|---|
| Open-NQ | 55k | 55k | 41.9 | 28.1 |
| Open-NQ | 55k | 310k | 42.9 | 28.7 |
| Open-NQ + PAQ | 55k + 9M | 310k + 9M | 43.6 | 29.5 |
Table 7: Comparison of different dataset sizes for queries and paths.

In Table 7, we observe the effect of dataset size on performance. Increasing the number of paths sampled per query improves performance, perhaps due to higher diversity in training. However, this method of dataset expansion is limited by the number of relevant paths we can extract for a query.

We also experiment with expanding the query set many-fold by mixing in unsupervised datasets. A total of 9M QA pairs are sampled from PAQ (Lewis et al., 2021), a synthetic QA dataset, and search paths are extracted with the heuristic scoring described in Section 5. The original 1P training dataset is mixed in at a 1:1 ratio. This further boosts performance, but not proportionally to the amount of data added, indicating diminishing returns from silver datasets.

# A.3 SEAL keywords

SEAL generates a set of document substrings, constrained to the corpus, that are combined to form document identifiers. Besides using an LM to generate keywords, SEAL utilizes several other mechanisms for extracting keywords, including partial beam sequences, heuristically adding query n-grams, sampling the top-k tokens from the log-probs of the first decoding step, force-decoding the title, etc. The keywords are re-scored using both the LM and the FM-index count, and all keyword combinations are retrieved. Table 8 illustrates keywords generated by both systems. Note that SEAL keywords can be repetitive, and a large number of keywords is therefore required to narrow down to meaningful documents. This also makes SEAL suitable for retrieving a much larger set of documents that can be re-ranked later. The maximum number of documents retrieved by SEAL is capped by a hyperparameter with a default value of 500. In contrast, 1P is geared towards retrieving only the top document.

# A.4 Hits@5

SEAL does significantly better than 1P on Hits@5 (Table 9).
We attribute this to the large set of keywords generated by SEAL, as explained in Appendix A.3.

# A.5 Normalizing sequence likelihood over constrained space

When decoding a sequence $X$ under constraints, we need to choose the next token from $\mathcal{C}(X, D)$ rather than from the entire vocabulary $V$. Should the sequence likelihood be re-normalized over this constrained space? We find that re-normalizing the probabilities results in inflated likelihoods, making it hard for the model to back-track.

Consider the query "where did the butchers in the slaughterhouse cases live", to which our model
| Question (gold answer) | System | Search Path or Keywords | Prediction |
|---|---|---|---|
| who has the most catches in nfl history (Jerry Rice) | 1P | 2,000-yard club » Barry Sanders | Barry Sanders |
| | SEAL | </s> Michael Irvin @@, yards per catch, caught his, touchdown, record | T.J. Houshmandzadeh |
| when was harry potter and the philosophers stone published (1997) | 1P | Harry Potter and the Philosopher's Stone » first published in the United » 1997 | 1997 |
| | SEAL | </s> Harry Potter and the Philosopher's Stone @@, "Harry Potter, Potter and the Philosopher's Stone is, Potter and the Philosopher's Stone Harry, novel | 1999 |
| what is the meaning of the harp in ireland (the arms of Ireland) | 1P | Harp » national symbol of Ireland » national symbol of Ireland | national symbol of Ireland |
| | SEAL | </s> Harp @@, Irish harp., harp is, harp was, harp | aristocracy |
| who was the president of pakistan during 1971 war (Yahya Khan) | 1P | Indo-Pakistani War of 1971 » Prime Minister of Pakistan » Zulfikar Ali Bhutto | Zulfikar Ali Bhutto |
| | SEAL | </s> Indo-Pakistani War of 1971 @@, East Pakistan, Pakistani, Pakistan Army, Pakistan's | Muhammad Yaqub Khan |
| when do you declare honors in contract bridge (any time after the auction) | 1P | Contract bridge » declaring » end of the hand | end of the hand |
| | SEAL | </s> Contract bridge @@, declarer, bidding, honors, hands | bidding |
Table 8: Comparison of keywords generated by SEAL and 1P for randomly sampled examples from the Open-NQ test set. For 1P, we show the full search path separated by "»", with the last keyword as the answer. For SEAL, we show the top-5 keywords along with the answer from the Reader model. "</s>" and "@@" are special tokens used by SEAL to mark the start of a passage and the title, respectively. The answer next to the question is the gold answer, while the others are predictions from the corresponding systems.
| System | Beam | Hits@5 |
|---|---|---|
| SEAL | 1 | 59.7 |
| SEAL | 5 | 62.8 |
| 1P | 1 | 46.5 |
| 1P | 5 | 50.8 |
Table 9: Hits@5 on the Open-NQ test set. SEAL achieves a much higher score than 1P owing to the larger number of documents matched and re-scored. Note that only the top-beam result is used for 1P, while SEAL uses all beam outputs.

predicts an irrelevant search path [Slaughterhouse Five, but, EoS]. What's going on under the hood? The first keyword is incorrect, leading the model into a poor search space. With the second keyword, the model is possibly looking to generate "butcher", but there is no such keyword in the constrained set. Ideally, the model should backtrack at this point to other candidates in the beam. However, since the set of continuations is small, renormalizing inflates the probabilities of all tokens in $\mathcal{C}$, including $EoS$, even though the true likelihood of such a sequence is very low. Indeed, using the language model's scores directly without any renormalization cures this issue, yielding [Slaughterhouse cases, Butcher, EoS], and this is the strategy we opt for in all our experiments.

![](images/654724f0998c6062f18ca04f65c75b48e7ea83283c2b0243c15547b26b82e17d.jpg)
Figure 4: Number of matching documents in the corpus for 1P-generated paths in the test set. About half the examples match only a single path.

# A.6 Number of matching documents

1P-generated paths effectively narrow down the corpus, generally matching only a few documents, as illustrated in Figure 4. Note that a small fraction of paths match 0 documents due to pruning optimizations applied at inference time, as detailed in Appendix A.1.
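The renormalization pitfall discussed in Appendix A.5 can be illustrated with toy numbers. The probabilities below are invented for illustration, not drawn from our experiments: the model puts most mass on tokens outside the constrained set, so renormalizing over the allowed tokens inflates the score of an otherwise unlikely continuation such as EoS.

```python
import math

# Hypothetical next-token probabilities over the full vocabulary.
# The high-probability tokens ("butcher", "case") are not in the
# constrained set, so only low-probability continuations are allowed.
full_vocab_probs = {"butcher": 0.60, "case": 0.25, "EOS": 0.01, "but": 0.02}
allowed = {"EOS", "but"}  # constrained continuation set C

def score(token, renormalize):
    """Log-score of an allowed token, with or without renormalization
    over the constrained set."""
    p = full_vocab_probs[token]
    if renormalize:
        p /= sum(full_vocab_probs[t] for t in allowed)
    return math.log(p)

# Raw score keeps EOS unlikely, so the beam can still prefer backtracking:
print(score("EOS", renormalize=False))  # log(0.01) ≈ -4.61
# Renormalized score inflates EOS to 1/3 of the constrained mass:
print(score("EOS", renormalize=True))   # log(0.01 / 0.03) ≈ -1.10
```

Using the raw (unnormalized) log-probabilities preserves the signal that the whole constrained branch is bad, which is exactly why the non-renormalized strategy allows the beam to recover.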
\ No newline at end of file diff --git a/2023/1-PAGER_ One Pass Answer Generation and Evidence Retrieval/images.zip b/2023/1-PAGER_ One Pass Answer Generation and Evidence Retrieval/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..f885f02878962d7429eaf504c878a3361302d0db --- /dev/null +++ b/2023/1-PAGER_ One Pass Answer Generation and Evidence Retrieval/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1603eaa6b2434c8f0a9c35dd810355d630df7196bb6504aa1401ba17082d3d06 +size 575982 diff --git a/2023/1-PAGER_ One Pass Answer Generation and Evidence Retrieval/layout.json b/2023/1-PAGER_ One Pass Answer Generation and Evidence Retrieval/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..49ff0e81c51154fd14df1dccb7e4db729e3b39c1 --- /dev/null +++ b/2023/1-PAGER_ One Pass Answer Generation and Evidence Retrieval/layout.json @@ -0,0 +1,9957 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 98, + 76, + 495, + 92 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 98, + 76, + 495, + 92 + ], + "spans": [ + { + "bbox": [ + 98, + 76, + 495, + 92 + ], + "type": "text", + "content": "1-PAGER: One Pass Answer Generation and Evidence Retrieval" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 136, + 118, + 459, + 132 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 136, + 118, + 459, + 132 + ], + "spans": [ + { + "bbox": [ + 136, + 118, + 459, + 132 + ], + "type": "text", + "content": "Palak Jain1 Livio Baldini Soares2 Tom Kwiatkowski2" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 190, + 133, + 405, + 148 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 133, + 405, + 148 + ], + "spans": [ + { + "bbox": [ + 190, + 133, + 405, + 148 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 190, + 133, + 405, + 148 + ], + "type": "text", + "content": " Google Research " + }, + { + "bbox": [ 
+ 190, + 133, + 405, + 148 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 190, + 133, + 405, + 148 + ], + "type": "text", + "content": " Google Deepmind" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 190, + 148, + 408, + 162 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 148, + 408, + 162 + ], + "spans": [ + { + "bbox": [ + 190, + 148, + 408, + 162 + ], + "type": "text", + "content": "{palakj, liviobs, tomkwiat}@google.com" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 155, + 212, + 202, + 225 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 155, + 212, + 202, + 225 + ], + "spans": [ + { + "bbox": [ + 155, + 212, + 202, + 225 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 84, + 235, + 274, + 498 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 235, + 274, + 498 + ], + "spans": [ + { + "bbox": [ + 84, + 235, + 274, + 498 + ], + "type": "text", + "content": "We present 1-PAGER the first system that answers a question and retrieves evidence using a single Transformer-based model and decoding process. 1-PAGER incrementally partitions the retrieval corpus using constrained decoding to select a document and answer string, and we show that this is competitive with comparable retrieve-and-read alternatives according to both retrieval and answer accuracy metrics. 1-PAGER also outperforms the equivalent 'closed-book' question answering model, by grounding predictions in an evidence corpus. While 1-PAGER is not yet on-par with more expensive systems that read many more documents before generating an answer, we argue that it provides an important step toward attributed generation by folding retrieval into the sequence-to-sequence paradigm that is currently dominant in NLP. 
We also show that the search paths used to partition the corpus are easy to read and understand, paving a way forward for interpretable neural retrieval." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 68, + 508, + 155, + 521 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 508, + 155, + 521 + ], + "spans": [ + { + "bbox": [ + 68, + 508, + 155, + 521 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 529, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 529, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 529, + 291, + 772 + ], + "type": "text", + "content": "In recent times, there has been a push to reformulate a wide variety of tasks from NLP and other domains into the sequence-to-sequence paradigm, to make use of large pre-trained Transformer networks (Vaswani et al., 2017). However, despite evidence that large language models can often answer questions (Roberts et al., 2020), predict identifiers of documents that support those answers (Tay et al., 2022), or generate text that contains and explains an answer (Yu et al., 2022) the dominant paradigm in question answering is still the retrieve-and-read approach that pipelines separate retrieval and answer generation modules. This approach has the benefit that it can provide direct and targeted paragraph-level attribution for the generated answers (Bohnet et al., 2022). However, it also relies on a heterogenous mix of models that are hard to train in concert (Metzler et al., 2021)." 
+ } + ] + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 307, + 215, + 526, + 368 + ], + "blocks": [ + { + "bbox": [ + 307, + 215, + 526, + 368 + ], + "lines": [ + { + "bbox": [ + 307, + 215, + 526, + 368 + ], + "spans": [ + { + "bbox": [ + 307, + 215, + 526, + 368 + ], + "type": "image", + "image_path": "64c5c707b66c4d979c376511847a2f6ef5534751d30fb0db42801b4d3d7ac625.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 375, + 527, + 413 + ], + "lines": [ + { + "bbox": [ + 302, + 375, + 527, + 413 + ], + "spans": [ + { + "bbox": [ + 302, + 375, + 527, + 413 + ], + "type": "text", + "content": "Figure 1: Example 1P output that iteratively partitions the corpus into sub-sets containing the generated n-grams. The last n-gram is taken as the answer." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 434, + 526, + 663 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 434, + 526, + 663 + ], + "spans": [ + { + "bbox": [ + 302, + 434, + 526, + 663 + ], + "type": "text", + "content": "Motivated by the observation that language model decoders already perform search over possible sequences (Graves, 2012), and that evidence documents themselves are simply sequences of tokens, we present an alternative approach that relies on a single Transformer model. In this approach, which we name 1-PAGER (One Pass Answer Generation and Evidence Retrieval) or simply 1P, the decoder iteratively partitions a corpus of evidence documents by generating a search path consisting of a set of keywords that identify relevant documents and an answer string that is contained in at least one of these documents. With 1P, we aim to explore the spectrum between CBQA, where the answer is generated without reference to an evidence corpus, and pipelined approaches that feed retrieved documents into the transformer." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 665, + 526, + 731 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 665, + 526, + 731 + ], + "spans": [ + { + "bbox": [ + 302, + 665, + 526, + 731 + ], + "type": "text", + "content": "Figure 1 illustrates an example in which the corpus is iteratively partitioned into documents that contain the string 'Economy of India', then those that also contain the string 'Agriculture', and finally those that also contain the answer string '23%'." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 733, + 526, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 733, + 526, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 733, + 526, + 772 + ], + "type": "text", + "content": "1P output sequences are guaranteed to match at least one document in the evidence corpus. This is enforced via a constrained decoder that has ac" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 283, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 283, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 283, + 780, + 312, + 791 + ], + "type": "text", + "content": "14529" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 124, + 795, + 468, + 806 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 124, + 795, + 468, + 806 + ], + "spans": [ + { + "bbox": [ + 124, + 795, + 468, + 806 + ], + "type": "text", + "content": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 14529-14543" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 165, + 806, + 428, + 818 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 165, + 806, + 428, + 818 + ], + "spans": [ + { + "bbox": [ + 165, + 806, + 428, + 818 + ], + "type": "text", + "content": "December 6-10, 2023 ©2023 Association for Computational Linguistics" + } + ] + } + ], + "index": 15 + } + ], + 
"page_size": [ + 595, + 841 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 292, + 381 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 72, + 292, + 381 + ], + "spans": [ + { + "bbox": [ + 69, + 72, + 292, + 381 + ], + "type": "text", + "content": "cess to an FM-index representation of the evidence corpus contents (Ferragina and Manzini, 2000) and we evaluate 1P's ability to correctly answer open-domain questions while also retrieving passages that provide support for those answers (Bohnet et al., 2022). Since 1P is the first model that can do both of these tasks, we compare to pipelined systems that first retrieve a single passage and then generate an answer based on this evidence passage. 1P is competitive as a passage retriever, performing similarly to a widely used dense retriever (Karpukhin et al., 2020) and outperforming the SEAL system which independently generates keywords rather than a search path (Bevilacqua et al., 2022). 1P also outperforms an equivalent closed-book question answering (CBQA) model (Roberts et al., 2020) according to answer accuracy. Part of this improvement comes from the prediction of search paths themselves, reminiscent of chain-of-thought reasoning (Wei et al., 2022), and part is from 1P's constrained decoder, which forces the model to generate answers from passages that contain the keywords." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 384, + 291, + 532 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 384, + 291, + 532 + ], + "spans": [ + { + "bbox": [ + 69, + 384, + 291, + 532 + ], + "type": "text", + "content": "While 1P does not yet perform as well as the very best retrieval or open-domain question answering systems in terms of accuracy, the fact that it is competitive with pipelined systems that are trained with the same data and which use similar amounts of inference-time compute suggests a promising path ahead. 
Unlike those systems, 1P can be trained end-to-end along with any other task that fits into the sequence-to-sequence paradigm. Additionally, 1P search paths are inherently interpretable, unlike embeddings used in dense retrieval." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 546, + 159, + 559 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 546, + 159, + 559 + ], + "spans": [ + { + "bbox": [ + 69, + 546, + 159, + 559 + ], + "type": "text", + "content": "2 Related Work" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 570, + 289, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 570, + 289, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 570, + 289, + 772 + ], + "type": "text", + "content": "\"Retrieve-and-read\" Question Answering Question answering approaches in NLP are dominated by the \"retrieve-and-read\" paradigm where a retriever first fetches hundreds of relevant documents from a corpus, followed by a language model that reranks and extracts the answer (Harabagiu et al., 2003; Chen et al., 2017; Zhu et al., 2021). Sparse retrievers such as BM25 (Robertson et al., 2009) build a high-dimensional lexical index over the text corpus. Dense retrievers (Karpukhin et al., 2020) use a dual encoder architecture to embed the query and document and perform an approximate nearest neighbor search. Various modifications to dense retrieval have been proposed over the years, including" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 305, + 71, + 524, + 137 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 71, + 524, + 137 + ], + "spans": [ + { + "bbox": [ + 305, + 71, + 524, + 137 + ], + "type": "text", + "content": "hard negative training (Xiong et al., 2020), late interaction (Khattab and Zaharia, 2020; Santhanam et al., 2022), few-shot learning (Izacard et al., 2022), and joint retriever and reader training (Jiang et al., 2022)." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 305, + 140, + 524, + 261 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 140, + 524, + 261 + ], + "spans": [ + { + "bbox": [ + 305, + 140, + 524, + 261 + ], + "type": "text", + "content": "A particular variant of interest is the Iterative Retrieval process where the query is reformulated incrementally (Das et al., 2019; Lee et al., 2022) leading to an interactive search process (Jiang et al., 2023; Adolphs et al., 2021). This query augmentation scheme has similarities with our use of search paths. However, we use the paths to iteratively partition the corpus while prior works have used them for refining the query." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 305, + 264, + 524, + 329 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 264, + 524, + 329 + ], + "spans": [ + { + "bbox": [ + 305, + 264, + 524, + 329 + ], + "type": "text", + "content": "To perform well, retrieve-and-read systems will typically retrieve 10s to 100s of passages that must be processed by a language model. In contrast, 1P retrieves and extracts an answer in a single pass of language model generation." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 305, + 343, + 524, + 529 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 343, + 524, + 529 + ], + "spans": [ + { + "bbox": [ + 305, + 343, + 524, + 529 + ], + "type": "text", + "content": "Closed Book Question Answering With data and parameter scale, LLMs in a closed-book setting (CBQA) have shown competitive performance (OpenAI, 2023; Anil et al., 2023; Yu et al., 2023) to retrieval pipelines (ODQA), albeit without producing any attributed passages (Rashkin et al., 2021; Bohnet et al., 2022). 
An extension of CBQA is post-hoc retrieval where a large language model (LLM) is first used to generate an answer and then evidence for the question-answer pair is fetched by a retriever (Gao et al., 2023a; Bohnet et al., 2022). While post-hoc retrieval serves the same goal as 1P, it still uses a pipeline of LLM and retriever to do so." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 305, + 544, + 524, + 771 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 544, + 524, + 771 + ], + "spans": [ + { + "bbox": [ + 305, + 544, + 524, + 771 + ], + "type": "text", + "content": "Generative Retrieval Recently, generative retrieval has emerged as an alternative to the conventional \"retrieve-and-read\" pipeline (Metzler et al., 2021). Genre (De Cao et al., 2021) performed generative entity linking by constraining the model's decoding to a set of entities. DSI (Tay et al., 2022) showed one of the first proofs of an LLM's ability to memorize docids in the corpus. However, atomic ids or hierarchical clusters, as used in DSI, are opaque identifiers and capture limited information. Works such as SEAL (Bevilacqua et al., 2022) and Ultron (Zhou et al., 2022) use a semantically richer representation: keywords in the document. In particular, SEAL constrains the generation to only keywords in the corpus using the FM-index (Ferragina and Manzini, 2000), a key data structure we borrow in this work." 
+ } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 285, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 285, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 285, + 781, + 312, + 791 + ], + "type": "text", + "content": "14530" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 73, + 68, + 522, + 144 + ], + "blocks": [ + { + "bbox": [ + 73, + 68, + 522, + 144 + ], + "lines": [ + { + "bbox": [ + 73, + 68, + 522, + 144 + ], + "spans": [ + { + "bbox": [ + 73, + 68, + 522, + 144 + ], + "type": "image", + "image_path": "8dcfa0acd2a903129c9e6dbe6563a933e6f4602c2c9e926949dec9381190afc9.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 152, + 526, + 190 + ], + "lines": [ + { + "bbox": [ + 67, + 152, + 526, + 190 + ], + "spans": [ + { + "bbox": [ + 67, + 152, + 526, + 190 + ], + "type": "text", + "content": "Figure 2: System illustration of different QA systems. From left to right: CBQA, 1-PAGER, SEAL, Retrieve-and-Read system. 
" + }, + { + "bbox": [ + 67, + 152, + 526, + 190 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 67, + 152, + 526, + 190 + ], + "type": "text", + "content": " denotes the retrieval corpus, " + }, + { + "bbox": [ + 67, + 152, + 526, + 190 + ], + "type": "inline_equation", + "content": "P" + }, + { + "bbox": [ + 67, + 152, + 526, + 190 + ], + "type": "text", + "content": " a retrieved passage, " + }, + { + "bbox": [ + 67, + 152, + 526, + 190 + ], + "type": "inline_equation", + "content": "Q" + }, + { + "bbox": [ + 67, + 152, + 526, + 190 + ], + "type": "text", + "content": " the input question and " + }, + { + "bbox": [ + 67, + 152, + 526, + 190 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 67, + 152, + 526, + 190 + ], + "type": "text", + "content": ", the generated answer. 1P is closest to CBQA (only single model used) but it also outputs a passage retrieved from " + }, + { + "bbox": [ + 67, + 152, + 526, + 190 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 67, + 152, + 526, + 190 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 210, + 290, + 264 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 210, + 290, + 264 + ], + "spans": [ + { + "bbox": [ + 67, + 210, + 290, + 264 + ], + "type": "text", + "content": "1P represents docids as keyword paths, which are arguably more interpretable, and learns a soft partition over the corpus instead of the hard partition imposed by DSI's clustering." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 264, + 290, + 332 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 264, + 290, + 332 + ], + "spans": [ + { + "bbox": [ + 67, + 264, + 290, + 332 + ], + "type": "text", + "content": "Another crucial distinction is 1P's ability to both retrieve and generate an answer while prior works rely on an external re-ranker/reader for the same. A high-level view of various question-answering systems is presented in Figure 2." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 339, + 291, + 515 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 339, + 291, + 515 + ], + "spans": [ + { + "bbox": [ + 67, + 339, + 291, + 515 + ], + "type": "text", + "content": "Attributed Question Answering Standard metrics for open-domain question answering, such as exact match or token-based F1, have received criticism for being imprecise and/or insufficient. Several efforts have proposed augmenting answers with textual evidence, via retrieval or citations (Bohnet et al., 2022; Menick et al., 2022; Gao et al., 2023b). While this work does not directly evaluate the quality of retrieved answer evidence, our proposed model inherently produces a passage to support the final answer, along with a search path of keywords, which could be used to provide users with answer evidence." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 526, + 260, + 553 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 526, + 260, + 553 + ], + "spans": [ + { + "bbox": [ + 67, + 526, + 260, + 553 + ], + "type": "text", + "content": "3 Iterative Corpus Partitioning and Answer Prediction" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 562, + 290, + 655 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 562, + 290, + 655 + ], + "spans": [ + { + "bbox": [ + 67, + 562, + 290, + 655 + ], + "type": "text", + "content": "We focus on the problem of learning a mapping " + }, + { + "bbox": [ + 67, + 562, + 290, + 655 + ], + "type": "inline_equation", + "content": "f(q, D) \\to (a, d_a)" + }, + { + "bbox": [ + 67, + 562, + 290, + 655 + ], + "type": "text", + "content": " from a question " + }, + { + "bbox": [ + 67, + 562, + 290, + 655 + ], + "type": "inline_equation", + "content": "q" + }, + { + "bbox": [ + 67, + 562, + 290, + 655 + ], + "type": "text", + "content": " and corpus of documents " + }, + { + "bbox": [ + 67, + 562, + 290, + 655 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 67, + 562, + 290, + 655 + ], + "type": "text", + "content": " to an answer and supporting document " + }, + { + "bbox": [ + 67, + 562, + 290, + 655 + ], + "type": "inline_equation", + "content": "(a, d_a)" + }, + { + "bbox": [ + 67, + 562, + 290, + 655 + ], + "type": "text", + "content": ". 
The predicted document " + }, + { + "bbox": [ + 67, + 562, + 290, + 655 + ], + "type": "inline_equation", + "content": "d_a" + }, + { + "bbox": [ + 67, + 562, + 290, + 655 + ], + "type": "text", + "content": " is retrieved from " + }, + { + "bbox": [ + 67, + 562, + 290, + 655 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 67, + 562, + 290, + 655 + ], + "type": "text", + "content": " and the answer " + }, + { + "bbox": [ + 67, + 562, + 290, + 655 + ], + "type": "inline_equation", + "content": "a" + }, + { + "bbox": [ + 67, + 562, + 290, + 655 + ], + "type": "text", + "content": " is a sub-string of " + }, + { + "bbox": [ + 67, + 562, + 290, + 655 + ], + "type": "inline_equation", + "content": "d_a" + }, + { + "bbox": [ + 67, + 562, + 290, + 655 + ], + "type": "text", + "content": ". The document " + }, + { + "bbox": [ + 67, + 562, + 290, + 655 + ], + "type": "inline_equation", + "content": "d_a" + }, + { + "bbox": [ + 67, + 562, + 290, + 655 + ], + "type": "text", + "content": " should be relevant to the question and provide evidence for the answer." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 657, + 290, + 739 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 657, + 290, + 739 + ], + "spans": [ + { + "bbox": [ + 67, + 657, + 290, + 739 + ], + "type": "text", + "content": "The goal of this paper is to model the function " + }, + { + "bbox": [ + 67, + 657, + 290, + 739 + ], + "type": "inline_equation", + "content": "f" + }, + { + "bbox": [ + 67, + 657, + 290, + 739 + ], + "type": "text", + "content": " using a single sequence-to-sequence model, rather than a pipeline which first retrieves " + }, + { + "bbox": [ + 67, + 657, + 290, + 739 + ], + "type": "inline_equation", + "content": "d_{a}" + }, + { + "bbox": [ + 67, + 657, + 290, + 739 + ], + "type": "text", + "content": " and then feeds it into an answer generation module. 
To achieve our goal, we recast retrieval as an iterative corpus partitioning process illustrated in Figure 3." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 746, + 291, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 746, + 291, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 746, + 291, + 773 + ], + "type": "text", + "content": "Iterative corpus partitioning adopts the LM decoder's autoregressive search process to partition" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 303, + 210, + 456, + 223 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 210, + 456, + 223 + ], + "spans": [ + { + "bbox": [ + 303, + 210, + 456, + 223 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 303, + 210, + 456, + 223 + ], + "type": "text", + "content": " by predicting n-gram keywords." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 223, + 525, + 277 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 223, + 525, + 277 + ], + "spans": [ + { + "bbox": [ + 302, + 223, + 525, + 277 + ], + "type": "text", + "content": "An n-gram of tokens " + }, + { + "bbox": [ + 302, + 223, + 525, + 277 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 302, + 223, + 525, + 277 + ], + "type": "text", + "content": " is said to be contained in a document " + }, + { + "bbox": [ + 302, + 223, + 525, + 277 + ], + "type": "inline_equation", + "content": "d" + }, + { + "bbox": [ + 302, + 223, + 525, + 277 + ], + "type": "text", + "content": ", denoted by " + }, + { + "bbox": [ + 302, + 223, + 525, + 277 + ], + "type": "inline_equation", + "content": "k \\prec d" + }, + { + "bbox": [ + 302, + 223, + 525, + 277 + ], + "type": "text", + "content": ", when " + }, + { + "bbox": [ + 302, + 223, + 525, + 277 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 302, + 223, + 525, + 277 + ], + "type": "text", + "content": " is a sub-sequence of 
" + }, + { + "bbox": [ + 302, + 223, + 525, + 277 + ], + "type": "inline_equation", + "content": "d" + }, + { + "bbox": [ + 302, + 223, + 525, + 277 + ], + "type": "text", + "content": ". We define a keyword corpus partitioning function" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 346, + 285, + 481, + 301 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 346, + 285, + 481, + 301 + ], + "spans": [ + { + "bbox": [ + 346, + 285, + 481, + 301 + ], + "type": "interline_equation", + "content": "\\mathcal {F} (D, k) = \\{d | k \\prec d; d \\in D \\}", + "image_path": "2989bd1034be2185ada8382416e28c3de13b3fdbf486e2e2f4053c8f10e8c409.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 308, + 525, + 390 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 308, + 525, + 390 + ], + "spans": [ + { + "bbox": [ + 302, + 308, + 525, + 390 + ], + "type": "text", + "content": "that selects only those documents that contain " + }, + { + "bbox": [ + 302, + 308, + 525, + 390 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 302, + 308, + 525, + 390 + ], + "type": "text", + "content": ". 1-PAGER iteratively partitions the corpus " + }, + { + "bbox": [ + 302, + 308, + 525, + 390 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 302, + 308, + 525, + 390 + ], + "type": "text", + "content": " by generating a sequence of n-grams that we refer to as a Search Path " + }, + { + "bbox": [ + 302, + 308, + 525, + 390 + ], + "type": "inline_equation", + "content": "p_t = [k_1, k_2, \\dots, k_t]" + }, + { + "bbox": [ + 302, + 308, + 525, + 390 + ], + "type": "text", + "content": ". 
Each prefix of this search path defines a subset of " + }, + { + "bbox": [ + 302, + 308, + 525, + 390 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 302, + 308, + 525, + 390 + ], + "type": "text", + "content": " via the search path corpus partitioning function" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 327, + 397, + 499, + 414 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 327, + 397, + 499, + 414 + ], + "spans": [ + { + "bbox": [ + 327, + 397, + 499, + 414 + ], + "type": "interline_equation", + "content": "\\mathcal {P} (D, p _ {t}) = D _ {p _ {t}} = \\{\\cap_ {i \\in [ 1, t ]} \\mathcal {F} (D, k _ {i}) \\}", + "image_path": "33f95125ff1657dec2f686c0f1eb881d11517d8f1faec59d6262f84daecb2f11.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 421, + 525, + 449 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 421, + 525, + 449 + ], + "spans": [ + { + "bbox": [ + 302, + 421, + 525, + 449 + ], + "type": "text", + "content": "and each subsequent keyword " + }, + { + "bbox": [ + 302, + 421, + 525, + 449 + ], + "type": "inline_equation", + "content": "k_{t+1}" + }, + { + "bbox": [ + 302, + 421, + 525, + 449 + ], + "type": "text", + "content": " narrows down " + }, + { + "bbox": [ + 302, + 421, + 525, + 449 + ], + "type": "inline_equation", + "content": "D_{p_t}" + }, + { + "bbox": [ + 302, + 421, + 525, + 449 + ], + "type": "text", + "content": " into further sub-spaces such that " + }, + { + "bbox": [ + 302, + 421, + 525, + 449 + ], + "type": "inline_equation", + "content": "D_{p_{t+1}} \\subseteq D_{p_t}" + }, + { + "bbox": [ + 302, + 421, + 525, + 449 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 454, + 525, + 495 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 454, + 525, + 495 + ], + "spans": [ + { + "bbox": [ + 302, + 454, + 525, + 495 + ], + "type": "text", + "content": "Answer prediction is treated in exactly the same way as keyword selection and in 1P the last keyword from " + }, + { + "bbox": [ + 302, + 454, + 525, + 495 + ], + "type": "inline_equation", + "content": "p" + }, + { + "bbox": [ + 302, + 454, + 525, + 495 + ], + "type": "text", + "content": " is taken as the answer." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 504, + 516, + 518 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 504, + 516, + 518 + ], + "spans": [ + { + "bbox": [ + 302, + 504, + 516, + 518 + ], + "type": "text", + "content": "4 Constrained Decoding and FM-Index" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 302, + 526, + 525, + 592 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 526, + 525, + 592 + ], + "spans": [ + { + "bbox": [ + 302, + 526, + 525, + 592 + ], + "type": "text", + "content": "To avoid generating empty partitions, we constrain 1-PAGER to only decode search paths that match at least one document. We modify the decoder's beam-search strategy to only allow keyword continuations that are contained in the current partition." 
+ } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 302, + 593, + 525, + 646 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 593, + 525, + 646 + ], + "spans": [ + { + "bbox": [ + 302, + 593, + 525, + 646 + ], + "type": "text", + "content": "Given a document subset " + }, + { + "bbox": [ + 302, + 593, + 525, + 646 + ], + "type": "inline_equation", + "content": "D_{p_i}" + }, + { + "bbox": [ + 302, + 593, + 525, + 646 + ], + "type": "text", + "content": ", which could be the full corpus " + }, + { + "bbox": [ + 302, + 593, + 525, + 646 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 302, + 593, + 525, + 646 + ], + "type": "text", + "content": " at the start of decoding " + }, + { + "bbox": [ + 302, + 593, + 525, + 646 + ], + "type": "inline_equation", + "content": "(i = 0)" + }, + { + "bbox": [ + 302, + 593, + 525, + 646 + ], + "type": "text", + "content": " and a keyword prefix " + }, + { + "bbox": [ + 302, + 593, + 525, + 646 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 302, + 593, + 525, + 646 + ], + "type": "text", + "content": ", which could be empty, the set of all valid continuation tokens is defined as," + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 330, + 655, + 497, + 671 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 330, + 655, + 497, + 671 + ], + "spans": [ + { + "bbox": [ + 330, + 655, + 497, + 671 + ], + "type": "interline_equation", + "content": "\\mathcal {C} (k, D _ {p _ {i}}) = \\{x | k \\| x \\prec d, d \\in D _ {p _ {i}} \\}", + "image_path": "6124afbeefb15dbd137eee1283047784d67cb80910927b6d05565d8fd9921757.jpg" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 302, + 678, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 678, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 678, + 525, + 772 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 302, + 678, + 
525, + 772 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 302, + 678, + 525, + 772 + ], + "type": "text", + "content": " is any vocabulary token and " + }, + { + "bbox": [ + 302, + 678, + 525, + 772 + ], + "type": "inline_equation", + "content": "\\| \\cdot \\|" + }, + { + "bbox": [ + 302, + 678, + 525, + 772 + ], + "type": "text", + "content": " indicates concatenation of two token sequences. As a special case, when " + }, + { + "bbox": [ + 302, + 678, + 525, + 772 + ], + "type": "inline_equation", + "content": "k = \\phi" + }, + { + "bbox": [ + 302, + 678, + 525, + 772 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 302, + 678, + 525, + 772 + ], + "type": "inline_equation", + "content": "i = 0" + }, + { + "bbox": [ + 302, + 678, + 525, + 772 + ], + "type": "text", + "content": ", all tokens in " + }, + { + "bbox": [ + 302, + 678, + 525, + 772 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 302, + 678, + 525, + 772 + ], + "type": "text", + "content": " are valid continuations. 1P separates keywords in " + }, + { + "bbox": [ + 302, + 678, + 525, + 772 + ], + "type": "inline_equation", + "content": "p_T" + }, + { + "bbox": [ + 302, + 678, + 525, + 772 + ], + "type": "text", + "content": " with a special separator token " + }, + { + "bbox": [ + 302, + 678, + 525, + 772 + ], + "type": "inline_equation", + "content": "\\rightarrow" + }, + { + "bbox": [ + 302, + 678, + 525, + 772 + ], + "type": "text", + "content": " and marks the end of the sequence with an EOS token. These two tokens are always valid continuations." 
+ } + ] + } + ], + "index": 20 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 311, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 311, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 311, + 791 + ], + "type": "text", + "content": "14531" + } + ] + } + ], + "index": 21 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 75, + 71, + 522, + 283 + ], + "blocks": [ + { + "bbox": [ + 75, + 71, + 522, + 283 + ], + "lines": [ + { + "bbox": [ + 75, + 71, + 522, + 283 + ], + "spans": [ + { + "bbox": [ + 75, + 71, + 522, + 283 + ], + "type": "image", + "image_path": "ee9b1d9383128dc9e50e582c09f5501209e0eb4fb1f761fd7f474a4bf66331d8.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 295, + 526, + 333 + ], + "lines": [ + { + "bbox": [ + 67, + 295, + 526, + 333 + ], + "spans": [ + { + "bbox": [ + 67, + 295, + 526, + 333 + ], + "type": "text", + "content": "Figure 3: Illustration of the 1P decoding process. A keyword can only be generated from the documents matching previously generated keywords. Right panel shows a magnified view of applying constraints to a decoding step. Any keyword not present in the documents is masked out." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 352, + 291, + 514 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 352, + 291, + 514 + ], + "spans": [ + { + "bbox": [ + 67, + 352, + 291, + 514 + ], + "type": "text", + "content": "Consider Figure 3. 
The three keywords correspond to the decoded token sequence [Ten, Commandments, " + }, + { + "bbox": [ + 67, + 352, + 291, + 514 + ], + "type": "inline_equation", + "content": "\\rightarrow" + }, + { + "bbox": [ + 67, + 352, + 291, + 514 + ], + "type": "text", + "content": ", twice, in, the, Hebrew, Bible, " + }, + { + "bbox": [ + 67, + 352, + 291, + 514 + ], + "type": "inline_equation", + "content": "\\rightarrow" + }, + { + "bbox": [ + 67, + 352, + 291, + 514 + ], + "type": "text", + "content": ", books, of, Exodus, EOS]. At the start of decoding, any token in " + }, + { + "bbox": [ + 67, + 352, + 291, + 514 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 67, + 352, + 291, + 514 + ], + "type": "text", + "content": " is allowed. After decoding Ten, only those tokens that follow Ten as an n-gram in " + }, + { + "bbox": [ + 67, + 352, + 291, + 514 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 67, + 352, + 291, + 514 + ], + "type": "text", + "content": " are allowed, along with the default separators. After decoding [Ten, Commandments, " + }, + { + "bbox": [ + 67, + 352, + 291, + 514 + ], + "type": "inline_equation", + "content": "\\rightarrow" + }, + { + "bbox": [ + 67, + 352, + 291, + 514 + ], + "type": "text", + "content": "] we are ready to start a new keyword, but only tokens from documents that contain the keyword Ten Commandments are allowed. Decoding continues in this manner until EOS is generated." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 516, + 291, + 582 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 516, + 291, + 582 + ], + "spans": [ + { + "bbox": [ + 67, + 516, + 291, + 582 + ], + "type": "text", + "content": "To efficiently implement these constraints, we need a data structure that can quickly determine both " + }, + { + "bbox": [ + 67, + 516, + 291, + 582 + ], + "type": "inline_equation", + "content": "\\mathcal{C}(k,D_p)" + }, + { + "bbox": [ + 67, + 516, + 291, + 582 + ], + "type": "text", + "content": ", the continuation tokens given a document set, and " + }, + { + "bbox": [ + 67, + 516, + 291, + 582 + ], + "type": "inline_equation", + "content": "\\mathcal{P}(D_p,k)" + }, + { + "bbox": [ + 67, + 516, + 291, + 582 + ], + "type": "text", + "content": ", the subset of documents that contain a given path." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 584, + 291, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 584, + 291, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 584, + 291, + 773 + ], + "type": "text", + "content": "For this, we extend the usage of an FM-index (Ferragina and Manzini, 2000) as described by Bevilacqua et al. (2022). 
The FM-index is a compressed token-based index over a corpus " + }, + { + "bbox": [ + 67, + 584, + 291, + 773 + ], + "type": "inline_equation", + "content": "D_0" + }, + { + "bbox": [ + 67, + 584, + 291, + 773 + ], + "type": "text", + "content": " with a few important properties for our usage: (1) it can efficiently list possible token continuations for a sequence prefix that occur in " + }, + { + "bbox": [ + 67, + 584, + 291, + 773 + ], + "type": "inline_equation", + "content": "D_0" + }, + { + "bbox": [ + 67, + 584, + 291, + 773 + ], + "type": "text", + "content": " i.e., " + }, + { + "bbox": [ + 67, + 584, + 291, + 773 + ], + "type": "inline_equation", + "content": "\\mathcal{C}(k,D_0)" + }, + { + "bbox": [ + 67, + 584, + 291, + 773 + ], + "type": "text", + "content": ", (2) it can list the set of documents in the corpus that match an n-gram i.e., " + }, + { + "bbox": [ + 67, + 584, + 291, + 773 + ], + "type": "inline_equation", + "content": "\\mathcal{F}(D_0,k)" + }, + { + "bbox": [ + 67, + 584, + 291, + 773 + ], + "type": "text", + "content": ", and (3) it supports search over arbitrary n-grams that occur within documents. Note that the FM-index operations are optimized for " + }, + { + "bbox": [ + 67, + 584, + 291, + 773 + ], + "type": "inline_equation", + "content": "D_0" + }, + { + "bbox": [ + 67, + 584, + 291, + 773 + ], + "type": "text", + "content": ", the original corpus it is built over. We extend these to an arbitrary " + }, + { + "bbox": [ + 67, + 584, + 291, + 773 + ], + "type": "inline_equation", + "content": "D_p \\subset D_0" + }, + { + "bbox": [ + 67, + 584, + 291, + 773 + ], + "type": "text", + "content": " at additional cost described in Appendix A.1." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 303, + 352, + 452, + 365 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 352, + 452, + 365 + ], + "spans": [ + { + "bbox": [ + 303, + 352, + 452, + 365 + ], + "type": "text", + "content": "5 Training data generation" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 374, + 526, + 509 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 374, + 526, + 509 + ], + "spans": [ + { + "bbox": [ + 302, + 374, + 526, + 509 + ], + "type": "text", + "content": "For training 1P, we produce a dataset with examples of queries and search paths as described above. At a high level, we generate search paths by iteratively selecting n-grams from an answer passage, and simulating, using the FM-Index of the retrieval corpus, the partitioning of the corpus after selecting each keyword, until only a few documents remain. Finally, the answer span " + }, + { + "bbox": [ + 302, + 374, + 526, + 509 + ], + "type": "inline_equation", + "content": "a" + }, + { + "bbox": [ + 302, + 374, + 526, + 509 + ], + "type": "text", + "content": " is appended to the search path. Each example produced can be serialized as a sequence-to-sequence pair of inputs and targets:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 303, + 515, + 498, + 528 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 515, + 498, + 528 + ], + "spans": [ + { + "bbox": [ + 303, + 515, + 498, + 528 + ], + "type": "text", + "content": "inputs: Generate keywords for: " + }, + { + "bbox": [ + 303, + 515, + 498, + 528 + ], + "type": "inline_equation", + "content": "" + }, + { + "bbox": [ + 303, + 515, + 498, + 528 + ], + "type": "text", + "content": "?" 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 303, + 530, + 524, + 541 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 530, + 524, + 541 + ], + "spans": [ + { + "bbox": [ + 303, + 530, + 524, + 541 + ], + "type": "text", + "content": "targets: K_SEP " + }, + { + "bbox": [ + 303, + 530, + 524, + 541 + ], + "type": "inline_equation", + "content": "k_{0}" + }, + { + "bbox": [ + 303, + 530, + 524, + 541 + ], + "type": "text", + "content": " K_SEP " + }, + { + "bbox": [ + 303, + 530, + 524, + 541 + ], + "type": "inline_equation", + "content": "k_{1}" + }, + { + "bbox": [ + 303, + 530, + 524, + 541 + ], + "type": "text", + "content": " ... K_SEP A_SEP a EOS" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 552, + 418, + 565 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 552, + 418, + 565 + ], + "spans": [ + { + "bbox": [ + 302, + 552, + 418, + 565 + ], + "type": "text", + "content": "5.1 Keyword Selection" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 570, + 526, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 570, + 526, + 773 + ], + "spans": [ + { + "bbox": [ + 302, + 570, + 526, + 773 + ], + "type": "text", + "content": "A good keyword should a) be highly relevant to the query and b) effectively narrow down the search space. To identify relevant keywords, we restrict to only the gold document " + }, + { + "bbox": [ + 302, + 570, + 526, + 773 + ], + "type": "inline_equation", + "content": "g" + }, + { + "bbox": [ + 302, + 570, + 526, + 773 + ], + "type": "text", + "content": ". All n-grams in " + }, + { + "bbox": [ + 302, + 570, + 526, + 773 + ], + "type": "inline_equation", + "content": "g" + }, + { + "bbox": [ + 302, + 570, + 526, + 773 + ], + "type": "text", + "content": " of length up to five are extracted. Irrelevant keywords, such as those starting or ending with stop words, are filtered out. 
Similarly, keywords that are too rare in the corpus, e.g., \"Philippines at Luzon\", or too frequent, e.g., \"part\", are excluded based on a threshold on their count in the corpus. The remaining keywords are scored with a combination of heuristics, mainly Rouge-1 similarity with the query (Lin, 2004), along with a minor reward for keywords containing entities and a penalty for keywords that are highly frequent in the corpus." + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "14532" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 293, + 248 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 293, + 248 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 293, + 248 + ], + "type": "text", + "content": "This scoring mechanism often misses out on keywords that are semantically relevant, but do not lexically overlap with the query. To boost the relevance of our keyword set, we re-score the top hundred keywords using a language model. A T5-XXL model is finetuned with the input as the query " + }, + { + "bbox": [ + 67, + 71, + 293, + 248 + ], + "type": "inline_equation", + "content": "q" + }, + { + "bbox": [ + 67, + 71, + 293, + 248 + ], + "type": "text", + "content": " and the target as either the title or a heuristically sampled keyword, in a similar fashion to Bevilacqua et al. (2022). The heuristically sampled keywords are re-scored using this model to obtain a refined LM-scored set. 
Two other special types of keywords are awarded high scores: the title of the gold passage and the keyword containing the answer string " + }, + { + "bbox": [ + 67, + 71, + 293, + 248 + ], + "type": "inline_equation", + "content": "a" + }, + { + "bbox": [ + 67, + 71, + 293, + 248 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 259, + 158, + 271 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 259, + 158, + 271 + ], + "spans": [ + { + "bbox": [ + 67, + 259, + 158, + 271 + ], + "type": "text", + "content": "5.2 Search Paths" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 279, + 291, + 441 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 279, + 291, + 441 + ], + "spans": [ + { + "bbox": [ + 67, + 279, + 291, + 441 + ], + "type": "text", + "content": "The first keyword in a search path needs to effectively partition the corpus. We experiment with either the title or the highest-scored keyword from the gold passage as the first keyword in the path. The next keywords are sampled based on their score, provided they do not overlap with any of the existing keywords in the path. We continue augmenting a path " + }, + { + "bbox": [ + 67, + 279, + 291, + 441 + ], + "type": "inline_equation", + "content": "p" + }, + { + "bbox": [ + 67, + 279, + 291, + 441 + ], + "type": "text", + "content": " with keywords until at most ten passages in the corpus match, i.e., " + }, + { + "bbox": [ + 67, + 279, + 291, + 441 + ], + "type": "inline_equation", + "content": "|D_p| < 10" + }, + { + "bbox": [ + 67, + 279, + 291, + 441 + ], + "type": "text", + "content": ". The answer keyword is then appended to the path. Our training paths (including the answer) contain a median of three keywords and one matching document." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 454, + 191, + 469 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 454, + 191, + 469 + ], + "spans": [ + { + "bbox": [ + 67, + 454, + 191, + 469 + ], + "type": "text", + "content": "6 Experimental Setup" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 478, + 136, + 491 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 478, + 136, + 491 + ], + "spans": [ + { + "bbox": [ + 67, + 478, + 136, + 491 + ], + "type": "text", + "content": "6.1 Datasets" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 497, + 291, + 661 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 497, + 291, + 661 + ], + "spans": [ + { + "bbox": [ + 67, + 497, + 291, + 661 + ], + "type": "text", + "content": "We use Open-NQ (Kwiatkowski et al., 2019; Lee et al., 2019) as the question-answering dataset for training. For evaluation, besides Open-NQ, WebQuestions (Berant et al., 2013) and CuratedTREC (Baudiš and Šedivý, 2015) are used to measure out-of-domain performance. The FM-Index corpus for constrained decoding is built over DPR Wikipedia corpus with 100-word splits (Karpukhin et al., 2020). The positive gold passages from DPR are used for sampling training paths. This setup is chosen to mirror SEAL and also permits fair comparison against DPR." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 673, + 137, + 686 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 673, + 137, + 686 + ], + "spans": [ + { + "bbox": [ + 67, + 673, + 137, + 686 + ], + "type": "text", + "content": "6.2 Training" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 692, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 692, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 692, + 291, + 772 + ], + "type": "text", + "content": "1P's training dataset contains 310k paths corresponding to 55k queries from Open-NQ. The majority of the training paths begin with the title, with a small fraction (12%) starting with other keywords. All keywords, except the title, are scored using the LM-scoring technique described above." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 71, + 526, + 166 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 166 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 166 + ], + "type": "text", + "content": "For our experiments, we use the T5X (Roberts et al., 2022) framework. A T5-XXL " + }, + { + "bbox": [ + 302, + 71, + 526, + 166 + ], + "type": "inline_equation", + "content": "1.1^{1}" + }, + { + "bbox": [ + 302, + 71, + 526, + 166 + ], + "type": "text", + "content": " (Raffel et al., 2020) model is finetuned with a batch size of 256 and dropout of 0.1. No additional hyperparameter tuning is performed. We format search paths using the reserved tokens " + }, + { + "bbox": [ + 302, + 71, + 526, + 166 + ], + "type": "inline_equation", + "content": "\\mathsf{K\_SEP} = \"" + }, + { + "bbox": [ + 302, + 71, + 526, + 166 + ], + "type": "text", + "content": " \" and " + }, + { + "bbox": [ + 302, + 71, + 526, + 166 + ], + "type": "inline_equation", + "content": "\\mathsf{A\_SEP} = \"" + }, + { + "bbox": [ + 302, + 71, + 526, + 166 + ], + "type": "text", + "content": " \"." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 176, + 376, + 188 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 176, + 376, + 188 + ], + "spans": [ + { + "bbox": [ + 302, + 176, + 376, + 188 + ], + "type": "text", + "content": "6.3 Inference" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 194, + 526, + 288 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 194, + 526, + 288 + ], + "spans": [ + { + "bbox": [ + 302, + 194, + 526, + 288 + ], + "type": "text", + "content": "Our best model employs beam decoding with a beam of 5. Even when the beam is greater than one, only the top-beam result is used for retrieval. We discuss the effect of beam size in depth in Section 7. Given the top generated path " + }, + { + "bbox": [ + 302, + 194, + 526, + 288 + ], + "type": "inline_equation", + "content": "p" + }, + { + "bbox": [ + 302, + 194, + 526, + 288 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 302, + 194, + 526, + 288 + ], + "type": "inline_equation", + "content": "D_{p}" + }, + { + "bbox": [ + 302, + 194, + 526, + 288 + ], + "type": "text", + "content": " corresponds to the retrieved documents. In case " + }, + { + "bbox": [ + 302, + 194, + 526, + 288 + ], + "type": "inline_equation", + "content": "|D_{p}| > 1" + }, + { + "bbox": [ + 302, + 194, + 526, + 288 + ], + "type": "text", + "content": ", a document is sampled arbitrarily for evaluation." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 298, + 374, + 310 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 298, + 374, + 310 + ], + "spans": [ + { + "bbox": [ + 302, + 298, + 374, + 310 + ], + "type": "text", + "content": "6.4 Baselines" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 316, + 525, + 423 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 316, + 525, + 423 + ], + "spans": [ + { + "bbox": [ + 302, + 316, + 525, + 423 + ], + "type": "text", + "content": "We compare to a closed-book question answering (CBQA) system that generates answers, but does not ground these in an evidence corpus, as well as retrieve-and-read systems that combine a variety of retrievers with a Transformer-based answerer module. Both the CBQA baseline and the answerer module are derived from the same T5-XXL 1.1 pretrained model as 1P." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 433, + 387, + 445 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 433, + 387, + 445 + ], + "spans": [ + { + "bbox": [ + 302, + 433, + 387, + 445 + ], + "type": "text", + "content": "6.4.1 T5-CBQA" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 449, + 525, + 571 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 449, + 525, + 571 + ], + "spans": [ + { + "bbox": [ + 302, + 449, + 525, + 571 + ], + "type": "text", + "content": "A T5-XXL 1.1 model is fine-tuned to predict answers from the DPR training set for 10,000 steps with a batch size of 128. Note that it is possible to achieve a higher closed-book performance on NQ using the full Open-NQ training split instead of the subset included in the DPR training set (Roberts et al., 2020). However, to enable meaningful comparison, we restrict the CBQA baseline to the same training examples used to train 1P." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 579, + 427, + 591 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 579, + 427, + 591 + ], + "spans": [ + { + "bbox": [ + 302, + 579, + 427, + 591 + ], + "type": "text", + "content": "6.4.2 Retrieve-and-Read" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 596, + 526, + 677 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 596, + 526, + 677 + ], + "spans": [ + { + "bbox": [ + 302, + 596, + 526, + 677 + ], + "type": "text", + "content": "The retrieve-and-read baselines first retrieve a single passage from the evidence corpus, and then feed this passage and the question into the answer generation module2. We report retrieval accuracy for the retrieved passage and answer accuracy for the generated answer." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 302, + 685, + 525, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 685, + 525, + 713 + ], + "spans": [ + { + "bbox": [ + 302, + 685, + 525, + 713 + ], + "type": "text", + "content": "T5-Reader We tune a T5-XXL 1.1 model to generate answers from (question, evidence passage)" + } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 315, + 719, + 458, + 731 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 719, + 458, + 731 + ], + "spans": [ + { + "bbox": [ + 315, + 719, + 458, + 731 + ], + "type": "text", + "content": "1https://goo.gl/t5-checkpoints" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 302, + 731, + 524, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 731, + 524, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 731, + 524, + 772 + ], + "type": "text", + "content": "2This differs from ODQA evaluations that do not include evidence retrieval as a first-class task, where many retrieved passages are fed into a reader that generates an answer 
without attribution to any single piece of text." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "14533" + } + ] + } + ], + "index": 21 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 291, + 113 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 291, + 113 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 291, + 113 + ], + "type": "text", + "content": "pairs. This is the same base model used by 1P and we train on the (question, passage, answer) triples in the DPR training split to ensure fair comparison." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 121, + 290, + 243 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 121, + 290, + 243 + ], + "spans": [ + { + "bbox": [ + 67, + 121, + 290, + 243 + ], + "type": "text", + "content": "DPR-Retriever We compare against vanilla DPR finetuned on NQ without hard negatives (Karpukhin et al., 2020) using the pre-computed index available on DPR's repository3. We note that our ODQA setup differs from the one used by Karpukhin et al. in that we choose the highest scoring retrieval as evidence for answer generation, instead of generating from the top-100 passages without attribution." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 253, + 291, + 295 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 253, + 291, + 295 + ], + "spans": [ + { + "bbox": [ + 67, + 253, + 291, + 295 + ], + "type": "text", + "content": "BM25-Retriever We use Pyserini toolkit (Lin et al., 2021) with default configurations, retrieving the top-1 passage." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 302, + 291, + 452 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 302, + 291, + 452 + ], + "spans": [ + { + "bbox": [ + 67, + 302, + 291, + 452 + ], + "type": "text", + "content": "SEAL-Retriever SEAL (Bevilacqua et al., 2022) is a generative retrieval system that generates a set of keywords constrained to the corpus. In terms of technique, 1P borrows inspiration from SEAL's use of the FM-Index as well as keywords-as-identifiers. However, the two setups have substantial differences that we highlight in Section 8. We run SEAL with its default configuration and a beam of 5 using the publicly released checkpoint based on Bart-large (Lewis et al., 2020). All outputs from the beam are used for retrieval." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 463, + 147, + 475 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 463, + 147, + 475 + ], + "spans": [ + { + "bbox": [ + 67, + 463, + 147, + 475 + ], + "type": "text", + "content": "6.5 Evaluation" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 481, + 291, + 564 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 481, + 291, + 564 + ], + "spans": [ + { + "bbox": [ + 67, + 481, + 291, + 564 + ], + "type": "text", + "content": "We evaluate in-domain performance on the Open-NQ test split and out-of-domain performance on WebQuestions (WQ) and CuratedTREC (TREC), following the setup from Karpukhin et al. (2020). Passage retrieval performance is measured with Hits@1 using Pyserini evaluation scripts4." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 574, + 180, + 587 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 574, + 180, + 587 + ], + "spans": [ + { + "bbox": [ + 67, + 574, + 180, + 587 + ], + "type": "text", + "content": "6.6 1P configurations" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 592, + 291, + 741 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 592, + 291, + 741 + ], + "spans": [ + { + "bbox": [ + 67, + 592, + 291, + 741 + ], + "type": "text", + "content": "We experiment with three configurations: a) 1P: Our primary setup that uses both training and constrained decoding procedures described above, producing a retrieved passage as well as an answer. b) 1P-Unconstrained: Only the training technique described in Section 5 is adopted, with standard unconstrained decoding. Since generation is unconstrained, it is possible that no passage gets retrieved for a given path. c) " + }, + { + "bbox": [ + 67, + 592, + 291, + 741 + ], + "type": "inline_equation", + "content": "1\\mathrm{P} +" + }, + { + "bbox": [ + 67, + 592, + 291, + 741 + ], + "type": "text", + "content": " Reader: Here, we take the top retrieved passage from 1P and input it to the Reader model (Section 6.4) to extract the answer." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 303, + 70, + 362, + 84 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 70, + 362, + 84 + ], + "spans": [ + { + "bbox": [ + 303, + 70, + 362, + 84 + ], + "type": "text", + "content": "7 Results" + } + ] + } + ], + "index": 8 + }, + { + "type": "table", + "bbox": [ + 304, + 100, + 527, + 243 + ], + "blocks": [ + { + "bbox": [ + 304, + 100, + 527, + 243 + ], + "lines": [ + { + "bbox": [ + 304, + 100, + 527, + 243 + ], + "spans": [ + { + "bbox": [ + 304, + 100, + 527, + 243 + ], + "type": "table", + "html": "
RetrieverAnswererRetrieval Hits @ 1Answer
EMF1
-T5 - CBQA-26.834.0
BM25T5 - Reader23.617.924.0
SEALT5 - Reader37.929.435.8
DPRT5 - Reader46.535.642.4
1PT5 - Reader46.334.241.4
1P - Unconstrained29.329.336.1
1P46.331.738.0
", + "image_path": "373d92d883e2c3fe39aedc352fd6ec69e48eaa5838f6d02af015818d49684aef.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_body" + } + ], + "index": 9 + }, + { + "type": "table", + "bbox": [ + 304, + 339, + 533, + 441 + ], + "blocks": [ + { + "bbox": [ + 302, + 250, + 527, + 312 + ], + "lines": [ + { + "bbox": [ + 302, + 250, + 527, + 312 + ], + "spans": [ + { + "bbox": [ + 302, + 250, + 527, + 312 + ], + "type": "text", + "content": "Table 1: Comparison of different Retriever and Answerer combinations on the NQ-Open test set. In retrieve-and-read setups, answers are generated from the top-1 retrieved passage. 1P combines passage retrieval and answer generation in a single prediction." + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 304, + 339, + 533, + 441 + ], + "lines": [ + { + "bbox": [ + 304, + 339, + 533, + 441 + ], + "spans": [ + { + "bbox": [ + 304, + 339, + 533, + 441 + ], + "type": "table", + "html": "
SystemWebQuestionsTREC
Hits @1EMHits @1EM
BM25 + Rdr19.714.235.229.1
DPR + Rdr32.017.351.635.0
1P + Rdr38.020.463.838.5
1P38.020.563.836.4
", + "image_path": "1bfeba8d3638f54c80776ec86b6211ca901b7b47015f1c3e6b40c0416d68f5a7.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "table_body" + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 449, + 527, + 510 + ], + "lines": [ + { + "bbox": [ + 302, + 449, + 527, + 510 + ], + "spans": [ + { + "bbox": [ + 302, + 449, + 527, + 510 + ], + "type": "text", + "content": "Table 2: Comparison of different Retriever and Answerer combinations on Out-of-domain datasets. Both the Retriever and Answerer (Rdr) are trained on only Open-NQ. In retrieve-and-read setups, answers are generated from the top-1 retrieved passage." + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 301, + 529, + 526, + 650 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 529, + 526, + 650 + ], + "spans": [ + { + "bbox": [ + 301, + 529, + 526, + 650 + ], + "type": "text", + "content": "We compare to the baselines described in Section 6.4 on Open-NQ using both retrieval and answer accuracy metrics in Table 1. Answers are generated based on the top retrieved document in systems that separate retrieval from answer generation, to provide a clean comparison between systems that return (answer, evidence passage) pairs. Table 2 reports the out-of-domain performance of various systems on WQ and TREC." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 301, + 651, + 527, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 651, + 527, + 773 + ], + "spans": [ + { + "bbox": [ + 301, + 651, + 527, + 773 + ], + "type": "text", + "content": "1P outperforms CBQA in question answering and beats the retrieve-and-read systems, BM25 and SEAL. On the passage retrieval task, it significantly improves over BM25 and SEAL. For indomain setting, 1P is competitive with DPR on retrieval task, but lags behind the QA pipeline that uses DPR. 
However, this appears to be due more to the reader than to the retriever, as discussed in Section 8. It is worth noting that 1P" + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 80, + 749, + 262, + 761 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 749, + 262, + 761 + ], + "spans": [ + { + "bbox": [ + 80, + 749, + 262, + 761 + ], + "type": "text", + "content": "3https://github.com/facebookresearch/DPR" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 80, + 761, + 252, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 761, + 252, + 772 + ], + "spans": [ + { + "bbox": [ + 80, + 761, + 252, + 772 + ], + "type": "text", + "content": "4https://github.com/castorini/pyserini" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "14534" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 290, + 98 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 290, + 98 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 290, + 98 + ], + "type": "text", + "content": "generalizes significantly better out-of-domain compared to other systems." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 106, + 291, + 215 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 106, + 291, + 215 + ], + "spans": [ + { + "bbox": [ + 67, + 106, + 291, + 215 + ], + "type": "text", + "content": "Utility of Search Paths 1P-Unconstrained can be viewed as an extended version of CBQA that generates a search path before predicting the answer. 
Thus, the improvement of 1P-Unconstrained over CBQA can be attributed to this path-conditioned answer generation process, analogous to chain-of-thought reasoning (Wei et al., 2022; Lampinen et al., 2022)." + } + ] + } + ], + "index": 1 + }, + { + "type": "table", + "bbox": [ + 73, + 233, + 286, + 334 + ], + "blocks": [ + { + "bbox": [ + 73, + 233, + 286, + 334 + ], + "lines": [ + { + "bbox": [ + 73, + 233, + 286, + 334 + ], + "spans": [ + { + "bbox": [ + 73, + 233, + 286, + 334 + ], + "type": "table", + "html": "<table><thead>
SystemConstrained DecodingBeam
15
CBQANo26.726.8
1P Unconst.No29.029.3
SEAL + ReaderYes28.529.4
1PYes28.731.7
", + "image_path": "fb7a9f435f7af5f644d5e644283f75948883f6355bcb0aee300fc20d6edc77a6.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 434, + 291, + 637 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 434, + 291, + 637 + ], + "spans": [ + { + "bbox": [ + 67, + 434, + 291, + 637 + ], + "type": "text", + "content": "Effect of Constrained Decoding The purpose of constrained decoding is to ground the answer in an evidence retrieved from the corpus. As expected, the constrained setup enables 1P to achieve a higher Hits@1 than 1P-unconstrained. Surprisingly, when decoding with a beam of one, we observe a small drop in answer accuracy for 1P compared to 1P-Unconstrained (Table 3). Inspecting the losses, two dominant reasons surface. Firstly, As DPR passages are chunked into 100-words (Karpukhin et al., 2020), some queries may become unanswerable given a single passage due to missing context. This is disadvantageous when the model has memorized the answer but there is no single passage to attribute it to." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 638, + 291, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 638, + 291, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 638, + 291, + 773 + ], + "type": "text", + "content": "Secondly, during constrained decoding, after generating the initial keywords, the search space may soon become sparse with no good candidates to pick from. Could a larger room for planning its actions help the model here? Indeed, increasing the beam size to 5 improves performance by " + }, + { + "bbox": [ + 67, + 638, + 291, + 773 + ], + "type": "inline_equation", + "content": "3\\%" + }, + { + "bbox": [ + 67, + 638, + 291, + 773 + ], + "type": "text", + "content": " (Table 3), even when only the top-beam is used for retrieval. 
We refer to this as Planning, since the larger beam only enables the model to plan better; the remaining beam outputs are otherwise discarded. Note that unconstrained decoding does not gain from planning. In the final setup in Table 1, we use a beam of 5 for both 1P and SEAL. Unlike 1P, SEAL uses all the outputs from the larger beam for retrieval.

## 8 Discussion and Ablations

**Generating Answers** While 1P is capable of generating answers, Table 1 highlights that it falls behind 1P + Reader. The reason seems clear: the Reader has visibility into the full passage context, while 1P is limited to the decoded search path and the constrained index, which only ensures that generations are grounded in the corpus. Since 1P does retrieve passages, it would be possible to pull in the corresponding text as input for answer generation. We leave this as future work.
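The constrained decoding and beam "planning" behaviour discussed above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: a plain list of corpus token sequences stands in for the FM-index, `score_fn` stands in for the language model's log-probabilities, and all names are ours.

```python
def allowed_next_tokens(corpus_seqs, prefix):
    """Tokens t such that prefix + (t,) is still a prefix of some corpus sequence."""
    prefix = list(prefix)
    return {
        seq[len(prefix)]
        for seq in corpus_seqs
        if len(seq) > len(prefix) and seq[: len(prefix)] == prefix
    }

def constrained_beam_search(corpus_seqs, score_fn, beam_size, max_len):
    """Toy constrained beam search: every hypothesis must remain a prefix of
    some corpus sequence, so generations stay grounded in the corpus."""
    beams = [((), 0.0)]  # (prefix, cumulative score)
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            for tok in allowed_next_tokens(corpus_seqs, prefix):
                candidates.append((prefix + (tok,), score + score_fn(prefix, tok)))
        if not candidates:  # search space exhausted
            break
        beams = sorted(candidates, key=lambda c: -c[1])[:beam_size]
    return beams
```

With `beam_size=1`, an unlucky early keyword can strand the search in a region with no good continuations; a larger beam gives the model room to recover, mirroring the planning effect described above.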
**Comparison to SEAL** While 1P takes inspiration from SEAL, in practice there are a few key differences between the two systems, aside from 1P's answer generation.

SEAL generates a large set of keywords (Table 4) using many separate decodes and heuristic guidance (Appendix A.3). In contrast, 1P decodes a single sequence of about three keywords.
| | SEAL | 1P |
|---|---|---|
| Median keywords | 32 | 3 |
| Median docs retrieved | 500 | 1 |
| Generates answer | ✗ | ✓ |

Table 4: Key differences between SEAL and 1P, measured over the Open-NQ test split with a beam of 1.

The SEAL keywords are a set, decoded independently of each other and re-scored using sophisticated techniques to retrieve a large number of documents; the default configuration in SEAL retrieves up to 500 documents. This makes SEAL well suited to be employed in conjunction with a re-ranker. In contrast, 1P's search paths map directly to a single (or a few) relevant documents (Appendix A.6).

We acknowledge the model-size variation between SEAL and 1P in the reported experiments; however, we preferred using the publicly available SEAL checkpoint. Given the discrepancies in beam size, number of decodes, and use of a Reader model, it is difficult to make an apples-to-apples comparison between the two systems.
**Path vs. Keyword Set** We qualitatively observe that the keywords in a 1P path, owing to sequential generation, are distinct and add new information, whereas overlapping keywords are common in the SEAL output set (Appendix A.3). Thus, paths are advantageous for precisely narrowing down to a single relevant document, while keyword sets are effective for retrieving a large number of documents that can later be re-ranked. This is corroborated by the fact that 1P is better at Hits@1 while SEAL is better at Hits@5 (Appendix A.4).

**Qualitative Analysis** Table 5 illustrates patterns of Search Paths generated by 1P.
We note some of the common path patterns here:

1) The first keywords are entities in the query, followed by query predicates that iteratively narrow down towards an answer. This is the most common type of path observed and can be attributed to the dominant presence of titles in the training data.

2) Rewrites of the original query, or related predicates such as "seasons consists of" or "appeared on ...". Such paths are more prevalent where there is no canonical entity in the query or no entity can be determined with high confidence.

3) The answer is generated directly, followed by supporting keywords that guide towards an attributed passage. This happens in a small fraction of cases, likely where the pretrained model has memorized an answer with high confidence.
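All three patterns share the same retrieval semantics: each keyword in a path restricts the candidate document set further, so a path iteratively partitions the corpus. A minimal sketch of that narrowing step (illustrative only; 1P actually performs this intersection over an FM-index, and the function name is ours):

```python
def retrieve_by_path(docs, path):
    """Iteratively narrow a document set: each keyword in the search path
    keeps only the documents that contain it as a substring."""
    candidates = list(docs)
    for keyword in path:
        candidates = [d for d in candidates if keyword in d]
    return candidates
```

A title-like first keyword typically cuts the candidate set to a handful of documents, and each subsequent predicate narrows it toward a single passage.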
Overall, we find the generated search paths to be fairly meaningful and interpretable.

**Sampling Search Paths for Training** Table 6 highlights that high-quality keywords are crucial to performance. The LM-re-scored set of keywords results in a significant accuracy gain over heuristically sampled keywords. Paths with the first keyword as a Title boost performance further. Mixing in a small fraction of paths starting with non-title keywords encourages the model to generate predicates where no entity can be determined, giving us the best results.

**Sensitivity to Tokenization** We find that constrained decoding is highly sensitive to rare tokenization or punctuation formatting in the corpus. Consider the query "who sang i ran all the way home" with the gold document title "Sorry (I Ran All the Way Home)". In the unconstrained setup, the model's top prediction starts with "I Ran All the Way Home".
However, "(I" is tokenized differently from "I", and searching over the FM-index returns no match. As a result, constrained decoding drops the predicted keyword altogether, resorting to lower-ranked keywords in the beam. We partially fix the issue by modifying the answer in a fraction of the training data to include surrounding punctuation tokens based on how they appear in the FM-index. For instance, the keyword "I Ran ..." would update to "(I Ran ...)". This simple change leads to a jump in answer accuracy from 26.4% to 28.7%. However, much more work is needed to make 1P robust to variations in tokenization.

See Appendix A.2 for an analysis of training data size and Appendix A.5 for masking logits vs. log-probs.
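The mismatch can be reproduced with a toy example. The vocabulary below is hypothetical: it merely assumes, as subword tokenizers often do, that "(I" is a single token distinct from a standalone "I", so an index keyed on token IDs finds no match even when the surface strings align.

```python
# Hypothetical subword vocabulary: "(I" is one fused token, not "(" + "I".
VOCAB = {"(I": 17, "I": 3, "(": 5, "Ran": 8, "All": 9}

def tokenize(words):
    return [VOCAB[w] for w in words]

def index_has_prefix(corpus_token_seqs, query_tokens):
    """Constrained decoding matches on token IDs, not surface strings."""
    return any(seq[: len(query_tokens)] == query_tokens for seq in corpus_token_seqs)

# The corpus title "(I Ran All ..." is stored with the fused "(I" token.
corpus = [tokenize(["(I", "Ran", "All"])]

# The model's top keyword starts with a plain "I": no match in the index.
assert not index_has_prefix(corpus, tokenize(["I", "Ran"]))
# Rewriting the training keyword to include the punctuation restores the match.
assert index_has_prefix(corpus, tokenize(["(I", "Ran"]))
```

The training-data fix described above amounts to teaching the model to emit the second, punctuation-inclusive form.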
## Conclusion

We introduce 1-PAGER, the first system to perform question answering and passage retrieval in one pass with a single language model, using a constrained decoder to iteratively partition the retrieval corpus and then generate an answer. We show competitive or improved performance over a variety of comparable baselines and carefully analyze the results, ablating both training strategies and decoding style. We also provide a qualitative analysis of predictions to illustrate the system's capabilities. Challenges with constrained decoding, including poor search spaces and sensitivity to tokenization, are surfaced, and mitigation strategies are presented.

We hope that 1P adds value in demonstrating how a single transformer model can be harnessed to do both retrieval and answering, and paves the path for further progress in the generative retrieval domain.
## Limitations

1P is geared towards identifying a concise, small set of documents and generating an answer in a single go. While this makes the architecture simpler, it also introduces certain weaknesses. 1P is not effective for retrieving a large number of documents and falls behind pipelined systems that combine retrieval with re-ranking. Even for a single passage, it lags behind state-of-the-art dense-retrieval techniques. 1P's method of answer generation is also not competitive with the use of a reader, due to the lack of passage context.

Our training strategy relies heavily on titles or entities, and its generalization to corpora without rich structure, or to queries without central entities, remains to be studied.
| Query (Q) and Generated Search Path (SP) | Comment |
|---|---|
| **Correctly attributed passages and answers** | |
| Q: how many episodes of greys anatomy season 14<br>SP: Grey's Anatomy (season 14) » season consists of 24 episodes » 24 | Query entity resolved first, followed by query predicates |
| Q: when did they start adding zinc to pennies<br>SP: Penny (United States coin) » zinc » Lincoln cent » 1943 | Query entity resolved iteratively |
| Q: who was executed for being an american spy during the revolutionary war<br>SP: Nathan Hale » Army during the American Revolutionary » Nathan Hale | Answer generated first |
| Q: who was the grandfather on the Cosby show<br>SP: appeared on "The Cosby » Earle Hyman | Query rewrites |
| **Incorrect passage or answer** | |
| Q: who decides the number of judges in the high court<br>SP: judge is appointed » High Court » Chief Justice of India (A: President of India) | Path correctly resolved, failed on answer |
| Q: when did the isle of wight become an island<br>SP: Isle of Wight » 1890 » 1890 (A: During the last Ice Age) | Query entity resolved, failed on supporting keywords |
| Q: love yourself by justin bieber is about who<br>SP: Love Yourself: Her » music video » Her (A: Rihana) | Failed to resolve query entity |

Table 5: Example 1P Search Paths (SP) on the Open-NQ test set. The last keyword in each SP is the predicted answer. Gold answers are indicated by A.
| Search Path | Hits@1 | EM |
|---|---|---|
| Heuristic | 34.5 | 22.6 |
| LM-scored | 40.0 | 27.2 |
| Title » LM-scored | 41.9 | 28.0 |
| Title » LM-scored + LM-scored (7+1) | 42.9 | 28.7 |

Table 6: Comparison of training Search Paths on Open-NQ. Here, "LM-scored" denotes re-scoring by the LM on a heuristic set. All results are with a beam of one. "»" indicates the keyword separator and "+" a mixture of path types in the given ratio.

Constrained decoding also comes with its own challenges. Constrained beam outputs often lack diversity, so even with a larger beam one may still end up in poor search spaces. Computing document-level constraints across the corpus is expensive, as it may require scanning a large number of rows in the index. Further, communication between the FM-index and the Transformer model slows down inference.
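For reference, the Hits@1 and EM numbers reported in Tables 3 and 6 can be computed as follows. This is a generic, SQuAD-style sketch (lower-casing, punctuation and article stripping), not necessarily the exact evaluation script used in the paper.

```python
import re
import string

def normalize(text):
    """Lowercase, strip punctuation, drop articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold_answers):
    """EM: 1 if the normalized prediction equals any normalized gold answer."""
    return int(any(normalize(prediction) == normalize(g) for g in gold_answers))

def hits_at_1(top_passage, gold_answers):
    """Hits@1: 1 if the top retrieved passage contains any gold answer."""
    passage = normalize(top_passage)
    return int(any(normalize(g) in passage for g in gold_answers))
```

Hits@k generalizes `hits_at_1` by checking the top k retrieved passages instead of one.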
## Acknowledgement

We thank Don Metzler, Nicholas FitzGerald, Partha Talukdar, Srini Narayanan, as well as our anonymous reviewers, for their thoughtful comments and valuable feedback.

## Ethical Considerations

While large language models can solve a wide range of tasks effectively, they also suffer from biases across axes such as gender, race, and region (Chan, 2023). LLMs are also prone to generating toxic content, especially when probed for it. Although our task grounds the model's generations in a corpus, some of the biases in pre-trained LLMs may seep into 1-PAGER.

Building the FM-index and constrained decoding is a compute-intensive affair.
We have experimented with a single dataset, Natural Questions, involving only knowledge-seeking queries, and a single model family, T5. It is possible that some of our findings may not hold for other datasets or model families. Finally, our experiments are limited to an English corpus and queries. The proposed approaches are resource-intensive and may not be accessible or valid for many low-resourced languages.

## References

Leonard Adolphs, Benjamin Boerschinger, Christian Buck, Michelle Chen Huebscher, Massimiliano Ciaramita, Lasse Espeholt, Thomas Hofmann, Yannic Kilcher, Sascha Rothe, Pier Giuseppe Sessa, et al. 2021. Boosting search engines with interactive agents. arXiv preprint arXiv:2109.00527.
Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. 2023. PaLM 2 technical report. arXiv preprint arXiv:2305.10403.

Petr Baudiš and Jan Šedivý. 2015. Modeling of the question answering task in the YodaQA system. In Experimental IR Meets Multilinguality, Multimodality, and Interaction: 6th International Conference of the CLEF Association, CLEF'15, Toulouse, France, September 8-11, 2015, Proceedings 6, pages 222-228. Springer.
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533-1544.

Michele Bevilacqua, Giuseppe Ottaviano, Patrick Lewis, Wen-tau Yih, Sebastian Riedel, and Fabio Petroni. 2022. Autoregressive search engines: Generating substrings as document identifiers. arXiv preprint arXiv:2204.10628.

Bernd Bohnet, Vinh Q. Tran, Pat Verga, Roee Aharoni, Daniel Andor, Livio Baldini Soares, Jacob Eisenstein, Kuzman Ganchev, Jonathan Herzig, Kai Hui, et al. 2022. Attributed question answering: Evaluation and modeling for attributed large language models. arXiv preprint arXiv:2212.08037.

Anastasia Chan. 2023. GPT-3 and InstructGPT: technological dystopianism, utopianism, and "contextual" perspectives in AI ethics and industry. AI and Ethics, 3(1):53-64.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. arXiv preprint arXiv:1704.00051.

Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, and Andrew McCallum. 2019. Multi-step retriever-reader interaction for scalable open-domain question answering. In International Conference on Learning Representations.

Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2021. Autoregressive entity retrieval. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.

P. Ferragina and G. Manzini. 2000. Opportunistic data structures with applications. In Proceedings 41st Annual Symposium on Foundations of Computer Science, pages 390-398.
Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Y. Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, and Kelvin Guu. 2023a. RARR: Researching and revising what language models say, using language models.

Tianyu Gao, Howard Yen, Jiatong Yu, and Danqi Chen. 2023b. Enabling large language models to generate text with citations.

Alex Graves. 2012. Sequence transduction with recurrent neural networks.

Sanda M. Harabagiu, Steven J. Maiorano, and Marius A. Pasca. 2003. Open-domain textual question answering techniques. Natural Language Engineering, 9(3):231-267.
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 211, + 524, + 265 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 211, + 524, + 265 + ], + "spans": [ + { + "bbox": [ + 304, + 211, + 524, + 265 + ], + "type": "text", + "content": "Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Few-shot Learning with Retrieval Augmented Language Models." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 279, + 524, + 333 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 279, + 524, + 333 + ], + "spans": [ + { + "bbox": [ + 304, + 279, + 524, + 333 + ], + "type": "text", + "content": "Zhengbao Jiang, Luyu Gao, Jun Araki, Haibo Ding, Zhiruo Wang, Jamie Callan, and Graham Neubig. 2022. Retrieval as attention: End-to-end learning of retrieval and reading within a single transformer. arXiv preprint arXiv:2212.02027." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 347, + 524, + 400 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 347, + 524, + 400 + ], + "spans": [ + { + "bbox": [ + 304, + 347, + 524, + 400 + ], + "type": "text", + "content": "Zhengbao Jiang, Frank F Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Active retrieval augmented generation. arXiv preprint arXiv:2305.06983." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 414, + 524, + 502 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 414, + 524, + 502 + ], + "spans": [ + { + "bbox": [ + 304, + 414, + 524, + 502 + ], + "type": "text", + "content": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. 
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781, Online. Association for Computational Linguistics." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 515, + 524, + 581 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 515, + 524, + 581 + ], + "spans": [ + { + "bbox": [ + 304, + 515, + 524, + 581 + ], + "type": "text", + "content": "Omar Khattab and Matei Zaharia. 2020. ColBERT: Efficient and effective passage search via contextualized late interaction over BERT. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, pages 39-48." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 594, + 524, + 693 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 594, + 524, + 693 + ], + "spans": [ + { + "bbox": [ + 304, + 594, + 524, + 693 + ], + "type": "text", + "content": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 304, + 706, + 524, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 706, + 524, + 772 + ], + "spans": [ + { + "bbox": [ + 304, + 706, + 524, + 772 + ], + "type": "text", + "content": "Andrew K Lampinen, Ishita Dasgupta, Stephanie CY Chan, Kory Matthewson, Michael Henry Tessler, Antonia Creswell, James L McClelland, Jane X Wang, and Felix Hill. 2022. Can language models learn from explanations in context? arXiv preprint arXiv:2204.02329."
+ } + ] + } + ], + "index": 21 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "text", + "content": "14538" + } + ] + } + ], + "index": 23 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 291, + 772 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 69, + 72, + 291, + 126 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 72, + 291, + 126 + ], + "spans": [ + { + "bbox": [ + 69, + 72, + 291, + 126 + ], + "type": "text", + "content": "Hyunjii Lee, Sohee Yang, Hanseok Oh, and Minjoon Seo. 2022. Generative multi-hop retrieval. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 1417-1436." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 138, + 290, + 204 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 138, + 290, + 204 + ], + "spans": [ + { + "bbox": [ + 69, + 138, + 290, + 204 + ], + "type": "text", + "content": "Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086–6096, Florence, Italy. Association for Computational Linguistics." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 214, + 290, + 313 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 214, + 290, + 313 + ], + "spans": [ + { + "bbox": [ + 69, + 214, + 290, + 313 + ], + "type": "text", + "content": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. 
BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 324, + 290, + 378 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 324, + 290, + 378 + ], + "spans": [ + { + "bbox": [ + 69, + 324, + 290, + 378 + ], + "type": "text", + "content": "Patrick Lewis, Yuxiang Wu, Linqing Liu, Pasquale Minervini, Heinrich Küttler, Aleksandra Piktus, Pontus Stenetorp, and Sebastian Riedel. 2021. PAQ: 65 million probably-asked questions and what you can do with them." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 389, + 289, + 423 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 389, + 289, + 423 + ], + "spans": [ + { + "bbox": [ + 69, + 389, + 289, + 423 + ], + "type": "text", + "content": "Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 433, + 290, + 488 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 433, + 290, + 488 + ], + "spans": [ + { + "bbox": [ + 69, + 433, + 290, + 488 + ], + "type": "text", + "content": "Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng-Hong Yang, Ronak Pradeep, and Rodrigo Nogueira. 2021. Pyserini: An easy-to-use Python toolkit to support replicable IR research with sparse and dense representations. arXiv preprint arXiv:2102.10073."
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 498, + 290, + 565 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 498, + 290, + 565 + ], + "spans": [ + { + "bbox": [ + 69, + 498, + 290, + 565 + ], + "type": "text", + "content": "Jacob Menick, Maja Trebacz, Vladimir Mikulik, John Aslanides, Francis Song, Martin Chadwick, Mia Glaese, Susannah Young, Lucy Campbell-Gillingham, Geoffrey Irving, and Nat McAleese. 2022. Teaching language models to support answers with verified quotes." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 575, + 290, + 608 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 575, + 290, + 608 + ], + "spans": [ + { + "bbox": [ + 69, + 575, + 290, + 608 + ], + "type": "text", + "content": "Donald Metzler, Yi Tay, Dara Bahri, and Marc Najork. 2021. Rethinking search: making domain experts out of dilettantes. ACM SIGIR Forum, 55(1):1-27." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 618, + 223, + 630 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 618, + 223, + 630 + ], + "spans": [ + { + "bbox": [ + 69, + 618, + 223, + 630 + ], + "type": "text", + "content": "OpenAI. 2023. GPT-4 technical report." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 640, + 290, + 695 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 640, + 290, + 695 + ], + "spans": [ + { + "bbox": [ + 69, + 640, + 290, + 695 + ], + "type": "text", + "content": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140):1-67."
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 69, + 706, + 290, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 706, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 706, + 290, + 772 + ], + "type": "text", + "content": "Hannah Rashkin, Vitaly Nikolaev, Matthew Lamm, Lora Aroyo, Michael Collins, Dipanjan Das, Slav Petrov, Gaurav Singh Tomar, Iulia Turc, and David Reitter. 2021. Measuring attribution in natural language generation models. arXiv preprint arXiv:2112.12870." + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 525, + 772 + ], + "type": "list", + "angle": 0, + "index": 22, + "blocks": [ + { + "bbox": [ + 305, + 72, + 525, + 248 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 72, + 525, + 248 + ], + "spans": [ + { + "bbox": [ + 305, + 72, + 525, + 248 + ], + "type": "text", + "content": "Adam Roberts, Hyung Won Chung, Anselm Levskaya, Gaurav Mishra, James Bradbury, Daniel Andor, Sharan Narang, Brian Lester, Colin Gaffney, Afroz Mohiuddin, Curtis Hawthorne, Aitor Lewkowycz, Alex Salcianu, Marc van Zee, Jacob Austin, Sebastian Goodman, Livio Baldini Soares, Haitang Hu, Sasha Tsvyashchenko, Aankanksha Chowdhery, Jasmijn Bastings, Jannis Bulian, Xavier Garcia, Jianmo Ni, Andrew Chen, Kathleen Kenealy, Jonathan H. Clark, Stephan Lee, Dan Garrette, James Lee-Thorp, Colin Raffel, Noam Shazeer, Marvin Ritter, Maarten Bosma, Alexandre Passos, Jeremy Maitin-Shepard, Noah Fiedel, Mark Omernick, Brennan Saeta, Ryan Sepassi, Alexander Spiridonov, Joshua Newlan, and Andrea Gesmundo. 2022. Scaling up models and data with t5x and seqio." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 257, + 525, + 301 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 257, + 525, + 301 + ], + "spans": [ + { + "bbox": [ + 304, + 257, + 525, + 301 + ], + "type": "text", + "content": "Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? arXiv preprint arXiv:2002.08910." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 311, + 525, + 354 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 311, + 525, + 354 + ], + "spans": [ + { + "bbox": [ + 304, + 311, + 525, + 354 + ], + "type": "text", + "content": "Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends® in Information Retrieval, 3(4):333-389." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 364, + 525, + 408 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 364, + 525, + 408 + ], + "spans": [ + { + "bbox": [ + 304, + 364, + 525, + 408 + ], + "type": "text", + "content": "Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, and Matei Zaharia. 2022. ColBERTv2: Effective and efficient retrieval via lightweight late interaction." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 417, + 525, + 472 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 417, + 525, + 472 + ], + "spans": [ + { + "bbox": [ + 304, + 417, + 525, + 472 + ], + "type": "text", + "content": "Yi Tay, Vinh Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, et al. 2022. Transformer memory as a differentiable search index. Advances in Neural Information Processing Systems, 35:21831-21843."
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 481, + 525, + 536 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 481, + 525, + 536 + ], + "spans": [ + { + "bbox": [ + 304, + 481, + 525, + 536 + ], + "type": "text", + "content": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 546, + 525, + 591 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 546, + 525, + 591 + ], + "spans": [ + { + "bbox": [ + 304, + 546, + 525, + 591 + ], + "type": "text", + "content": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 599, + 525, + 655 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 599, + 525, + 655 + ], + "spans": [ + { + "bbox": [ + 304, + 599, + 525, + 655 + ], + "type": "text", + "content": "Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, and Arnold Overwijk. 2020. Approximate nearest neighbor negative contrastive learning for dense text retrieval. arXiv preprint arXiv:2007.00808." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 664, + 525, + 729 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 664, + 525, + 729 + ], + "spans": [ + { + "bbox": [ + 304, + 664, + 525, + 729 + ], + "type": "text", + "content": "Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, and Meng Jiang. 2022. Generate rather than retrieve: Large language models are strong context generators. arXiv preprint arXiv:2209.10063." 
+ } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 304, + 739, + 525, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 739, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 304, + 739, + 525, + 772 + ], + "type": "text", + "content": "Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, and Meng Jiang. 2023. Generate" + } + ] + } + ], + "index": 21 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "text", + "content": "14539" + } + ] + } + ], + "index": 23 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 291, + 212 + ], + "type": "list", + "angle": 0, + "index": 3, + "blocks": [ + { + "bbox": [ + 80, + 72, + 291, + 105 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 72, + 291, + 105 + ], + "spans": [ + { + "bbox": [ + 80, + 72, + 291, + 105 + ], + "type": "text", + "content": "rather than retrieve: Large language models are strong context generators. In The Eleventh International Conference on Learning Representations." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 114, + 291, + 158 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 114, + 291, + 158 + ], + "spans": [ + { + "bbox": [ + 69, + 114, + 291, + 158 + ], + "type": "text", + "content": "Yujia Zhou, Jing Yao, Zhicheng Dou, Ledell Wu, Peitan Zhang, and Ji-Rong Wen. 2022. Ultron: An ultimate retriever on corpus with a model-based indexer. arXiv preprint arXiv:2208.09257." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 167, + 291, + 212 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 167, + 291, + 212 + ], + "spans": [ + { + "bbox": [ + 69, + 167, + 291, + 212 + ], + "type": "text", + "content": "Fengbin Zhu, Wenqiang Lei, Chao Wang, Jianming Zheng, Soujanya Poria, and Tat-Seng Chua. 2021. Retrieving and reading: A comprehensive survey on open-domain question answering." + } + ] + } + ], + "index": 2 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 313, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 313, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 313, + 791 + ], + "type": "text", + "content": "14540" + } + ] + } + ], + "index": 4 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "bbox": [ + 68, + 71, + 142, + 84 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 71, + 142, + 84 + ], + "spans": [ + { + "bbox": [ + 68, + 71, + 142, + 84 + ], + "type": "text", + "content": "A Appendix" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 68, + 92, + 210, + 105 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 92, + 210, + 105 + ], + "spans": [ + { + "bbox": [ + 68, + 92, + 210, + 105 + ], + "type": "text", + "content": "A.1 Constraint Computation" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 110, + 291, + 137 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 110, + 291, + 137 + ], + "spans": [ + { + "bbox": [ + 67, + 110, + 291, + 137 + ], + "type": "text", + "content": "1P relies on two key operations for constraint computation:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 75, + 145, + 290, + 194 + ], + "type": "list", + "angle": 0, + "index": 5, + "blocks": [ + { + "bbox": [ + 76, + 145, + 289, + 158 + ], + "type": "text", + "angle": 0, + "lines":
[ + { + "bbox": [ + 76, + 145, + 289, + 158 + ], + "spans": [ + { + "bbox": [ + 76, + 145, + 289, + 158 + ], + "type": "text", + "content": "a) " + }, + { + "bbox": [ + 76, + 145, + 289, + 158 + ], + "type": "inline_equation", + "content": "\\mathcal{F}(D,k)" + }, + { + "bbox": [ + 76, + 145, + 289, + 158 + ], + "type": "text", + "content": " : Documents that contain keyword " + }, + { + "bbox": [ + 76, + 145, + 289, + 158 + ], + "type": "inline_equation", + "content": "k" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 75, + 167, + 290, + 194 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 75, + 167, + 290, + 194 + ], + "spans": [ + { + "bbox": [ + 75, + 167, + 290, + 194 + ], + "type": "text", + "content": "b) " + }, + { + "bbox": [ + 75, + 167, + 290, + 194 + ], + "type": "inline_equation", + "content": "\\mathcal{C}(k,D)" + }, + { + "bbox": [ + 75, + 167, + 290, + 194 + ], + "type": "text", + "content": " : Next tokens for keyword " + }, + { + "bbox": [ + 75, + 167, + 290, + 194 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 75, + 167, + 290, + 194 + ], + "type": "text", + "content": " in arbitrary document set " + }, + { + "bbox": [ + 75, + 167, + 290, + 194 + ], + "type": "inline_equation", + "content": "D" + } + ] + } + ], + "index": 4 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 67, + 203, + 290, + 324 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 203, + 290, + 324 + ], + "spans": [ + { + "bbox": [ + 67, + 203, + 290, + 324 + ], + "type": "inline_equation", + "content": "\\mathcal{F}(D,k)" + }, + { + "bbox": [ + 67, + 203, + 290, + 324 + ], + "type": "text", + "content": " is preprocessed and cached to allow for quick computation. " + }, + { + "bbox": [ + 67, + 203, + 290, + 324 + ], + "type": "inline_equation", + "content": "\\mathcal{C}(k,D)" + }, + { + "bbox": [ + 67, + 203, + 290, + 324 + ], + "type": "text", + "content": " is trickier to compute. 
When " + }, + { + "bbox": [ + 67, + 203, + 290, + 324 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 67, + 203, + 290, + 324 + ], + "type": "text", + "content": " represents the full corpus, the FM-index can fetch the next tokens in " + }, + { + "bbox": [ + 67, + 203, + 290, + 324 + ], + "type": "inline_equation", + "content": "O(|V| \\log(|V|))" + }, + { + "bbox": [ + 67, + 203, + 290, + 324 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 67, + 203, + 290, + 324 + ], + "type": "inline_equation", + "content": "V" + }, + { + "bbox": [ + 67, + 203, + 290, + 324 + ], + "type": "text", + "content": " is the token vocabulary; the cost is independent of " + }, + { + "bbox": [ + 67, + 203, + 290, + 324 + ], + "type": "inline_equation", + "content": "|D|" + }, + { + "bbox": [ + 67, + 203, + 290, + 324 + ], + "type": "text", + "content": ". However, an arbitrary " + }, + { + "bbox": [ + 67, + 203, + 290, + 324 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 67, + 203, + 290, + 324 + ], + "type": "text", + "content": " requires a traversal over all documents and can be very expensive. In practice, the LLM training guides it to generate effective keywords such that " + }, + { + "bbox": [ + 67, + 203, + 290, + 324 + ], + "type": "inline_equation", + "content": "|D|" + }, + { + "bbox": [ + 67, + 203, + 290, + 324 + ], + "type": "text", + "content": " is small."
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 326, + 291, + 353 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 326, + 291, + 353 + ], + "spans": [ + { + "bbox": [ + 67, + 326, + 291, + 353 + ], + "type": "text", + "content": "We also apply certain other optimizations to reduce the compute cost:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 81, + 361, + 289, + 513 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 81, + 361, + 289, + 388 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 361, + 289, + 388 + ], + "spans": [ + { + "bbox": [ + 81, + 361, + 289, + 388 + ], + "type": "text", + "content": "- Constraints are computed lazily over a decoding pass." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 81, + 397, + 288, + 424 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 397, + 288, + 424 + ], + "spans": [ + { + "bbox": [ + 81, + 397, + 288, + 424 + ], + "type": "text", + "content": "- Several computations are cached, e.g., the keyword-to-document-id mapping" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 81, + 433, + 289, + 513 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 433, + 289, + 513 + ], + "spans": [ + { + "bbox": [ + 81, + 433, + 289, + 513 + ], + "type": "text", + "content": "- To cap the cost of constraints at each decoding step, we allow for unconstrained generation in rare scenarios where the estimated cost is too high. If the generated path is absent in the corpus (" + }, + { + "bbox": [ + 81, + 433, + 289, + 513 + ], + "type": "inline_equation", + "content": "< 1\\%" + }, + { + "bbox": [ + 81, + 433, + 289, + 513 + ], + "type": "text", + "content": " of examples), these can be filtered out later."
+ } + ] + } + ], + "index": 10 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 67, + 523, + 290, + 565 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 523, + 290, + 565 + ], + "spans": [ + { + "bbox": [ + 67, + 523, + 290, + 565 + ], + "type": "text", + "content": "Despite these optimizations, inference continues to be expensive and we perhaps need a special data structure for next token look-up." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 68, + 572, + 182, + 586 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 572, + 182, + 586 + ], + "spans": [ + { + "bbox": [ + 68, + 572, + 182, + 586 + ], + "type": "text", + "content": "A.2 Training data size" + } + ] + } + ], + "index": 13 + }, + { + "type": "table", + "bbox": [ + 71, + 599, + 287, + 682 + ], + "blocks": [ + { + "bbox": [ + 71, + 599, + 287, + 682 + ], + "lines": [ + { + "bbox": [ + 71, + 599, + 287, + 682 + ], + "spans": [ + { + "bbox": [ + 71, + 599, + 287, + 682 + ], + "type": "table", + "html": "
<table><tr><th>Dataset</th><th>Queries</th><th>Paths</th><th>Hits@1</th><th>EM</th></tr><tr><td>Open-NQ</td><td>55k</td><td>55k</td><td>41.9</td><td>28.1</td></tr><tr><td>Open-NQ</td><td>55k</td><td>310k</td><td>42.9</td><td>28.7</td></tr><tr><td>Open-NQ + PAQ</td><td>55k + 9M</td><td>310k + 9M</td><td>43.6</td><td>29.5</td></tr></table>
", + "image_path": "71439e0cbf64e4a85772a56a9624e0e9ba13d010d9008d119d31af6dbf235983.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "table_body" + } + ], + "index": 14 + }, + { + "bbox": [ + 67, + 689, + 290, + 715 + ], + "lines": [ + { + "bbox": [ + 67, + 689, + 290, + 715 + ], + "spans": [ + { + "bbox": [ + 67, + 689, + 290, + 715 + ], + "type": "text", + "content": "Table 7: Comparison of different dataset sizes for queries and paths" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 67, + 733, + 290, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 733, + 290, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 733, + 290, + 773 + ], + "type": "text", + "content": "In Table 7, we observe the effect of dataset size on performance. Increasing the number of paths sampled per query improves performance, perhaps" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 302, + 71, + 525, + 111 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 525, + 111 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 525, + 111 + ], + "type": "text", + "content": "due to higher diversity in training. However, this method of dataset expansion is limited by the number of relevant paths we could extract for a query." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 302, + 112, + 525, + 233 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 112, + 525, + 233 + ], + "spans": [ + { + "bbox": [ + 302, + 112, + 525, + 233 + ], + "type": "text", + "content": "We also experiment with increasing the query set manifold by mixing in unsupervised datasets. A total of 9M QA pairs are sampled from PAQ (Lewis et al., 2021), a synthetic QA dataset, and search paths extracted with heuristic scoring described in Section 5. The original 1P training dataset is mixed in a 1:1 ratio. 
This further boosts performance, but not proportionally to the amount of data added, indicating diminishing returns from silver datasets." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 302, + 243, + 409, + 256 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 243, + 409, + 256 + ], + "spans": [ + { + "bbox": [ + 302, + 243, + 409, + 256 + ], + "type": "text", + "content": "A.3 SEAL keywords" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 302, + 260, + 525, + 530 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 260, + 525, + 530 + ], + "spans": [ + { + "bbox": [ + 302, + 260, + 525, + 530 + ], + "type": "text", + "content": "SEAL generates a set of document substrings constrained on the corpus, which are combined to form document identifiers. Besides using an LM to generate keywords, SEAL utilizes several other mechanisms for extracting keywords. These include partial beam sequences, heuristically adding query n-grams, sampling the top-k tokens from the logprobs of the first decoding step, force-decoding the title, etc. The keywords are re-scored using the LM as well as the FM-index count, and all keyword combinations are retrieved. Table 8 illustrates keywords generated by both systems. Note that SEAL keywords can be repetitive, so a large number of keywords is needed to narrow down to meaningful documents. This also makes SEAL suitable for retrieving a much larger set of documents that can be re-ranked later. The maximum number of retrieved documents for SEAL is capped by a hyperparameter with a default value of 500. In contrast, 1P is geared towards retrieving only the top document."
+ } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 303, + 539, + 368, + 551 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 539, + 368, + 551 + ], + "spans": [ + { + "bbox": [ + 303, + 539, + 368, + 551 + ], + "type": "text", + "content": "A.4 Hits@5" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 302, + 557, + 525, + 611 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 557, + 525, + 611 + ], + "spans": [ + { + "bbox": [ + 302, + 557, + 525, + 611 + ], + "type": "text", + "content": "SEAL does significantly better than 1P for Hits@5 (Table 9). We attribute this to the large set of keywords generated by SEAL, as explained in Appendix A.3." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 302, + 620, + 508, + 647 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 620, + 508, + 647 + ], + "spans": [ + { + "bbox": [ + 302, + 620, + 508, + 647 + ], + "type": "text", + "content": "A.5 Normalizing sequence likelihood over constrained space" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 302, + 651, + 525, + 745 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 651, + 525, + 745 + ], + "spans": [ + { + "bbox": [ + 302, + 651, + 525, + 745 + ], + "type": "text", + "content": "During constrained decoding of a sequence " + }, + { + "bbox": [ + 302, + 651, + 525, + 745 + ], + "type": "inline_equation", + "content": "X" + }, + { + "bbox": [ + 302, + 651, + 525, + 745 + ], + "type": "text", + "content": ", we need to choose the next token from " + }, + { + "bbox": [ + 302, + 651, + 525, + 745 + ], + "type": "inline_equation", + "content": "\\mathcal{C}(X, D)" + }, + { + "bbox": [ + 302, + 651, + 525, + 745 + ], + "type": "text", + "content": " and not the entire vocabulary space " + }, + { + "bbox": [ + 302, + 651, + 525, + 745 + ], + "type": "inline_equation", + "content": "V" + }, + { + "bbox": [ + 302, + 651, + 525, + 745 + ], + "type": "text",
"content": ". Should the sequence likelihood be re-normalized over this constrained space? We find that re-normalizing the probabilities results in inflated likelihoods, making it hard for the model to backtrack." + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 302, + 746, + 524, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 746, + 524, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 746, + 524, + 772 + ], + "type": "text", + "content": "Consider the query, \"where did the butchers in the slaughterhouse cases live\", to which our model" + } + ] + } + ], + "index": 25 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 311, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 311, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 311, + 791 + ], + "type": "text", + "content": "14541" + } + ] + } + ], + "index": 26 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 71, + 80, + 523, + 328 + ], + "blocks": [ + { + "bbox": [ + 71, + 80, + 523, + 328 + ], + "lines": [ + { + "bbox": [ + 71, + 80, + 523, + 328 + ], + "spans": [ + { + "bbox": [ + 71, + 80, + 523, + 328 + ], + "type": "table", + "html": "
SystemQuestion or Search PathAnswer
1P SEALwho has the most catches in nfl history2,000-yard club » Barry SandersJerry RiceBarry SandersT.J. Houshmandzadeh
</s> Michael Irvin @ @, yards per catch, caught his, touchdown, record
1P SEALwhen was harry potter and the philosophers stone publishedHarry Potter and the Philosopher's Stone » first published in the United » 199719971997
</s> Harry Potter and the Philosopher's Stone @ @, "Harry Potter, Potter and thePhilosopher's Stone is, Potter and the Philosopher's Stone Harry, novel1999
1P SEALwhat is the meaning of the harp in irelandHarp » national symbol of Ireland » national symbol of Irelandthe arms of Irelandnational symbol of Ireland
</s> Harp @ @, Irish harp., harp is, harp was, harparistocracy
1P SEALwho was the president of pakistan during 1971 warIndo-Pakistani War of 1971 » Prime Minister of Pakistan » Zulfikar Ali BhuttoYahya KhanZulfikar Ali Bhutto
</s> Indo-Pakistani War of 1971 @ @, East Pakistan, Pakistani, Pakistan Army,Pakistan'sMuhammad Yaqub Khan
1P SEALwhen do you declare honors in contract bridgeContract bridge » declaring » end of the handany time after the auctionend of the hand
</s> Contract bridge @ @, declarer, bidding, honors, handsbidding
", + "image_path": "8734388e27db242bfa05e7766526c2e6c225c95060ead1e3a67a54fb141fe56e.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 114, + 420, + 244, + 502 + ], + "blocks": [ + { + "bbox": [ + 67, + 336, + 525, + 396 + ], + "lines": [ + { + "bbox": [ + 67, + 336, + 525, + 396 + ], + "spans": [ + { + "bbox": [ + 67, + 336, + 525, + 396 + ], + "type": "text", + "content": "Table 8: Comparison of keywords generated by SEAL and 1P for randomly sampled examples from Open-NQ test set. For 1P, we show the full search path separated by \"»\" with the last keyword as the answer. For SEAL, we illustrate the top-5 keywords along with the answer from Reader model. \"\" and \"@@\" are special tokens used by SEAL for identifying start of passage and title marker respectively. The Answer next to the question is the gold answer while others are predictions from corresponding systems." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 114, + 420, + 244, + 502 + ], + "lines": [ + { + "bbox": [ + 114, + 420, + 244, + 502 + ], + "spans": [ + { + "bbox": [ + 114, + 420, + 244, + 502 + ], + "type": "table", + "html": "
SystemBeamHits@5
SEAL159.7
SEAL562.8
1P146.5
1P550.8
", + "image_path": "58293c4b5818600e487ba00304d9f3dedfe3d565b91feed1a7d030d82bf1e5b2.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 510, + 290, + 571 + ], + "lines": [ + { + "bbox": [ + 67, + 510, + 290, + 571 + ], + "spans": [ + { + "bbox": [ + 67, + 510, + 290, + 571 + ], + "type": "text", + "content": "Table 9: Hits@5 on Open-NQ test. SEAL achieves a much higher score than 1P owning to the larger number of documents matched and re-scored. Note that only top-beam result is used for 1P while SEAL uses all beam outputs." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "text", + "content": "predicts an irrelevant search path [Slaughterhouse Five, but, EoS]. What's going on under the hood? The first keyword is incorrect lending the model into a poor search space. With the second keyword, the model is possibly looking to generate \"butcher\" but there's no such keyword in the constrained set. Ideally, the model should backtrack at this point to other candidates in the beam. However, since the set of continuations is small, renormalizing inflates the probabilities of all tokens in " + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "inline_equation", + "content": "\\mathcal{C}" + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "text", + "content": " including " + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "inline_equation", + "content": "EoS" + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "text", + "content": ", even though the true likelihood of such a sequence is very low. 
Indeed, using the language model's scores directly without any re" + } + ] + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 307, + 423, + 523, + 565 + ], + "blocks": [ + { + "bbox": [ + 307, + 423, + 523, + 565 + ], + "lines": [ + { + "bbox": [ + 307, + 423, + 523, + 565 + ], + "spans": [ + { + "bbox": [ + 307, + 423, + 523, + 565 + ], + "type": "image", + "image_path": "654724f0998c6062f18ca04f65c75b48e7ea83283c2b0243c15547b26b82e17d.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 576, + 525, + 613 + ], + "lines": [ + { + "bbox": [ + 302, + 576, + 525, + 613 + ], + "spans": [ + { + "bbox": [ + 302, + 576, + 525, + 613 + ], + "type": "text", + "content": "Figure 4: Number of matching documents in the corpus for 1P generated path in the test set. About half the examples match only a single path." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 634, + 525, + 675 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 634, + 525, + 675 + ], + "spans": [ + { + "bbox": [ + 302, + 634, + 525, + 675 + ], + "type": "text", + "content": "normalization cures this issue yielding [Slaughterhouse cases, Butcher, EoS], and this is the strategy we opt for in all our experiments." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 687, + 483, + 700 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 687, + 483, + 700 + ], + "spans": [ + { + "bbox": [ + 302, + 687, + 483, + 700 + ], + "type": "text", + "content": "A.6 Number of matching documents" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 705, + 524, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 705, + 524, + 773 + ], + "spans": [ + { + "bbox": [ + 302, + 705, + 524, + 773 + ], + "type": "text", + "content": "1P generated paths effectively narrow down the corpus, generally matching only a few documents in the corpus as illustrated in Figure 4. Note that a small fraction of paths match 0 documents due to pruning optimizations applied during inference" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "14542" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "bbox": [ + 68, + 72, + 206, + 84 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 72, + 206, + 84 + ], + "spans": [ + { + "bbox": [ + 68, + 72, + 206, + 84 + ], + "type": "text", + "content": "time detailed in Appendix A.1." 
+ } + ] + } + ], + "index": 0 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "text", + "content": "14543" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 14 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2023/2INER_ Instructive and In-Context Learning on Few-Shot Named Entity Recognition/3de52cb5-1c81-4fb7-8fab-f06b43c089a4_content_list.json b/2023/2INER_ Instructive and In-Context Learning on Few-Shot Named Entity Recognition/3de52cb5-1c81-4fb7-8fab-f06b43c089a4_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..4443adec139070e0741609ad0765cb82a09dd2f9 --- /dev/null +++ b/2023/2INER_ Instructive and In-Context Learning on Few-Shot Named Entity Recognition/3de52cb5-1c81-4fb7-8fab-f06b43c089a4_content_list.json @@ -0,0 +1,1650 @@ +[ + { + "type": "text", + "text": "2INER: Instructive and In-Context Learning on Few-Shot Named Entity Recognition", + "text_level": 1, + "bbox": [ + 124, + 80, + 875, + 122 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Jiasheng Zhang $^{1}$ Xikai Liu $^{2}$ Xinyi Lai $^{3}$ Yan Gao $^{2}$", + "bbox": [ + 258, + 129, + 742, + 147 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Shusen Wang² Yao Hu² Yiqing LIN", + "bbox": [ + 322, + 148, + 677, + 164 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{1}$ Shanghai Jiaotong University $^{2}$ Xiaohongshu Inc. 
$^{3}$ Chongqing University", + "bbox": [ + 184, + 164, + 818, + 181 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "{js.zhang,yiqing.lin}@sjtu.edu.cn", + "bbox": [ + 332, + 181, + 668, + 198 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "{xikai,yadun,haxian,xiahou}@xiaohongshu.com", + "bbox": [ + 284, + 198, + 719, + 214 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "laixinyi@cqu.edu.cn", + "bbox": [ + 403, + 215, + 596, + 230 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 260, + 252, + 339, + 267 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Prompt-based learning has emerged as a powerful technique in natural language processing (NLP) due to its ability to leverage pre-training knowledge for downstream few-shot tasks. In this paper, we propose 2INER, a novel text-to-text framework for Few-Shot Named Entity Recognition (NER) tasks. Our approach employs instruction finetuning based on InstructionNER (Wang et al., 2022) to enable the model to effectively comprehend and process task-specific instructions, including both main and auxiliary tasks. We also introduce a new auxiliary task, called Type Extraction, to enhance the model's understanding of entity types in the overall semantic context of a sentence. To facilitate in-context learning, we concatenate examples to the input, enabling the model to learn from additional contextual information. 
Experimental results on four datasets demonstrate that our approach outperforms existing Few-Shot NER methods and remains competitive with state-of-the-art standard NER algorithms.", + "bbox": [ + 141, + 279, + 460, + 605 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 114, + 619, + 260, + 634 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Named Entity Recognition (NER) has been a fundamental task of Natural Language Processing (NLP) and there are three types of sub-tasks in NER: flat NER (Tjong Kim Sang and De Meulder, 2003), nested NER (Kim et al., 2003) and discontinuous NER (Karimi et al., 2015). All three sub-tasks aim to locate named entities, extract the entity spans, and classify each span into pre-defined label categories. In terms of flat NER, which is the main focus of this paper, it can be formulated as a sequence labeling paradigm by assigning labels to each token in the sentence through token-classification models. The dominant methods include combining Pre-trained Language Models (PLMs) (Devlin et al., 2019) with a label-specific classifier (LC) (Strubell et al., 2017; Cui and Zhang, 2019). However, the fixed shape of the output LC", + "bbox": [ + 112, + 645, + 489, + 917 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "layer necessitates a consistent label set for both the training and testing data, which poses a challenge for knowledge transfer. Therefore, these models need to be trained from scratch to adapt to a new domain with a different label set, highlighting the requirement for a large amount of data for these methods.", + "bbox": [ + 507, + 253, + 884, + 363 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Due to the high cost of sequence labeling annotation in real-world scenarios, labeled data for NER is often limited. As a result, few-shot NER has gained significant attention due to its practical applications. 
Meanwhile, applying prompt-based learning (Han et al., 2021) on PLMs is an effective way to solve few-shot problems (Brown et al., 2020). PLMs can learn a lot of knowledge regarding human languages by training on large amounts of self-supervised corpora. In order to explore the potential of PLMs, prompt-based learning reformulates the downstream tasks into a text-to-text framework with an additional prompt indicating task descriptions (e.g. instruction fine-tuning (Wei et al., 2021; Chung et al., 2022; Sanh et al., 2021)). Through this approach, the model can effectively leverage the knowledge present in PLMs to enhance downstream skills without the need for additional large amounts of downstream data. This enables the model to achieve remarkable performance in few-shot settings.", + "bbox": [ + 507, + 368, + 884, + 705 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Recently, many prompt-based NER methods have emerged to address the limitations of traditional few-shot NER approaches. TemplateNER (Cui et al., 2021) treats the original sentence and the predicted template filled by entity spans as the source and target sequence, respectively, but all candidate spans must be enumerated during inference, leading to a high computational cost. BARTNER (Yan et al., 2021) proposed a pointer mechanism to unify all NER sub-tasks into one sequence-to-sequence (seq2seq) framework. 
BARTNER utilizes the raw sentence as input and outputs the pointer index and tag index, which represent the location", + "bbox": [ + 507, + 709, + 885, + 917 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "3940", + "bbox": [ + 480, + 927, + 521, + 940 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 3940-3951", + "bbox": [ + 216, + 945, + 778, + 958 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "December 6-10, 2023 ©2023 Association for Computational Linguistics", + "bbox": [ + 277, + 958, + 719, + 972 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "of the span and the corresponding label index in the category, respectively. To further adapt BARTNER for few-shot settings, LightNER (Chen et al., 2022b) proposed a lightweight tuning approach for low-resource settings by adding a unified learnable verbalizer and incorporating learnable parameters into the self-attention layers. Nonetheless, due to the fact that the pointer mechanism only outputs the indexes of entities and labels, the model encounters challenges in effectively leveraging the capabilities of PLMs to directly comprehend the semantic meaning between entities and labels. Thus instead of using a pointer mechanism, InstructionNER (Wang et al., 2022) directly generates entity spans and types in the target sequence and applies instruction fine-tuning with two auxiliary tasks to further mine the capabilities of PLMs, which leads to significant few-shot improvements.", + "bbox": [ + 112, + 84, + 492, + 375 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In terms of the auxiliary tasks in InstructionNER, they propose two auxiliary tasks from two perspectives: span recognition (Entity Extraction) and entity labeling (Entity Typing). 
However, we argue that NER can be further divided into three parts: 1) understand the relationship between the label and semantic meaning of the sentence. 2) extract the spans. 3) annotate the given spans. We believe that both span recognition and entity labeling can benefit from having a deeper understanding of the label semantics. Therefore, we propose a new auxiliary task, called Type Extraction, to help the model to acquire this ability.", + "bbox": [ + 112, + 378, + 489, + 589 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Meanwhile, none of the above methods take additional external knowledge into account. Current literature related to utilizing external knowledge in NER involves (Chen et al., 2022a) and (Lee et al., 2022a). SDNet (Chen et al., 2022a) proposes a self-describing mechanism to leverage external resources by self-describing both entity types and mentions, while (Lee et al., 2022a) uses a demonstration-based method by incorporating examples into the input but without a text-to-text framework. Therefore, to the best of our knowledge, there is currently no existing literature that combines in-context external knowledge with instruction fine-tuning for few-shot NER.", + "bbox": [ + 112, + 593, + 489, + 818 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this paper, we propose 2INER (Instructive and In-Context Learning on Few-Shot NER). We build upon the work of InstructionNER by incorporating in-context examples and a novel auxiliary task. Specifically, we first reformulate the NER tasks into a text-to-text framework and then employ T5", + "bbox": [ + 112, + 822, + 489, + 920 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "(Raffel et al., 2020) for natural language generation. In terms of the source sentence, we use instructions to distinguish between tasks by giving a comprehensive task description and include an alternative field to identify the entity type that requires detection. 
Moreover, we suggest incorporating in-context demonstration examples into the source sentence to enable the model to learn from external knowledge. For the target sentence, we use natural language to represent entity spans and types instead of a pointer mechanism. In addition to the two auxiliary tasks used in InstructionNER, we propose a new task called type extraction to further explore the potential of PLMs to understand label semantics. The Type Extraction task requires the model to identify all the entity types present in the original sentence and learn to understand the meaning of entity types at the overall semantic level of the sentence. Our contributions can be summarized as follows:", + "bbox": [ + 505, + 84, + 884, + 404 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- To utilize external knowledge, we apply demonstration-based in-context learning examples to the instruction template. The in-context examples enable the model to directly learn which spans correspond to which types from this additional information, leading to better few-shot abilities.", + "- We expand the NER capabilities by dividing them into three components instead of two. And we propose a novel auxiliary task for instruction fine-tuning, called type extraction, to address the existing gap. It can enable the model to understand the meaning of the entity types through the overall semantic level of the sentence, which will improve span recognition and entity labeling abilities.", + "- We conduct extensive experiments on four datasets, demonstrating that 2INER outperforms existing few-shot NER methods and remains competitive with SOTA standard NER algorithms." 
+ ], + "bbox": [ + 507, + 407, + 885, + 697 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2 Related Work", + "text_level": 1, + "bbox": [ + 507, + 709, + 665, + 725 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.1 Named Entity Recognition", + "text_level": 1, + "bbox": [ + 507, + 737, + 764, + 752 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Currently, NER tasks can be divided into flat NER (Tjong Kim Sang and De Meulder, 2003), nested NER (Kim et al., 2003) and discontinuous NER (Karimi et al., 2015), while in this paper, we mainly focus on the flat NER task. The current dominant method to solve flat NER is using token-level classification by turning it into a sequence labeling problem (Chiu and Nichols, 2016; Liu et al., 2019; Zhang et al., 2020; Liu et al., 2021), which applies a text encoder and CRF (Ma and Hovy, 2016) in", + "bbox": [ + 505, + 757, + 884, + 919 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "3941", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "sequence. Recently, BARTNER (Yan et al., 2021) formulates all three NER tasks into a text-to-text framework to solve them concurrently. BARTNER generates entity span sequences by a pointer-based model based on BART (Lewis et al., 2020) so that special designs of tagging schemas or span post-processing are no longer needed.", + "bbox": [ + 112, + 84, + 489, + 197 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.2 Prompt-based Learning", + "text_level": 1, + "bbox": [ + 112, + 209, + 349, + 225 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "With the emergence of GPT-3 (Brown et al., 2020), prompt-based learning has gained increasing attention. It can better stimulate the knowledge the model learned in pre-training stages and integrate different tasks together compared to the paradigm of fine-tuning a separate model for each task, especially in few-shot settings (Han et al., 2021). 
To push prompt-based learning further, instruction-based learning (Wei et al., 2021) is proposed to fine-tune the PLMs on a collection of task descriptions, which enables the model to better follow human instructions and generalize to unseen tasks with better zero-shot and few-shot abilities (Chung et al., 2022; Sanh et al., 2021).", + "bbox": [ + 112, + 230, + 489, + 455 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.3 Few-Shot NER Methods", + "text_level": 1, + "bbox": [ + 112, + 467, + 352, + 482 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "One line of work in few-shot NER is to apply contrastive learning to assign the labels by searching for the closest token (Das et al., 2022; Chen et al., 2022c), prototype (Snell et al., 2017; Fritzler et al., 2019; Ma et al., 2022b) or label semantics (Ma et al., 2022a; Huang et al., 2022) in the support set. Another line of research is prompt-based learning using a unified text-to-text framework to make full use of the PLMs' abilities. (Cui et al., 2021) applies span classification using BART and (Chen et al., 2022b; Yan et al., 2021) use a pointer mechanism to generate indexes of spans and types. (Wang et al., 2022) utilizes instruction fine-tuning and two auxiliary tasks to train T5. Meanwhile, to apply external knowledge to the model, (Chen et al., 2022a) introduces a self-describing mechanism and (Lee et al., 2022a) uses a demonstration-based method. 
Therefore, our method introduces in-context learning together with instruction fine-tuning to achieve better few-shot NER abilities, a combination that hasn't been fully discussed yet in seq2seq NER settings.", + "bbox": [ + 112, + 488, + 489, + 828 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3 Methodology", + "text_level": 1, + "bbox": [ + 112, + 839, + 263, + 854 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1 NER Definition", + "text_level": 1, + "bbox": [ + 112, + 865, + 282, + 879 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "NER aims to predict all spans in the input sentence as well as the entity types associated with the spans.", + "bbox": [ + 112, + 887, + 489, + 919 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The standard flat NER can be formulated as follows: given the input sentence containing $n$ tokens $X = [x_{1}, x_{2}, \\ldots, x_{n}]$ , the model has to predict the target sentence $Y = [l_{1}, l_{2}, \\ldots, l_{n}]$ . We use $V_{BIO}$ to denote the BIO label set, so $\\forall l_{i}, l_{i} \\in V_{BIO}$ . While in the sequence-to-sequence modeling scenario, the input sentence is still $X$ but instead of predicting $Y$ , the model predicts each entity $y_{i} = (e_{i}, s_{i})$ directly, where $s_{i}$ represents the entity span in $X$ . And $e_{i} \\in V$ represents the entity type of $s_{i}$ , where $V$ is the set of entity types.", + "bbox": [ + 507, + 84, + 884, + 261 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "More specifically, we use $l$ and $r$ to indicate the left and right boundary of an entity span in $X$ , so $s_i$ can be simplified as $s_i = x_{l:r}$ , where $x_{l:r} = [x_l, x_{l+1}, \\dots, x_r]$ . 
Therefore, the NER model has to predict each $y_i$ in $X$ , indicating that the span $s_i$ belongs to the $e_i$ entity type.", + "bbox": [ + 507, + 261, + 884, + 357 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.2 Convert NER to Text-to-text Task", + "text_level": 1, + "bbox": [ + 507, + 368, + 821, + 382 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Using language models like T5 (Raffel et al., 2020) to solve most NLP tasks in a unified text-to-text framework can not only fully utilize the knowledge the model learned in the pre-training stage but also simplify the training by using the same data format, same loss and same model architecture. Moreover, compared to using simple prompts, using instruction finetuning can further explore the capabilities of the model (Chung et al., 2022; Sanh et al., 2021). Besides, utilizing in-context learning can further enhance the model's few-shot capabilities in general (Brown et al., 2020) and specifically NER abilities (Lee et al., 2022b). Therefore, we transform the NER task into a text-to-text format and employ instruction finetuning and in-context learning to unleash the model's few-shot capabilities, as shown in Figure 1. The backbone we used is T5.", + "bbox": [ + 507, + 388, + 884, + 661 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The basic text-to-text format of the main NER tasks consists of the following three parts, which is inspired by InstructionNER (Wang et al., 2022) $^{1}$ :", + "bbox": [ + 507, + 662, + 882, + 709 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Instruction The instruction is a prompt that informs the model about the current task it needs to perform. The model is expected to follow the instructions provided within the prompt and complete the task accordingly. 
The instruction for the main NER task is: Please extract entities and their types from the Sentence, choose entity types from Alternatives.", + "bbox": [ + 507, + 719, + 884, + 846 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Sentence The sentence is the input $X$ from which entities need to be extracted.", + "bbox": [ + 507, + 856, + 882, + 887 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "1The templates of auxiliary tasks and in-context Example will be discussed in 3.3 and 3.4 respectively.", + "bbox": [ + 507, + 892, + 882, + 917 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3942", + "bbox": [ + 480, + 927, + 521, + 940 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/373832b663573beaab873631fd7634757242b782f75d1348c308506edd49c34b.jpg", + "image_caption": [ + "Figure 1: The model architecture of our proposed 2INER. The left and right sides are the source and target sentence of the model, respectively." + ], + "image_footnote": [], + "bbox": [ + 139, + 91, + 880, + 272 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Alternatives Alternatives is a list of entity types $(V)$ split by comma, from which the model needs to select the corresponding type to annotate the corresponding span. Alternatives serves as a constraint and a guiding factor, informing the model that it can only select entity types from within this list.", + "bbox": [ + 110, + 344, + 485, + 455 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In order to formulate the NER output into natural language, for each NER output $y_{i} = (e_{i}, s_{i})$ , we use the following template to convert it to text: $s_{i}$ is $a / an$ $e_{i}$ , and we use dot to concatenate all detected entity occurrences $y_{i}$ to form the output text. 
In terms of the entity types $e_{i}$ , we use natural language to represent the entity instead of adding special tokens to the model2.", + "bbox": [ + 112, + 473, + 487, + 602 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.3 Auxiliary Tasks", + "text_level": 1, + "bbox": [ + 112, + 612, + 284, + 627 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "To enhance the NER performance, in addition to the main task, we need to introduce several auxiliary tasks. In InstructionNER (Wang et al., 2022), they employed two auxiliary tasks: entity extraction and entity typing. Moreover, in this paper, a new auxiliary task called type extraction will be introduced. During training, the auxiliary task will also be in the form of text-to-text data, trained alongside the main task data.", + "bbox": [ + 110, + 632, + 487, + 775 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The auxiliary task primarily aims to improve NER capabilities from three perspectives: understand label semantic, span recognition and entity labeling, since NER can be decomposed into three steps: understand the relationship between the label and semantic meaning of the sentence, then extract", + "bbox": [ + 112, + 777, + 487, + 873 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "the spans and finally annotate the given spans. We will discuss the configuration of the auxiliary task in detail from these three perspectives.", + "bbox": [ + 507, + 344, + 882, + 392 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.3.1 Understand label semantic", + "text_level": 1, + "bbox": [ + 507, + 403, + 779, + 417 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Type Extraction The goal of the Type Extraction task is to identify all the entity types present in the original sentence. The Instruction is changed to: Please extract all entity types appeared in the Sentence. 
We will remove the Alternatives in this case, which means that there will be no constraints or hints regarding entity types in the input text, aiming to increase the difficulty of the task. And the output template is: $e_i$ type exists in the sentence. The Type Extraction task involves detecting whether a specific entity type appears in the sentence, without focusing on specific spans or associating spans with entity types. This task will assist the model in understanding the meaning of entity types at the overall semantic level of the sentence. We believe that once the model gains a deeper understanding of entity types, it will be able to comprehend the relationship between spans and types more accurately. As a result, it will enhance both span recognition and entity labeling capabilities simultaneously.", + "bbox": [ + 505, + 422, + 882, + 744 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.3.2 Span recognition", + "text_level": 1, + "bbox": [ + 507, + 755, + 702, + 770 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Entity Extraction The goal of the entity extraction task is to extract useful entity spans from the original sentence without the need for annotating the extracted spans. The instruction has been modified to: Please extract entities from the Sentence. Because the model doesn't need to type spans, the Alternatives field is deleted. And the output template has been changed to: $s_i$ is an entity word, since $e_i$ is no longer needed. Because the entity ex", + "bbox": [ + 505, + 774, + 884, + 917 + ], + "page_idx": 3 + }, + { + "type": "page_footnote", + "text": "2.e.g. 
\"Character_Name\" will be represented as \"Character Name\" instead of adding a special token named \"Character_Name\"", + "bbox": [ + 112, + 879, + 487, + 917 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "3943", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "traction task only requires the model to predict useful spans regardless of the associated entity types, this task will guide the model to extract correct spans, enhancing the span-F1 accuracy and, moreover, the overall main task F1 as well (Wang et al., 2022).", + "bbox": [ + 112, + 84, + 487, + 164 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The original InstructionNER (Wang et al., 2022) paper only employed span concatenation as the output (e.g. $s_1, s_2, s_3$ ). However, we believe that since the output of the main task consists of complete sentences with subject-verb-object structures, it would be more cohesive to follow the same pattern for the auxiliary tasks. And more structured output can fully utilize the PLMs' understanding of the task as well.", + "bbox": [ + 112, + 165, + 487, + 307 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.3.3 Entity Labeling", + "text_level": 1, + "bbox": [ + 112, + 319, + 297, + 335 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Entity Typing The entity typing task aims to type the given span with the correct label. The instruction has been modified to: Please type these entities according to the Sentence: . The Alternatives prompt and output template are the same as those in the main task. During training, the given spans in the Instruction are the exact entity spans that are labeled. 
In the entity typing task, since the spans are given, the model doesn't need to worry about the correctness of the extracted span, so the model can focus more on learning how to label the entity accurately, enhancing the main task NER ability.", + "bbox": [ + 112, + 338, + 489, + 548 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.4 In-Context Learning", + "text_level": 1, + "bbox": [ + 112, + 560, + 324, + 575 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In-context learning will be applied to further enhance few-shot NER capabilities. The main approach of in-context learning is to append Examples at the end of the input sentence, hoping that the model can directly learn which spans correspond to which types from these Examples, without the need for additional gradient updates. Besides, the in-context examples are also presented in natural language format, which closely resembles the output text format, serving as a reminder for the model about the desired format it should generate and making it easier for PLMs to understand. This similarity helps bridge the gap and facilitates the model's comprehension.", + "bbox": [ + 112, + 581, + 489, + 804 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The in-context example format in NER is inspired by (Lee et al., 2022b). All examples in this context follow the template: span is a/an entity-type. And we will concatenate an additional prompt (based on the knowledge in Examples) after the Instruction to hint the model to learn from the Examples. 
During the training stage, in-context Examples", + "bbox": [ + 112, + 806, + 489, + 919 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "will only be added to the main NER task; no Examples are added to the auxiliary tasks, which is discussed in detail in Section 5.2.", + "bbox": [ + 507, + 84, + 880, + 131 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In terms of the choice of samples in the Examples, we randomly choose spans that appear in the training set, together with their corresponding entity types, to create Examples. Since we are uncertain about the entity types present in the sentence, we provide at least one example for each entity type in the Alternatives list within the Examples. The number of samples per entity type in the Examples is also kept equal $^3$ (e.g., the MIT Movie dataset has 12 entity types; if we set the number of examples to 5, there are 5 examples for each entity type, resulting in a total of $5 \\times 12 = 60$ examples in the field).", + "bbox": [ + 507, + 133, + 884, + 341 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.5 Inference", + "text_level": 1, + "bbox": [ + 507, + 354, + 630, + 368 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "During inference, we first use the template of the main NER task to wrap the input sentence $X$, and then feed it to 2INER to get the predicted output text. In the Example field, the example spans are sampled from the training support set, so the model won't see the ground truth in the Examples during evaluation, avoiding information leakage. After the output text is generated, a decoding strategy is applied to get the predicted entities $(e_i, s_i)$: (1) We split the whole output text on periods to obtain individual sub-texts. (2) We split each sub-text on \"is a\" or \"is an\" if either can be found. (3) The span is the part before \"is a/an\" and the entity type is the part after it. 
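The three-step decoding, together with the validity check on the resulting pairs, can be sketched as follows (our paraphrase of the procedure as described, not the authors' released code):

```python
import re

def decode_output(output_text, sentence, entity_types):
    # (1) split the generated text on periods into sub-texts;
    # (2) split each sub-text on "is a" / "is an";
    # (3) take the span before the match and the entity type after it,
    # then keep only pairs whose span occurs in the input sentence X
    # and whose type belongs to the allowed set V.
    entities = []
    for sub_text in output_text.split("."):
        parts = re.split(r"\bis an?\b", sub_text, maxsplit=1)
        if len(parts) != 2:  # match failure: skip this sub-text
            continue
        span, etype = parts[0].strip(), parts[1].strip()
        if span in sentence and etype in entity_types:
            entities.append((span, etype))
    return entities
```

Splitting on the first "is a/an" occurrence keeps the decoding deterministic even when the generated sub-text is noisy; pairs that fail either membership test are simply dropped.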
Once we get $(e_i, s_i)$, we check whether $s_i$ appears in the input sentence $X$ and $e_i$ belongs to the set of entity types $V$; a pair that fails this check is not a valid entity and is deleted. If any of the three steps results in a match failure, the sub-text is skipped.", + "bbox": [ + 507, + 376, + 882, + 697 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4 Experiment", + "text_level": 1, + "bbox": [ + 507, + 709, + 648, + 726 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.1 Dataset", + "text_level": 1, + "bbox": [ + 507, + 737, + 616, + 751 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We conduct NER experiments in standard and low-resource settings. For the rich-resource domain, we use CoNLL-2003 (Tjong Kim Sang and De Meulder, 2003), and for the low-resource domain, we use three datasets: MIT Movie Review, MIT Restaurant Review (Liu et al., 2013) and Airline Travel Information Systems (ATIS) (Hakkani-", + "bbox": [ + 507, + 758, + 884, + 871 + ], + "page_idx": 4 + }, + { + "type": "page_footnote", + "text": "3We refer to \"the number of samples per entity type\" as \"the number of examples\" in the rest of the paper for convenience.", + "bbox": [ + 507, + 879, + 885, + 917 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "3944", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Tür et al., 2016), following (Wang et al., 2022; Chen et al., 2022b; Cui et al., 2021; Yan et al., 2021).", + "bbox": [ + 112, + 84, + 489, + 131 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.2 Implementation Settings", + "text_level": 1, + "bbox": [ + 112, + 146, + 349, + 161 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "In the few-shot NER scenario, to guarantee that each entity type has an equal number of instances in the training set, we cannot directly sample $k$ sentences for each entity type, because a single
sentence may contain multiple entities, so the actual shot count would exceed $k$. Following (Wang et al., 2022), we instead apply a greedy sampling strategy (Yang and Katiyar, 2020) to sample the few-shot training set for each setting; due to the randomness of sampling, we repeat each experiment 3 times. We use T5-large as the backbone model for a fair comparison with (Wang et al., 2022). For the number of examples in the in-context Example field, we set the number to 5 for the MIT Movie and MIT Restaurant datasets and 1 for the ATIS dataset by default. We add the in-context Example field only to the main task and do not include it in the auxiliary tasks. The ratio of auxiliary tasks is set to $1.0$. We set the batch size to $2/4/8$ and the learning rate to $2\\mathrm{e}-5/5\\mathrm{e}-5$ for the $10/20/50$ Shot settings respectively, and set the batch size to 32 and the learning rate to $1\\mathrm{e}-4$ for the abundant-data setting. The optimizer is Adam and the beam size is set to 2. For evaluation, we use the F1 score as the metric for NER.", + "bbox": [ + 115, + 168, + 489, + 552 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "The name InstructionNER in the tables means training with main-task data only, indicating the base model, and the subscript words in the tables indicate additions to the base model: +ET, +EE, +TE, +EX mean adding Entity Typing, Entity Extraction, Type Extraction, and in-context examples, respectively. We name InstructionNER+ET,EE,TE,EX 2INER, which is our final model.", + "bbox": [ + 110, + 555, + 487, + 700 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.3 Standard NER Setting", + "text_level": 1, + "bbox": [ + 112, + 713, + 337, + 728 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We use the CoNLL-2003 dataset to conduct the standard NER experiment. We combine the train and validation sets as described in (Yan et al., 2021) to train the model. 
The result is in Table 1, which shows that even though our method mainly focuses on few-shot NER settings, it remains competitive with", + "bbox": [ + 112, + 734, + 487, + 832 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/40d453eddc62f1ffaa3e37262e4d8b138dda7d3e43144b071e0a0090ab6cdbc6.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
ModelF1Span-F1
(Yang et al., 2018)90.77-
(Ma and Hovy, 2016)91.21-
(Gui et al., 2020)92.02-
(Yamada et al., 2020)*94.3092.40
(Li et al., 2020a)†-92.87
(Yu et al., 2020a)‡-92.50
LC-BERT91.73-
LC-BART90.60-
TemplateNER91.90-
BARTNER-93.24
LightNER92.93-
2INER (InstructionNER+ET,EE,TE,EX)90.7193.93
", + "bbox": [ + 510, + 80, + 885, + 244 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Table 1: F1 and Span-F1 (%) on CoNLL-2003 Standard NER setting. Our method is competitive with SOTA algorithm and even outperform BARTNER (Yan et al., 2021) in span-F1. \" * \" indicates training on external data. \"†\" indicates the reproduction by (Yan et al., 2021). \"‡\" indicates the reproduction with only the sentence-level context by (Yan et al., 2021).", + "bbox": [ + 507, + 253, + 884, + 354 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "SOTA algorithm under standard NER setting and even outperform BARTNER (Yan et al., 2021) in span-F1, which is designed for rich-resource NER settings. The performances of 2INER in data abundant nested and discontinuous NER settings are in Appendix A.", + "bbox": [ + 507, + 381, + 882, + 478 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.4 Few-Shot NER Setting", + "text_level": 1, + "bbox": [ + 507, + 493, + 732, + 508 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Under Few-Shot NER setting, we only use K-Shot training samples to finetune our model and the results are in Table 2. According to the table, we can find that: (1) Our models consistently outperform InstructionNER as well as other baselines on all three datasets under 10/20/50 Shot settings (except 50Shot in ATIS, which is slightly lower than BARTNER). Especially in MIT Movie dataset, our models have $7.33\\%$ , $6.76\\%$ , $5.39\\%$ improvements compared to InstructionNER under 10/20/50 settings. (2) Our 10Shot model even outperforms TempleNER's 50Shot model by $20.73\\%$ and $7.06\\%$ in MIT Movie and MIT Restaurant respectively, which highlights the superiority and capability of our model. (3) We have the same finding as InstructionNER (Wang et al., 2022) that F1 improvements are much more significant on MIT Movie than on MIT Restaurant ( $7.33\\% / 6.76\\% / 5.39\\%$ v.s. 
$6.86\\% / 3.24\\% / 3.3\\%$ under the 10/20/50 Shot settings), which indicates that although MIT Movie has more entity types, the text-to-text framework and instruction tuning can better utilize pre-training knowledge, and through in-context learning the model can learn more about the relationships between entities. (4) On the ATIS dataset, the improve", + "bbox": [ + 505, + 517, + 885, + 919 + ], + "page_idx": 5 + }, + { + "type": "page_footnote", + "text": "4https://huggingface.co/t5-large", + "bbox": [ + 134, + 843, + 381, + 857 + ], + "page_idx": 5 + }, + { + "type": "page_footnote", + "text": "ATIS has 79 entity types, so we set the number to 1 to avoid excessively long token lengths.", + "bbox": [ + 115, + 856, + 485, + 881 + ], + "page_idx": 5 + }, + { + "type": "page_footnote", + "text": "The data size ratio between the main task and each auxiliary task. 1.0 means that each sample is extended into 4 samples: one for the main task, and one each for EE, ET, and TE.", + "bbox": [ + 115, + 881, + 485, + 917 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "3945", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/28f1fbe800713b94355e78ec56f8f504b6928eb426379cc9871af2b667ed6bb4.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
ModelsMIT MovieMIT RestaurantATIS
102050102050102050
LC-BERT25.242.249.621.839.452.744.176.790.7
LC-BART10.227.544.26.38.551.342.072.787.5
TemplateNER37.348.552.246.057.158.771.779.492.6
BARTNER*41.154.067.744.056.064.077.786.193.4
LightNER41.757.873.148.558.062.076.385.392.8
InstructionNER64.4 (±2.1)70.0 (±0.3)74.1 (±1.2)58.7 (±1.2)65.5 (±1.4)71.2 (±1.1)90.14 (±0.12)†91.22 (±0.19)†92.53 (±0.14)†
InstructionNER+ET,EE65.6 (±3.0)70.1 (±1.9)74.7 (±0.3)58.9 (±0.8)66.1 (±0.9)71.1 (±0.9)90.04 (±0.02)†91.46 (±0.23)†92.62 (±0.04)†
InstructionNER+EX72.56 (±1.01)74.99 (±0.27)78.61 (±0.37)64.07 (±1.25)68.2 (±0.11)74.38 (±0.19)89.17 (±0.2)91.33 (±0.05)92.65 (±0.18)
InstructionNER+TE72.0 (±0.25)76.55 (±0.2)80.02 (±0.26)65.52 (±1.35)68.67 (±0.95)73.98 (±0.27)90.77 (±0.6)91.85 (±0.05)92.69 (±0.1)
InstructionNER+ET,EE,TE,EX72.93 (±0.91)76.86 (±0.53)80.09 (±0.22)65.76 (±0.47)69.34 (±0.81)74.4 (±0.4)90.47 (±0.26)92.11 (±0.09)92.83 (±0.15)
", + "bbox": [ + 115, + 80, + 897, + 218 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Table 2: The F1(\\%) on three dataset under 10/20/50 Shot settings. The bold number means the best F1 across all models and the numbers in brackets means the standard deviation. The underline numbers mean the best results in our experiments. The \"+\" numbers mean the results of our reproduction. \"* means the reproduction by InstructionNER (Wang et al., 2022).", + "bbox": [ + 112, + 227, + 884, + 285 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "ment of our model is less significant compared to other two datasets. We argue that this is because ATIS contains 79 entity types and even if we only provide one sample span for each entity types in in-context Example field, the average token length is 1099 compared to 368 with or without examples, where the token length of the Alternative filed is 327. So the actual input Sentence $X$ only accounts for $3.7\\%$ of the total token length, which increases the difficulty for the model to extract key information from lengthy sentences. So too many entity types may potentially reduce model improvements.", + "bbox": [ + 112, + 310, + 489, + 502 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.5 Ablation Study", + "text_level": 1, + "bbox": [ + 112, + 514, + 280, + 529 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "In order to find out the influence of our proposed type extraction task and in-context examples on model's few-shot abilities, we conduct ablation studies in Figure 2. The results indicate that adding type extraction task and in-context examples can further enhance the model's few-shot NER abilities. We set InstructionNER as the baseline here which only trains on main-task data without any auxiliary tasks. Then we add type extraction task (InstructionNER+TE) or in-context examples (InstructionNER+EX) respectively on the baseline model to explore their influences. 
The results in Figure 2 show that under the 10/20/50 Shot settings in few-shot NER, the type extraction task achieves average F1 improvements of $7.21\\%$ , $4.86\\%$ , $4.35\\%$ , and in-context examples achieve average F1 improvements of $6.76\\%$ , $3.84\\%$ , $3.84\\%$ on the MIT Movie and MIT Restaurant datasets.", + "bbox": [ + 112, + 533, + 487, + 822 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Moreover, adding the type extraction task can greatly increase the Span-F1 as well. Because Span-F1 indicates the model's ability to locate", + "bbox": [ + 112, + 824, + 487, + 873 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/ccbd7a22ac82ef0bf7eff5a54809f4a260d406dd5983e0a34343f0d4560b51d5.jpg", + "image_caption": [ + "Figure 2: F1 and Span-F1 $(\\%)$ on MIT Movie and MIT Restaurant across 10/20/50 Shot settings with different task combinations. The deep and light colors indicate F1 and Span-F1 respectively." + ], + "image_footnote": [], + "bbox": [ + 515, + 310, + 875, + 470 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "spans, the results reveal that through training on the type extraction task, span recognition can benefit from a deeper understanding of the labels at the overall semantic level of the sentence. 
This proves the effectiveness of the three steps of NER abilities we proposed in Section 3.3, and shows that the type extraction task can simultaneously improve span recognition and entity labeling abilities through an understanding of label semantics.", + "bbox": [ + 507, + 565, + 885, + 709 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5 Analysis", + "text_level": 1, + "bbox": [ + 507, + 724, + 618, + 740 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.1 Increase Example Number", + "text_level": 1, + "bbox": [ + 507, + 751, + 766, + 768 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "In this section, we focus on how the number of examples in the in-context Example field influences model performance. We sequentially set the number of examples to 1, 3, 5, 10, and 15, and train corresponding models to observe the change in F1 on the MIT Restaurant dataset. Here we train our model with the main task and in-context examples, without any auxiliary tasks (InstructionNER+EX). The results are in Table 3.", + "bbox": [ + 507, + 774, + 884, + 917 + ], + "page_idx": 6 + }, + { + "type": "page_footnote", + "text": "7We tried using special tokens to represent the entity types, but the F1 is slightly lower than without special tokens, and the proportion of $X$ to the total number of tokens is $4.5\\%$.", + "bbox": [ + 112, + 879, + 487, + 917 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "3946", + "bbox": [ + 480, + 928, + 521, + 940 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/efc06f9375437155a196c6ed767da44b29109efda6cf4cb5025ddf0b9e98f199.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
InstructionNER+EX ExamplesMIT Restaurant
20 Shot50 Shot
065.5 (±1.4)71.2 (±1.1)
167.74 (±0.22)73.89 (±0.15)
367.89 (±0.3)74.15 (±0.39)
568.2 (±0.11)74.38 (±0.19)
1069.47 (±0.35)74.41 (±0.18)
1569.52 (±0.16)74.64 (±0.49)
", + "bbox": [ + 137, + 80, + 463, + 195 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "As the number of examples increases, F1 score continues to increase and the largest improvement in F1 score occurs when going from zero examples to one example. As the number of examples increases further, the F1 will continue to increase but the rate of improvement gradually slows down. This suggests that when only one in-context example is provided, the model can quickly learn the specific meanings of each entity type from the example. While more examples may lead to repetitive cues to the model so a balance should be made between model performance and computational cost.", + "bbox": [ + 112, + 302, + 489, + 495 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5.2 Effect of In-Context Example on Auxiliary task", + "text_level": 1, + "bbox": [ + 112, + 508, + 485, + 539 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this section, we will discuss whether to add in-context examples on auxiliary task. The model is 2INER (InstructionNER+ET,EE,TE,EX) and we will compare two settings: add examples only on main-task, add examples on main-task as well as three auxiliary tasks. The results in Table 4 indicate that adding examples on auxiliary task will slightly decrease the F1 performance. Because adding examples to auxiliary tasks may potentially reduce their difficulty and make it too easy for the model, thereby reducing the auxiliary tasks' effectiveness in aiding the main task. So adding examples only to the main task is a better approach.", + "bbox": [ + 112, + 546, + 489, + 755 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5.3 Increase Shot", + "text_level": 1, + "bbox": [ + 112, + 768, + 267, + 782 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this section, we will discuss the model performance under relatively abundant settings. 
We increase the shots to 100, 200 and 500 on the MIT Movie and MIT Restaurant datasets using 2INER (InstructionNER+ET,EE,TE,EX). As shown in Table 5, compared to InstructionNER, 2INER achieves $5.43\\%$ , $3.98\\%$ , $3.19\\%$ improvements in F1 under the 100/200/500 Shot settings respectively.", + "bbox": [ + 112, + 790, + 489, + 919 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/4be07f0e7b5395e6ccb2a20ce1f412ef23ef29dee2051b238deb08b31040cb4c.jpg", + "table_caption": [ + "Table 3: F1 scores (%) on the MIT Restaurant dataset while varying the number of examples using InstructionNER+EX. Bold numbers indicate the best F1 and the numbers in brackets mean the standard deviation." + ], + "table_footnote": [], + "table_body": "
MIT Restaurant
10 Shot20 Shot50 Shot
2INER65.2669.2774.2
Examples on all tasks(±0.49)(±0.89)(±0.45)
2INER65.7669.3474.4
Examples only on Main-Task(±0.47)(±0.81)(±0.4)
", + "bbox": [ + 527, + 80, + 863, + 168 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/87d6cd817fcf3ed5d42bad417f5dfbf3862d451896fdc56b3645e3ef318ad6db.jpg", + "table_caption": [ + "Table 4: The comparison between adding in-context examples only on main-task and on all tasks including auxiliary tasks. Bold numbers indicate the best F1 and the numbers in brackets means the standard deviation." + ], + "table_footnote": [], + "table_body": "
ModelsMIT MovieMIT Restaurant
100200500100200500
LC-BERT50.759.374.453.557.461.3
LC-BART47.554.264.152.256.360.2
TemplateNER56.362.074.960.162.865.0
BARTNER*70.174.682.665.374.475.7
LightNER78.080.684.870.875.580.2
InstructionNER+ETEE74.378.482.372.775.576.6
2INER81.383.5486.1676.5778.3179.11
", + "bbox": [ + 510, + 249, + 885, + 376 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Table 5: The F1 (\\%) under relatively abundant settings. \" * \" indicates the reproduction results by (Wang et al., 2022). Bold numbers indicate the best F1.", + "bbox": [ + 507, + 386, + 882, + 429 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "And 2INER outperforms LightNER in all settings except 500-shots in MIT Restaurant, which shows that 2INER has great NER abilities under data abundant scenario as well. We argue that the in-context Example field may help the model to learn from more diverse samples from the abundant training set and turn the general knowledge into specialized capabilities, leading to the improvement in F1.", + "bbox": [ + 507, + 457, + 882, + 586 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6 Conclusion", + "text_level": 1, + "bbox": [ + 507, + 601, + 640, + 615 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this paper, we propose 2INER for few-shot NER using both instruction finetuning and in-context learning by converting NER into a text-to-text framework. Based on InstructionNER, we create a template to concatenate task-specific instructions, input sentence and entity alternatives to make full use of the pre-training knowledge. Besides, we decompose NER into three steps and introduce another auxiliary tasks, called type extraction, to help the model better understand the general semantic meaning of the entity types, which can improve both span recognition and entity labeling abilities. Moreover, we apply the in-context examples to enable the model to learn from additional contextual information, enhancing few-shot abilities. 
Multiple experiments on four NER datasets prove 2INER's effectiveness in the few-shot NER scenario, with our model consistently outperforming other baselines.", + "bbox": [ + 505, + 629, + 884, + 917 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "3947", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Limitations", + "text_level": 1, + "bbox": [ + 114, + 84, + 220, + 99 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "One limitation of our work is the extensive length of the Example and Alternatives fields when there are too many entity types. While incorporating in-context examples in the input can improve few-shot NER performance, it poses a challenge when the Example field becomes too long, because we add at least one example for each potential entity type, especially when the Alternatives list contains numerous entity types. This results in smaller improvement gains and higher computational costs. To address this issue, larger PLMs such as the recently proposed LLaMA (Touvron et al., 2023) could be explored in future research as a possible resolution.", + "bbox": [ + 112, + 112, + 492, + 336 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Ethics Statement", + "text_level": 1, + "bbox": [ + 114, + 351, + 265, + 366 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "In consideration of ethical concerns, we make the following statements: (1) All of our experiments are conducted using existing datasets sourced from publicly available scientific papers. (2) Our few-shot methods don't require extensive computational resources. 
(3) Our text generation models generate text based on existing templates, so they won't generate harmful sentences.", + "bbox": [ + 112, + 379, + 489, + 507 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 114, + 537, + 213, + 551 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901.", + "Jiawei Chen, Qing Liu, Hongyu Lin, Xianpei Han, and Le Sun. 2022a. Few-shot named entity recognition with self-describing networks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5711-5722, Dublin, Ireland. Association for Computational Linguistics.", + "Xiang Chen, Lei Li, Shumin Deng, Chuanqi Tan, Changliang Xu, Fei Huang, Luo Si, Huajun Chen, and Ningyu Zhang. 2022b. LightNER: A lightweight tuning paradigm for low-resource NER via pluggable prompting. In Proceedings of the 29th International Conference on Computational Linguistics, pages 2374-2387, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.", + "Yanru Chen, Yanan Zheng, and Zhilin Yang. 2022c. Prompt-based metric learning for few-shot ner. arXiv preprint arXiv:2211.04337." + ], + "bbox": [ + 115, + 561, + 489, + 917 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Jason P.C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics, 4:357-370.", + "Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. 
Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416.", + "Leyang Cui, Yu Wu, Jian Liu, Sen Yang, and Yue Zhang. 2021. Template-based named entity recognition using BART. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, pages 1835–1845, Online. Association for Computational Linguistics.", + "Leyang Cui and Yue Zhang. 2019. Hierarchically-refined label attention network for sequence labeling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4115-4128, Hong Kong, China. Association for Computational Linguistics.", + "Xiang Dai, Sarvnaz Karimi, Ben Hachey, and Cecile Paris. 2020. An effective transition-based model for discontinuous NER. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5860-5870, Online. Association for Computational Linguistics.", + "Sarkar Snigdha Sarathi Das, Arzoo Katiyar, Rebecca Passonneau, and Rui Zhang. 2022. CONTaiNER: Few-shot named entity recognition via contrastive learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6338-6353, Dublin, Ireland. Association for Computational Linguistics.", + "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", + "Alexander Fritzler, Varvara Logacheva, and Maksim Kretov. 2019. Few-shot classification in named entity recognition task. 
In Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, pages 993-1000.", + "Tao Gui, Jiacheng Ye, Qi Zhang, Zhengyan Li, Zichu Fei, Yeyun Gong, and Xuanjing Huang. 2020. Uncertainty-aware label refinement for sequence labeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2316-2326, Online. Association for Computational Linguistics." + ], + "bbox": [ + 510, + 85, + 884, + 917 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "3948", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Dilek Hakkani-Tur, Gokhan Tur, Asli Celikyilmaz, YunNung Chen, Jianfeng Gao, Li Deng, and Ye-Yi Wang. 2016. Multi-domain joint semantic frame parsing using bi-directional rnn-lstm. In Interspeech, pages 715-719.", + "Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, Yuqi Huo, Jiezhong Qiu, Yuan Yao, Ao Zhang, Liang Zhang, et al. 2021. Pre-trained models: Past, present and future. AI Open, 2:225-250.", + "Yucheng Huang, Kai He, Yige Wang, Xianli Zhang, Tieliang Gong, Rui Mao, and Chen Li. 2022. COPNER: Contrastive learning with prompt guiding for few-shot named entity recognition. In Proceedings of the 29th International Conference on Computational Linguistics, pages 2515-2527, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.", + "Sarvnaz Karimi, Alejandro Metke-Jimenez, Madonna Kemp, and Chen Wang. 2015. Cadec: A corpus of adverse drug event annotations. Journal of biomedical informatics, 55:73-81.", + "J-D Kim, Tomoko Ohta, Yuka Tateisi, and Jun'ichi Tsujii. 2003. Genia corpus—a semantically annotated corpus for bio-textmining. Bioinformatics, 19(suppl_1):i180-i182.", + "Dong-Ho Lee, Akshen Kadakia, Kangmin Tan, Mahak Agarwal, Xinyu Feng, Takashi Shibuya, Ryosuke Mitani, Toshiyuki Sekiya, Jay Pujara, and Xiang Ren. 2022a. 
Good examples make a faster learner: Simple demonstration-based learning for low-resource NER. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2687-2700, Dublin, Ireland. Association for Computational Linguistics.", + "Dong-Ho Lee, Akshen Kadakia, Kangmin Tan, Mahak Agarwal, Xinyu Feng, Takashi Shibuya, Ryosuke Mitani, Toshiyuki Sekiya, Jay Pujara, and Xiang Ren. 2022b. Good examples make a faster learner: Simple demonstration-based learning for low-resource NER. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2687-2700, Dublin, Ireland. Association for Computational Linguistics.", + "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.", + "Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020a. A unified MRC framework for named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5849-5859, Online. Association for Computational Linguistics." + ], + "bbox": [ + 115, + 85, + 485, + 917 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020b. A unified MRC framework for named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5849-5859, Online. Association for Computational Linguistics.", + "Jingjing Liu, Panupong Pasupat, Scott Cyphers, and Jim Glass. 2013. 
Asgard: A portable architecture for multilingual dialogue systems. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 8386-8390. IEEE.", + "Kun Liu, Yao Fu, Chuanqi Tan, Mosha Chen, Ningyu Zhang, Songfang Huang, and Sheng Gao. 2021. Noisy-labeled NER with confidence estimation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3437-3445, Online. Association for Computational Linguistics.", + "Yijin Liu, Fandong Meng, Jinchao Zhang, Jinan Xu, Yufeng Chen, and Jie Zhou. 2019. GCDT: A global context enhanced deep transition architecture for sequence labeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2431-2441, Florence, Italy. Association for Computational Linguistics.", + "Jie Ma, Miguel Ballesteros, Srikanth Doss, Rishita Anubhai, Sunil Mallya, Yaser Al-Onaizan, and Dan Roth. 2022a. Label semantics for few shot named entity recognition. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1956-1971, Dublin, Ireland. Association for Computational Linguistics.", + "Tingting Ma, Huiqiang Jiang, Qianhui Wu, Tiejun Zhao, and Chin-Yew Lin. 2022b. Decomposed meta-learning for few-shot named entity recognition. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1584-1596, Dublin, Ireland. Association for Computational Linguistics.", + "Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064-1074, Berlin, Germany. Association for Computational Linguistics.", + "Alejandro Metke-Jimenez and Sarvnaz Karimi. 2016. Concept identification and normalisation for adverse drug event discovery in medical forums. 
In BMDID@ ISWC.", + "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485-5551." + ], + "bbox": [ + 510, + 85, + 880, + 917 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "3949", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. 2021. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207.", + "Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. Advances in neural information processing systems, 30.", + "Emma Strubell, Patrick Verga, David Belanger, and Andrew McCallum. 2017. Fast and accurate entity recognition with iterated dilated convolutions. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2670-2680, Copenhagen, Denmark. Association for Computational Linguistics.", + "Buzhou Tang, Jianglu Hu, Xiaolong Wang, and Qingcai Chen. 2018. Recognizing continuous and discontinuous adverse drug reaction mentions from social media using lstm-crf. Wireless Communications & Mobile Computing (Online), 2018.", + "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147.", + "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. 
Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.", + "Jue Wang, Lidan Shou, Ke Chen, and Gang Chen. 2020. Pyramid: A layered model for nested named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5918-5928, Online. Association for Computational Linguistics.", + "Liwen Wang, Rumei Li, Yang Yan, Yuanmeng Yan, Sirui Wang, Wei Wu, and Weiran Xu. 2022. Instructionner: A multi-task instruction-based generative framework for few-shot ner. arXiv preprint arXiv:2203.03903.", + "Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652.", + "Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto. 2020. LUKE: Deep contextualized entity representations with entity-aware self-attention. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6442-6454, Online. Association for Computational Linguistics." + ], + "bbox": [ + 115, + 85, + 489, + 917 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, and Xipeng Qiu. 2021. A unified generative framework for various NER subtasks. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5808-5822, Online. Association for Computational Linguistics.", + "Jie Yang, Shuailong Liang, and Yue Zhang. 2018. Design challenges and misconceptions in neural sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3879-3889, Santa Fe, New Mexico, USA. Association for Computational Linguistics.", + "Yi Yang and Arzoo Katiyar. 2020. 
Simple and effective few-shot named entity recognition with structured nearest neighbor learning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6365-6375, Online. Association for Computational Linguistics.", + "Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020a. Named entity recognition as dependency parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6470-6476, Online. Association for Computational Linguistics.", + "Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020b. Named entity recognition as dependency parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6470-6476, Online. Association for Computational Linguistics.", + "Ningyu Zhang, Shumin Deng, Zhen Bi, Haiyang Yu, Jiacheng Yang, Mosha Chen, Fei Huang, Wei Zhang, and Huajun Chen. 2020. OpenUE: An open toolkit of universal extraction from text. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 1-8, Online. Association for Computational Linguistics." + ], + "bbox": [ + 510, + 85, + 884, + 657 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "A Appendix", + "text_level": 1, + "bbox": [ + 510, + 669, + 633, + 686 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "In this section, we will discuss the remaining two NER settings: nested NER and discontinuous NER. Because the text-to-text structure of our proposed method can be easily adapted to all three NER settings, which will result in a unified structure for solving NER problems. Here, we mainly discuss standard NER scenarios with abundant data.", + "bbox": [ + 510, + 694, + 884, + 804 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "For data abundant nested NER, We conduct experiments on Genia (Kim et al., 2003). 
We follow BARTNER (Yan et al., 2021) in using five entity types and split the train, dev, and test sets as 8.1:0.9:1.0. The results are in Table 6.", + "bbox": [ + 510, + 806, + 884, + 885 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "For data abundant discontinuous NER, we conduct experiments on CADEC (Karimi et al., 2015).", + "bbox": [ + 510, + 887, + 884, + 917 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "3950", + "bbox": [ + 480, + 928, + 521, + 940 + ], + "page_idx": 10 + }, + { + "type": "table", + "img_path": "images/aa1ce3f42503e4314d4568f3745c8ccb113375eb4bdf63eb875421d10a6ab47a.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Genia: Model | P | R | F
(Li et al., 2020b) [BERT-Large]† | 81.25 | 76.36 | 78.72
(Yu et al., 2020b) [BERT-Large]† | 79.43 | 78.32 | 78.87
(Wang et al., 2020) [BERT-Large] | 79.45 | 78.94 | 79.19
BARTNER (Yan et al., 2021) | 78.87 | 79.6 | 79.23
2INER | 82.9 | 80.74 | 81.81
", + "bbox": [ + 115, + 80, + 502, + 165 + ], + "page_idx": 11 + }, + { + "type": "table", + "img_path": "images/d493dd0845cbc10bf9c8e07a0261bca83ad302c5b4731f8980439a864f39fefe.jpg", + "table_caption": [ + "Table 6: Span-F1 (%) on the Genia nested data abundant NER setting. The \"†\" means the reproduction by (Yan et al., 2021)." + ], + "table_footnote": [], + "table_body": "
CADEC: Model | P | R | F
(Metke-Jimenez and Karimi, 2016) | 64.4 | 56.5 | 60.2
(Tang et al., 2018) | 67.8 | 64.9 | 66.3
(Dai et al., 2020) [ELMo] | 68.9 | 69.0 | 69.0
BARTNER (Yan et al., 2021) | 70.08 | 71.21 | 70.64
2INER | 71.18 | 75.26 | 73.16
", + "bbox": [ + 115, + 230, + 515, + 313 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Table 7: Span-F1 (%) on the CADEC discontinuous data abundant NER setting.", + "bbox": [ + 112, + 323, + 485, + 351 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Following BARTNER (Yan et al., 2021), since only the Adverse Drug Events (ADEs) entities include discontinuous data, only these entities were considered. The results are in Table 7.", + "bbox": [ + 112, + 376, + 487, + 441 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "The experiment settings are the same as those for flat NER. We use T5-large as the backbone model and report span-level F1. The results show that in the data abundant nested and discontinuous NER settings, our proposed method greatly outperforms BARTNER (Yan et al., 2021) and other SOTA methods, which demonstrates that our method does have the potential to handle different NER settings in a unified framework.", + "bbox": [ + 112, + 442, + 489, + 586 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "3951", + "bbox": [ + 480, + 928, + 517, + 940 + ], + "page_idx": 11 + } +] \ No newline at end of file diff --git a/2023/2INER_ Instructive and In-Context Learning on Few-Shot Named Entity Recognition/3de52cb5-1c81-4fb7-8fab-f06b43c089a4_model.json b/2023/2INER_ Instructive and In-Context Learning on Few-Shot Named Entity Recognition/3de52cb5-1c81-4fb7-8fab-f06b43c089a4_model.json new file mode 100644 index 0000000000000000000000000000000000000000..1f5f19856aab973624365af7e05629e7f9ad8335 --- /dev/null +++ b/2023/2INER_ Instructive and In-Context Learning on Few-Shot Named Entity Recognition/3de52cb5-1c81-4fb7-8fab-f06b43c089a4_model.json @@ -0,0 +1,2160 @@ +[ + [ + { + "type": "title", + "bbox": [ + 0.125, + 0.082, + 0.876, + 0.123 + ], + "angle": 0, + "content": "2INER: Instructive and In-Context Learning on Few-Shot Named Entity Recognition" + }, + { + "type": "text", + "bbox": [ + 0.259, + 0.13, + 0.744, + 0.148 + ], + 
"angle": 0, + "content": "Jiasheng Zhang\\(^{1}\\) Xikai Liu\\(^{2}\\) Xinyi Lai\\(^{3}\\) Yan Gao\\(^{2}\\)" + }, + { + "type": "text", + "bbox": [ + 0.323, + 0.149, + 0.678, + 0.165 + ], + "angle": 0, + "content": "Shusen Wang² Yao Hu² Yiqing LIN" + }, + { + "type": "text", + "bbox": [ + 0.185, + 0.165, + 0.82, + 0.183 + ], + "angle": 0, + "content": "\\(^{1}\\)Shanghai Jiaotong University \\(^{2}\\)Xiaohongshu Inc. \\(^{3}\\)Chongqing University" + }, + { + "type": "text", + "bbox": [ + 0.334, + 0.183, + 0.669, + 0.199 + ], + "angle": 0, + "content": "{js.zhang,yiqing.lin}@sjtu.edu.cn" + }, + { + "type": "text", + "bbox": [ + 0.285, + 0.199, + 0.72, + 0.215 + ], + "angle": 0, + "content": "{xikai,yadun,haxian,xiahou}@xiaohongshu.com" + }, + { + "type": "text", + "bbox": [ + 0.405, + 0.216, + 0.598, + 0.231 + ], + "angle": 0, + "content": "laixinyi@cqu.edu.cn" + }, + { + "type": "title", + "bbox": [ + 0.261, + 0.253, + 0.341, + 0.268 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.142, + 0.28, + 0.461, + 0.606 + ], + "angle": 0, + "content": "Prompt-based learning has emerged as a powerful technique in natural language processing (NLP) due to its ability to leverage pre-training knowledge for downstream few-shot tasks. In this paper, we propose 2INER, a novel text-to-text framework for Few-Shot Named Entity Recognition (NER) tasks. Our approach employs instruction finetuning based on InstructionNER (Wang et al., 2022) to enable the model to effectively comprehend and process task-specific instructions, including both main and auxiliary tasks. We also introduce a new auxiliary task, called Type Extraction, to enhance the model's understanding of entity types in the overall semantic context of a sentence. To facilitate in-context learning, we concatenate examples to the input, enabling the model to learn from additional contextual information. 
Experimental results on four datasets demonstrate that our approach outperforms existing Few-Shot NER methods and remains competitive with state-of-the-art standard NER algorithms." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.62, + 0.262, + 0.636 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.646, + 0.49, + 0.919 + ], + "angle": 0, + "content": "Named Entity Recognition (NER) has been a fundamental task of Natural Language Processing (NLP) and there are three types of sub-tasks in NER: flat NER (Tjong Kim Sang and De Meulder, 2003), nested NER (Kim et al., 2003) and discontinuous NER (Karimi et al., 2015). All three sub-tasks aim to locate named entities, extract the entity spans, and classify each span into pre-defined label categories. In terms of the flat NER which is the main focus of this paper, it can be formulated as a sequence labeling paradigm by assigning labels to each token in the sentence through token-classification models. The dominant methods include combining Pre-trained Language Models(PLMs) (Devlin et al., 2019) with label-specific classifier (LC) (Strubell et al., 2017; Cui and Zhang, 2019). However, the fixed shape of the output LC" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.254, + 0.885, + 0.364 + ], + "angle": 0, + "content": "layer necessitates a consistent label set for both the training and testing data, which poses a challenge for knowledge transfer. Therefore, these models need to be trained from scratch to adapt to a new domain with a different label set, highlighting the requirement for a large amount of data for these methods." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.369, + 0.885, + 0.706 + ], + "angle": 0, + "content": "Due to the high cost of sequence labeling annotation in real-world scenarios, labeled data for NER is often limited. As a result, few-shot NER has gained significant attention due to its practical applications. 
Meanwhile, applying prompt-based learning (Han et al., 2021) on PLMs is an effective way to solve few-shot problems (Brown et al., 2020). PLMs can learn a great deal of knowledge about human language by training on large self-supervised corpora. In order to explore the potential of PLMs, prompt-based learning reformulates the downstream tasks into a text-to-text framework with an additional prompt indicating the task description (e.g. instruction fine-tuning (Wei et al., 2021; Chung et al., 2022; Sanh et al., 2021)). Through this approach, the model can effectively leverage the knowledge present in PLMs to enhance downstream skills without the need for additional large amounts of downstream data. This enables the model to achieve remarkable performance in few-shot settings." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.71, + 0.887, + 0.919 + ], + "angle": 0, + "content": "Recently, many prompt-based NER methods have emerged to address the limitations of traditional few-shot NER approaches. TemplateNER (Cui et al., 2021) treats the original sentence and a predicted template filled with entity spans as the source and target sequences, respectively, but all candidate spans must be enumerated during inference, leading to a high computational cost. BARTNER (Yan et al., 2021) proposed a pointer mechanism to unify all NER sub-tasks into one sequence-to-sequence (seq2seq) framework. 
BARTNER utilizes the raw sentence as input and outputs a pointer index and a tag index, which represent the location" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.522, + 0.941 + ], + "angle": 0, + "content": "3940" + }, + { + "type": "footer", + "bbox": [ + 0.218, + 0.946, + 0.779, + 0.959 + ], + "angle": 0, + "content": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 3940-3951" + }, + { + "type": "footer", + "bbox": [ + 0.278, + 0.959, + 0.72, + 0.973 + ], + "angle": 0, + "content": "December 6-10, 2023 ©2023 Association for Computational Linguistics" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.493, + 0.376 + ], + "angle": 0, + "content": "of the span and the corresponding label index in the category, respectively. To further adapt BARTNER for few-shot settings, LightNER (Chen et al., 2022b) proposed a lightweight tuning approach for low-resource settings by adding a unified learnable verbalizer and incorporating learnable parameters into the self-attention layers. Nonetheless, because the pointer mechanism only outputs the indexes of entities and labels, the model encounters challenges in effectively leveraging the capabilities of PLMs to directly comprehend the semantic relationship between entities and labels. Thus, instead of using a pointer mechanism, InstructionNER (Wang et al., 2022) directly generates entity spans and types in the target sequence and applies instruction fine-tuning with two auxiliary tasks to further mine the capabilities of PLMs, which leads to significant few-shot improvements." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.379, + 0.49, + 0.59 + ], + "angle": 0, + "content": "In terms of the auxiliary tasks in InstructionNER, they propose two auxiliary tasks from two perspectives: span recognition (Entity Extraction) and entity labeling (Entity Typing). 
However, we argue that NER can be further divided into three parts: 1) understanding the relationship between the label and the semantic meaning of the sentence; 2) extracting the spans; 3) annotating the given spans. We believe that both span recognition and entity labeling can benefit from a deeper understanding of the label semantics. Therefore, we propose a new auxiliary task, called Type Extraction, to help the model acquire this ability." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.594, + 0.49, + 0.819 + ], + "angle": 0, + "content": "Meanwhile, none of the above methods take additional external knowledge into account. Current literature on utilizing external knowledge in NER involves (Chen et al., 2022a) and (Lee et al., 2022a). SDNet (Chen et al., 2022a) proposes a self-describing mechanism to leverage external resources by self-describing both entity types and mentions, while (Lee et al., 2022a) uses a demonstration-based method by incorporating examples into the input but without a text-to-text framework. Therefore, to the best of our knowledge, there is currently no existing literature that combines in-context external knowledge with instruction fine-tuning for few-shot NER." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.823, + 0.49, + 0.921 + ], + "angle": 0, + "content": "In this paper, we propose 2INER (Instructive and In-Context Learning on Few-Shot NER). We build upon the work of InstructionNER by incorporating in-context examples and a novel auxiliary task. Specifically, we first reformulate the NER tasks into a text-to-text framework and then employ T5" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.085, + 0.885, + 0.405 + ], + "angle": 0, + "content": "(Raffel et al., 2020) for natural language generation. In terms of the source sentence, we use instructions to distinguish between tasks by giving a comprehensive task description and include an alternative field to identify the entity type that requires detection. 
Moreover, we suggest incorporating in-context demonstration examples into the source sentence to enable the model to learn from external knowledge. For the target sentence, we use natural language to represent entity spans and types instead of a pointer mechanism. In addition to the two auxiliary tasks used in InstructionNER, we propose a new task called type extraction to further explore the potential of PLMs to understand label semantics. The Type Extraction task requires the model to identify all the entity types present in the original sentence and learn to understand the meaning of entity types at the overall semantic level of the sentence. Our contributions can be summarized as follows:" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.408, + 0.887, + 0.503 + ], + "angle": 0, + "content": "- To utilize external knowledge, we apply demonstration-based in-context learning examples to the instruction template. The in-context examples enable the model to directly learn which spans correspond to which types from this additional information, leading to better few-shot abilities." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.505, + 0.886, + 0.633 + ], + "angle": 0, + "content": "- We expand the NER capabilities by dividing them into three components instead of two. We propose a novel auxiliary task for instruction fine-tuning, called type extraction, to address the existing gap. It enables the model to understand the meaning of the entity types at the overall semantic level of the sentence, which improves span recognition and entity labeling abilities." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.634, + 0.886, + 0.698 + ], + "angle": 0, + "content": "- We conduct extensive experiments on four datasets, demonstrating that 2INER outperforms existing few-shot NER methods and remains competitive with SOTA standard NER algorithms."
+ }, + { + "type": "list", + "bbox": [ + 0.508, + 0.408, + 0.887, + 0.698 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.71, + 0.667, + 0.726 + ], + "angle": 0, + "content": "2 Related Work" + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.738, + 0.766, + 0.753 + ], + "angle": 0, + "content": "2.1 Named Entity Recognition" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.758, + 0.885, + 0.92 + ], + "angle": 0, + "content": "Currently, NER tasks can be divided into flat NER (Tjong Kim Sang and De Meulder, 2003), nested NER (Kim et al., 2003) and discontinuous NER (Karimi et al., 2015); in this paper, we mainly focus on the flat NER task. The current dominant method to solve flat NER is token-level classification, turning it into a sequence labeling problem (Chiu and Nichols, 2016; Liu et al., 2019; Zhang et al., 2020; Liu et al., 2021), which applies a text encoder and CRF (Ma and Hovy, 2016) in" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.52, + 0.941 + ], + "angle": 0, + "content": "3941" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.49, + 0.198 + ], + "angle": 0, + "content": "sequence. Recently, BARTNER (Yan et al., 2021) formulates all three NER tasks into a text-to-text framework to solve them concurrently. BARTNER generates entity span sequences with a pointer-based model built on BART (Lewis et al., 2020), so that special tagging schema designs or span post-processing are no longer needed." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.21, + 0.35, + 0.226 + ], + "angle": 0, + "content": "2.2 Prompt-based Learning" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.231, + 0.49, + 0.456 + ], + "angle": 0, + "content": "With the emergence of GPT-3 (Brown et al., 2020), prompt-based learning has gained increasing attention. 
It can better stimulate the knowledge the model learned in the pre-training stage and integrate different tasks, compared to the paradigm of fine-tuning a separate model for each task, especially in few-shot settings (Han et al., 2021). To push prompt-based learning further, instruction-based learning (Wei et al., 2021) was proposed to fine-tune PLMs on a collection of task descriptions, which enables the model to better follow human instructions and generalize to unseen tasks with better zero-shot and few-shot abilities (Chung et al., 2022; Sanh et al., 2021)." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.468, + 0.353, + 0.483 + ], + "angle": 0, + "content": "2.3 Few-Shot NER Methods" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.489, + 0.49, + 0.829 + ], + "angle": 0, + "content": "One line of work in few-shot NER applies contrastive learning to assign labels by searching for the closest token (Das et al., 2022; Chen et al., 2022c), prototype (Snell et al., 2017; Fritzler et al., 2019; Ma et al., 2022b) or label semantics (Ma et al., 2022a; Huang et al., 2022) in the support set. Another line of research is prompt-based learning, which uses a unified text-to-text framework to make full use of the PLMs' abilities. (Cui et al., 2021) applies span classification using BART, and (Chen et al., 2022b; Yan et al., 2021) use a pointer mechanism to generate indexes of spans and types. (Wang et al., 2022) utilizes instruction fine-tuning and two auxiliary tasks to train T5. Meanwhile, to apply external knowledge to the model, (Chen et al., 2022a) introduces a self-describing mechanism and (Lee et al., 2022a) uses a demonstration-based method. Therefore, our method introduces in-context learning together with instruction fine-tuning to achieve better few-shot NER abilities, a combination that has not yet been fully discussed in seq2seq NER settings."
+ }, + { + "type": "title", + "bbox": [ + 0.114, + 0.84, + 0.265, + 0.856 + ], + "angle": 0, + "content": "3 Methodology" + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.866, + 0.283, + 0.88 + ], + "angle": 0, + "content": "3.1 NER Definition" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.888, + 0.49, + 0.92 + ], + "angle": 0, + "content": "NER aims to predict all spans in the input sentence as well as the entity types associated with the spans." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.885, + 0.262 + ], + "angle": 0, + "content": "The standard flatten-NER can be formulated as follows, given the input sentence containing \\( n \\) tokens \\( X = [x_{1}, x_{2}, \\ldots, x_{n}] \\), the model has to predict the target sentence \\( Y = [l_{1}, l_{2}, \\ldots, l_{n}] \\). We use \\( V_{BIO} \\) to denote the BIO label set, so \\( \\forall l_{i}, l_{i} \\in V_{BIO} \\). While in the sequence-to-sequence modeling scenario, the input sentence is still \\( X \\) but instead of predicting \\( Y \\), the model predicts each entity \\( y_{i} = (e_{i}, s_{i}) \\) directly, where \\( s_{i} \\) represents the entity span in \\( X \\). And \\( e_{i} \\in V \\) represents the entity type of \\( s_{i} \\), where \\( V \\) is the set of entity types." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.262, + 0.885, + 0.359 + ], + "angle": 0, + "content": "More specifically, we use \\(l\\) and \\(r\\) to indicate the left and right boundary of an entity span in \\(X\\), so \\(s_i\\) can be simplified as \\(s_i = x_{l:r}\\), where \\(x_{l:r} = [x_l, x_{l+1}, \\dots, x_r]\\). Therefore, the NER model has to predict each \\(y_i\\) in \\(X\\), indicating that the span \\(s_i\\) belongs to the \\(e_i\\) entity type." 
+ }, + { + "type": "title", + "bbox": [ + 0.509, + 0.369, + 0.822, + 0.383 + ], + "angle": 0, + "content": "3.2 Convert NER to Text-to-text Task" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.389, + 0.885, + 0.662 + ], + "angle": 0, + "content": "Using language models like T5 (Raffel et al., 2020) to solve most NLP tasks in a unified text-to-text framework can not only fully utilize the knowledge the model learned in the pre-training stage but also simplify training by using the same data format, the same loss and the same model architecture. Moreover, compared to using simple prompts, instruction finetuning can further explore the capabilities of the model (Chung et al., 2022; Sanh et al., 2021). Besides, utilizing in-context learning can further enhance the model's few-shot capabilities in general (Brown et al., 2020) and its NER abilities specifically (Lee et al., 2022b). Therefore, we transform the NER task into a text-to-text format and employ instruction finetuning and in-context learning to unleash the model's few-shot capabilities, as shown in Figure 1. The backbone we use is T5." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.663, + 0.884, + 0.711 + ], + "angle": 0, + "content": "The basic text-to-text format of the main NER tasks consists of the following three parts, which is inspired by InstructionNER (Wang et al., 2022) \\(^{1}\\):" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.72, + 0.885, + 0.847 + ], + "angle": 0, + "content": "Instruction The instruction is a prompt that informs the model about the current task it needs to perform. The model is expected to follow the instructions provided within the prompt and complete the task accordingly. The instruction for the main NER task is: Please extract entities and their types from the Sentence, choose entity types from Alternatives."
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.857, + 0.883, + 0.888 + ], + "angle": 0, + "content": "Sentence The sentence is the input \\(X\\) from which entities need to be extracted." + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.893, + 0.884, + 0.919 + ], + "angle": 0, + "content": "1The templates of auxiliary tasks and in-context Example will be discussed in 3.4 and 3.3 respectively." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.522, + 0.941 + ], + "angle": 0, + "content": "3942" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.141, + 0.092, + 0.882, + 0.273 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.112, + 0.29, + 0.884, + 0.321 + ], + "angle": 0, + "content": "Figure 1: The model architecture of our proposed 2INER. The left and right sides are the source and target sentence of the model, respectively." + }, + { + "type": "text", + "bbox": [ + 0.112, + 0.345, + 0.487, + 0.456 + ], + "angle": 0, + "content": "Alternatives Alternatives is a list of entity types \\((V)\\) split by comma, from which the model needs to select the corresponding type to annotate the corresponding span. Alternatives serves as a constraint and a guiding factor, informing the model that it can only select entity types from within this list." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.474, + 0.489, + 0.603 + ], + "angle": 0, + "content": "In order to formulate the NER output to natural language, for each NER output \\( y_{i} = (e_{i}, s_{i}) \\), we use the following template to convert it to text: \\( s_{i} \\) is \\( a / an \\) \\( e_{i} \\), and we use dot to concatenate all detected entity occurrences \\( y_{i} \\) to form the output text. In terms of the entity types \\( e_{i} \\), we use natural language to represent the entity instead of adding special tokens to the model2." 
+ }, + { + "type": "title", + "bbox": [ + 0.114, + 0.613, + 0.285, + 0.629 + ], + "angle": 0, + "content": "3.3 Auxiliary Tasks" + }, + { + "type": "text", + "bbox": [ + 0.112, + 0.633, + 0.489, + 0.776 + ], + "angle": 0, + "content": "To enhance the NER performance, in addition to the main task, we need to introduce several auxiliary tasks. In InstructionNER (Wang et al., 2022), they employed two auxiliary tasks: entity extraction and entity typing. Moreover, in this paper, a new auxiliary task called type extraction will be introduced. During training, the auxiliary task will also be in the form of text-to-text data, trained alongside the main task data." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.778, + 0.489, + 0.874 + ], + "angle": 0, + "content": "The auxiliary task primarily aims to improve NER capabilities from three perspectives: understand label semantic, span recognition and entity labeling, since NER can be decomposed into three steps: understand the relationship between the label and semantic meaning of the sentence, then extract" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.345, + 0.883, + 0.393 + ], + "angle": 0, + "content": "the spans and finally annotate the given spans. We will discuss the configuration of the auxiliary task in detail from these three perspectives." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.404, + 0.78, + 0.418 + ], + "angle": 0, + "content": "3.3.1 Understand label semantic" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.423, + 0.884, + 0.745 + ], + "angle": 0, + "content": "Type Extraction The goal of the Type Extraction task is to identify all the entity types present in the original sentence. The Instruction is changed to: Please extract all entity types appeared in the Sentence. We will remove the Alternatives in this case, which means that there will be no constraints or hints regarding entity types in the input text, aiming to increase the difficulty of the task. 
And the output template is: \\( e_i \\) type exists in the sentence. The Type Extraction task involves detecting whether a specific entity type appears in the sentence, without focusing on specific spans or associating spans with entity types. This task will assist the model in understanding the meaning of entity types at the overall semantic level of the sentence. We believe that once the model gains a deeper understanding of entity types, it will be able to comprehend the relationship between spans and types more accurately. As a result, it will enhance both span recognition and entity labeling capabilities simultaneously." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.756, + 0.704, + 0.771 + ], + "angle": 0, + "content": "3.3.2 Span recognition" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.775, + 0.885, + 0.919 + ], + "angle": 0, + "content": "Entity Extraction The goal of the entity extraction task is to extract useful entity spans from the original sentence without the need for annotating the extracted spans. The instruction has been modified to: Please extract entities from the Sentence. Because the model doesn't need to type spans, the Alternatives field is deleted. And the output template has been changed to: \\( s_i \\) is an entity word, since \\( e_i \\) is no longer needed. Because the entity ex" + }, + { + "type": "page_footnote", + "bbox": [ + 0.113, + 0.88, + 0.488, + 0.919 + ], + "angle": 0, + "content": "2.e.g. 
\"Character_Name\" will be represented as \"Character Name\" instead of adding a special token named \"Character_Name\"" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.521, + 0.941 + ], + "angle": 0, + "content": "3943" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.489, + 0.165 + ], + "angle": 0, + "content": "traction task only requires the model to predict useful spans regardless of the associated entity types, this task will guide the model to extract correct spans, enhancing span-F1 accuracy and, consequently, the overall main-task F1 as well (Wang et al., 2022)." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.166, + 0.489, + 0.309 + ], + "angle": 0, + "content": "The original InstructionNER (Wang et al., 2022) paper only employed span concatenation as the output (e.g. \\( s_1, s_2, s_3 \\)). However, we believe that since the output of the main task consists of complete sentences with subject-verb-object structures, it would be more cohesive to follow the same pattern for the auxiliary tasks. A more structured output can also fully utilize the PLMs' understanding of the task." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.32, + 0.299, + 0.336 + ], + "angle": 0, + "content": "3.3.3 Entity Labeling" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.34, + 0.49, + 0.549 + ], + "angle": 0, + "content": "Entity Typing The entity typing task aims to type the given span with the correct label. The instruction has been modified to: Please type these entities according to the Sentence. The Alternatives prompt and output template are the same as those in the main task. During training, the given spans in the Instruction are the exact entity spans that carry labels. In the entity typing task, since the spans are given, the model doesn't need to worry about the correctness of the extracted span, so it can focus more on learning how to label the entity accurately, enhancing the main-task NER ability." 
+ }, + { + "type": "title", + "bbox": [ + 0.114, + 0.561, + 0.325, + 0.576 + ], + "angle": 0, + "content": "3.4 In-Context Learning" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.582, + 0.49, + 0.806 + ], + "angle": 0, + "content": "In-context learning will be applied to further enhance few-shot NER capabilities. The main approach of in-context learning is to append Examples at the end of the input sentence, hoping that the model can directly learn which spans correspond to which types from these Examples, without the need for additional gradient updates. Besides, the in-context examples are also presented in a natural language format that closely resembles the output text format, reminding the model of the desired format it should generate and making it easier for PLMs to understand. This similarity helps bridge the gap and facilitates the model's comprehension." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.807, + 0.49, + 0.92 + ], + "angle": 0, + "content": "The in-context example format in NER is inspired by (Lee et al., 2022b). All examples in this context follow the template: span is a/an entity-type. We will concatenate an additional prompt (based on the knowledge in Examples) after the Instruction to prompt the model to learn from the Examples. During the training stage, in-context Examples" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.882, + 0.133 + ], + "angle": 0, + "content": "will only be added to the main NER tasks and there will be no Examples added to auxiliary tasks, which will be discussed in detail in Analysis 5.2." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.134, + 0.885, + 0.342 + ], + "angle": 0, + "content": "In terms of the choice of samples in Examples, we randomly choose spans that appear in the training set, as well as their corresponding entity types, to create Examples. 
Since we are uncertain about the entity types present in the sentence, we will provide at least one example for each entity type in the Alternatives list within the Examples. The number of samples for each entity type in Examples will also be the same \\( ^3 \\) (e.g. for the MIT Movie dataset, there are 12 entity types. If we set the number of examples to 5, there will be 5 examples for each entity type, resulting in a total of \\( 5 \\times 12 \\) examples in the field)." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.355, + 0.631, + 0.369 + ], + "angle": 0, + "content": "3.5 Inference" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.377, + 0.884, + 0.699 + ], + "angle": 0, + "content": "At inference time, we first use the template of the main NER task to wrap the input sentence \\( X \\), and then feed the sentence to 2INER to get the predicted output text. In terms of the Example field, the example spans are sampled from the training support set, so the model won't see the ground truth in the Examples during evaluation, avoiding information leakage. After the output text is generated, a decoding strategy is applied to get the predicted entity \\( (e_i, s_i) \\): (1) We use the period to split the whole output text into individual sub-texts. (2) We use \"is a\" or \"is an\" to split each sub-text if they can be found. (3) The span is the part before \"is a/an\" and the entity type is the part after it. Once we get the \\( (e_i, s_i) \\), we check whether \\( s_i \\) is in the input sentence \\( X \\) and \\( e_i \\) is in the set of entity types \\( V \\). If it doesn't pass the check, it isn't a valid entity and will be deleted. If any of the three steps results in a match failure, the sub-text will be skipped." 
+ }, + { + "type": "title", + "bbox": [ + 0.509, + 0.711, + 0.649, + 0.727 + ], + "angle": 0, + "content": "4 Experiment" + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.738, + 0.617, + 0.752 + ], + "angle": 0, + "content": "4.1 Dataset" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.759, + 0.885, + 0.872 + ], + "angle": 0, + "content": "We conduct NER experiments in standard and low-resource settings. For the rich-resource domain, we use CoNLL-2003 (Tjong Kim Sang and De Meulder, 2003) and for the low-resource domain, we use three datasets: MIT Movie Review, MIT Restaurant Review (Liu et al., 2013) and Airline Travel Information Systems (ATIS) (Hakkani" + }, + { + "type": "page_footnote", + "bbox": [ + 0.508, + 0.881, + 0.887, + 0.918 + ], + "angle": 0, + "content": "3We refer to \"the number of samples per entity type\" as \"the number of examples\" in the rest of the paper for convenience." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.521, + 0.941 + ], + "angle": 0, + "content": "3944" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.49, + 0.133 + ], + "angle": 0, + "content": "Tür et al., 2016), following (Wang et al., 2022; Chen et al., 2022b; Cui et al., 2021; Yan et al., 2021)." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.147, + 0.351, + 0.162 + ], + "angle": 0, + "content": "4.2 Implementation settings" + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.169, + 0.49, + 0.554 + ], + "angle": 0, + "content": "In the few-shot NER scenario, to guarantee that each entity type has an equal number of instances in the training set, we can't directly sample \\( k \\) sentences for each entity type, because a single sentence may contain multiple entities, so the actual shot count would exceed \\( k \\). 
Following (Wang et al., 2022), we instead apply a greedy sampling strategy (Yang and Katiyar, 2020) to sample the few-shot training set for each setting; due to the randomness of sampling, we repeat each experiment 3 times. We use T5-large as the backbone model for fair comparison with (Wang et al., 2022). We set the number of examples in the in-context Example field to 5 for the MIT Movie and MIT Restaurant datasets, and 1 for the ATIS dataset by default. We only add the in-context Example field to the main task, and don't include it in auxiliary tasks. The ratio of auxiliary tasks is set to \\( 1.0 \\). We set the batch size to \\( 2/4/8 \\) and the learning rate to \\( 2\\mathrm{e}-5/5\\mathrm{e}-5 \\) for the \\( 10/20/50 \\) Shot settings respectively, and set the batch size to 32 and the learning rate to \\( 1\\mathrm{e}-4 \\) for the abundant-data setting. The optimizer is Adam and the beam size is set to 2. For evaluation, we use the F1 score as the metric for NER." + }, + { + "type": "text", + "bbox": [ + 0.112, + 0.556, + 0.489, + 0.701 + ], + "angle": 0, + "content": "The name InstructionNER in the tables means training with main-task data only, indicating the base model, and the subscript words in the tables indicate additions to the base model: +ET, +EE, +TE, +EX mean adding Entity Typing, Entity Extraction, Type Extraction, and in-context examples, respectively. We name InstructionNER+ET,EE,TE,EX as 2INER, which is our final model." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.714, + 0.339, + 0.73 + ], + "angle": 0, + "content": "4.3 Standard NER Setting" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.736, + 0.489, + 0.833 + ], + "angle": 0, + "content": "We use the CoNLL-2003 dataset to conduct the standard NER experiment. We combine the train and validation sets as described in (Yan et al., 2021) to train the model. 
The result is in Table 1, which shows that even though our method mainly focuses on few-shot NER settings, it remains competitive with" + }, + { + "type": "table", + "bbox": [ + 0.511, + 0.081, + 0.886, + 0.245 + ], + "angle": 0, + "content": "
Model | F1 | Span-F1
(Yang et al., 2018) | 90.77 | -
(Ma and Hovy, 2016) | 91.21 | -
(Gui et al., 2020) | 92.02 | -
(Yamada et al., 2020)* | 94.30 | 92.40
(Li et al., 2020a)† | - | 92.87
(Yu et al., 2020a)‡ | - | 92.50
LC-BERT | 91.73 | -
LC-BART | 90.60 | -
TemplateNER | 91.90 | -
BARTNER | - | 93.24
LightNER | 92.93 | -
2INER (InstructionNER+ET,EE,TE,EX) | 90.71 | 93.93
" + }, + { + "type": "table_caption", + "bbox": [ + 0.508, + 0.254, + 0.885, + 0.355 + ], + "angle": 0, + "content": "Table 1: F1 and Span-F1 (%) on CoNLL-2003 Standard NER setting. Our method is competitive with SOTA algorithm and even outperform BARTNER (Yan et al., 2021) in span-F1. \" * \" indicates training on external data. \"†\" indicates the reproduction by (Yan et al., 2021). \"‡\" indicates the reproduction with only the sentence-level context by (Yan et al., 2021)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.382, + 0.884, + 0.479 + ], + "angle": 0, + "content": "SOTA algorithm under standard NER setting and even outperform BARTNER (Yan et al., 2021) in span-F1, which is designed for rich-resource NER settings. The performances of 2INER in data abundant nested and discontinuous NER settings are in Appendix A." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.494, + 0.734, + 0.51 + ], + "angle": 0, + "content": "4.4 Few-Shot NER Setting" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.518, + 0.886, + 0.92 + ], + "angle": 0, + "content": "Under Few-Shot NER setting, we only use K-Shot training samples to finetune our model and the results are in Table 2. According to the table, we can find that: (1) Our models consistently outperform InstructionNER as well as other baselines on all three datasets under 10/20/50 Shot settings (except 50Shot in ATIS, which is slightly lower than BARTNER). Especially in MIT Movie dataset, our models have \\(7.33\\%\\), \\(6.76\\%\\), \\(5.39\\%\\) improvements compared to InstructionNER under 10/20/50 settings. (2) Our 10Shot model even outperforms TempleNER's 50Shot model by \\(20.73\\%\\) and \\(7.06\\%\\) in MIT Movie and MIT Restaurant respectively, which highlights the superiority and capability of our model. (3) We have the same finding as InstructionNER (Wang et al., 2022) that F1 improvements are much more significant on MIT Movie than on MIT Restaurant (\\(7.33\\% / 6.76\\% / 5.39\\%\\) v.s. 
\\(6.86\\% / 3.24\\% / 3.3\\%\\) under 10/20/50 Shot settings), which indicates that although MIT Movie has more entity types, the text-to-text framework and instruction-tuning can better utilize pre-training knowledge, and through in-context learning, the model can learn more about the relationships between entities. (4) In the ATIS dataset, the improve" + }, + { + "type": "page_footnote", + "bbox": [ + 0.136, + 0.844, + 0.383, + 0.858 + ], + "angle": 0, + "content": "4https://huggingface.co/t5-large" + }, + { + "type": "page_footnote", + "bbox": [ + 0.116, + 0.857, + 0.486, + 0.882 + ], + "angle": 0, + "content": "ATIS has 79 entity types, so we set the number to 1 to avoid excessively long token lengths." + }, + { + "type": "page_footnote", + "bbox": [ + 0.116, + 0.882, + 0.487, + 0.919 + ], + "angle": 0, + "content": "The data-size ratio between the main task and each auxiliary task. 1.0 means that each sample will be extended into 4 samples: one for the main task, and one each for EE, ET, and TE." + }, + { + "type": "list", + "bbox": [ + 0.116, + 0.844, + 0.487, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.521, + 0.941 + ], + "angle": 0, + "content": "3945" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.116, + 0.082, + 0.898, + 0.219 + ], + "angle": 0, + "content": "
Models | MIT Movie | MIT Restaurant | ATIS
10 | 20 | 50 | 10 | 20 | 50 | 10 | 20 | 50
LC-BERT | 25.2 | 42.2 | 49.6 | 21.8 | 39.4 | 52.7 | 44.1 | 76.7 | 90.7
LC-BART | 10.2 | 27.5 | 44.2 | 6.3 | 8.5 | 51.3 | 42.0 | 72.7 | 87.5
TemplateNER | 37.3 | 48.5 | 52.2 | 46.0 | 57.1 | 58.7 | 71.7 | 79.4 | 92.6
BARTNER* | 41.1 | 54.0 | 67.7 | 44.0 | 56.0 | 64.0 | 77.7 | 86.1 | 93.4
LightNER | 41.7 | 57.8 | 73.1 | 48.5 | 58.0 | 62.0 | 76.3 | 85.3 | 92.8
InstructionNER | 64.4 (±2.1) | 70.0 (±0.3) | 74.1 (±1.2) | 58.7 (±1.2) | 65.5 (±1.4) | 71.2 (±1.1) | 90.14 (±0.12)† | 91.22 (±0.19)† | 92.53 (±0.14)†
InstructionNER+ET,EE | 65.6 (±3.0) | 70.1 (±1.9) | 74.7 (±0.3) | 58.9 (±0.8) | 66.1 (±0.9) | 71.1 (±0.9) | 90.04 (±0.02)† | 91.46 (±0.23)† | 92.62 (±0.04)†
InstructionNER+EX | 72.56 (±1.01) | 74.99 (±0.27) | 78.61 (±0.37) | 64.07 (±1.25) | 68.2 (±0.11) | 74.38 (±0.19) | 89.17 (±0.2) | 91.33 (±0.05) | 92.65 (±0.18)
InstructionNER+TE | 72.0 (±0.25) | 76.55 (±0.2) | 80.02 (±0.26) | 65.52 (±1.35) | 68.67 (±0.95) | 73.98 (±0.27) | 90.77 (±0.6) | 91.85 (±0.05) | 92.69 (±0.1)
InstructionNER+ET,EE,TE,EX | 72.93 (±0.91) | 76.86 (±0.53) | 80.09 (±0.22) | 65.76 (±0.47) | 69.34 (±0.81) | 74.4 (±0.4) | 90.47 (±0.26) | 92.11 (±0.09) | 92.83 (±0.15)
" + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.228, + 0.885, + 0.286 + ], + "angle": 0, + "content": "Table 2: The F1 (\\%) on three datasets under 10/20/50 Shot settings. Bold numbers mean the best F1 across all models and the numbers in brackets mean the standard deviation. Underlined numbers mean the best results in our experiments. The \"†\" numbers mean the results of our reproduction. \"*\" means the reproduction by InstructionNER (Wang et al., 2022)." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.311, + 0.49, + 0.504 + ], + "angle": 0, + "content": "ment of our model is less significant compared to the other two datasets. We argue that this is because ATIS contains 79 entity types, and even if we only provide one sample span for each entity type in the in-context Example field, the average token length is 1099 with examples compared to 368 without, where the token length of the Alternative field is 327. So the actual input Sentence \\( X \\) only accounts for \\( 3.7\\% \\) of the total token length, which increases the difficulty for the model to extract key information from lengthy sentences. So too many entity types may potentially reduce model improvements." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.515, + 0.281, + 0.53 + ], + "angle": 0, + "content": "4.5 Ablation Study" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.535, + 0.489, + 0.823 + ], + "angle": 0, + "content": "To find out the influence of our proposed type extraction task and in-context examples on the model's few-shot abilities, we conduct ablation studies in Figure 2. The results indicate that adding the type extraction task and in-context examples can further enhance the model's few-shot NER abilities. We set InstructionNER as the baseline here, which only trains on main-task data without any auxiliary tasks. 
Then we add the type extraction task (InstructionNER+TE) or in-context examples (InstructionNER+EX) to the baseline model to explore their influences. The results in Figure 2 show that for the 10/20/50 Shot settings in few-shot NER, the type extraction task achieves average F1 improvements of \\(7.21\\%\\), \\(4.86\\%\\), \\(4.35\\%\\) and in-context examples achieve average F1 improvements of \\(6.76\\%\\), \\(3.84\\%\\), \\(3.84\\%\\) on the MIT Movie and MIT Restaurant datasets." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.825, + 0.489, + 0.874 + ], + "angle": 0, + "content": "Moreover, adding the type extraction task can greatly increase the Span-F1 as well. Because Span-F1 indicates the model's ability to locate" + }, + { + "type": "image", + "bbox": [ + 0.517, + 0.311, + 0.877, + 0.472 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.481, + 0.884, + 0.54 + ], + "angle": 0, + "content": "Figure 2: F1 and Span-F1 \\((\\%)\\) on MIT Movie and MIT Restaurant across 10/20/50 Shot settings with different task combinations. The deep and light colors indicate F1 and Span-F1 respectively." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.566, + 0.886, + 0.71 + ], + "angle": 0, + "content": "spans, the results reveal that through training on the type extraction task, span recognition can benefit from a deeper understanding of the labels at the overall semantic level of the sentence. Therefore, it proves the effectiveness of the three NER steps we proposed in 3.3, and shows that the type extraction task can simultaneously improve span recognition and entity labeling abilities through understanding label semantics." 
+ }, + { + "type": "title", + "bbox": [ + 0.509, + 0.725, + 0.62, + 0.741 + ], + "angle": 0, + "content": "5 Analysis" + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.752, + 0.767, + 0.769 + ], + "angle": 0, + "content": "5.1 Increase Example Number" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.775, + 0.885, + 0.919 + ], + "angle": 0, + "content": "In this section, we focus on how the number of examples in the in-context Example field influences model performance. We sequentially change the number of examples to 1, 3, 5, 10, and 15, and train corresponding models to observe the change of F1 on the MIT Restaurant dataset. We train our model with the main task and in-context examples, without any auxiliary tasks (InstructionNER+EX), in this section. The results are in Table 3." + }, + { + "type": "page_footnote", + "bbox": [ + 0.113, + 0.881, + 0.489, + 0.919 + ], + "angle": 0, + "content": "7We tried using special tokens to represent the entity types, but the F1 was slightly lower than without special tokens, and the proportion of \\( X \\) to the total number of tokens was \\( 4.5\\% \\)." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.522, + 0.941 + ], + "angle": 0, + "content": "3946" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.139, + 0.082, + 0.465, + 0.196 + ], + "angle": 0, + "content": "
InstructionNER+EX Examples | MIT Restaurant
20 Shot | 50 Shot
0 | 65.5 (±1.4) | 71.2 (±1.1)
1 | 67.74 (±0.22) | 73.89 (±0.15)
3 | 67.89 (±0.3) | 74.15 (±0.39)
5 | 68.2 (±0.11) | 74.38 (±0.19)
10 | 69.47 (±0.35) | 74.41 (±0.18)
15 | 69.52 (±0.16) | 74.64 (±0.49)
" + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.205, + 0.489, + 0.276 + ], + "angle": 0, + "content": "Table 3: F1 scores (%) on the MIT Restaurant dataset while changing the number of examples using InstructionNER+EX. Bold numbers indicate the best F1 and the numbers in brackets mean the standard deviation." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.303, + 0.49, + 0.496 + ], + "angle": 0, + "content": "As the number of examples increases, the F1 score keeps rising, and the largest improvement occurs when going from zero examples to one. As the number of examples increases further, F1 continues to improve, but the rate of improvement gradually slows down. This suggests that when only one in-context example is provided, the model can quickly learn the specific meaning of each entity type from the example, while more examples may provide repetitive cues to the model, so a balance should be struck between model performance and computational cost." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.509, + 0.486, + 0.54 + ], + "angle": 0, + "content": "5.2 Effect of In-Context Examples on Auxiliary Tasks" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.547, + 0.49, + 0.756 + ], + "angle": 0, + "content": "In this section, we discuss whether to add in-context examples to auxiliary tasks. The model is 2INER (InstructionNER+ET,EE,TE,EX) and we compare two settings: adding examples only to the main task, and adding examples to the main task as well as the three auxiliary tasks. The results in Table 4 indicate that adding examples to auxiliary tasks slightly decreases the F1 performance. This is likely because adding examples to auxiliary tasks reduces their difficulty and makes them too easy for the model, thereby reducing the auxiliary tasks' effectiveness in aiding the main task. So adding examples only to the main task is the better approach." 
+ }, + { + "type": "title", + "bbox": [ + 0.114, + 0.769, + 0.268, + 0.783 + ], + "angle": 0, + "content": "5.3 Increase Shot" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.791, + 0.49, + 0.92 + ], + "angle": 0, + "content": "In this section, we will discuss the model performance under relatively abundant settings. We increase the shots to 100, 200 and 500 in MIT Movie and MIT Restaurant datasets using 2INER (InstructionNER+ET,EE,TE,EX). As shown in Table 5, compared to InstructionNER, 2INER achieves \\(5.43\\%\\), \\(3.98\\%\\), \\(3.19\\%\\) improvements in F1 under 100/200/500 shots settings respectively." + }, + { + "type": "table", + "bbox": [ + 0.529, + 0.082, + 0.865, + 0.169 + ], + "angle": 0, + "content": "
MIT Restaurant
10 Shot | 20 Shot | 50 Shot
2INER | 65.26 | 69.27 | 74.2
Examples on all tasks | (±0.49) | (±0.89) | (±0.45)
2INER | 65.76 | 69.34 | 74.4
Examples only on Main-Task | (±0.47) | (±0.81) | (±0.4)
" + }, + { + "type": "table_caption", + "bbox": [ + 0.508, + 0.178, + 0.883, + 0.234 + ], + "angle": 0, + "content": "Table 4: The comparison between adding in-context examples only to the main task and to all tasks including auxiliary tasks. Bold numbers indicate the best F1 and the numbers in brackets mean the standard deviation." + }, + { + "type": "table", + "bbox": [ + 0.512, + 0.25, + 0.887, + 0.378 + ], + "angle": 0, + "content": "
Models | MIT Movie | MIT Restaurant
100 | 200 | 500 | 100 | 200 | 500
LC-BERT | 50.7 | 59.3 | 74.4 | 53.5 | 57.4 | 61.3
LC-BART | 47.5 | 54.2 | 64.1 | 52.2 | 56.3 | 60.2
TemplateNER | 56.3 | 62.0 | 74.9 | 60.1 | 62.8 | 65.0
BARTNER* | 70.1 | 74.6 | 82.6 | 65.3 | 74.4 | 75.7
LightNER | 78.0 | 80.6 | 84.8 | 70.8 | 75.5 | 80.2
InstructionNER+ET,EE | 74.3 | 78.4 | 82.3 | 72.7 | 75.5 | 76.6
2INER | 81.3 | 83.54 | 86.16 | 76.57 | 78.31 | 79.11
" + }, + { + "type": "table_caption", + "bbox": [ + 0.508, + 0.387, + 0.884, + 0.43 + ], + "angle": 0, + "content": "Table 5: The F1 (\\%) under relatively abundant settings. \" * \" indicates the reproduction results by (Wang et al., 2022). Bold numbers indicate the best F1." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.458, + 0.884, + 0.587 + ], + "angle": 0, + "content": "2INER also outperforms LightNER in all settings except 500-shot on MIT Restaurant, which shows that 2INER has strong NER abilities under the data-abundant scenario as well. We argue that the in-context Example field may help the model learn from more diverse samples in the abundant training set and turn its general knowledge into specialized capabilities, leading to the improvement in F1." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.602, + 0.642, + 0.617 + ], + "angle": 0, + "content": "6 Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.63, + 0.885, + 0.919 + ], + "angle": 0, + "content": "In this paper, we propose 2INER for few-shot NER, using both instruction finetuning and in-context learning by converting NER into a text-to-text framework. Based on InstructionNER, we create a template to concatenate task-specific instructions, the input sentence, and entity alternatives to make full use of the pre-training knowledge. Besides, we decompose NER into three steps and introduce an additional auxiliary task, called type extraction, to help the model better understand the general semantic meaning of the entity types, which can improve both span recognition and entity labeling abilities. Moreover, we apply in-context examples to enable the model to learn from additional contextual information, enhancing few-shot abilities. Multiple experiments on four NER datasets prove 2INER's effectiveness in the few-shot NER scenario, with 2INER consistently outperforming other baselines." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.521, + 0.941 + ], + "angle": 0, + "content": "3947" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.115, + 0.085, + 0.221, + 0.1 + ], + "angle": 0, + "content": "Limitations" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.113, + 0.493, + 0.337 + ], + "angle": 0, + "content": "One limitation of our work is the extensive length of the Example and Alternative fields when there are too many entity types. While incorporating in-context examples in the input sentence can improve few-shot NER performance, it poses a challenge when the Example field becomes too long, because we add at least one example for each potential entity type, especially when the Alternative list contains numerous entity types. This can result in smaller improvement gains and higher computational costs. To address this issue, larger PLMs such as the recently proposed LLaMA (Touvron et al., 2023) could be explored in future research as a potential remedy." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.352, + 0.266, + 0.367 + ], + "angle": 0, + "content": "Ethics Statement" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.38, + 0.49, + 0.508 + ], + "angle": 0, + "content": "In consideration of ethical concerns, we make the following statements: (1) All of our experiments are conducted using existing datasets sourced from publicly available scientific papers. (2) Our few-shot methods don't require a lot of computational resources. (3) Our text generation models generate texts based on existing templates, so they won't generate harmful sentences." 
+ }, + { + "type": "title", + "bbox": [ + 0.115, + 0.538, + 0.214, + 0.552 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.116, + 0.562, + 0.489, + 0.642 + ], + "angle": 0, + "content": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901." + }, + { + "type": "ref_text", + "bbox": [ + 0.116, + 0.655, + 0.49, + 0.748 + ], + "angle": 0, + "content": "Jiawei Chen, Qing Liu, Hongyu Lin, Xianpei Han, and Le Sun. 2022a. Few-shot named entity recognition with self-describing networks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5711-5722, Dublin, Ireland. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.116, + 0.76, + 0.49, + 0.866 + ], + "angle": 0, + "content": "Xiang Chen, Lei Li, Shumin Deng, Chuanqi Tan, Changliang Xu, Fei Huang, Luo Si, Huajun Chen, and Ningyu Zhang. 2022b. LightNER: A lightweight tuning paradigm for low-resource NER via pluggable prompting. In Proceedings of the 29th International Conference on Computational Linguistics, pages 2374-2387, Gyeongju, Republic of Korea. International Committee on Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.116, + 0.878, + 0.49, + 0.919 + ], + "angle": 0, + "content": "Yanru Chen, Yanan Zheng, and Zhilin Yang. 2022c. Prompt-based metric learning for few-shot ner. arXiv preprint arXiv:2211.04337." + }, + { + "type": "list", + "bbox": [ + 0.116, + 0.562, + 0.49, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.086, + 0.885, + 0.139 + ], + "angle": 0, + "content": "Jason P.C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. 
Transactions of the Association for Computational Linguistics, 4:357-370." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.149, + 0.885, + 0.215 + ], + "angle": 0, + "content": "Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.226, + 0.885, + 0.305 + ], + "angle": 0, + "content": "Leyang Cui, Yu Wu, Jian Liu, Sen Yang, and Yue Zhang. 2021. Template-based named entity recognition using BART. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, pages 1835–1845, Online. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.315, + 0.885, + 0.42 + ], + "angle": 0, + "content": "Leyang Cui and Yue Zhang. 2019. Hierarchically-refined label attention network for sequence labeling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4115-4128, Hong Kong, China. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.43, + 0.884, + 0.51 + ], + "angle": 0, + "content": "Xiang Dai, Sarvnaz Karimi, Ben Hachey, and Cecile Paris. 2020. An effective transition-based model for discontinuous NER. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5860-5870, Online. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.52, + 0.884, + 0.612 + ], + "angle": 0, + "content": "Sarkar Snigdha Sarathi Das, Arzoo Katiyar, Rebecca Passonneau, and Rui Zhang. 2022. CONTaiNER: Few-shot named entity recognition via contrastive learning. 
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6338-6353, Dublin, Ireland. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.622, + 0.885, + 0.74 + ], + "angle": 0, + "content": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.75, + 0.885, + 0.817 + ], + "angle": 0, + "content": "Alexander Fritzler, Varvara Logacheva, and Maksim Kretov. 2019. Few-shot classification in named entity recognition task. In Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, pages 993-1000." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.826, + 0.884, + 0.919 + ], + "angle": 0, + "content": "Tao Gui, Jiacheng Ye, Qi Zhang, Zhengyan Li, Zichu Fei, Yeyun Gong, and Xuanjing Huang. 2020. Uncertainty-aware label refinement for sequence labeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2316-2326, Online. Association for Computational Linguistics." + }, + { + "type": "list", + "bbox": [ + 0.511, + 0.086, + 0.885, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.521, + 0.941 + ], + "angle": 0, + "content": "3948" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.086, + 0.487, + 0.151 + ], + "angle": 0, + "content": "Dilek Hakkani-Tur, Gokhan Tur, Asli Celikyilmaz, YunNung Chen, Jianfeng Gao, Li Deng, and Ye-Yi Wang. 2016. 
Multi-domain joint semantic frame parsing using bi-directional rnn-lstm. In Interspeech, pages 715-719." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.161, + 0.487, + 0.213 + ], + "angle": 0, + "content": "Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, Yuqi Huo, Jiezhong Qiu, Yuan Yao, Ao Zhang, Liang Zhang, et al. 2021. Pre-trained models: Past, present and future. AI Open, 2:225-250." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.222, + 0.487, + 0.327 + ], + "angle": 0, + "content": "Yucheng Huang, Kai He, Yige Wang, Xianli Zhang, Tieliang Gong, Rui Mao, and Chen Li. 2022. COPNER: Contrastive learning with prompt guiding for few-shot named entity recognition. In Proceedings of the 29th International Conference on Computational Linguistics, pages 2515-2527, Gyeongju, Republic of Korea. International Committee on Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.336, + 0.487, + 0.388 + ], + "angle": 0, + "content": "Sarvnaz Karimi, Alejandro Metke-Jimenez, Madonna Kemp, and Chen Wang. 2015. Cadec: A corpus of adverse drug event annotations. Journal of biomedical informatics, 55:73-81." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.398, + 0.487, + 0.451 + ], + "angle": 0, + "content": "J-D Kim, Tomoko Ohta, Yuka Tateisi, and Jun'ichi Tsujii. 2003. Genia corpus—a semantically annotated corpus for bio-textmining. Bioinformatics, 19(suppl_1):i180-i182." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.46, + 0.487, + 0.577 + ], + "angle": 0, + "content": "Dong-Ho Lee, Akshen Kadakia, Kangmin Tan, Mahak Agarwal, Xinyu Feng, Takashi Shibuya, Ryosuke Mitani, Toshiyuki Sekiya, Jay Pujara, and Xiang Ren. 2022a. Good examples make a faster learner: Simple demonstration-based learning for low-resource NER. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2687-2700, Dublin, Ireland. Association for Computational Linguistics." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.586, + 0.487, + 0.704 + ], + "angle": 0, + "content": "Dong-Ho Lee, Akshen Kadakia, Kangmin Tan, Mahak Agarwal, Xinyu Feng, Takashi Shibuya, Ryosuke Mitani, Toshiyuki Sekiya, Jay Pujara, and Xiang Ren. 2022b. Good examples make a faster learner: Simple demonstration-based learning for low-resource NER. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2687-2700, Dublin, Ireland. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.713, + 0.487, + 0.83 + ], + "angle": 0, + "content": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.84, + 0.487, + 0.918 + ], + "angle": 0, + "content": "Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020a. A unified MRC framework for named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5849-5859, Online. Association for Computational Linguistics." + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.487, + 0.918 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.086, + 0.882, + 0.165 + ], + "angle": 0, + "content": "Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020b. A unified MRC framework for named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5849-5859, Online. 
Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.179, + 0.882, + 0.244 + ], + "angle": 0, + "content": "Jingjing Liu, Panupong Pasupat, Scott Cyphers, and Jim Glass. 2013. Asgard: A portable architecture for multilingual dialogue systems. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 8386-8390. IEEE." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.258, + 0.882, + 0.363 + ], + "angle": 0, + "content": "Kun Liu, Yao Fu, Chuanqi Tan, Mosha Chen, Ningyu Zhang, Songfang Huang, and Sheng Gao. 2021. Noisy-labeled NER with confidence estimation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3437-3445, Online. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.377, + 0.882, + 0.469 + ], + "angle": 0, + "content": "Yijin Liu, Fandong Meng, Jinchao Zhang, Jinan Xu, Yufeng Chen, and Jie Zhou. 2019. GCDT: A global context enhanced deep transition architecture for sequence labeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2431-2441, Florence, Italy. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.482, + 0.882, + 0.574 + ], + "angle": 0, + "content": "Jie Ma, Miguel Ballesteros, Srikanth Doss, Rishita Anubhai, Sunil Mallya, Yaser Al-Onaizan, and Dan Roth. 2022a. Label semantics for few shot named entity recognition. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1956-1971, Dublin, Ireland. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.588, + 0.882, + 0.666 + ], + "angle": 0, + "content": "Tingting Ma, Huiqiang Jiang, Qianhui Wu, Tiejun Zhao, and Chin-Yew Lin. 2022b. Decomposed meta-learning for few-shot named entity recognition. 
In Findings of the Association for Computational Linguistics: ACL 2022, pages 1584-1596, Dublin, Ireland. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.681, + 0.882, + 0.759 + ], + "angle": 0, + "content": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064-1074, Berlin, Germany. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.773, + 0.882, + 0.825 + ], + "angle": 0, + "content": "Alejandro Metke-Jimenez and Sarvnaz Karimi. 2016. Concept identification and normalisation for adverse drug event discovery in medical forums. In BMDID@ ISWC." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.84, + 0.882, + 0.918 + ], + "angle": 0, + "content": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485-5551." + }, + { + "type": "list", + "bbox": [ + 0.512, + 0.086, + 0.882, + 0.918 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.521, + 0.941 + ], + "angle": 0, + "content": "3949" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.086, + 0.49, + 0.165 + ], + "angle": 0, + "content": "Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. 2021. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.176, + 0.49, + 0.216 + ], + "angle": 0, + "content": "Jake Snell, Kevin Swersky, and Richard Zemel. 2017. 
Prototypical networks for few-shot learning. Advances in neural information processing systems, 30." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.226, + 0.49, + 0.318 + ], + "angle": 0, + "content": "Emma Strubell, Patrick Verga, David Belanger, and Andrew McCallum. 2017. Fast and accurate entity recognition with iterated dilated convolutions. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2670-2680, Copenhagen, Denmark. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.329, + 0.488, + 0.395 + ], + "angle": 0, + "content": "Buzhou Tang, Jianglu Hu, Xiaolong Wang, and Qingcai Chen. 2018. Recognizing continuous and discontinuous adverse drug reaction mentions from social media using lstm-crf. Wireless Communications & Mobile Computing (Online), 2018." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.405, + 0.49, + 0.483 + ], + "angle": 0, + "content": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.495, + 0.49, + 0.573 + ], + "angle": 0, + "content": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.584, + 0.49, + 0.663 + ], + "angle": 0, + "content": "Jue Wang, Lidan Shou, Ke Chen, and Gang Chen. 2020. Pyramid: A layered model for nested named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5918-5928, Online. Association for Computational Linguistics." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.674, + 0.49, + 0.738 + ], + "angle": 0, + "content": "Liwen Wang, Rumei Li, Yang Yan, Yuanmeng Yan, Sirui Wang, Wei Wu, and Weiran Xu. 2022. Instructionner: A multi-task instruction-based generative framework for few-shot ner. arXiv preprint arXiv:2203.03903." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.75, + 0.49, + 0.815 + ], + "angle": 0, + "content": "Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.826, + 0.49, + 0.919 + ], + "angle": 0, + "content": "Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto. 2020. LUKE: Deep contextualized entity representations with entity-aware self-attention. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6442-6454, Online. Association for Computational Linguistics." + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.49, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.086, + 0.885, + 0.191 + ], + "angle": 0, + "content": "Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, and Xipeng Qiu. 2021. A unified generative framework for various NER subtasks. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5808-5822, Online. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.2, + 0.885, + 0.279 + ], + "angle": 0, + "content": "Jie Yang, Shuailong Liang, and Yue Zhang. 2018. Design challenges and misconceptions in neural sequence labeling. 
In Proceedings of the 27th International Conference on Computational Linguistics, pages 3879-3889, Santa Fe, New Mexico, USA. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.288, + 0.885, + 0.368 + ], + "angle": 0, + "content": "Yi Yang and Arzoo Katiyar. 2020. Simple and effective few-shot named entity recognition with structured nearest neighbor learning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6365-6375, Online. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.377, + 0.885, + 0.456 + ], + "angle": 0, + "content": "Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020a. Named entity recognition as dependency parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6470-6476, Online. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.465, + 0.885, + 0.544 + ], + "angle": 0, + "content": "Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020b. Named entity recognition as dependency parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6470-6476, Online. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.553, + 0.885, + 0.658 + ], + "angle": 0, + "content": "Ningyu Zhang, Shumin Deng, Zhen Bi, Haiyang Yu, Jiacheng Yang, Mosha Chen, Fei Huang, Wei Zhang, and Huajun Chen. 2020. OpenUE: An open toolkit of universal extraction from text. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 1-8, Online. Association for Computational Linguistics." 
+ }, + { + "type": "list", + "bbox": [ + 0.512, + 0.086, + 0.885, + 0.658 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.512, + 0.67, + 0.634, + 0.687 + ], + "angle": 0, + "content": "A Appendix" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.695, + 0.885, + 0.806 + ], + "angle": 0, + "content": "In this section, we discuss the remaining two NER settings: nested NER and discontinuous NER. The text-to-text structure of our proposed method can be easily adapted to all three NER settings, resulting in a unified structure for solving NER problems. Here, we mainly discuss standard NER scenarios with abundant data." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.807, + 0.885, + 0.886 + ], + "angle": 0, + "content": "For data abundant nested NER, we conduct experiments on Genia (Kim et al., 2003). We follow BARTNER (Yan et al., 2021) in using five entity types and splitting the train, dev, and test sets 8.1:0.9:1.0. The results are in Table 6." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.888, + 0.885, + 0.919 + ], + "angle": 0, + "content": "For data abundant discontinuous NER, we conduct experiments on CADEC (Karimi et al., 2015)." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.522, + 0.941 + ], + "angle": 0, + "content": "3950" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.116, + 0.082, + 0.504, + 0.166 + ], + "angle": 0, + "content": "
Genia: Model | P | R | F
(Li et al., 2020b) [BERT-Large]† | 81.25 | 76.36 | 78.72
(Yu et al., 2020b) [BERT-Large]† | 79.43 | 78.32 | 78.87
(Wang et al., 2020) [BERT-Large] | 79.45 | 78.94 | 79.19
BARTNER (Yan et al., 2021) | 78.87 | 79.6 | 79.23
2INER | 82.9 | 80.74 | 81.81
" + }, + { + "type": "table_caption", + "bbox": [ + 0.114, + 0.175, + 0.489, + 0.218 + ], + "angle": 0, + "content": "Table 6: Span-F1 (%) on Genia Nested data abundant NER setting. The \"†\" mean the reproduction by (Yan et al., 2021)." + }, + { + "type": "table", + "bbox": [ + 0.116, + 0.231, + 0.516, + 0.315 + ], + "angle": 0, + "content": "
CADEC: Model | P | R | F
(Metke-Jimenez and Karimi, 2016) | 64.4 | 56.5 | 60.2
(Tang et al., 2018) | 67.8 | 64.9 | 66.3
(Dai et al., 2020) [ELMo] | 68.9 | 69.0 | 69.0
BARTNER (Yan et al., 2021) | 70.08 | 71.21 | 70.64
2INER | 71.18 | 75.26 | 73.16
" + }, + { + "type": "table_caption", + "bbox": [ + 0.114, + 0.324, + 0.486, + 0.352 + ], + "angle": 0, + "content": "Table 7: Span-F1 (%) on CADEC discontinuous data abundant NER setting." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.378, + 0.489, + 0.442 + ], + "angle": 0, + "content": "Following BARTNER (Yan et al., 2021), since only the Adverse Drug Events (ADEs) entities include discontinuous data, only these entities were considered. The results are in Table 7." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.443, + 0.49, + 0.587 + ], + "angle": 0, + "content": "The experiment settings are the same as flat NER. We use T5-large as the backbone model and report span-level F1. The results show that in data abundant nested and discontinuous NER setting, our proposed method greatly outperforms BARTNER (Yan et al., 2021) and other SOTA methods, which demonstrates that our methods do have a potential to handle different NER settings in a unified framework." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.518, + 0.941 + ], + "angle": 0, + "content": "3951" + } + ] +] \ No newline at end of file diff --git a/2023/2INER_ Instructive and In-Context Learning on Few-Shot Named Entity Recognition/3de52cb5-1c81-4fb7-8fab-f06b43c089a4_origin.pdf b/2023/2INER_ Instructive and In-Context Learning on Few-Shot Named Entity Recognition/3de52cb5-1c81-4fb7-8fab-f06b43c089a4_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f674bb302b7605e5af38d0803ef6fb0a9877cdf1 --- /dev/null +++ b/2023/2INER_ Instructive and In-Context Learning on Few-Shot Named Entity Recognition/3de52cb5-1c81-4fb7-8fab-f06b43c089a4_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0dd589f8ba7f9c07fc0cc5914770641cf5b04e10abda592b40c8f21b2740e3c3 +size 588891 diff --git a/2023/2INER_ Instructive and In-Context Learning on Few-Shot Named Entity Recognition/full.md b/2023/2INER_ Instructive and In-Context Learning on 
Few-Shot Named Entity Recognition/full.md new file mode 100644 index 0000000000000000000000000000000000000000..99da18d1233972e3a0bc6f29561908718f2e4e4c --- /dev/null +++ b/2023/2INER_ Instructive and In-Context Learning on Few-Shot Named Entity Recognition/full.md @@ -0,0 +1,286 @@ +# 2INER: Instructive and In-Context Learning on Few-Shot Named Entity Recognition + +Jiasheng Zhang $^{1}$ Xikai Liu $^{2}$ Xinyi Lai $^{3}$ Yan Gao $^{2}$ + +Shusen Wang² Yao Hu² Yiqing LIN + +$^{1}$ Shanghai Jiaotong University $^{2}$ Xiaohongshu Inc. $^{3}$ Chongqing University + +{js.zhang,yiqing.lin}@sjtu.edu.cn + +{xikai,yadun,haxian,xiahou}@xiaohongshu.com + +laixinyi@cqu.edu.cn + +# Abstract + +Prompt-based learning has emerged as a powerful technique in natural language processing (NLP) due to its ability to leverage pre-training knowledge for downstream few-shot tasks. In this paper, we propose 2INER, a novel text-to-text framework for Few-Shot Named Entity Recognition (NER) tasks. Our approach employs instruction finetuning based on InstructionNER (Wang et al., 2022) to enable the model to effectively comprehend and process task-specific instructions, including both main and auxiliary tasks. We also introduce a new auxiliary task, called Type Extraction, to enhance the model's understanding of entity types in the overall semantic context of a sentence. To facilitate in-context learning, we concatenate examples to the input, enabling the model to learn from additional contextual information. Experimental results on four datasets demonstrate that our approach outperforms existing Few-Shot NER methods and remains competitive with state-of-the-art standard NER algorithms. + +# 1 Introduction + +Named Entity Recognition (NER) has been a fundamental task of Natural Language Processing (NLP) and there are three types of sub-tasks in NER: flat NER (Tjong Kim Sang and De Meulder, 2003), nested NER (Kim et al., 2003) and discontinuous NER (Karimi et al., 2015). 
All three sub-tasks aim to locate named entities, extract the entity spans, and classify each span into pre-defined label categories. Flat NER, which is the main focus of this paper, can be formulated as a sequence labeling paradigm by assigning a label to each token in the sentence through token-classification models. The dominant methods combine Pre-trained Language Models (PLMs) (Devlin et al., 2019) with a label-specific classifier (LC) (Strubell et al., 2017; Cui and Zhang, 2019). However, the fixed shape of the output LC layer necessitates a consistent label set for both the training and testing data, which poses a challenge for knowledge transfer. Therefore, these models need to be trained from scratch to adapt to a new domain with a different label set, highlighting how much data these methods require. + +Due to the high cost of sequence labeling annotation in real-world scenarios, labeled data for NER is often limited. As a result, few-shot NER has gained significant attention due to its practical applications. Meanwhile, applying prompt-based learning (Han et al., 2021) to PLMs is an effective way to solve few-shot problems (Brown et al., 2020). PLMs learn a great deal about human language by training on large amounts of self-supervised corpora. To explore the potential of PLMs, prompt-based learning reformulates downstream tasks into a text-to-text framework with an additional prompt indicating the task description (e.g., instruction fine-tuning (Wei et al., 2021; Chung et al., 2022; Sanh et al., 2021)). Through this approach, the model can effectively leverage the knowledge present in PLMs to enhance downstream skills without requiring large amounts of additional downstream data. This enables the model to achieve remarkable performance in few-shot settings. + +Recently, many prompt-based NER methods have emerged to address the limitations of traditional few-shot NER approaches.
TemplateNER (Cui et al., 2021) treats the original sentence and a predicted template filled with entity spans as the source and target sequences, respectively, but all candidate spans must be enumerated during inference, leading to a high computational cost. BARTNER (Yan et al., 2021) proposed a pointer mechanism to unify all NER sub-tasks into one sequence-to-sequence (seq2seq) framework. BARTNER takes the raw sentence as input and outputs pointer indexes and tag indexes, which represent the location of the span and the corresponding label index in the category set, respectively. To further adapt BARTNER to few-shot settings, LightNER (Chen et al., 2022b) proposed a lightweight tuning approach for low-resource settings by adding a unified learnable verbalizer and incorporating learnable parameters into the self-attention layers. Nonetheless, because the pointer mechanism only outputs the indexes of entities and labels, the model struggles to leverage the capabilities of PLMs to directly comprehend the semantic relationship between entities and labels. Thus, instead of using a pointer mechanism, InstructionNER (Wang et al., 2022) directly generates entity spans and types in the target sequence and applies instruction fine-tuning with two auxiliary tasks to further mine the capabilities of PLMs, which leads to significant few-shot improvements. + +In terms of auxiliary tasks, InstructionNER proposes two, from two perspectives: span recognition (Entity Extraction) and entity labeling (Entity Typing). However, we argue that NER can be further divided into three parts: 1) understand the relationship between the labels and the semantic meaning of the sentence; 2) extract the spans; 3) annotate the given spans. We believe that both span recognition and entity labeling can benefit from a deeper understanding of the label semantics.
Therefore, we propose a new auxiliary task, called Type Extraction, to help the model acquire this ability. + +Meanwhile, none of the above methods take additional external knowledge into account. The existing literature on utilizing external knowledge in NER includes (Chen et al., 2022a) and (Lee et al., 2022a). SDNet (Chen et al., 2022a) proposes a self-describing mechanism to leverage external resources by self-describing both entity types and mentions, while (Lee et al., 2022a) uses a demonstration-based method that incorporates examples into the input, but without a text-to-text framework. Therefore, to the best of our knowledge, no existing work combines in-context external knowledge with instruction fine-tuning for few-shot NER. + +In this paper, we propose 2INER (Instructive and In-Context Learning on Few-Shot NER). We build upon the work of InstructionNER by incorporating in-context examples and a novel auxiliary task. Specifically, we first reformulate the NER tasks into a text-to-text framework and then employ T5 (Raffel et al., 2020) for natural language generation. In the source sentence, we use instructions to distinguish between tasks by giving a comprehensive task description, and we include an Alternatives field to identify the entity types that require detection. Moreover, we incorporate in-context demonstration examples into the source sentence to enable the model to learn from external knowledge. For the target sentence, we use natural language to represent entity spans and types instead of a pointer mechanism. In addition to the two auxiliary tasks used in InstructionNER, we propose a new task called Type Extraction to further explore the potential of PLMs to understand label semantics. The Type Extraction task requires the model to identify all the entity types present in the original sentence and to learn the meaning of entity types at the overall semantic level of the sentence.
Our contributions can be summarized as follows: + +- To utilize external knowledge, we add demonstration-based in-context learning examples to the instruction template. The in-context examples enable the model to directly learn which spans correspond to which types from this additional information, leading to better few-shot abilities. +- We expand the decomposition of NER capabilities from two components to three, and we propose a novel auxiliary task for instruction fine-tuning, called Type Extraction, to address the resulting gap. It enables the model to understand the meaning of the entity types at the overall semantic level of the sentence, which improves both span recognition and entity labeling abilities. +- We conduct extensive experiments on four datasets, demonstrating that 2INER outperforms existing few-shot NER methods and remains competitive with SOTA standard NER algorithms. + +# 2 Related Work + +# 2.1 Named Entity Recognition + +NER tasks can be divided into flat NER (Tjong Kim Sang and De Meulder, 2003), nested NER (Kim et al., 2003), and discontinuous NER (Karimi et al., 2015); in this paper, we mainly focus on the flat NER task. The currently dominant method for flat NER is token-level classification, turning it into a sequence labeling problem (Chiu and Nichols, 2016; Liu et al., 2019; Zhang et al., 2020; Liu et al., 2021) that applies a text encoder and a CRF (Ma and Hovy, 2016) in sequence. Recently, BARTNER (Yan et al., 2021) formulated all three NER tasks in a text-to-text framework to solve them concurrently. BARTNER generates entity span sequences with a pointer-based model built on BART (Lewis et al., 2020), so that no special tagging schema design or span post-processing is needed. + +# 2.2 Prompt-based Learning + +With the emergence of GPT-3 (Brown et al., 2020), prompt-based learning has gained increasing attention.
Compared to the paradigm of fine-tuning a separate model for each task, it can better elicit the knowledge the model learned during pre-training and integrate different tasks together, especially in few-shot settings (Han et al., 2021). To push prompt-based learning further, instruction-based learning (Wei et al., 2021) was proposed to fine-tune PLMs on a collection of task descriptions, which enables the model to better follow human instructions and generalize to unseen tasks with better zero-shot and few-shot abilities (Chung et al., 2022; Sanh et al., 2021). + +# 2.3 Few-Shot NER Methods + +One line of work in few-shot NER applies contrastive learning to assign labels by searching for the closest token (Das et al., 2022; Chen et al., 2022c), prototype (Snell et al., 2017; Fritzler et al., 2019; Ma et al., 2022b), or label semantics (Ma et al., 2022a; Huang et al., 2022) in the support set. Another line of research is prompt-based learning with a unified text-to-text framework to make full use of the PLMs' abilities. (Cui et al., 2021) applies span classification using BART, and (Chen et al., 2022b; Yan et al., 2021) use a pointer mechanism to generate indexes of spans and types. (Wang et al., 2022) utilizes instruction fine-tuning and two auxiliary tasks to train T5. Meanwhile, to bring external knowledge to the model, (Chen et al., 2022a) introduces a self-describing mechanism and (Lee et al., 2022a) uses a demonstration-based method. Our method therefore combines in-context learning with instruction fine-tuning to achieve better few-shot NER abilities, a combination that has not yet been fully explored in seq2seq NER settings. + +# 3 Methodology + +# 3.1 NER Definition + +NER aims to predict all spans in the input sentence as well as the entity types associated with the spans.
+ +Standard flat NER can be formulated as follows: given an input sentence containing $n$ tokens, $X = [x_{1}, x_{2}, \ldots, x_{n}]$ , the model has to predict the target sequence $Y = [l_{1}, l_{2}, \ldots, l_{n}]$ . We use $V_{BIO}$ to denote the BIO label set, so $\forall l_{i}, l_{i} \in V_{BIO}$ . In the sequence-to-sequence modeling scenario, the input sentence is still $X$ , but instead of predicting $Y$ , the model predicts each entity $y_{i} = (e_{i}, s_{i})$ directly, where $s_{i}$ represents the entity span in $X$ and $e_{i} \in V$ represents the entity type of $s_{i}$ , with $V$ the set of entity types. + +More specifically, we use $l$ and $r$ to indicate the left and right boundary of an entity span in $X$ , so $s_i$ can be written as $s_i = x_{l:r}$ , where $x_{l:r} = [x_l, x_{l+1}, \dots, x_r]$ . Therefore, the NER model has to predict each $y_i$ in $X$ , indicating that the span $s_i$ belongs to the entity type $e_i$ . + +# 3.2 Convert NER to Text-to-text Task + +Using language models like T5 (Raffel et al., 2020) to solve most NLP tasks in a unified text-to-text framework not only fully utilizes the knowledge the model learned in the pre-training stage but also simplifies training by using the same data format, the same loss, and the same model architecture. Moreover, compared to simple prompts, instruction finetuning can further exploit the capabilities of the model (Chung et al., 2022; Sanh et al., 2021). Besides, in-context learning can further enhance the model's few-shot capabilities in general (Brown et al., 2020) and its NER abilities in particular (Lee et al., 2022b). Therefore, we transform the NER task into a text-to-text format and employ instruction finetuning and in-context learning to unleash the model's few-shot capabilities, as shown in Figure 1. The backbone we use is T5.
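The conversion from the sequence-labeling view (BIO labels $Y$ over tokens) to the seq2seq view of entities $y_i = (e_i, s_i)$ can be sketched as follows. This is a minimal illustration under assumed tag names (person, location), not code from the paper:

```python
def bio_to_entities(tokens, labels):
    """Convert BIO labels Y into seq2seq targets y_i = (e_i, s_i)."""
    entities, span, etype = [], [], None
    for tok, lab in zip(tokens, labels):
        if lab.startswith("B-"):
            # A new entity begins; flush any open span first.
            if span:
                entities.append((etype, " ".join(span)))
            span, etype = [tok], lab[2:]
        elif lab.startswith("I-") and span and lab[2:] == etype:
            # Continuation of the current entity span.
            span.append(tok)
        else:
            # "O" token (or inconsistent tag) closes the open span.
            if span:
                entities.append((etype, " ".join(span)))
            span, etype = [], None
    if span:
        entities.append((etype, " ".join(span)))
    return entities

tokens = ["John", "lives", "in", "New", "York"]
labels = ["B-person", "O", "O", "B-location", "I-location"]
print(bio_to_entities(tokens, labels))
# → [('person', 'John'), ('location', 'New York')]
```

Each `(etype, span)` pair is then verbalized into the target sentence rather than emitted as indexes.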
+ +The basic text-to-text format of the main NER task consists of the following three parts, inspired by InstructionNER (Wang et al., 2022) $^{1}$ : + +Instruction The instruction is a prompt that informs the model about the current task it needs to perform. The model is expected to follow the instructions provided within the prompt and complete the task accordingly. The instruction for the main NER task is: Please extract entities and their types from the Sentence, choose entity types from Alternatives. + +Sentence The sentence is the input $X$ from which entities need to be extracted. + +$^{1}$ The templates of the auxiliary tasks and the in-context Example will be discussed in 3.3 and 3.4, respectively. + +![](images/373832b663573beaab873631fd7634757242b782f75d1348c308506edd49c34b.jpg) +Figure 1: The model architecture of our proposed 2INER. The left and right sides are the source and target sentences of the model, respectively. + +Alternatives Alternatives is a comma-separated list of entity types $(V)$ , from which the model needs to select the type used to annotate each span. Alternatives serves as both a constraint and a guide, informing the model that it may only select entity types from this list. + +To formulate the NER output in natural language, for each NER output $y_{i} = (e_{i}, s_{i})$ we use the following template to convert it to text: $s_{i}$ is $a / an$ $e_{i}$ , and we use a dot to concatenate all detected entity occurrences $y_{i}$ to form the output text. For the entity types $e_{i}$ , we use natural language to represent the entity instead of adding special tokens to the model $^{2}$ . + +# 3.3 Auxiliary Tasks + +To enhance NER performance, we need to introduce several auxiliary tasks in addition to the main task. InstructionNER (Wang et al., 2022) employed two auxiliary tasks: entity extraction and entity typing.
In this paper, we additionally introduce a new auxiliary task called type extraction. During training, the auxiliary tasks are also cast in the text-to-text format and trained alongside the main-task data.

The auxiliary tasks aim to improve NER capability from three perspectives: understanding label semantics, span recognition, and entity labeling, since NER can be decomposed into three steps: understand the relationship between the labels and the semantic meaning of the sentence, then extract the spans, and finally annotate the given spans. We discuss the configuration of the auxiliary tasks from these three perspectives.

# 3.3.1 Understanding Label Semantics

Type Extraction The goal of the type extraction task is to identify all entity types present in the original sentence. The Instruction is changed to: Please extract all entity types appeared in the Sentence. We remove the Alternatives field in this case, so there are no constraints or hints about entity types in the input text, which increases the difficulty of the task. The output template is: $e_i$ type exists in the sentence. The type extraction task only detects whether a specific entity type appears in the sentence, without focusing on specific spans or associating spans with entity types. This task helps the model understand the meaning of entity types at the overall semantic level of the sentence. We believe that once the model gains a deeper understanding of entity types, it will be able to comprehend the relationship between spans and types more accurately, enhancing both span recognition and entity labeling simultaneously.

# 3.3.2 Span Recognition

Entity Extraction The goal of the entity extraction task is to extract entity spans from the original sentence without annotating them.
The instruction is modified to: Please extract entities from the Sentence. Because the model does not need to type the spans, the Alternatives field is removed, and the output template becomes: $s_i$ is an entity word, since $e_i$ is no longer needed. Because the entity extraction task only requires the model to predict the spans regardless of their entity types, it guides the model to extract correct spans, enhancing Span-F1 and, moreover, the overall main-task F1 as well (Wang et al., 2022).

The original InstructionNER (Wang et al., 2022) only used span concatenation as the output (e.g., $s_1, s_2, s_3$). However, since the output of the main task consists of complete sentences with subject-verb-object structure, we believe it is more cohesive to follow the same pattern for the auxiliary tasks; a more structured output can also better utilize the PLM's understanding of the task.

# 3.3.3 Entity Labeling

Entity Typing The entity typing task aims to label a given span with the correct entity type. The instruction is modified to: Please type these entities according to the Sentence:. The Alternatives field and the output template are the same as in the main task. During training, the spans given in the Instruction are exactly the labeled entity spans. Since the spans are given, the model does not need to worry about the correctness of span extraction and can focus on learning to label entities accurately, enhancing the main-task NER ability.

# 3.4 In-Context Learning

In-context learning is applied to further enhance few-shot NER capability. The main idea is to append Examples at the end of the input sentence, hoping that the model can directly learn which spans correspond to which types from these Examples, without additional gradient updates.
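Pulling together the three auxiliary-task templates from Section 3.3, a generator for their (source, target) pairs might look like the sketch below. This is our illustration under stated assumptions: the function name is hypothetical, and the Alternatives field of the entity typing task is omitted for brevity:

```python
def auxiliary_example(task, sentence, entities):
    """Build one text-to-text training pair for an auxiliary task.

    entities: list of (entity_type, span_text) pairs annotated in `sentence`.
    """
    article = lambda t: "an" if t[0].lower() in "aeiou" else "a"
    if task == "type_extraction":      # Section 3.3.1: no Alternatives field
        source = ("Please extract all entity types appeared in the Sentence. "
                  f"Sentence: {sentence}")
        types = sorted({e for e, _ in entities})
        target = ". ".join(f"{e} type exists in the sentence" for e in types) + "."
    elif task == "entity_extraction":  # Section 3.3.2: spans only, no types
        source = f"Please extract entities from the Sentence. Sentence: {sentence}"
        target = ". ".join(f"{s} is an entity word" for _, s in entities) + "."
    elif task == "entity_typing":      # Section 3.3.3: spans are given in the source
        spans = ", ".join(s for _, s in entities)
        source = ("Please type these entities according to the Sentence: "
                  f"{spans}. Sentence: {sentence}")
        target = ". ".join(f"{s} is {article(e)} {e}" for e, s in entities) + "."
    else:
        raise ValueError(f"unknown auxiliary task: {task}")
    return source, target
```

Each pair is mixed into the training batches alongside the main-task data.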
Besides, the in-context examples are also presented in natural language, closely resembling the output text format, which reminds the model of the format it should generate and makes the examples easier for PLMs to understand. This similarity helps bridge the gap and facilitates the model's comprehension.

The in-context example format for NER is inspired by (Lee et al., 2022b). All examples follow the template: span is a/an entity-type. We also concatenate an additional prompt (based on the knowledge in Examples) after the Instruction to prompt the model to learn from the Examples. During training, in-context Examples are only added to the main NER task and not to the auxiliary tasks, which is discussed in detail in Analysis 5.2.

For the choice of samples in Examples, we randomly choose spans that appear in the training set together with their corresponding entity types. Since we are uncertain which entity types are present in the sentence, we provide at least one example for each entity type in the Alternatives list, and the number of samples per entity type in Examples is the same$^3$ (e.g., the MIT Movie dataset has 12 entity types; if we set the number of examples to 5, there will be 5 examples for each entity type, resulting in a total of 5×12 = 60 examples in the field).

# 3.5 Inference

At inference time, we first wrap the input sentence $X$ with the main NER task template and then feed it to 2INER to obtain the predicted output text. For the Example field, the example spans are sampled from the training support set, so the model never sees the ground truth in the Examples during evaluation, avoiding information leakage.
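The per-type sampling of in-context Examples described above might be sketched as follows (a hypothetical helper under our assumptions; drawing spans only from the training support set keeps test ground truth out of the prompt):

```python
import random

def build_examples(train_entities, alternatives, k, seed=0):
    """Sample k (span, type) demonstrations per entity type.

    train_entities: mapping entity_type -> list of spans seen in the
    training support set, so no evaluation ground truth can leak in.
    """
    rng = random.Random(seed)
    article = lambda t: "an" if t[0].lower() in "aeiou" else "a"
    lines = []
    for etype in alternatives:                # at least one per listed type
        spans = train_entities.get(etype, [])
        for span in rng.sample(spans, min(k, len(spans))):
            lines.append(f"{span} is {article(etype)} {etype}")
    return "Examples: " + ". ".join(lines) + "."
```

The returned string is appended after the Alternatives field of the main-task source text.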
After the output text is generated, a decoding strategy is applied to recover the predicted entities $(e_i, s_i)$: (1) we split the whole output text on dots to obtain individual sub-texts; (2) we split each sub-text on "is a" or "is an" if present; (3) the span is the part before "is a/an" and the entity type is the part after it. Once we obtain $(e_i, s_i)$, we check whether $s_i$ appears in the input sentence $X$ and whether $e_i$ is in the set of entity types $V$; if the check fails, the prediction is not a valid entity and is discarded. If any of the three steps fails to match, the sub-text is skipped.

# 4 Experiment

# 4.1 Dataset

We conduct NER experiments in standard and low-resource settings. For the rich-resource domain, we use CoNLL-2003 (Tjong Kim Sang and De Meulder, 2003), and for the low-resource domain, we use three datasets: MIT Movie Review, MIT Restaurant Review (Liu et al., 2013), and Airline Travel Information Systems (ATIS) (Hakkani-Tür et al., 2016), following (Wang et al., 2022; Chen et al., 2022b; Cui et al., 2021; Yan et al., 2021).

# 4.2 Implementation Settings

In the few-shot NER scenario, to guarantee that each entity type has an equal number of instances in the training set, we cannot directly sample $k$ sentences per entity type, because a single sentence may contain multiple entities, so the actual shot count would exceed $k$. Following (Wang et al., 2022), we instead apply a greedy sampling strategy (Yang and Katiyar, 2020) to sample the few-shot training set for each setting; due to the randomness of sampling, we repeat each experiment 3 times. We use T5-large as the backbone model for a fair comparison with (Wang et al., 2022). For the number of examples in the in-context Example field, we set it to 5 for the MIT Movie and MIT Restaurant datasets and 1 for ATIS by default.
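The greedy sampling constraint described above (a single sentence can contribute several entities, so naive per-type sampling overshoots $k$) can be sketched as follows. This is a simplified greedy variant of our own, not necessarily identical to the procedure of Yang and Katiyar (2020):

```python
import random

def greedy_kshot_sample(dataset, k, seed=0):
    """Greedily pick sentences until every entity type has at least k mentions.

    dataset: list of (sentence, entities) where entities is a list of
    (entity_type, span_text) pairs. Counts may exceed k, as noted in the paper.
    """
    rng = random.Random(seed)
    types = {e for _, ents in dataset for e, _ in ents}  # all types in the data
    counts = {}
    pool = list(dataset)
    rng.shuffle(pool)
    support = []
    for sent, ents in pool:
        if all(counts.get(t, 0) >= k for t in types):
            break  # every type is already covered k times
        # take the sentence only if it helps an under-represented type
        if any(counts.get(e, 0) < k for e, _ in ents):
            support.append((sent, ents))
            for e, _ in ents:
                counts[e] = counts.get(e, 0) + 1
    return support
```

Because a sentence is kept whenever any of its types is still under-represented, every type ends up covered, at the cost of some types exceeding $k$ mentions.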
We only add the in-context Example field to the main task and do not include it in the auxiliary tasks. The ratio of auxiliary tasks is set to 1.0. We set the batch size to 2/4/8 and the learning rate to $2\mathrm{e}{-5}/5\mathrm{e}{-5}$ for the 10/20/50-shot settings respectively, and set the batch size to 32 and the learning rate to $1\mathrm{e}{-4}$ for the abundant-data setting. The optimizer is Adam and the beam size is set to 2. For evaluation, we use the F1 score as the metric for NER.

The name InstructionNER in the tables means training with main-task data only, i.e., the base model, and the subscript words indicate additions to the base model: +ET, +EE, +TE, +EX mean adding Entity Typing, Entity Extraction, Type Extraction, and in-context Examples, respectively. We name InstructionNER+ET,EE,TE,EX as 2INER, which is our final model.

# 4.3 Standard NER Setting

We use the CoNLL-2003 dataset to conduct the standard NER experiment. We combine the train and validation sets as described in (Yan et al., 2021) to train the model. The results are in Table 1, which shows that even though our method mainly focuses on few-shot NER settings, it remains competitive with
| Model | F1 | Span-F1 |
| --- | --- | --- |
| (Yang et al., 2018) | 90.77 | - |
| (Ma and Hovy, 2016) | 91.21 | - |
| (Gui et al., 2020) | 92.02 | - |
| (Yamada et al., 2020)* | 94.30 | 92.40 |
| (Li et al., 2020a)† | - | 92.87 |
| (Yu et al., 2020a)‡ | - | 92.50 |
| LC-BERT | 91.73 | - |
| LC-BART | 90.60 | - |
| TemplateNER | 91.90 | - |
| BARTNER | - | 93.24 |
| LightNER | 92.93 | - |
| 2INER (InstructionNER+ET,EE,TE,EX) | 90.71 | 93.93 |
Table 1: F1 and Span-F1 (%) on the CoNLL-2003 standard NER setting. Our method is competitive with SOTA algorithms and even outperforms BARTNER (Yan et al., 2021) in Span-F1. "*" indicates training on external data. "†" indicates the reproduction by (Yan et al., 2021). "‡" indicates the reproduction with only sentence-level context by (Yan et al., 2021).

SOTA algorithms under the standard NER setting and even outperforms BARTNER (Yan et al., 2021), which is designed for rich-resource NER settings, in Span-F1. The performance of 2INER in data-abundant nested and discontinuous NER settings is reported in Appendix A.

# 4.4 Few-Shot NER Setting

Under the few-shot NER setting, we use only K-shot training samples to finetune our model; the results are in Table 2. From the table we find that: (1) Our models consistently outperform InstructionNER as well as the other baselines on all three datasets under the 10/20/50-shot settings (except 50-shot on ATIS, which is slightly lower than BARTNER). In particular, on the MIT Movie dataset our models achieve 7.33%, 6.76%, and 5.39% improvements over InstructionNER under the 10/20/50-shot settings. (2) Our 10-shot model even outperforms TemplateNER's 50-shot model by 20.73% and 7.06% on MIT Movie and MIT Restaurant respectively, which highlights the superiority and capability of our model. (3) We have the same finding as InstructionNER (Wang et al., 2022) that F1 improvements are much more significant on MIT Movie than on MIT Restaurant (7.33%/6.76%/5.39% vs. 6.86%/3.24%/3.3% under the 10/20/50-shot settings), which indicates that although MIT Movie has more entity types, the text-to-text framework and instruction tuning can better utilize pre-training knowledge, and through in-context learning the model can learn more about the relationships between entities. (4) On the ATIS dataset, the improvement of our model is less significant than on the other two datasets. We argue that this is because ATIS contains 79 entity types: even when we provide only one sample span per entity type in the in-context Example field, the average token length is 1099 with examples compared to 368 without, and the Alternatives field alone takes 327 tokens. The actual input sentence $X$ thus accounts for only 3.7% of the total token length, which makes it harder for the model to extract key information from such long inputs. Too many entity types may therefore reduce the model's improvements.

| Models | MIT Movie (10) | (20) | (50) | MIT Restaurant (10) | (20) | (50) | ATIS (10) | (20) | (50) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LC-BERT | 25.2 | 42.2 | 49.6 | 21.8 | 39.4 | 52.7 | 44.1 | 76.7 | 90.7 |
| LC-BART | 10.2 | 27.5 | 44.2 | 6.3 | 8.5 | 51.3 | 42.0 | 72.7 | 87.5 |
| TemplateNER | 37.3 | 48.5 | 52.2 | 46.0 | 57.1 | 58.7 | 71.7 | 79.4 | 92.6 |
| BARTNER* | 41.1 | 54.0 | 67.7 | 44.0 | 56.0 | 64.0 | 77.7 | 86.1 | 93.4 |
| LightNER | 41.7 | 57.8 | 73.1 | 48.5 | 58.0 | 62.0 | 76.3 | 85.3 | 92.8 |
| InstructionNER | 64.4 (±2.1) | 70.0 (±0.3) | 74.1 (±1.2) | 58.7 (±1.2) | 65.5 (±1.4) | 71.2 (±1.1) | 90.14 (±0.12)† | 91.22 (±0.19)† | 92.53 (±0.14)† |
| InstructionNER+ET,EE | 65.6 (±3.0) | 70.1 (±1.9) | 74.7 (±0.3) | 58.9 (±0.8) | 66.1 (±0.9) | 71.1 (±0.9) | 90.04 (±0.02)† | 91.46 (±0.23)† | 92.62 (±0.04)† |
| InstructionNER+EX | 72.56 (±1.01) | 74.99 (±0.27) | 78.61 (±0.37) | 64.07 (±1.25) | 68.2 (±0.11) | 74.38 (±0.19) | 89.17 (±0.2) | 91.33 (±0.05) | 92.65 (±0.18) |
| InstructionNER+TE | 72.0 (±0.25) | 76.55 (±0.2) | 80.02 (±0.26) | 65.52 (±1.35) | 68.67 (±0.95) | 73.98 (±0.27) | 90.77 (±0.6) | 91.85 (±0.05) | 92.69 (±0.1) |
| InstructionNER+ET,EE,TE,EX | 72.93 (±0.91) | 76.86 (±0.53) | 80.09 (±0.22) | 65.76 (±0.47) | 69.34 (±0.81) | 74.4 (±0.4) | 90.47 (±0.26) | 92.11 (±0.09) | 92.83 (±0.15) |

Table 2: The F1 (%) on three datasets under 10/20/50-shot settings. Bold numbers mark the best F1 across all models and numbers in brackets are standard deviations. Underlined numbers are the best results in our experiments. "†" marks the results of our reproduction. "*" marks the reproduction by InstructionNER (Wang et al., 2022).

# 4.5 Ablation Study

To determine the influence of our proposed type extraction task and of in-context examples on the model's few-shot abilities, we conduct ablation studies, shown in Figure 2. We set InstructionNER as the baseline, which trains on main-task data only without any auxiliary tasks, and then add the type extraction task (InstructionNER+TE) or in-context examples (InstructionNER+EX) on top of it to explore their influence. The results in Figure 2 show that for the 10/20/50-shot settings, the type extraction task achieves average F1 improvements of 7.21%, 4.86%, and 4.35%, and in-context examples achieve average F1 improvements of 6.76%, 3.84%, and 3.84% on the MIT Movie and MIT Restaurant datasets, indicating that both additions further enhance the model's few-shot NER abilities.

Moreover, adding the type extraction task greatly increases Span-F1 as well.
![](images/ccbd7a22ac82ef0bf7eff5a54809f4a260d406dd5983e0a34343f0d4560b51d5.jpg)
Figure 2: F1 and Span-F1 (%) on MIT Movie and MIT Restaurant across the 10/20/50-shot settings with different task combinations. The dark and light colors indicate F1 and Span-F1, respectively.

Because Span-F1 indicates the model's ability to locate spans, the results reveal that through training on the type extraction task, span recognition benefits from a deeper understanding of the labels at the overall semantic level of the sentence. This confirms the effectiveness of the three-step decomposition of NER proposed in Section 3.3, and shows that the type extraction task can simultaneously improve span recognition and entity labeling through understanding label semantics.

# 5 Analysis

# 5.1 Increasing the Example Number

In this section, we focus on how the number of examples in the in-context Example field influences model performance. We sequentially set the number of examples to 1, 3, 5, 10, and 15, train the corresponding models, and observe the change in F1 on the MIT Restaurant dataset. Here we train the model with the main task and in-context examples but without any auxiliary tasks (InstructionNER+EX). The results are in Table 3.
| Examples (InstructionNER+EX) | MIT Restaurant 20 Shot | MIT Restaurant 50 Shot |
| --- | --- | --- |
| 0 | 65.5 (±1.4) | 71.2 (±1.1) |
| 1 | 67.74 (±0.22) | 73.89 (±0.15) |
| 3 | 67.89 (±0.3) | 74.15 (±0.39) |
| 5 | 68.2 (±0.11) | 74.38 (±0.19) |
| 10 | 69.47 (±0.35) | 74.41 (±0.18) |
| 15 | 69.52 (±0.16) | 74.64 (±0.49) |
Table 3: F1 scores (%) on the MIT Restaurant dataset while varying the number of examples using InstructionNER+EX. Bold numbers indicate the best F1 and numbers in brackets are standard deviations.

As the number of examples increases, the F1 score keeps increasing, and the largest improvement occurs when going from zero examples to one example. As the number of examples grows further, F1 continues to rise but the rate of improvement gradually slows down. This suggests that with only one in-context example the model can quickly learn the specific meaning of each entity type, while additional examples provide increasingly repetitive cues, so a balance should be struck between model performance and computational cost.

# 5.2 Effect of In-Context Examples on Auxiliary Tasks

In this section, we discuss whether to add in-context examples to the auxiliary tasks. The model is 2INER (InstructionNER+ET,EE,TE,EX) and we compare two settings: adding examples only to the main task, and adding examples to the main task as well as the three auxiliary tasks. The results in Table 4 indicate that adding examples to the auxiliary tasks slightly decreases F1, presumably because the examples reduce the difficulty of the auxiliary tasks and make them too easy for the model, thereby reducing their effectiveness in aiding the main task. So adding examples only to the main task is the better approach.

# 5.3 Increasing the Shot Count

In this section, we discuss model performance under relatively abundant settings. We increase the shots to 100, 200, and 500 on the MIT Movie and MIT Restaurant datasets using 2INER (InstructionNER+ET,EE,TE,EX). As shown in Table 5, compared to InstructionNER, 2INER achieves 5.43%, 3.98%, and 3.19% F1 improvements under the 100/200/500-shot settings, respectively.
| Model (MIT Restaurant) | 10 Shot | 20 Shot | 50 Shot |
| --- | --- | --- | --- |
| 2INER, Examples on all tasks | 65.26 (±0.49) | 69.27 (±0.89) | 74.2 (±0.45) |
| 2INER, Examples only on Main-Task | 65.76 (±0.47) | 69.34 (±0.81) | 74.4 (±0.4) |
Table 4: Comparison between adding in-context examples only to the main task and to all tasks including the auxiliary tasks. Bold numbers indicate the best F1 and numbers in brackets are standard deviations.
| Models | MIT Movie (100) | (200) | (500) | MIT Restaurant (100) | (200) | (500) |
| --- | --- | --- | --- | --- | --- | --- |
| LC-BERT | 50.7 | 59.3 | 74.4 | 53.5 | 57.4 | 61.3 |
| LC-BART | 47.5 | 54.2 | 64.1 | 52.2 | 56.3 | 60.2 |
| TemplateNER | 56.3 | 62.0 | 74.9 | 60.1 | 62.8 | 65.0 |
| BARTNER* | 70.1 | 74.6 | 82.6 | 65.3 | 74.4 | 75.7 |
| LightNER | 78.0 | 80.6 | 84.8 | 70.8 | 75.5 | 80.2 |
| InstructionNER+ET,EE | 74.3 | 78.4 | 82.3 | 72.7 | 75.5 | 76.6 |
| 2INER | 81.3 | 83.54 | 86.16 | 76.57 | 78.31 | 79.11 |
Table 5: The F1 (%) under relatively abundant settings. "*" indicates the reproduction results by (Wang et al., 2022). Bold numbers indicate the best F1.

2INER also outperforms LightNER in all settings except 500-shot on MIT Restaurant, which shows that 2INER has strong NER abilities in the data-abundant scenario as well. We argue that the in-context Example field may help the model learn from more diverse samples in the abundant training set and turn general knowledge into specialized capabilities, leading to the improvement in F1.

# 6 Conclusion

In this paper, we propose 2INER for few-shot NER, combining instruction finetuning and in-context learning by converting NER into a text-to-text framework. Building on InstructionNER, we create a template that concatenates task-specific instructions, the input sentence, and entity Alternatives to make full use of pre-training knowledge. We further decompose NER into three steps and introduce an additional auxiliary task, type extraction, to help the model better understand the general semantic meaning of entity types, which improves both span recognition and entity labeling. Moreover, we apply in-context examples to let the model learn from additional contextual information, enhancing few-shot abilities. Experiments on four NER datasets demonstrate 2INER's effectiveness in the few-shot NER scenario, consistently outperforming other baselines.

# Limitations

One limitation of our work is the extensive length of the Example and Alternatives fields when there are many entity types. While incorporating in-context examples in the input can improve few-shot NER performance, the Example field becomes very long because we add at least one example for each potential entity type, especially when the Alternatives list contains numerous types. This results in smaller improvement gains and higher computational costs.
To address this issue, larger PLMs such as the recently proposed LLaMA (Touvron et al., 2023) could be explored in future research.

# Ethics Statement

In consideration of ethical concerns, we make the following statements: (1) All of our experiments are conducted on existing datasets sourced from publicly available scientific papers. (2) Our few-shot methods do not require extensive computational resources. (3) Our text generation models generate text based on existing templates, so they will not generate harmful sentences.

# References

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901.
Jiawei Chen, Qing Liu, Hongyu Lin, Xianpei Han, and Le Sun. 2022a. Few-shot named entity recognition with self-describing networks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5711-5722, Dublin, Ireland. Association for Computational Linguistics.
Xiang Chen, Lei Li, Shumin Deng, Chuanqi Tan, Changliang Xu, Fei Huang, Luo Si, Huajun Chen, and Ningyu Zhang. 2022b. LightNER: A lightweight tuning paradigm for low-resource NER via pluggable prompting. In Proceedings of the 29th International Conference on Computational Linguistics, pages 2374-2387, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Yanru Chen, Yanan Zheng, and Zhilin Yang. 2022c. Prompt-based metric learning for few-shot ner. arXiv preprint arXiv:2211.04337.

Jason P.C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics, 4:357-370.
+Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416. +Leyang Cui, Yu Wu, Jian Liu, Sen Yang, and Yue Zhang. 2021. Template-based named entity recognition using BART. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, pages 1835–1845, Online. Association for Computational Linguistics. +Leyang Cui and Yue Zhang. 2019. Hierarchically-refined label attention network for sequence labeling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4115-4128, Hong Kong, China. Association for Computational Linguistics. +Xiang Dai, Sarvnaz Karimi, Ben Hachey, and Cecile Paris. 2020. An effective transition-based model for discontinuous NER. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5860-5870, Online. Association for Computational Linguistics. +Sarkar Snigdha Sarathi Das, Arzoo Katiyar, Rebecca Passonneau, and Rui Zhang. 2022. CONTaiNER: Few-shot named entity recognition via contrastive learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6338-6353, Dublin, Ireland. Association for Computational Linguistics. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Alexander Fritzler, Varvara Logacheva, and Maksim Kretov. 2019. 
Few-shot classification in named entity recognition task. In Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, pages 993-1000. +Tao Gui, Jiacheng Ye, Qi Zhang, Zhengyan Li, Zichu Fei, Yeyun Gong, and Xuanjing Huang. 2020. Uncertainty-aware label refinement for sequence labeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2316-2326, Online. Association for Computational Linguistics. + +Dilek Hakkani-Tur, Gokhan Tur, Asli Celikyilmaz, YunNung Chen, Jianfeng Gao, Li Deng, and Ye-Yi Wang. 2016. Multi-domain joint semantic frame parsing using bi-directional rnn-lstm. In Interspeech, pages 715-719. +Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, Yuqi Huo, Jiezhong Qiu, Yuan Yao, Ao Zhang, Liang Zhang, et al. 2021. Pre-trained models: Past, present and future. AI Open, 2:225-250. +Yucheng Huang, Kai He, Yige Wang, Xianli Zhang, Tieliang Gong, Rui Mao, and Chen Li. 2022. COPNER: Contrastive learning with prompt guiding for few-shot named entity recognition. In Proceedings of the 29th International Conference on Computational Linguistics, pages 2515-2527, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. +Sarvnaz Karimi, Alejandro Metke-Jimenez, Madonna Kemp, and Chen Wang. 2015. Cadec: A corpus of adverse drug event annotations. Journal of biomedical informatics, 55:73-81. +J-D Kim, Tomoko Ohta, Yuka Tateisi, and Jun'ichi Tsujii. 2003. Genia corpus—a semantically annotated corpus for bio-textmining. Bioinformatics, 19(suppl_1):i180-i182. +Dong-Ho Lee, Akshen Kadakia, Kangmin Tan, Mahak Agarwal, Xinyu Feng, Takashi Shibuya, Ryosuke Mitani, Toshiyuki Sekiya, Jay Pujara, and Xiang Ren. 2022a. Good examples make a faster learner: Simple demonstration-based learning for low-resource NER. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2687-2700, Dublin, Ireland. 
Association for Computational Linguistics. +Dong-Ho Lee, Akshen Kadakia, Kangmin Tan, Mahak Agarwal, Xinyu Feng, Takashi Shibuya, Ryosuke Mitani, Toshiyuki Sekiya, Jay Pujara, and Xiang Ren. 2022b. Good examples make a faster learner: Simple demonstration-based learning for low-resource NER. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2687-2700, Dublin, Ireland. Association for Computational Linguistics. +Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics. +Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020a. A unified MRC framework for named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5849-5859, Online. Association for Computational Linguistics. + +Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020b. A unified MRC framework for named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5849-5859, Online. Association for Computational Linguistics. +Jingjing Liu, Panupong Pasupat, Scott Cyphers, and Jim Glass. 2013. Asgard: A portable architecture for multilingual dialogue systems. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 8386-8390. IEEE. +Kun Liu, Yao Fu, Chuanqi Tan, Mosha Chen, Ningyu Zhang, Songfang Huang, and Sheng Gao. 2021. Noisy-labeled NER with confidence estimation. 
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3437-3445, Online. Association for Computational Linguistics. +Yijin Liu, Fandong Meng, Jinchao Zhang, Jinan Xu, Yufeng Chen, and Jie Zhou. 2019. GCDT: A global context enhanced deep transition architecture for sequence labeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2431-2441, Florence, Italy. Association for Computational Linguistics. +Jie Ma, Miguel Ballesteros, Srikanth Doss, Rishita Anubhai, Sunil Mallya, Yaser Al-Onaizan, and Dan Roth. 2022a. Label semantics for few shot named entity recognition. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1956-1971, Dublin, Ireland. Association for Computational Linguistics. +Tingting Ma, Huiqiang Jiang, Qianhui Wu, Tiejun Zhao, and Chin-Yew Lin. 2022b. Decomposed meta-learning for few-shot named entity recognition. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1584-1596, Dublin, Ireland. Association for Computational Linguistics. +Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064-1074, Berlin, Germany. Association for Computational Linguistics. +Alejandro Metke-Jimenez and Sarvnaz Karimi. 2016. Concept identification and normalisation for adverse drug event discovery in medical forums. In BMDID@ ISWC. +Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485-5551. 
+ +Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. 2021. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207. +Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. Advances in neural information processing systems, 30. +Emma Strubell, Patrick Verga, David Belanger, and Andrew McCallum. 2017. Fast and accurate entity recognition with iterated dilated convolutions. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2670-2680, Copenhagen, Denmark. Association for Computational Linguistics. +Buzhou Tang, Jianglu Hu, Xiaolong Wang, and Qingcai Chen. 2018. Recognizing continuous and discontinuous adverse drug reaction mentions from social media using lstm-crf. Wireless Communications & Mobile Computing (Online), 2018. +Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147. +Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. +Jue Wang, Lidan Shou, Ke Chen, and Gang Chen. 2020. Pyramid: A layered model for nested named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5918-5928, Online. Association for Computational Linguistics. +Liwen Wang, Rumei Li, Yang Yan, Yuanmeng Yan, Sirui Wang, Wei Wu, and Weiran Xu. 2022. Instructionner: A multi-task instruction-based generative framework for few-shot ner. arXiv preprint arXiv:2203.03903. 
+Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652. +Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto. 2020. LUKE: Deep contextualized entity representations with entity-aware self-attention. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6442-6454, Online. Association for Computational Linguistics. + +Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, and Xipeng Qiu. 2021. A unified generative framework for various NER subtasks. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5808-5822, Online. Association for Computational Linguistics. +Jie Yang, Shuailong Liang, and Yue Zhang. 2018. Design challenges and misconceptions in neural sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3879-3889, Santa Fe, New Mexico, USA. Association for Computational Linguistics. +Yi Yang and Arzoo Katiyar. 2020. Simple and effective few-shot named entity recognition with structured nearest neighbor learning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6365-6375, Online. Association for Computational Linguistics. +Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020a. Named entity recognition as dependency parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6470-6476, Online. Association for Computational Linguistics. +Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020b. Named entity recognition as dependency parsing. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6470-6476, Online. Association for Computational Linguistics. +Ningyu Zhang, Shumin Deng, Zhen Bi, Haiyang Yu, Jiacheng Yang, Mosha Chen, Fei Huang, Wei Zhang, and Huajun Chen. 2020. OpenUE: An open toolkit of universal extraction from text. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 1-8, Online. Association for Computational Linguistics. + +# A Appendix + +In this section, we discuss the remaining two NER settings: nested NER and discontinuous NER. Because the text-to-text structure of our proposed method can be easily adapted to all three NER settings, it yields a unified structure for solving NER problems. Here, we mainly discuss the standard NER scenario with abundant data. + +For data-abundant nested NER, we conduct experiments on Genia (Kim et al., 2003). Following BARTNER (Yan et al., 2021), we use five entity types and split the data into train, dev, and test sets with a ratio of 8.1:0.9:1.0. The results are in Table 6. + +For data-abundant discontinuous NER, we conduct experiments on CADEC (Karimi et al., 2015). + +
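The unified text-to-text adaptation can be illustrated with a minimal sketch. The serialization format below is our own illustration, not the paper's exact prompt template; it shows why one target format covers all three settings:

```python
def serialize_entities(entities):
    """Serialize (span, type) pairs into a flat target string for a
    text-to-text NER model. Nested entities are simply overlapping
    spans, and a discontinuous entity's span is the concatenation of
    its fragments, so the same format covers all three NER settings."""
    return "; ".join(f"{etype}: {span}" for span, etype in entities)

# Nested example: "Shanghai" lies inside a longer organization span.
target = serialize_entities([
    ("Shanghai Jiaotong University", "organization"),
    ("Shanghai", "location"),
])
```

The model is then trained to generate such target strings directly, with no pointer indices or tagging schema required.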
| Model (Genia) | P | R | F |
|---|---|---|---|
| (Li et al., 2020b) [BERT-Large]† | 81.25 | 76.36 | 78.72 |
| (Yu et al., 2020b) [BERT-Large]† | 79.43 | 78.32 | 78.87 |
| (Wang et al., 2020) [BERT-Large] | 79.45 | 78.94 | 79.19 |
| BARTNER (Yan et al., 2021) | 78.87 | 79.6 | 79.23 |
| 2INER | 82.9 | 80.74 | 81.81 |
+ +Table 6: Span-F1 (%) on Genia in the data-abundant nested NER setting. The "†" marks results reproduced by (Yan et al., 2021). + +
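Span-F1, the metric reported in these tables, counts a prediction as correct only when both the entity span and its type match the gold annotation exactly. A minimal sketch of the computation (function name and data layout are our own, not the paper's evaluation script):

```python
def span_f1(pred_sentences, gold_sentences):
    """Micro-averaged span-level precision/recall/F1.

    Each sentence is a collection of (entity_span, entity_type) pairs;
    a prediction is correct only if both span and type match exactly.
    """
    tp = n_pred = n_gold = 0
    for pred, gold in zip(pred_sentences, gold_sentences):
        pred, gold = set(pred), set(gold)
        tp += len(pred & gold)       # exact (span, type) matches
        n_pred += len(pred)
        n_gold += len(gold)
    p = tp / n_pred if n_pred else 0.0
    r = tp / n_gold if n_gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```

Libraries such as seqeval implement the same entity-level matching for sequence-labeling outputs.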
| Model (CADEC) | P | R | F |
|---|---|---|---|
| (Metke-Jimenez and Karimi, 2016) | 64.4 | 56.5 | 60.2 |
| (Tang et al., 2018) | 67.8 | 64.9 | 66.3 |
| (Dai et al., 2020) [ELMo] | 68.9 | 69.0 | 69.0 |
| BARTNER (Yan et al., 2021) | 70.08 | 71.21 | 70.64 |
| 2INER | 71.18 | 75.26 | 73.16 |
+ +Table 7: Span-F1 (%) on CADEC in the data-abundant discontinuous NER setting. + +Following BARTNER (Yan et al., 2021), we consider only the Adverse Drug Events (ADEs) entities, since they are the only ones that include discontinuous mentions. The results are in Table 7. + +The experiment settings are the same as for flat NER. We use T5-large as the backbone model and report span-level F1. The results show that in the data-abundant nested and discontinuous NER settings, our proposed method substantially outperforms BARTNER (Yan et al., 2021) and other SOTA methods, which demonstrates that our method has the potential to handle different NER settings in a unified framework. \ No newline at end of file diff --git a/2023/2INER_ Instructive and In-Context Learning on Few-Shot Named Entity Recognition/images.zip b/2023/2INER_ Instructive and In-Context Learning on Few-Shot Named Entity Recognition/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..78577cc091d400fe9dba0b21f401c56e35e648ab --- /dev/null +++ b/2023/2INER_ Instructive and In-Context Learning on Few-Shot Named Entity Recognition/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:72329fbca927d0b1c91a87f8ff7fc29fdbb67d3f8cb397d2c420db0cf9108635 +size 330837 diff --git a/2023/2INER_ Instructive and In-Context Learning on Few-Shot Named Entity Recognition/layout.json b/2023/2INER_ Instructive and In-Context Learning on Few-Shot Named Entity Recognition/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..04258421561e6e0fba322ce1fb9c029c319b08c7 --- /dev/null +++ b/2023/2INER_ Instructive and In-Context Learning on Few-Shot Named Entity Recognition/layout.json @@ -0,0 +1,8059 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 74, + 68, + 521, + 103 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 74, + 68, + 521, + 103 + ], + "spans": [ + { + "bbox": [ + 74, + 68, + 521, + 103 + ], + "type": "text", + "content": 
"2INER: Instructive and In-Context Learning on Few-Shot Named Entity Recognition" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 154, + 109, + 442, + 124 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 154, + 109, + 442, + 124 + ], + "spans": [ + { + "bbox": [ + 154, + 109, + 442, + 124 + ], + "type": "text", + "content": "Jiasheng Zhang" + }, + { + "bbox": [ + 154, + 109, + 442, + 124 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 154, + 109, + 442, + 124 + ], + "type": "text", + "content": " Xikai Liu" + }, + { + "bbox": [ + 154, + 109, + 442, + 124 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 154, + 109, + 442, + 124 + ], + "type": "text", + "content": " Xinyi Lai" + }, + { + "bbox": [ + 154, + 109, + 442, + 124 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 154, + 109, + 442, + 124 + ], + "type": "text", + "content": " Yan Gao" + }, + { + "bbox": [ + 154, + 109, + 442, + 124 + ], + "type": "inline_equation", + "content": "^{2}" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 192, + 125, + 403, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 192, + 125, + 403, + 138 + ], + "spans": [ + { + "bbox": [ + 192, + 125, + 403, + 138 + ], + "type": "text", + "content": "Shusen Wang² Yao Hu² Yiqing LIN" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 110, + 138, + 487, + 153 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 138, + 487, + 153 + ], + "spans": [ + { + "bbox": [ + 110, + 138, + 487, + 153 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 110, + 138, + 487, + 153 + ], + "type": "text", + "content": "Shanghai Jiaotong University " + }, + { + "bbox": [ + 110, + 138, + 487, + 153 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 110, + 138, + 487, + 153 + ], + "type": "text", + "content": "Xiaohongshu Inc. 
" + }, + { + "bbox": [ + 110, + 138, + 487, + 153 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 110, + 138, + 487, + 153 + ], + "type": "text", + "content": "Chongqing University" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 198, + 153, + 398, + 167 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 198, + 153, + 398, + 167 + ], + "spans": [ + { + "bbox": [ + 198, + 153, + 398, + 167 + ], + "type": "text", + "content": "{js.zhang,yiqing.lin}@sjtu.edu.cn" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 169, + 167, + 428, + 180 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 169, + 167, + 428, + 180 + ], + "spans": [ + { + "bbox": [ + 169, + 167, + 428, + 180 + ], + "type": "text", + "content": "{xikai,yadun,haxian,xiahou}@xiaohongshu.com" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 240, + 181, + 355, + 194 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 240, + 181, + 355, + 194 + ], + "spans": [ + { + "bbox": [ + 240, + 181, + 355, + 194 + ], + "type": "text", + "content": "laixinyi@cqu.edu.cn" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 155, + 212, + 202, + 225 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 155, + 212, + 202, + 225 + ], + "spans": [ + { + "bbox": [ + 155, + 212, + 202, + 225 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 84, + 235, + 274, + 509 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 235, + 274, + 509 + ], + "spans": [ + { + "bbox": [ + 84, + 235, + 274, + 509 + ], + "type": "text", + "content": "Prompt-based learning has emerged as a powerful technique in natural language processing (NLP) due to its ability to leverage pre-training knowledge for downstream few-shot tasks. In this paper, we propose 2INER, a novel text-to-text framework for Few-Shot Named Entity Recognition (NER) tasks. 
Our approach employs instruction finetuning based on InstructionNER (Wang et al., 2022) to enable the model to effectively comprehend and process task-specific instructions, including both main and auxiliary tasks. We also introduce a new auxiliary task, called Type Extraction, to enhance the model's understanding of entity types in the overall semantic context of a sentence. To facilitate in-context learning, we concatenate examples to the input, enabling the model to learn from additional contextual information. Experimental results on four datasets demonstrate that our approach outperforms existing Few-Shot NER methods and remains competitive with state-of-the-art standard NER algorithms." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 68, + 521, + 155, + 534 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 521, + 155, + 534 + ], + "spans": [ + { + "bbox": [ + 68, + 521, + 155, + 534 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 543, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 543, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 543, + 291, + 772 + ], + "type": "text", + "content": "Named Entity Recognition (NER) has been a fundamental task of Natural Language Processing (NLP) and there are three types of sub-tasks in NER: flat NER (Tjong Kim Sang and De Meulder, 2003), nested NER (Kim et al., 2003) and discontinuous NER (Karimi et al., 2015). All three sub-tasks aim to locate named entities, extract the entity spans, and classify each span into pre-defined label categories. In terms of the flat NER which is the main focus of this paper, it can be formulated as a sequence labeling paradigm by assigning labels to each token in the sentence through token-classification models. 
The dominant methods include combining Pre-trained Language Models(PLMs) (Devlin et al., 2019) with label-specific classifier (LC) (Strubell et al., 2017; Cui and Zhang, 2019). However, the fixed shape of the output LC" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 213, + 526, + 306 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 213, + 526, + 306 + ], + "spans": [ + { + "bbox": [ + 302, + 213, + 526, + 306 + ], + "type": "text", + "content": "layer necessitates a consistent label set for both the training and testing data, which poses a challenge for knowledge transfer. Therefore, these models need to be trained from scratch to adapt to a new domain with a different label set, highlighting the requirement for a large amount of data for these methods." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 310, + 526, + 593 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 310, + 526, + 593 + ], + "spans": [ + { + "bbox": [ + 302, + 310, + 526, + 593 + ], + "type": "text", + "content": "Due to the high cost of sequence labeling annotation in real-world scenarios, labeled data for NER is often limited. As a result, few-shot NER has gained significant attention due to its practical applications. Meanwhile, applying prompt-base learning (Han et al., 2021) on PLMs is an effective way to solve few-shot problems (Brown et al., 2020). PLMs can learn a lot of knowledge regarding human languages by training on a large amount of self-supervised corpus. In order to explore the potential of PLMs, prompt-based learning reformulate the downstream tasks to text-to-text framework with additional prompt indicating task descriptions (e.g. instruction fine-tuning (Wei et al., 2021; Chung et al., 2022; Sanh et al., 2021)). Through this approach, the model can effectively leverage the knowledge present in PLMs to enhance downstream skills without the need for additional large amounts of downstream data. 
This enables the model to achieve remarkable performance in few-shot settings." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 597, + 527, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 597, + 527, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 597, + 527, + 772 + ], + "type": "text", + "content": "Recently, many prompt-based NER methods have emerged to address the limitations of traditional few-shot NER approaches. TemplateNER (Cui et al., 2021) treats original sentence and predicted template filled by entity spans as source and target sequence, respectively, but all candidate spans must be enumerated during inference, leading to a high computational cost. BARTNER (Yan et al., 2021) proposed a pointer mechanism to unify all NER sub-tasks into one sequence-to-sequence (seq2seq) framework. BARTNER utilizes the raw sentence as input and outputs pointer index and tag index which represent the location
"index": 16 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 293, + 316 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 293, + 316 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 293, + 316 + ], + "type": "text", + "content": "of the span and the corresponding label index in the category, respectively. To further adapt BARTNER for few-shot settings, LightNER (Chen et al., 2022b) proposed a lightweight tuning approach for low-resource settings by adding a unified learnable verbalizer and incorporating learnable parameters into the self-attention layers. Nonetheless, due to the fact that pointer mechanism only outputs the indexes of entities and labels, the model encounters challenges in effectively leveraging the capabilities of PLMs to directly comprehend the semantic meaning between entities and labels. Thus instead of using a pointer mechanism, InstructionNER (Wang et al., 2022) directly generates entity spans and types in the target sequence and applies instruction fine-tuning with two auxiliary tasks to further mining the capabilities of PLMs, which leads to significant few-shot improvements." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 318, + 291, + 496 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 318, + 291, + 496 + ], + "spans": [ + { + "bbox": [ + 67, + 318, + 291, + 496 + ], + "type": "text", + "content": "In terms of the auxiliary tasks in InstructionNER, they propose two auxiliary tasks from two perspectives: span recognition (Entity Extraction) and entity labeling (Entity Typing). However, we argue that NER can be further divided into three parts: 1) understand the relationship between the label and semantic meaning of the sentence. 2) extract the spans. 3) annotate the given spans. 
We believe that both span recognition and entity labeling can benefit from having a deeper understanding of the label semantics. Therefore, we propose a new auxiliary task, called Type Extraction, to help the model to acquire this ability." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 499, + 291, + 688 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 499, + 291, + 688 + ], + "spans": [ + { + "bbox": [ + 67, + 499, + 291, + 688 + ], + "type": "text", + "content": "Meanwhile, none of the above methods take the additional external knowledge into account. Current literature on utilizing external knowledge in NER involves (Chen et al., 2022a) and (Lee et al., 2022a). SDNet (Chen et al., 2022a) proposes a self-describing mechanism to leverage external resources by self-describing both entity types and mentions, while (Lee et al., 2022a) uses a demonstration-based method by incorporating examples into the input but without a text-to-text framework. Therefore, to the best of our knowledge, there is currently no existing literature that combines in-context external knowledge with instruction fine-tuning for few-shot NER." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 692, + 291, + 774 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 692, + 291, + 774 + ], + "spans": [ + { + "bbox": [ + 67, + 692, + 291, + 774 + ], + "type": "text", + "content": "In this paper, we propose 2INER (Instructive and In-Context Learning on Few-Shot NER). We build upon the work of InstructionNER by incorporating in-context examples and a novel auxiliary task. 
Specifically, we first reformulate the NER tasks into a text-to-text framework and then employ T5" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 301, + 71, + 526, + 340 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 71, + 526, + 340 + ], + "spans": [ + { + "bbox": [ + 301, + 71, + 526, + 340 + ], + "type": "text", + "content": "(Raffel et al., 2020) for natural language generation. In terms of the source sentence, we use instructions to distinguish between tasks by giving a comprehensive task description and include an alternative field to identify the entity type that requires detection. Moreover, we suggest incorporating in-context demonstration examples into the source sentence to enable the model to learn from external knowledge. For the target sentence, we use natural language to represent entity spans and types instead of pointer mechanism. In addition to the two auxiliary tasks used in InstructionNER, we propose a new task called type extraction to further explore the potential of PLMs to understand label semantics. Type Extraction task requires the model to identify all the entity types presented in the original sentence and learn to understand the meaning of entity types at the overall semantic level of the sentence. Our contributions can be summarized as follows:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 302, + 343, + 527, + 587 + ], + "type": "list", + "angle": 0, + "index": 8, + "blocks": [ + { + "bbox": [ + 302, + 343, + 527, + 423 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 343, + 527, + 423 + ], + "spans": [ + { + "bbox": [ + 302, + 343, + 527, + 423 + ], + "type": "text", + "content": "- To utilize external knowledge, we apply demonstration-based in-context learning examples to the instruction template. The in-context examples enable the model to directly learn which spans correspond to which types from these additional information, leading to better few-shot abilities." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 424, + 527, + 532 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 424, + 527, + 532 + ], + "spans": [ + { + "bbox": [ + 302, + 424, + 527, + 532 + ], + "type": "text", + "content": "- We expand the NER capabilities by dividing them into three components instead of two. We propose a novel auxiliary task for instruction fine-tuning, called type extraction, to address the existing gap. It enables the model to understand the meaning of the entity types at the overall semantic level of the sentence, which improves span recognition and entity labeling abilities." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 533, + 527, + 587 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 533, + 527, + 587 + ], + "spans": [ + { + "bbox": [ + 302, + 533, + 527, + 587 + ], + "type": "text", + "content": "- We conduct extensive experiments on four datasets, demonstrating that 2INER outperforms existing few-shot NER methods and remains competitive with SOTA standard NER algorithms."
+ } + ] + } + ], + "index": 7 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 302, + 597, + 396, + 610 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 597, + 396, + 610 + ], + "spans": [ + { + "bbox": [ + 302, + 597, + 396, + 610 + ], + "type": "text", + "content": "2 Related Work" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 620, + 455, + 633 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 620, + 455, + 633 + ], + "spans": [ + { + "bbox": [ + 302, + 620, + 455, + 633 + ], + "type": "text", + "content": "2.1 Named Entity Recognition" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 301, + 637, + 526, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 637, + 526, + 773 + ], + "spans": [ + { + "bbox": [ + 301, + 637, + 526, + 773 + ], + "type": "text", + "content": "Currently, NER tasks can be divided into flat NER (Tjong Kim Sang and De Meulder, 2003), nested NER (Kim et al., 2003) and discontinuous NER (Karimi et al., 2015), while in this paper, we mainly focus on the flat NER task. 
The current dominant method to solve flat NER is using token-level classification by turning it into a sequence labeling problem (Chiu and Nichols, 2016; Liu et al., 2019; Zhang et al., 2020; Liu et al., 2021), which applies a text encoder and CRF (Ma and Hovy, 2016) in" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "text", + "content": "3941" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 291, + 166 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 291, + 166 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 291, + 166 + ], + "type": "text", + "content": "sequence. Recently, BARTNER (Yan et al., 2021) formulates all three NER tasks into a text-to-text framework to solve them concurrently. BARTNER generates entity span sequences by a pointer-based model based on BART (Lewis et al., 2020) so that special tagging schema design or span post-processing is no longer needed." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 176, + 208, + 190 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 176, + 208, + 190 + ], + "spans": [ + { + "bbox": [ + 67, + 176, + 208, + 190 + ], + "type": "text", + "content": "2.2 Prompt-based Learning" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 194, + 291, + 383 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 194, + 291, + 383 + ], + "spans": [ + { + "bbox": [ + 67, + 194, + 291, + 383 + ], + "type": "text", + "content": "With the emergence of GPT-3 (Brown et al., 2020), prompt-based learning has gained increasing attention. 
It can better stimulate the knowledge model learned in pre-training stages and integrate different tasks together compared to the paradigm of fine-tuning separate model for each task, especially in few-shot settings (Han et al., 2021). To push prompt-based learning further, instruction-based learning (Wei et al., 2021) is proposed to fine-tune the PLMs on a collection of task descriptions which enables the model to better follow human instructions and generalize to unseen tasks with better zero-shot and few-shot abilities (Chung et al., 2022; Sanh et al., 2021)." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 393, + 210, + 406 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 393, + 210, + 406 + ], + "spans": [ + { + "bbox": [ + 67, + 393, + 210, + 406 + ], + "type": "text", + "content": "2.3 Few-Shot NER Methods" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 411, + 291, + 697 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 411, + 291, + 697 + ], + "spans": [ + { + "bbox": [ + 67, + 411, + 291, + 697 + ], + "type": "text", + "content": "One line of work in few-shot NER is to apply contrastive learning to assign the labels by searching for the closest token (Das et al., 2022; Chen et al., 2022c), prototype (Snell et al., 2017; Fritzler et al., 2019; Ma et al., 2022b) or label semantic (Ma et al., 2022a; Huang et al., 2022) in the support set. Another line of researches is prompt-based learning using a unified text-to-text framework to make full use of the PLMs abilities. (Cui et al., 2021) applies span classification using BART and (Chen et al., 2022b; Yan et al., 2021) use a pointer mechanism to generate indexes of spans and types. (Wang et al., 2022) utilizes instruction fine-tuning and two auxiliary tasks to train T5. Meanwhile, to apply external knowledge to the model, (Chen et al., 2022a) introduces a self-describing mechanism and (Lee et al., 2022a) uses a demonstration-based method. 
Therefore, our methods introduce in-context learning via instruction fine-tuning together to achieve better few-shot NER abilities, which haven't been fully discussed yet in seq2seq NER settings." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 706, + 157, + 719 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 706, + 157, + 719 + ], + "spans": [ + { + "bbox": [ + 67, + 706, + 157, + 719 + ], + "type": "text", + "content": "3 Methodology" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 728, + 168, + 740 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 728, + 168, + 740 + ], + "spans": [ + { + "bbox": [ + 67, + 728, + 168, + 740 + ], + "type": "text", + "content": "3.1 NER Definition" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 746, + 291, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 746, + 291, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 746, + 291, + 773 + ], + "type": "text", + "content": "NER aims to predict all spans in the input sentence as well as the entity types associated with the spans." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "text", + "content": "The standard flat NER can be formulated as follows, given the input sentence containing " + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "text", + "content": " tokens " + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "inline_equation", + "content": "X = [x_{1}, x_{2}, \\ldots, x_{n}]" + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "text", + "content": ", the model has to predict the target sentence " + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "inline_equation", + "content": "Y = [l_{1}, l_{2}, \\ldots, l_{n}]" + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "text", + "content": ". We use " + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "inline_equation", + "content": "V_{BIO}" + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "text", + "content": " to denote the BIO label set, so " + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "inline_equation", + "content": "\\forall l_{i}, l_{i} \\in V_{BIO}" + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "text", + "content": ". 
While in the sequence-to-sequence modeling scenario, the input sentence is still " + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "inline_equation", + "content": "X" + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "text", + "content": " but instead of predicting " + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "inline_equation", + "content": "Y" + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "text", + "content": ", the model predicts each entity " + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "inline_equation", + "content": "y_{i} = (e_{i}, s_{i})" + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "text", + "content": " directly, where " + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "inline_equation", + "content": "s_{i}" + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "text", + "content": " represents the entity span in " + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "inline_equation", + "content": "X" + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "text", + "content": ". And " + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "inline_equation", + "content": "e_{i} \\in V" + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "text", + "content": " represents the entity type of " + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "inline_equation", + "content": "s_{i}" + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "inline_equation", + "content": "V" + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "text", + "content": " is the set of entity types." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 220, + 526, + 301 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 220, + 526, + 301 + ], + "spans": [ + { + "bbox": [ + 302, + 220, + 526, + 301 + ], + "type": "text", + "content": "More specifically, we use " + }, + { + "bbox": [ + 302, + 220, + 526, + 301 + ], + "type": "inline_equation", + "content": "l" + }, + { + "bbox": [ + 302, + 220, + 526, + 301 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 302, + 220, + 526, + 301 + ], + "type": "inline_equation", + "content": "r" + }, + { + "bbox": [ + 302, + 220, + 526, + 301 + ], + "type": "text", + "content": " to indicate the left and right boundary of an entity span in " + }, + { + "bbox": [ + 302, + 220, + 526, + 301 + ], + "type": "inline_equation", + "content": "X" + }, + { + "bbox": [ + 302, + 220, + 526, + 301 + ], + "type": "text", + "content": ", so " + }, + { + "bbox": [ + 302, + 220, + 526, + 301 + ], + "type": "inline_equation", + "content": "s_i" + }, + { + "bbox": [ + 302, + 220, + 526, + 301 + ], + "type": "text", + "content": " can be simplified as " + }, + { + "bbox": [ + 302, + 220, + 526, + 301 + ], + "type": "inline_equation", + "content": "s_i = x_{l:r}" + }, + { + "bbox": [ + 302, + 220, + 526, + 301 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 302, + 220, + 526, + 301 + ], + "type": "inline_equation", + "content": "x_{l:r} = [x_l, x_{l+1}, \\dots, x_r]" + }, + { + "bbox": [ + 302, + 220, + 526, + 301 + ], + "type": "text", + "content": ". 
Therefore, the NER model has to predict each " + }, + { + "bbox": [ + 302, + 220, + 526, + 301 + ], + "type": "inline_equation", + "content": "y_i" + }, + { + "bbox": [ + 302, + 220, + 526, + 301 + ], + "type": "text", + "content": " in " + }, + { + "bbox": [ + 302, + 220, + 526, + 301 + ], + "type": "inline_equation", + "content": "X" + }, + { + "bbox": [ + 302, + 220, + 526, + 301 + ], + "type": "text", + "content": ", indicating that the span " + }, + { + "bbox": [ + 302, + 220, + 526, + 301 + ], + "type": "inline_equation", + "content": "s_i" + }, + { + "bbox": [ + 302, + 220, + 526, + 301 + ], + "type": "text", + "content": " belongs to the " + }, + { + "bbox": [ + 302, + 220, + 526, + 301 + ], + "type": "inline_equation", + "content": "e_i" + }, + { + "bbox": [ + 302, + 220, + 526, + 301 + ], + "type": "text", + "content": " entity type." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 310, + 489, + 322 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 310, + 489, + 322 + ], + "spans": [ + { + "bbox": [ + 302, + 310, + 489, + 322 + ], + "type": "text", + "content": "3.2 Convert NER to Text-to-text Task" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 327, + 526, + 556 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 327, + 526, + 556 + ], + "spans": [ + { + "bbox": [ + 302, + 327, + 526, + 556 + ], + "type": "text", + "content": "Using language models like T5 (Raffel et al., 2020) to solve most NLP tasks in a unified text-to-text framework can not only fully utilize the knowledge model learned in the pre-training stage but also simplify the training by using same data format, same loss and same model architecture. Moreover, Compared to using simple prompts, using instruction finetuning can further explore the capabilities of the model (Chung et al., 2022; Sanh et al., 2021). 
Besides, utilizing in-context learning can further enhance the model's few-shot capabilities in general (Brown et al., 2020) and NER abilities specifically (Lee et al., 2022b). Therefore, we transform the NER task into a text-to-text format and employ instruction finetuning and in-context learning to unleash the model's few-shot capabilities, as shown in Figure 1. The backbone we used is T5." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 557, + 525, + 597 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 557, + 525, + 597 + ], + "spans": [ + { + "bbox": [ + 302, + 557, + 525, + 597 + ], + "type": "text", + "content": "The basic text-to-text format of the main NER task consists of the following three parts, which is inspired by InstructionNER (Wang et al., 2022) " + }, + { + "bbox": [ + 302, + 557, + 525, + 597 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 302, + 557, + 525, + 597 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 605, + 526, + 712 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 605, + 526, + 712 + ], + "spans": [ + { + "bbox": [ + 302, + 605, + 526, + 712 + ], + "type": "text", + "content": "Instruction The instruction is a prompt that informs the model about the current task it needs to perform. The model is expected to follow the instructions provided within the prompt and complete the task accordingly. The instruction for the main NER task is: Please extract entities and their types from the Sentence, choose entity types from Alternatives." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 720, + 525, + 746 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 720, + 525, + 746 + ], + "spans": [ + { + "bbox": [ + 302, + 720, + 525, + 746 + ], + "type": "text", + "content": "Sentence The sentence is the input " + }, + { + "bbox": [ + 302, + 720, + 525, + 746 + ], + "type": "inline_equation", + "content": "X" + }, + { + "bbox": [ + 302, + 720, + 525, + 746 + ], + "type": "text", + "content": " from which entities need to be extracted." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 751, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 525, + 772 + ], + "type": "text", + "content": "1The templates of auxiliary tasks and in-context Example will be discussed in 3.3 and 3.4 respectively." + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 780, + 310, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 310, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 310, + 791 + ], + "type": "text", + "content": "3942" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 83, + 77, + 524, + 229 + ], + "blocks": [ + { + "bbox": [ + 83, + 77, + 524, + 229 + ], + "lines": [ + { + "bbox": [ + 83, + 77, + 524, + 229 + ], + "spans": [ + { + "bbox": [ + 83, + 77, + 524, + 229 + ], + "type": "image", + "image_path": "373832b663573beaab873631fd7634757242b782f75d1348c308506edd49c34b.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 66, + 243, + 525, + 269 + ], + "lines": [ + { + "bbox": [ + 66, + 243, + 525, + 269 + ], + "spans": [ + { + "bbox": [ + 66, + 243, + 525, + 269 + ], + "type": "text", + "content": "Figure 1: The model architecture of 
our proposed 2INER. The left and right sides are the source and target sentences of the model, respectively." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 66, + 290, + 289, + 383 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 290, + 289, + 383 + ], + "spans": [ + { + "bbox": [ + 66, + 290, + 289, + 383 + ], + "type": "text", + "content": "Alternatives Alternatives is a list of entity types " + }, + { + "bbox": [ + 66, + 290, + 289, + 383 + ], + "type": "inline_equation", + "content": "(V)" + }, + { + "bbox": [ + 66, + 290, + 289, + 383 + ], + "type": "text", + "content": " split by commas, from which the model needs to select the corresponding type to annotate the corresponding span. Alternatives serves as a constraint and a guiding factor, informing the model that it can only select entity types from within this list." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 398, + 290, + 507 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 398, + 290, + 507 + ], + "spans": [ + { + "bbox": [ + 67, + 398, + 290, + 507 + ], + "type": "text", + "content": "In order to formulate the NER output as natural language, for each NER output " + }, + { + "bbox": [ + 67, + 398, + 290, + 507 + ], + "type": "inline_equation", + "content": "y_{i} = (e_{i}, s_{i})" + }, + { + "bbox": [ + 67, + 398, + 290, + 507 + ], + "type": "text", + "content": ", we use the following template to convert it to text: " + }, + { + "bbox": [ + 67, + 398, + 290, + 507 + ], + "type": "inline_equation", + "content": "s_{i}" + }, + { + "bbox": [ + 67, + 398, + 290, + 507 + ], + "type": "text", + "content": " is " + }, + { + "bbox": [ + 67, + 398, + 290, + 507 + ], + "type": "inline_equation", + "content": "a / an" + }, + { + "bbox": [ + 67, + 398, + 290, + 507 + ], + "type": "inline_equation", + "content": "e_{i}" + }, + { + "bbox": [ + 67, + 398, + 290, + 507 + ], + "type": 
"text", + "content": ", and we use a dot to concatenate all detected entity occurrences " + }, + { + "bbox": [ + 67, + 398, + 290, + 507 + ], + "type": "inline_equation", + "content": "y_{i}" + }, + { + "bbox": [ + 67, + 398, + 290, + 507 + ], + "type": "text", + "content": " to form the output text. In terms of the entity types " + }, + { + "bbox": [ + 67, + 398, + 290, + 507 + ], + "type": "inline_equation", + "content": "e_{i}" + }, + { + "bbox": [ + 67, + 398, + 290, + 507 + ], + "type": "text", + "content": ", we use natural language to represent the entity instead of adding special tokens to the model2." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 515, + 169, + 528 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 515, + 169, + 528 + ], + "spans": [ + { + "bbox": [ + 67, + 515, + 169, + 528 + ], + "type": "text", + "content": "3.3 Auxiliary Tasks" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 66, + 532, + 290, + 652 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 532, + 290, + 652 + ], + "spans": [ + { + "bbox": [ + 66, + 532, + 290, + 652 + ], + "type": "text", + "content": "To enhance the NER performance, in addition to the main task, we need to introduce several auxiliary tasks. In InstructionNER (Wang et al., 2022), they employed two auxiliary tasks: entity extraction and entity typing. Moreover, in this paper, a new auxiliary task called type extraction will be introduced. During training, the auxiliary tasks will also be in the form of text-to-text data, trained alongside the main task data." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 654, + 290, + 735 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 654, + 290, + 735 + ], + "spans": [ + { + "bbox": [ + 67, + 654, + 290, + 735 + ], + "type": "text", + "content": "The auxiliary tasks primarily aim to improve NER capabilities from three perspectives: understanding label semantics, span recognition, and entity labeling, since NER can be decomposed into three steps: understand the relationship between the label and the semantic meaning of the sentence, then extract" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 290, + 525, + 330 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 290, + 525, + 330 + ], + "spans": [ + { + "bbox": [ + 302, + 290, + 525, + 330 + ], + "type": "text", + "content": "the spans and finally annotate the given spans. We will discuss the configuration of the auxiliary task in detail from these three perspectives." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 339, + 464, + 351 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 339, + 464, + 351 + ], + "spans": [ + { + "bbox": [ + 302, + 339, + 464, + 351 + ], + "type": "text", + "content": "3.3.1 Understand label semantic" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 301, + 355, + 525, + 626 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 355, + 525, + 626 + ], + "spans": [ + { + "bbox": [ + 301, + 355, + 525, + 626 + ], + "type": "text", + "content": "Type Extraction The goal of the Type Extraction task is to identify all the entity types present in the original sentence. The Instruction is changed to: Please extract all entity types appeared in the Sentence. We will remove the Alternatives in this case, which means that there will be no constraints or hints regarding entity types in the input text, aiming to increase the difficulty of the task. 
And the output template is: " + }, + { + "bbox": [ + 301, + 355, + 525, + 626 + ], + "type": "inline_equation", + "content": "e_i" + }, + { + "bbox": [ + 301, + 355, + 525, + 626 + ], + "type": "text", + "content": " type exists in the sentence. The Type Extraction task involves detecting whether a specific entity type appears in the sentence, without focusing on specific spans or associating spans with entity types. This task will assist the model in understanding the meaning of entity types at the overall semantic level of the sentence. We believe that once the model gains a deeper understanding of entity types, it will be able to comprehend the relationship between spans and types more accurately. As a result, it will enhance both span recognition and entity labeling capabilities simultaneously." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 635, + 418, + 648 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 635, + 418, + 648 + ], + "spans": [ + { + "bbox": [ + 302, + 635, + 418, + 648 + ], + "type": "text", + "content": "3.3.2 Span recognition" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 301, + 651, + 526, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 651, + 526, + 772 + ], + "spans": [ + { + "bbox": [ + 301, + 651, + 526, + 772 + ], + "type": "text", + "content": "Entity Extraction The goal of the entity extraction task is to extract useful entity spans from the original sentence without the need for annotating the extracted spans. The instruction has been modified to: Please extract entities from the Sentence. Because the model doesn't need to type spans, the Alternatives field is deleted. 
And the output template has been changed to: " + }, + { + "bbox": [ + 301, + 651, + 526, + 772 + ], + "type": "inline_equation", + "content": "s_i" + }, + { + "bbox": [ + 301, + 651, + 526, + 772 + ], + "type": "text", + "content": " is an entity word, since " + }, + { + "bbox": [ + 301, + 651, + 526, + 772 + ], + "type": "inline_equation", + "content": "e_i" + }, + { + "bbox": [ + 301, + 651, + 526, + 772 + ], + "type": "text", + "content": " is no longer needed. Because the entity ex" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 67, + 740, + 290, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 740, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 740, + 290, + 772 + ], + "type": "text", + "content": "2.e.g. \"Character_Name\" will be represented as \"Character Name\" instead of adding a special token named \"Character_Name\"" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "text", + "content": "3943" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 290, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 290, + 138 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 290, + 138 + ], + "type": "text", + "content": "traction task only requires the model to predict useful spans regardless of the associated entity types, this task will guide the model to extract correct spans, enhancing the span-F1 accuracy and, moreover, the overall main task F1 as well (Wang et al., 2022)." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 139, + 290, + 259 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 139, + 290, + 259 + ], + "spans": [ + { + "bbox": [ + 67, + 139, + 290, + 259 + ], + "type": "text", + "content": "The original InstructionNER (Wang et al., 2022) paper only employed span concatenation as the output (e.g. " + }, + { + "bbox": [ + 67, + 139, + 290, + 259 + ], + "type": "inline_equation", + "content": "s_1, s_2, s_3" + }, + { + "bbox": [ + 67, + 139, + 290, + 259 + ], + "type": "text", + "content": "). However, we believe that since the output of the main task consists of complete sentences with subject-verb-object structures, it would be more cohesive to follow the same pattern for the auxiliary tasks. And more structured output can fully utilize the PLMs' understanding of the task as well." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 269, + 177, + 282 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 269, + 177, + 282 + ], + "spans": [ + { + "bbox": [ + 67, + 269, + 177, + 282 + ], + "type": "text", + "content": "3.3.3 Entity Labeling" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 285, + 291, + 461 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 285, + 291, + 461 + ], + "spans": [ + { + "bbox": [ + 67, + 285, + 291, + 461 + ], + "type": "text", + "content": "Entity Typing The entity typing task aims to type the given span with the correct label. The instruction has been modified to: Please type these entities according to the Sentence: . The Alternatives prompt and output template are the same as those in the main task. During training, the given spans in the Instruction are the exact entity spans that are labeled. 
In the entity typing task, since the spans are given, the model doesn't need to worry about the correctness of the extracted spans, so the model can focus more on learning how to label the entity accurately, enhancing the main task NER ability." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 471, + 193, + 484 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 471, + 193, + 484 + ], + "spans": [ + { + "bbox": [ + 67, + 471, + 193, + 484 + ], + "type": "text", + "content": "3.4 In-Context Learning" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 489, + 291, + 677 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 489, + 291, + 677 + ], + "spans": [ + { + "bbox": [ + 67, + 489, + 291, + 677 + ], + "type": "text", + "content": "In-context learning will be applied to further enhance few-shot NER capabilities. The main approach of in-context learning is to append Examples at the end of the input sentence, hoping that the model can directly learn which spans correspond to which types from these Examples, without the need for additional gradient updates. Besides, the in-context examples are also presented in a natural language format, which closely resembles the output text format, serving as a reminder for the model about the desired format it should generate and making it easier for PLMs to understand. This similarity helps bridge the gap and facilitates the model's comprehension." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 678, + 291, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 678, + 291, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 678, + 291, + 773 + ], + "type": "text", + "content": "The in-context example format in NER is inspired by (Lee et al., 2022b). All examples in this context follow the template: span is a/an entity-type. 
And we will concatenate an additional prompt (based on the knowledge in Examples) after the Instruction to encourage the model to learn from the Examples. During the training stage, in-context Examples" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 71, + 524, + 111 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 524, + 111 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 524, + 111 + ], + "type": "text", + "content": "will only be added to the main NER task, and there will be no Examples added to the auxiliary tasks, which will be discussed in detail in Analysis 5.2." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 112, + 526, + 287 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 112, + 526, + 287 + ], + "spans": [ + { + "bbox": [ + 302, + 112, + 526, + 287 + ], + "type": "text", + "content": "In terms of the choices of the samples in Examples, we randomly choose some spans that appeared in the train set as well as their corresponding entity types to create Examples. Since we are uncertain about the entity types present in the sentence, we will provide at least one example for each entity type in the Alternatives list within the Examples. The number of samples of each entity type in Examples will also be the same " + }, + { + "bbox": [ + 302, + 112, + 526, + 287 + ], + "type": "inline_equation", + "content": "^3" + }, + { + "bbox": [ + 302, + 112, + 526, + 287 + ], + "type": "text", + "content": " (e.g. in terms of the MIT Movie dataset, there are 12 entity types. If we set the number of examples to 5, there will be 5 examples for each entity type, resulting in a total of " + }, + { + "bbox": [ + 302, + 112, + 526, + 287 + ], + "type": "inline_equation", + "content": "5^{*}12" + }, + { + "bbox": [ + 302, + 112, + 526, + 287 + ], + "type": "text", + "content": " examples in the field)." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 298, + 375, + 310 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 298, + 375, + 310 + ], + "spans": [ + { + "bbox": [ + 302, + 298, + 375, + 310 + ], + "type": "text", + "content": "3.5 Inference" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 317, + 525, + 587 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 317, + 525, + 587 + ], + "spans": [ + { + "bbox": [ + 302, + 317, + 525, + 587 + ], + "type": "text", + "content": "During inference time, we first use the template of the main NER task to wrap the input sentence " + }, + { + "bbox": [ + 302, + 317, + 525, + 587 + ], + "type": "inline_equation", + "content": "X" + }, + { + "bbox": [ + 302, + 317, + 525, + 587 + ], + "type": "text", + "content": ", and then feed the sentence to 2INER to get the predicted output text. In terms of the Example field, the example spans are sampled from the training support set, so the model won't see the ground truth in the Examples during evaluation, avoiding information leakage. After the output text is generated, a decoding strategy will be applied to get the predicted entity " + }, + { + "bbox": [ + 302, + 317, + 525, + 587 + ], + "type": "inline_equation", + "content": "(e_i, s_i)" + }, + { + "bbox": [ + 302, + 317, + 525, + 587 + ], + "type": "text", + "content": ": (1) We use a dot to split the whole output text to obtain individual sub-texts. (2) We use \"is a\" or \"is an\" to split each sub-text if either can be found. (3) The span is the part before \"is a/an\" and the entity type is the part after it. 
Once we get the " + }, + { + "bbox": [ + 302, + 317, + 525, + 587 + ], + "type": "inline_equation", + "content": "(e_i, s_i)" + }, + { + "bbox": [ + 302, + 317, + 525, + 587 + ], + "type": "text", + "content": ", we will check whether " + }, + { + "bbox": [ + 302, + 317, + 525, + 587 + ], + "type": "inline_equation", + "content": "s_i" + }, + { + "bbox": [ + 302, + 317, + 525, + 587 + ], + "type": "text", + "content": " is in the input sentence " + }, + { + "bbox": [ + 302, + 317, + 525, + 587 + ], + "type": "inline_equation", + "content": "X" + }, + { + "bbox": [ + 302, + 317, + 525, + 587 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 302, + 317, + 525, + 587 + ], + "type": "inline_equation", + "content": "e_i" + }, + { + "bbox": [ + 302, + 317, + 525, + 587 + ], + "type": "text", + "content": " is in the set of entity types " + }, + { + "bbox": [ + 302, + 317, + 525, + 587 + ], + "type": "inline_equation", + "content": "V" + }, + { + "bbox": [ + 302, + 317, + 525, + 587 + ], + "type": "text", + "content": ". If it doesn't pass the check, then it isn't a valid entity and will be deleted. And if any of the three steps results in a match failure, then the sub-text will be skipped." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 597, + 386, + 611 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 597, + 386, + 611 + ], + "spans": [ + { + "bbox": [ + 302, + 597, + 386, + 611 + ], + "type": "text", + "content": "4 Experiment" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 620, + 367, + 632 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 620, + 367, + 632 + ], + "spans": [ + { + "bbox": [ + 302, + 620, + 367, + 632 + ], + "type": "text", + "content": "4.1 Dataset" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 638, + 526, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 638, + 526, + 733 + ], + "spans": [ + { + "bbox": [ + 302, + 638, + 526, + 733 + ], + "type": "text", + "content": "We conduct NER experiments in standard and low-resource settings. For the rich-resource domain, we use CoNLL-2003 (Tjong Kim Sang and De Meulder, 2003) and for the low-resource domain, we use three datasets: MIT Movie Review, MIT Restaurant Review (Liu et al., 2013) and Airline Travel Information Systems (ATIS) (Hakkani" 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "text", + "content": "3944" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 291, + 111 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 291, + 111 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 291, + 111 + ], + "type": "text", + "content": "Tür et al., 2016), following (Wang et al., 2022; Chen et al., 2022b; Cui et al., 2021; Yan et al., 2021)." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 123, + 208, + 136 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 123, + 208, + 136 + ], + "spans": [ + { + "bbox": [ + 67, + 123, + 208, + 136 + ], + "type": "text", + "content": "4.2 Implementation settings" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 142, + 291, + 465 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 142, + 291, + 465 + ], + "spans": [ + { + "bbox": [ + 69, + 142, + 291, + 465 + ], + "type": "text", + "content": "In the Few-Shot NER scenario, in order to guarantee that each entity type has an equal number of instances in the training set, we can't sample " + }, + { + "bbox": [ + 69, + 142, + 291, + 465 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 69, + 142, + 291, + 465 + ], + "type": "text", + "content": " sentences for each entity type directly because a single sentence may contain multiple entities, so the actual shot will exceed " + }, + { + "bbox": [ + 69, + 142, + 291, + 465 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 69, + 142, + 291, + 465 + ], + "type": "text", + "content": ". 
Following (Wang et al., 2022), we will apply a greedy sampling strategy (Yang and Katiyar, 2020) instead, to sample the few-shot training set for each setting, and due to the randomness of the sampling, we repeat each experiment 3 times. We use T5-large as the backbone model for a fair comparison with (Wang et al., 2022). In terms of the number of examples in the in-context Example field, we set the number to 5 for the MIT Movie and MIT Restaurant datasets, and 1 for the ATIS dataset by default. We only add the in-context Example field to the main task and don't include it in the auxiliary tasks. The ratio of auxiliary tasks is set to " + }, + { + "bbox": [ + 69, + 142, + 291, + 465 + ], + "type": "inline_equation", + "content": "1.0" + }, + { + "bbox": [ + 69, + 142, + 291, + 465 + ], + "type": "text", + "content": ". We set the batch size to " + }, + { + "bbox": [ + 69, + 142, + 291, + 465 + ], + "type": "inline_equation", + "content": "2/4/8" + }, + { + "bbox": [ + 69, + 142, + 291, + 465 + ], + "type": "text", + "content": ", learning-rate to " + }, + { + "bbox": [ + 69, + 142, + 291, + 465 + ], + "type": "inline_equation", + "content": "2\\mathrm{e}-5/5\\mathrm{e}-5" + }, + { + "bbox": [ + 69, + 142, + 291, + 465 + ], + "type": "text", + "content": " for " + }, + { + "bbox": [ + 69, + 142, + 291, + 465 + ], + "type": "inline_equation", + "content": "10/20/50" + }, + { + "bbox": [ + 69, + 142, + 291, + 465 + ], + "type": "text", + "content": " Shot settings respectively, and set batch size to 32, learning-rate to " + }, + { + "bbox": [ + 69, + 142, + 291, + 465 + ], + "type": "inline_equation", + "content": "1\\mathrm{e}-4" + }, + { + "bbox": [ + 69, + 142, + 291, + 465 + ], + "type": "text", + "content": " for the abundant data setting. The optimizer is Adam and the beam size is set to 2. For evaluation, we use the F1 score as the metric for NER." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 66, + 467, + 290, + 589 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 467, + 290, + 589 + ], + "spans": [ + { + "bbox": [ + 66, + 467, + 290, + 589 + ], + "type": "text", + "content": "The name InstructionNER in the tables means training with main-task data only, indicating the base model, and the subscript words in the tables indicate additions to the base model: +ET, +EE, +TE, +EX mean adding Entity Typing, Entity Extraction, Type Extraction, and in-context examples, respectively. And we name InstructionNER+ET,EE,TE,EX as 2INER, which is our final model." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 600, + 201, + 613 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 600, + 201, + 613 + ], + "spans": [ + { + "bbox": [ + 67, + 600, + 201, + 613 + ], + "type": "text", + "content": "4.3 Standard NER Setting" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 618, + 290, + 700 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 618, + 290, + 700 + ], + "spans": [ + { + "bbox": [ + 67, + 618, + 290, + 700 + ], + "type": "text", + "content": "We use the CoNLL-2003 dataset to conduct the standard NER experiment. We combine the train and validation sets as described in (Yan et al., 2021) to train the model. The result is in Table 1, which shows that even though our method mainly focuses on few-shot NER settings, it remains competitive with" + } + ] + } + ], + "index": 5 + }, + { + "type": "table", + "bbox": [ + 304, + 68, + 527, + 206 + ], + "blocks": [ + { + "bbox": [ + 304, + 68, + 527, + 206 + ], + "lines": [ + { + "bbox": [ + 304, + 68, + 527, + 206 + ], + "spans": [ + { + "bbox": [ + 304, + 68, + 527, + 206 + ], + "type": "table", + "html": "
<table><tr><td>Model</td><td>F1</td><td>Span-F1</td></tr>
<tr><td>(Yang et al., 2018)</td><td>90.77</td><td>-</td></tr>
<tr><td>(Ma and Hovy, 2016)</td><td>91.21</td><td>-</td></tr>
<tr><td>(Gui et al., 2020)</td><td>92.02</td><td>-</td></tr>
<tr><td>(Yamada et al., 2020)*</td><td>94.30</td><td>92.40</td></tr>
<tr><td>(Li et al., 2020a)†</td><td>-</td><td>92.87</td></tr>
<tr><td>(Yu et al., 2020a)‡</td><td>-</td><td>92.50</td></tr>
<tr><td>LC-BERT</td><td>91.73</td><td>-</td></tr>
<tr><td>LC-BART</td><td>90.60</td><td>-</td></tr>
<tr><td>TemplateNER</td><td>91.90</td><td>-</td></tr>
<tr><td>BARTNER</td><td>-</td><td>93.24</td></tr>
<tr><td>LightNER</td><td>92.93</td><td>-</td></tr>
<tr><td>2INER (InstructionNER+ET,EE,TE,EX)</td><td>90.71</td><td>93.93</td></tr></table>
", + "image_path": "40d453eddc62f1ffaa3e37262e4d8b138dda7d3e43144b071e0a0090ab6cdbc6.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_body" + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 213, + 526, + 298 + ], + "lines": [ + { + "bbox": [ + 302, + 213, + 526, + 298 + ], + "spans": [ + { + "bbox": [ + 302, + 213, + 526, + 298 + ], + "type": "text", + "content": "Table 1: F1 and Span-F1 (%) on the CoNLL-2003 Standard NER setting. Our method is competitive with SOTA algorithms and even outperforms BARTNER (Yan et al., 2021) in span-F1. \" * \" indicates training on external data. \"†\" indicates the reproduction by (Yan et al., 2021). \"‡\" indicates the reproduction with only the sentence-level context by (Yan et al., 2021)." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 302, + 321, + 525, + 402 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 321, + 525, + 402 + ], + "spans": [ + { + "bbox": [ + 302, + 321, + 525, + 402 + ], + "type": "text", + "content": "SOTA algorithms under the standard NER setting and even outperforms BARTNER (Yan et al., 2021), which is designed for rich-resource NER settings, in span-F1. The performances of 2INER in data-abundant nested and discontinuous NER settings are in Appendix A." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 415, + 436, + 428 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 415, + 436, + 428 + ], + "spans": [ + { + "bbox": [ + 302, + 415, + 436, + 428 + ], + "type": "text", + "content": "4.4 Few-Shot NER Setting" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 301, + 435, + 527, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 435, + 527, + 773 + ], + "spans": [ + { + "bbox": [ + 301, + 435, + 527, + 773 + ], + "type": "text", + "content": "Under the Few-Shot NER setting, we only use K-Shot training samples to finetune our model, and the results are in Table 2. 
According to the table, we can find that: (1) Our models consistently outperform InstructionNER as well as other baselines on all three datasets under 10/20/50 Shot settings (except 50Shot in ATIS, which is slightly lower than BARTNER). Especially in the MIT Movie dataset, our models have " + }, + { + "bbox": [ + 301, + 435, + 527, + 773 + ], + "type": "inline_equation", + "content": "7.33\\%" + }, + { + "bbox": [ + 301, + 435, + 527, + 773 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 301, + 435, + 527, + 773 + ], + "type": "inline_equation", + "content": "6.76\\%" + }, + { + "bbox": [ + 301, + 435, + 527, + 773 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 301, + 435, + 527, + 773 + ], + "type": "inline_equation", + "content": "5.39\\%" + }, + { + "bbox": [ + 301, + 435, + 527, + 773 + ], + "type": "text", + "content": " improvements compared to InstructionNER under 10/20/50 settings. (2) Our 10Shot model even outperforms TemplateNER's 50Shot model by " + }, + { + "bbox": [ + 301, + 435, + 527, + 773 + ], + "type": "inline_equation", + "content": "20.73\\%" + }, + { + "bbox": [ + 301, + 435, + 527, + 773 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 301, + 435, + 527, + 773 + ], + "type": "inline_equation", + "content": "7.06\\%" + }, + { + "bbox": [ + 301, + 435, + 527, + 773 + ], + "type": "text", + "content": " in MIT Movie and MIT Restaurant respectively, which highlights the superiority and capability of our model. (3) We have the same finding as InstructionNER (Wang et al., 2022) that F1 improvements are much more significant on MIT Movie than on MIT Restaurant (" + }, + { + "bbox": [ + 301, + 435, + 527, + 773 + ], + "type": "inline_equation", + "content": "7.33\\% / 6.76\\% / 5.39\\%" + }, + { + "bbox": [ + 301, + 435, + 527, + 773 + ], + "type": "text", + "content": " v.s. 
" + }, + { + "bbox": [ + 301, + 435, + 527, + 773 + ], + "type": "inline_equation", + "content": "6.86\\% / 3.24\\% / 3.3\\%" + }, + { + "bbox": [ + 301, + 435, + 527, + 773 + ], + "type": "text", + "content": " under 10/20/50 Shot settings), which indicates that although MIT Movie has more entity types, text-to-text framework and instruction-tuning can better utilize pre-training knowledge, and through in-context learning, the model can learn more about the relationships between entities. (4) In ATIS dataset, the improve" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 80, + 709, + 227, + 721 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 709, + 227, + 721 + ], + "spans": [ + { + "bbox": [ + 80, + 709, + 227, + 721 + ], + "type": "text", + "content": "4https://huggingface.co/t5-large" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 69, + 720, + 289, + 741 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 720, + 289, + 741 + ], + "spans": [ + { + "bbox": [ + 69, + 720, + 289, + 741 + ], + "type": "text", + "content": "ATIS has 79 entity types so we set the number to 1 to avoid excessively long token lengths." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 69, + 741, + 289, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 741, + 289, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 741, + 289, + 772 + ], + "type": "text", + "content": "The data size ratio between main task and each auxiliary tasks. 1.0 means that each sample will be extended into 4 samples: one for main task, one for EE, ET, TE, respectively." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "text", + "content": "3945" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 69, + 68, + 534, + 184 + ], + "blocks": [ + { + "bbox": [ + 69, + 68, + 534, + 184 + ], + "lines": [ + { + "bbox": [ + 69, + 68, + 534, + 184 + ], + "spans": [ + { + "bbox": [ + 69, + 68, + 534, + 184 + ], + "type": "table", + "html": "
<tr><td rowspan=\"2\">Models</td><td colspan=\"3\">MIT Movie</td><td colspan=\"3\">MIT Restaurant</td><td colspan=\"3\">ATIS</td></tr>
<tr><td>10</td><td>20</td><td>50</td><td>10</td><td>20</td><td>50</td><td>10</td><td>20</td><td>50</td></tr>
<tr><td>LC-BERT</td><td>25.2</td><td>42.2</td><td>49.6</td><td>21.8</td><td>39.4</td><td>52.7</td><td>44.1</td><td>76.7</td><td>90.7</td></tr>
<tr><td>LC-BART</td><td>10.2</td><td>27.5</td><td>44.2</td><td>6.3</td><td>8.5</td><td>51.3</td><td>42.0</td><td>72.7</td><td>87.5</td></tr>
<tr><td>TemplateNER</td><td>37.3</td><td>48.5</td><td>52.2</td><td>46.0</td><td>57.1</td><td>58.7</td><td>71.7</td><td>79.4</td><td>92.6</td></tr>
<tr><td>BARTNER*</td><td>41.1</td><td>54.0</td><td>67.7</td><td>44.0</td><td>56.0</td><td>64.0</td><td>77.7</td><td>86.1</td><td>93.4</td></tr>
<tr><td>LightNER</td><td>41.7</td><td>57.8</td><td>73.1</td><td>48.5</td><td>58.0</td><td>62.0</td><td>76.3</td><td>85.3</td><td>92.8</td></tr>
<tr><td>InstructionNER</td><td>64.4 (±2.1)</td><td>70.0 (±0.3)</td><td>74.1 (±1.2)</td><td>58.7 (±1.2)</td><td>65.5 (±1.4)</td><td>71.2 (±1.1)</td><td>90.14 (±0.12)†</td><td>91.22 (±0.19)†</td><td>92.53 (±0.14)†</td></tr>
<tr><td>InstructionNER+ET,EE</td><td>65.6 (±3.0)</td><td>70.1 (±1.9)</td><td>74.7 (±0.3)</td><td>58.9 (±0.8)</td><td>66.1 (±0.9)</td><td>71.1 (±0.9)</td><td>90.04 (±0.02)†</td><td>91.46 (±0.23)†</td><td>92.62 (±0.04)†</td></tr>
<tr><td>InstructionNER+EX</td><td>72.56 (±1.01)</td><td>74.99 (±0.27)</td><td>78.61 (±0.37)</td><td>64.07 (±1.25)</td><td>68.2 (±0.11)</td><td>74.38 (±0.19)</td><td>89.17 (±0.2)</td><td>91.33 (±0.05)</td><td>92.65 (±0.18)</td></tr>
<tr><td>InstructionNER+TE</td><td>72.0 (±0.25)</td><td>76.55 (±0.2)</td><td>80.02 (±0.26)</td><td>65.52 (±1.35)</td><td>68.67 (±0.95)</td><td>73.98 (±0.27)</td><td>90.77 (±0.6)</td><td>91.85 (±0.05)</td><td>92.69 (±0.1)</td></tr>
<tr><td>InstructionNER+ET,EE,TE,EX</td><td>72.93 (±0.91)</td><td>76.86 (±0.53)</td><td>80.09 (±0.22)</td><td>65.76 (±0.47)</td><td>69.34 (±0.81)</td><td>74.4 (±0.4)</td><td>90.47 (±0.26)</td><td>92.11 (±0.09)</td><td>92.83 (±0.15)</td></tr>
", + "image_path": "28f1fbe800713b94355e78ec56f8f504b6928eb426379cc9871af2b667ed6bb4.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 191, + 526, + 240 + ], + "lines": [ + { + "bbox": [ + 67, + 191, + 526, + 240 + ], + "spans": [ + { + "bbox": [ + 67, + 191, + 526, + 240 + ], + "type": "text", + "content": "Table 2: The F1(\\%) on three dataset under 10/20/50 Shot settings. The bold number means the best F1 across all models and the numbers in brackets means the standard deviation. The underline numbers mean the best results in our experiments. The \"+\" numbers mean the results of our reproduction. \"* means the reproduction by InstructionNER (Wang et al., 2022)." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 67, + 261, + 291, + 423 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 261, + 291, + 423 + ], + "spans": [ + { + "bbox": [ + 67, + 261, + 291, + 423 + ], + "type": "text", + "content": "ment of our model is less significant compared to other two datasets. We argue that this is because ATIS contains 79 entity types and even if we only provide one sample span for each entity types in in-context Example field, the average token length is 1099 compared to 368 with or without examples, where the token length of the Alternative filed is 327. So the actual input Sentence " + }, + { + "bbox": [ + 67, + 261, + 291, + 423 + ], + "type": "inline_equation", + "content": "X" + }, + { + "bbox": [ + 67, + 261, + 291, + 423 + ], + "type": "text", + "content": " only accounts for " + }, + { + "bbox": [ + 67, + 261, + 291, + 423 + ], + "type": "inline_equation", + "content": "3.7\\%" + }, + { + "bbox": [ + 67, + 261, + 291, + 423 + ], + "type": "text", + "content": " of the total token length, which increases the difficulty for the model to extract key information from lengthy sentences. 
Thus, too many entity types may reduce the improvements our model can achieve." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 433, + 167, + 445 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 433, + 167, + 445 + ], + "spans": [ + { + "bbox": [ + 67, + 433, + 167, + 445 + ], + "type": "text", + "content": "4.5 Ablation Study" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 449, + 290, + 692 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 449, + 290, + 692 + ], + "spans": [ + { + "bbox": [ + 67, + 449, + 290, + 692 + ], + "type": "text", + "content": "To examine the influence of our proposed type extraction task and of in-context examples on the model's few-shot abilities, we conduct ablation studies, summarized in Figure 2. The results indicate that adding the type extraction task and in-context examples further enhances the model's few-shot NER abilities. We use InstructionNER as the baseline, which trains only on main-task data without any auxiliary tasks. We then add the type extraction task (InstructionNER+TE) or in-context examples (InstructionNER+EX) to the baseline to isolate their respective contributions. 
The results in Figure 2 show that, under the 10/20/50 Shot few-shot NER settings, the type extraction task yields average improvements of " }, + { + "bbox": [ + 67, + 449, + 290, + 692 + ], + "type": "inline_equation", + "content": "7.21\\%" + }, + { + "bbox": [ + 67, + 449, + 290, + 692 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 67, + 449, + 290, + 692 + ], + "type": "inline_equation", + "content": "4.86\\%" + }, + { + "bbox": [ + 67, + 449, + 290, + 692 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 67, + 449, + 290, + 692 + ], + "type": "inline_equation", + "content": "4.35\\%" + }, + { + "bbox": [ + 67, + 449, + 290, + 692 + ], + "type": "text", + "content": " F1 and in-context examples yield average improvements of " + }, + { + "bbox": [ + 67, + 449, + 290, + 692 + ], + "type": "inline_equation", + "content": "6.76\\%" + }, + { + "bbox": [ + 67, + 449, + 290, + 692 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 67, + 449, + 290, + 692 + ], + "type": "inline_equation", + "content": "3.84\\%" + }, + { + "bbox": [ + 67, + 449, + 290, + 692 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 67, + 449, + 290, + 692 + ], + "type": "inline_equation", + "content": "3.84\\%" + }, + { + "bbox": [ + 67, + 449, + 290, + 692 + ], + "type": "text", + "content": " F1 on the MIT Movie and MIT Restaurant datasets." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 693, + 290, + 735 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 693, + 290, + 735 + ], + "spans": [ + { + "bbox": [ + 67, + 693, + 290, + 735 + ], + "type": "text", + "content": "Moreover, adding the type extraction task greatly increases Span-F1 as well. 
Because Span-F1 indicates the model's ability to locate" + } + ] + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 307, + 261, + 521, + 396 + ], + "blocks": [ + { + "bbox": [ + 307, + 261, + 521, + 396 + ], + "lines": [ + { + "bbox": [ + 307, + 261, + 521, + 396 + ], + "spans": [ + { + "bbox": [ + 307, + 261, + 521, + 396 + ], + "type": "image", + "image_path": "ccbd7a22ac82ef0bf7eff5a54809f4a260d406dd5983e0a34343f0d4560b51d5.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 404, + 525, + 454 + ], + "lines": [ + { + "bbox": [ + 302, + 404, + 525, + 454 + ], + "spans": [ + { + "bbox": [ + 302, + 404, + 525, + 454 + ], + "type": "text", + "content": "Figure 2: F1 and Span-F1 " + }, + { + "bbox": [ + 302, + 404, + 525, + 454 + ], + "type": "inline_equation", + "content": "(\\%)" + }, + { + "bbox": [ + 302, + 404, + 525, + 454 + ], + "type": "text", + "content": " on MIT Movie and MIT Restaurant under 10/20/50 Shot settings with different task combinations. The dark and light colors indicate F1 and Span-F1, respectively." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 476, + 527, + 597 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 476, + 527, + 597 + ], + "spans": [ + { + "bbox": [ + 302, + 476, + 527, + 597 + ], + "type": "text", + "content": "spans, these results reveal that training on the type extraction task benefits span recognition by giving the model a deeper understanding of the labels at the overall semantic level of the sentence. This supports the effectiveness of the three-step decomposition of NER proposed in Section 3.3, and shows that the type extraction task can simultaneously improve span recognition and entity labeling by exploiting label semantics."
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 609, + 368, + 623 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 609, + 368, + 623 + ], + "spans": [ + { + "bbox": [ + 302, + 609, + 368, + 623 + ], + "type": "text", + "content": "5 Analysis" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 632, + 456, + 646 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 632, + 456, + 646 + ], + "spans": [ + { + "bbox": [ + 302, + 632, + 456, + 646 + ], + "type": "text", + "content": "5.1 Increase Example Number" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 651, + 526, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 651, + 526, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 651, + 526, + 772 + ], + "type": "text", + "content": "In this section, we focus on how the number of examples in the in-context Example field influences model performance. We vary the number of examples over 1, 3, 5, 10, and 15, training a corresponding model for each setting, and observe the change in F1 on the MIT Restaurant dataset. Here we train our model with the main task and in-context examples but without any auxiliary tasks (InstructionNER+EX). The results are in Table 3."
+ } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 67, + 740, + 290, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 740, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 740, + 290, + 772 + ], + "type": "text", + "content": "7We try to use special-tokens to represent the entity types, but the F1 is slightly lower than without using special-tokens and the proportion of " + }, + { + "bbox": [ + 67, + 740, + 290, + 772 + ], + "type": "inline_equation", + "content": "X" + }, + { + "bbox": [ + 67, + 740, + 290, + 772 + ], + "type": "text", + "content": " to the total number of tokens is " + }, + { + "bbox": [ + 67, + 740, + 290, + 772 + ], + "type": "inline_equation", + "content": "4.5\\%" + }, + { + "bbox": [ + 67, + 740, + 290, + 772 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 286, + 781, + 310, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 310, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 310, + 791 + ], + "type": "text", + "content": "3946" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 82, + 68, + 276, + 164 + ], + "blocks": [ + { + "bbox": [ + 82, + 68, + 276, + 164 + ], + "lines": [ + { + "bbox": [ + 82, + 68, + 276, + 164 + ], + "spans": [ + { + "bbox": [ + 82, + 68, + 276, + 164 + ], + "type": "table", + "html": "
<tr><td rowspan=\"2\">InstructionNER+EX Examples</td><td colspan=\"2\">MIT Restaurant</td></tr>
<tr><td>20 Shot</td><td>50 Shot</td></tr>
<tr><td>0</td><td>65.5 (±1.4)</td><td>71.2 (±1.1)</td></tr>
<tr><td>1</td><td>67.74 (±0.22)</td><td>73.89 (±0.15)</td></tr>
<tr><td>3</td><td>67.89 (±0.3)</td><td>74.15 (±0.39)</td></tr>
<tr><td>5</td><td>68.2 (±0.11)</td><td>74.38 (±0.19)</td></tr>
<tr><td>10</td><td>69.47 (±0.35)</td><td>74.41 (±0.18)</td></tr>
<tr><td>15</td><td>69.52 (±0.16)</td><td>74.64 (±0.49)</td></tr>
", + "image_path": "efc06f9375437155a196c6ed767da44b29109efda6cf4cb5025ddf0b9e98f199.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 254, + 291, + 417 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 254, + 291, + 417 + ], + "spans": [ + { + "bbox": [ + 67, + 254, + 291, + 417 + ], + "type": "text", + "content": "As the number of examples increases, F1 score continues to increase and the largest improvement in F1 score occurs when going from zero examples to one example. As the number of examples increases further, the F1 will continue to increase but the rate of improvement gradually slows down. This suggests that when only one in-context example is provided, the model can quickly learn the specific meanings of each entity type from the example. While more examples may lead to repetitive cues to the model so a balance should be made between model performance and computational cost." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 428, + 289, + 454 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 428, + 289, + 454 + ], + "spans": [ + { + "bbox": [ + 67, + 428, + 289, + 454 + ], + "type": "text", + "content": "5.2 Effect of In-Context Example on Auxiliary task" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 460, + 291, + 635 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 460, + 291, + 635 + ], + "spans": [ + { + "bbox": [ + 67, + 460, + 291, + 635 + ], + "type": "text", + "content": "In this section, we will discuss whether to add in-context examples on auxiliary task. The model is 2INER (InstructionNER+ET,EE,TE,EX) and we will compare two settings: add examples only on main-task, add examples on main-task as well as three auxiliary tasks. The results in Table 4 indicate that adding examples on auxiliary task will slightly decrease the F1 performance. 
We believe this is because adding examples to the auxiliary tasks reduces their difficulty and makes them too easy for the model, thereby weakening their effectiveness in aiding the main task. Adding examples only to the main task is therefore the better approach." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 646, + 159, + 658 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 646, + 159, + 658 + ], + "spans": [ + { + "bbox": [ + 67, + 646, + 159, + 658 + ], + "type": "text", + "content": "5.3 Increase Shot" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 665, + 291, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 665, + 291, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 665, + 291, + 773 + ], + "type": "text", + "content": "In this section, we discuss the model's performance under relatively data-abundant settings. We increase the shots to 100, 200, and 500 on the MIT Movie and MIT Restaurant datasets using 2INER (InstructionNER+ET,EE,TE,EX). As shown in Table 5, compared to InstructionNER, 2INER achieves " }, + { + "bbox": [ + 67, + 665, + 291, + 773 + ], + "type": "inline_equation", + "content": "5.43\\%" + }, + { + "bbox": [ + 67, + 665, + 291, + 773 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 67, + 665, + 291, + 773 + ], + "type": "inline_equation", + "content": "3.98\\%" + }, + { + "bbox": [ + 67, + 665, + 291, + 773 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 67, + 665, + 291, + 773 + ], + "type": "inline_equation", + "content": "3.19\\%" + }, + { + "bbox": [ + 67, + 665, + 291, + 773 + ], + "type": "text", + "content": " improvements in F1 under the 100/200/500 Shot settings, respectively."
+ } + ] + } + ], + "index": 6 + }, + { + "type": "table", + "bbox": [ + 314, + 68, + 514, + 142 + ], + "blocks": [ + { + "bbox": [ + 67, + 172, + 290, + 232 + ], + "lines": [ + { + "bbox": [ + 67, + 172, + 290, + 232 + ], + "spans": [ + { + "bbox": [ + 67, + 172, + 290, + 232 + ], + "type": "text", + "content": "Table 3: F1 scores (%) on the MIT Restaurant dataset while varying the number of examples using InstructionNER+EX. Bold numbers indicate the best F1, and the numbers in brackets are the standard deviation." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 314, + 68, + 514, + 142 + ], + "lines": [ + { + "bbox": [ + 314, + 68, + 514, + 142 + ], + "spans": [ + { + "bbox": [ + 314, + 68, + 514, + 142 + ], + "type": "table", + "html": "
<tr><td></td><td colspan=\"3\">MIT Restaurant</td></tr>
<tr><td></td><td>10 Shot</td><td>20 Shot</td><td>50 Shot</td></tr>
<tr><td>2INER</td><td>65.26</td><td>69.27</td><td>74.2</td></tr>
<tr><td>Examples on all tasks</td><td>(±0.49)</td><td>(±0.89)</td><td>(±0.45)</td></tr>
<tr><td>2INER</td><td>65.76</td><td>69.34</td><td>74.4</td></tr>
<tr><td>Examples only on Main-Task</td><td>(±0.47)</td><td>(±0.81)</td><td>(±0.4)</td></tr>
", + "image_path": "4be07f0e7b5395e6ccb2a20ce1f412ef23ef29dee2051b238deb08b31040cb4c.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_body" + } + ], + "index": 7 + }, + { + "type": "table", + "bbox": [ + 304, + 210, + 527, + 317 + ], + "blocks": [ + { + "bbox": [ + 302, + 149, + 525, + 196 + ], + "lines": [ + { + "bbox": [ + 302, + 149, + 525, + 196 + ], + "spans": [ + { + "bbox": [ + 302, + 149, + 525, + 196 + ], + "type": "text", + "content": "Table 4: The comparison between adding in-context examples only on main-task and on all tasks including auxiliary tasks. Bold numbers indicate the best F1 and the numbers in brackets means the standard deviation." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 304, + 210, + 527, + 317 + ], + "lines": [ + { + "bbox": [ + 304, + 210, + 527, + 317 + ], + "spans": [ + { + "bbox": [ + 304, + 210, + 527, + 317 + ], + "type": "table", + "html": "
<tr><td rowspan=\"2\">Models</td><td colspan=\"3\">MIT Movie</td><td colspan=\"3\">MIT Restaurant</td></tr>
<tr><td>100</td><td>200</td><td>500</td><td>100</td><td>200</td><td>500</td></tr>
<tr><td>LC-BERT</td><td>50.7</td><td>59.3</td><td>74.4</td><td>53.5</td><td>57.4</td><td>61.3</td></tr>
<tr><td>LC-BART</td><td>47.5</td><td>54.2</td><td>64.1</td><td>52.2</td><td>56.3</td><td>60.2</td></tr>
<tr><td>TemplateNER</td><td>56.3</td><td>62.0</td><td>74.9</td><td>60.1</td><td>62.8</td><td>65.0</td></tr>
<tr><td>BARTNER*</td><td>70.1</td><td>74.6</td><td>82.6</td><td>65.3</td><td>74.4</td><td>75.7</td></tr>
<tr><td>LightNER</td><td>78.0</td><td>80.6</td><td>84.8</td><td>70.8</td><td>75.5</td><td>80.2</td></tr>
<tr><td>InstructionNER+ET,EE</td><td>74.3</td><td>78.4</td><td>82.3</td><td>72.7</td><td>75.5</td><td>76.6</td></tr>
<tr><td>2INER</td><td>81.3</td><td>83.54</td><td>86.16</td><td>76.57</td><td>78.31</td><td>79.11</td></tr>
", + "image_path": "87d6cd817fcf3ed5d42bad417f5dfbf3862d451896fdc56b3645e3ef318ad6db.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_body" + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 325, + 525, + 361 + ], + "lines": [ + { + "bbox": [ + 302, + 325, + 525, + 361 + ], + "spans": [ + { + "bbox": [ + 302, + 325, + 525, + 361 + ], + "type": "text", + "content": "Table 5: The F1 (\\%) under relatively abundant settings. \" * \" indicates the reproduction results by (Wang et al., 2022). Bold numbers indicate the best F1." + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 302, + 385, + 525, + 493 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 385, + 525, + 493 + ], + "spans": [ + { + "bbox": [ + 302, + 385, + 525, + 493 + ], + "type": "text", + "content": "And 2INER outperforms LightNER in all settings except 500-shots in MIT Restaurant, which shows that 2INER has great NER abilities under data abundant scenario as well. We argue that the in-context Example field may help the model to learn from more diverse samples from the abundant training set and turn the general knowledge into specialized capabilities, leading to the improvement in F1." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 506, + 381, + 518 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 506, + 381, + 518 + ], + "spans": [ + { + "bbox": [ + 302, + 506, + 381, + 518 + ], + "type": "text", + "content": "6 Conclusion" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 301, + 529, + 526, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 529, + 526, + 772 + ], + "spans": [ + { + "bbox": [ + 301, + 529, + 526, + 772 + ], + "type": "text", + "content": "In this paper, we propose 2INER for few-shot NER using both instruction finetuning and in-context learning by converting NER into a text-to-text framework. 
Based on InstructionNER, we create a template that concatenates task-specific instructions, the input sentence, and entity alternatives to make full use of pre-training knowledge. In addition, we decompose NER into three steps and introduce another auxiliary task, called type extraction, to help the model better understand the general semantic meaning of the entity types, which improves both span recognition and entity labeling abilities. Moreover, we apply in-context examples to enable the model to learn from additional contextual information, enhancing its few-shot abilities. Experiments on four NER datasets demonstrate 2INER's effectiveness in the few-shot NER scenario, where it consistently outperforms other baselines." + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "text", + "content": "3947" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 68, + 71, + 131, + 84 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 71, + 131, + 84 + ], + "spans": [ + { + "bbox": [ + 68, + 71, + 131, + 84 + ], + "type": "text", + "content": "Limitations" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 95, + 293, + 283 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 95, + 293, + 283 + ], + "spans": [ + { + "bbox": [ + 67, + 95, + 293, + 283 + ], + "type": "text", + "content": "One limitation of our work is the extensive length of the Example and Alternative fields when there are too many existing entity types. 
While incorporating in-context examples in the input sentence can improve few-shot NER performance, the Example field becomes very long because we add at least one example for each potential entity type, especially when the Alternative list contains numerous entity types. This results in smaller improvement gains and higher computational cost. To address this issue, larger PLMs such as the recently proposed LLaMA (Touvron et al., 2023) could be explored in future research." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 68, + 296, + 158, + 308 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 296, + 158, + 308 + ], + "spans": [ + { + "bbox": [ + 68, + 296, + 158, + 308 + ], + "type": "text", + "content": "Ethics Statement" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 319, + 291, + 427 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 319, + 291, + 427 + ], + "spans": [ + { + "bbox": [ + 67, + 319, + 291, + 427 + ], + "type": "text", + "content": "In consideration of ethical concerns, we make the following statements: (1) All of our experiments are conducted on existing datasets sourced from publicly available scientific papers. (2) Our few-shot methods do not require extensive computational resources. (3) Our text generation models generate text based on existing templates, so they will not generate harmful sentences."
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 68, + 452, + 127, + 464 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 452, + 127, + 464 + ], + "spans": [ + { + "bbox": [ + 68, + 452, + 127, + 464 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 472, + 291, + 772 + ], + "type": "list", + "angle": 0, + "index": 9, + "blocks": [ + { + "bbox": [ + 69, + 472, + 290, + 539 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 472, + 290, + 539 + ], + "spans": [ + { + "bbox": [ + 69, + 472, + 290, + 539 + ], + "type": "text", + "content": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 550, + 291, + 629 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 550, + 291, + 629 + ], + "spans": [ + { + "bbox": [ + 69, + 550, + 291, + 629 + ], + "type": "text", + "content": "Jiawei Chen, Qing Liu, Hongyu Lin, Xianpei Han, and Le Sun. 2022a. Few-shot named entity recognition with self-describing networks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5711-5722, Dublin, Ireland. Association for Computational Linguistics." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 639, + 291, + 728 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 639, + 291, + 728 + ], + "spans": [ + { + "bbox": [ + 69, + 639, + 291, + 728 + ], + "type": "text", + "content": "Xiang Chen, Lei Li, Shumin Deng, Chuanqi Tan, Changliang Xu, Fei Huang, Luo Si, Huajun Chen, and Ningyu Zhang. 2022b. LightNER: A lightweight tuning paradigm for low-resource NER via pluggable prompting. 
In Proceedings of the 29th International Conference on Computational Linguistics, pages 2374-2387, Gyeongju, Republic of Korea. International Committee on Computational Linguistics." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 738, + 291, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 738, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 738, + 291, + 772 + ], + "type": "text", + "content": "Yanru Chen, Yanan Zheng, and Zhilin Yang. 2022c. Prompt-based metric learning for few-shot ner. arXiv preprint arXiv:2211.04337." + } + ] + } + ], + "index": 8 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 526, + 772 + ], + "type": "list", + "angle": 0, + "index": 19, + "blocks": [ + { + "bbox": [ + 304, + 72, + 526, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 72, + 526, + 116 + ], + "spans": [ + { + "bbox": [ + 304, + 72, + 526, + 116 + ], + "type": "text", + "content": "Jason P.C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics, 4:357-370." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 304, + 125, + 526, + 180 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 125, + 526, + 180 + ], + "spans": [ + { + "bbox": [ + 304, + 125, + 526, + 180 + ], + "type": "text", + "content": "Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 304, + 190, + 526, + 256 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 190, + 526, + 256 + ], + "spans": [ + { + "bbox": [ + 304, + 190, + 526, + 256 + ], + "type": "text", + "content": "Leyang Cui, Yu Wu, Jian Liu, Sen Yang, and Yue Zhang. 
2021. Template-based named entity recognition using BART. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, pages 1835–1845, Online. Association for Computational Linguistics." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 264, + 526, + 353 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 264, + 526, + 353 + ], + "spans": [ + { + "bbox": [ + 304, + 264, + 526, + 353 + ], + "type": "text", + "content": "Leyang Cui and Yue Zhang. 2019. Hierarchically-refined label attention network for sequence labeling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4115-4128, Hong Kong, China. Association for Computational Linguistics." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 361, + 525, + 428 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 361, + 525, + 428 + ], + "spans": [ + { + "bbox": [ + 304, + 361, + 525, + 428 + ], + "type": "text", + "content": "Xiang Dai, Sarvnaz Karimi, Ben Hachey, and Cecile Paris. 2020. An effective transition-based model for discontinuous NER. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5860-5870, Online. Association for Computational Linguistics." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 437, + 525, + 514 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 437, + 525, + 514 + ], + "spans": [ + { + "bbox": [ + 304, + 437, + 525, + 514 + ], + "type": "text", + "content": "Sarkar Snigdha Sarathi Das, Arzoo Katiyar, Rebecca Passonneau, and Rui Zhang. 2022. CONTaiNER: Few-shot named entity recognition via contrastive learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6338-6353, Dublin, Ireland. 
Association for Computational Linguistics." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 523, + 526, + 622 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 523, + 526, + 622 + ], + "spans": [ + { + "bbox": [ + 304, + 523, + 526, + 622 + ], + "type": "text", + "content": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 630, + 526, + 687 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 630, + 526, + 687 + ], + "spans": [ + { + "bbox": [ + 304, + 630, + 526, + 687 + ], + "type": "text", + "content": "Alexander Fritzler, Varvara Logacheva, and Maksim Kretov. 2019. Few-shot classification in named entity recognition task. In Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, pages 993-1000." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 694, + 525, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 694, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 304, + 694, + 525, + 772 + ], + "type": "text", + "content": "Tao Gui, Jiacheng Ye, Qi Zhang, Zhengyan Li, Zichu Fei, Yeyun Gong, and Xuanjing Huang. 2020. Uncertainty-aware label refinement for sequence labeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2316-2326, Online. Association for Computational Linguistics." 
+ } + ] + } + ], + "index": 18 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "text", + "content": "3948" + } + ] + } + ], + "index": 20 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 289, + 772 + ], + "type": "list", + "angle": 0, + "index": 9, + "blocks": [ + { + "bbox": [ + 69, + 72, + 289, + 126 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 72, + 289, + 126 + ], + "spans": [ + { + "bbox": [ + 69, + 72, + 289, + 126 + ], + "type": "text", + "content": "Dilek Hakkani-Tur, Gokhan Tur, Asli Celikyilmaz, YunNung Chen, Jianfeng Gao, Li Deng, and Ye-Yi Wang. 2016. Multi-domain joint semantic frame parsing using bi-directional rnn-lstm. In Interspeech, pages 715-719." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 135, + 289, + 179 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 135, + 289, + 179 + ], + "spans": [ + { + "bbox": [ + 69, + 135, + 289, + 179 + ], + "type": "text", + "content": "Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, Yuqi Huo, Jiezhong Qiu, Yuan Yao, Ao Zhang, Liang Zhang, et al. 2021. Pre-trained models: Past, present and future. AI Open, 2:225-250." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 186, + 289, + 275 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 186, + 289, + 275 + ], + "spans": [ + { + "bbox": [ + 69, + 186, + 289, + 275 + ], + "type": "text", + "content": "Yucheng Huang, Kai He, Yige Wang, Xianli Zhang, Tieliang Gong, Rui Mao, and Chen Li. 2022. COPNER: Contrastive learning with prompt guiding for few-shot named entity recognition. 
In Proceedings of the 29th International Conference on Computational Linguistics, pages 2515-2527, Gyeongju, Republic of Korea. International Committee on Computational Linguistics." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 282, + 289, + 326 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 282, + 289, + 326 + ], + "spans": [ + { + "bbox": [ + 69, + 282, + 289, + 326 + ], + "type": "text", + "content": "Sarvnaz Karimi, Alejandro Metke-Jimenez, Madonna Kemp, and Chen Wang. 2015. Cadec: A corpus of adverse drug event annotations. Journal of biomedical informatics, 55:73-81." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 334, + 289, + 379 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 334, + 289, + 379 + ], + "spans": [ + { + "bbox": [ + 69, + 334, + 289, + 379 + ], + "type": "text", + "content": "J-D Kim, Tomoko Ohta, Yuka Tateisi, and Jun'ichi Tsujii. 2003. Genia corpus—a semantically annotated corpus for bio-textmining. Bioinformatics, 19(suppl_1):i180-i182." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 386, + 289, + 485 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 386, + 289, + 485 + ], + "spans": [ + { + "bbox": [ + 69, + 386, + 289, + 485 + ], + "type": "text", + "content": "Dong-Ho Lee, Akshen Kadakia, Kangmin Tan, Mahak Agarwal, Xinyu Feng, Takashi Shibuya, Ryosuke Mitani, Toshiyuki Sekiya, Jay Pujara, and Xiang Ren. 2022a. Good examples make a faster learner: Simple demonstration-based learning for low-resource NER. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2687-2700, Dublin, Ireland. Association for Computational Linguistics." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 492, + 289, + 592 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 492, + 289, + 592 + ], + "spans": [ + { + "bbox": [ + 69, + 492, + 289, + 592 + ], + "type": "text", + "content": "Dong-Ho Lee, Akshen Kadakia, Kangmin Tan, Mahak Agarwal, Xinyu Feng, Takashi Shibuya, Ryosuke Mitani, Toshiyuki Sekiya, Jay Pujara, and Xiang Ren. 2022b. Good examples make a faster learner: Simple demonstration-based learning for low-resource NER. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2687-2700, Dublin, Ireland. Association for Computational Linguistics." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 599, + 289, + 698 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 599, + 289, + 698 + ], + "spans": [ + { + "bbox": [ + 69, + 599, + 289, + 698 + ], + "type": "text", + "content": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 706, + 289, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 706, + 289, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 706, + 289, + 772 + ], + "type": "text", + "content": "Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020a. A unified MRC framework for named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5849-5859, Online. Association for Computational Linguistics." 
+ } + ] + } + ], + "index": 8 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 524, + 772 + ], + "type": "list", + "angle": 0, + "index": 19, + "blocks": [ + { + "bbox": [ + 304, + 72, + 524, + 138 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 72, + 524, + 138 + ], + "spans": [ + { + "bbox": [ + 304, + 72, + 524, + 138 + ], + "type": "text", + "content": "Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020b. A unified MRC framework for named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5849-5859, Online. Association for Computational Linguistics." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 304, + 150, + 524, + 205 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 150, + 524, + 205 + ], + "spans": [ + { + "bbox": [ + 304, + 150, + 524, + 205 + ], + "type": "text", + "content": "Jingjing Liu, Panupong Pasupat, Scott Cyphers, and Jim Glass. 2013. Asgard: A portable architecture for multilingual dialogue systems. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 8386-8390. IEEE." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 304, + 216, + 524, + 305 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 216, + 524, + 305 + ], + "spans": [ + { + "bbox": [ + 304, + 216, + 524, + 305 + ], + "type": "text", + "content": "Kun Liu, Yao Fu, Chuanqi Tan, Mosha Chen, Ningyu Zhang, Songfang Huang, and Sheng Gao. 2021. Noisy-labeled NER with confidence estimation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3437-3445, Online. Association for Computational Linguistics." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 317, + 524, + 394 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 317, + 524, + 394 + ], + "spans": [ + { + "bbox": [ + 304, + 317, + 524, + 394 + ], + "type": "text", + "content": "Yijin Liu, Fandong Meng, Jinchao Zhang, Jinan Xu, Yufeng Chen, and Jie Zhou. 2019. GCDT: A global context enhanced deep transition architecture for sequence labeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2431-2441, Florence, Italy. Association for Computational Linguistics." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 405, + 524, + 482 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 405, + 524, + 482 + ], + "spans": [ + { + "bbox": [ + 304, + 405, + 524, + 482 + ], + "type": "text", + "content": "Jie Ma, Miguel Ballesteros, Srikanth Doss, Rishita Anubhai, Sunil Mallya, Yaser Al-Onaizan, and Dan Roth. 2022a. Label semantics for few shot named entity recognition. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1956-1971, Dublin, Ireland. Association for Computational Linguistics." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 494, + 524, + 560 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 494, + 524, + 560 + ], + "spans": [ + { + "bbox": [ + 304, + 494, + 524, + 560 + ], + "type": "text", + "content": "Tingting Ma, Huiqiang Jiang, Qianhui Wu, Tiejun Zhao, and Chin-Yew Lin. 2022b. Decomposed meta-learning for few-shot named entity recognition. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1584-1596, Dublin, Ireland. Association for Computational Linguistics." 
+ } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 572, + 524, + 638 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 572, + 524, + 638 + ], + "spans": [ + { + "bbox": [ + 304, + 572, + 524, + 638 + ], + "type": "text", + "content": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064-1074, Berlin, Germany. Association for Computational Linguistics." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 650, + 524, + 693 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 650, + 524, + 693 + ], + "spans": [ + { + "bbox": [ + 304, + 650, + 524, + 693 + ], + "type": "text", + "content": "Alejandro Metke-Jimenez and Sarvnaz Karimi. 2016. Concept identification and normalisation for adverse drug event discovery in medical forums. In BMDID@ ISWC." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 706, + 524, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 706, + 524, + 772 + ], + "spans": [ + { + "bbox": [ + 304, + 706, + 524, + 772 + ], + "type": "text", + "content": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485-5551." 
+ } + ] + } + ], + "index": 18 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "text", + "content": "3949" + } + ] + } + ], + "index": 20 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 291, + 772 + ], + "type": "list", + "angle": 0, + "index": 10, + "blocks": [ + { + "bbox": [ + 69, + 72, + 291, + 138 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 72, + 291, + 138 + ], + "spans": [ + { + "bbox": [ + 69, + 72, + 291, + 138 + ], + "type": "text", + "content": "Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. 2021. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 148, + 291, + 181 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 148, + 291, + 181 + ], + "spans": [ + { + "bbox": [ + 69, + 148, + 291, + 181 + ], + "type": "text", + "content": "Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. Advances in neural information processing systems, 30." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 190, + 291, + 267 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 190, + 291, + 267 + ], + "spans": [ + { + "bbox": [ + 69, + 190, + 291, + 267 + ], + "type": "text", + "content": "Emma Strubell, Patrick Verga, David Belanger, and Andrew McCallum. 2017. Fast and accurate entity recognition with iterated dilated convolutions. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2670-2680, Copenhagen, Denmark. Association for Computational Linguistics." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 276, + 290, + 332 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 276, + 290, + 332 + ], + "spans": [ + { + "bbox": [ + 69, + 276, + 290, + 332 + ], + "type": "text", + "content": "Buzhou Tang, Jianglu Hu, Xiaolong Wang, and Qingcai Chen. 2018. Recognizing continuous and discontinuous adverse drug reaction mentions from social media using lstm-crf. Wireless Communications & Mobile Computing (Online), 2018." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 340, + 291, + 406 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 340, + 291, + 406 + ], + "spans": [ + { + "bbox": [ + 69, + 340, + 291, + 406 + ], + "type": "text", + "content": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 416, + 291, + 481 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 416, + 291, + 481 + ], + "spans": [ + { + "bbox": [ + 69, + 416, + 291, + 481 + ], + "type": "text", + "content": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 491, + 291, + 557 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 491, + 291, + 557 + ], + "spans": [ + { + "bbox": [ + 69, + 491, + 291, + 557 + ], + "type": "text", + "content": "Jue Wang, Lidan Shou, Ke Chen, and Gang Chen. 2020. Pyramid: A layered model for nested named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5918-5928, Online. Association for Computational Linguistics." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 566, + 291, + 620 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 566, + 291, + 620 + ], + "spans": [ + { + "bbox": [ + 69, + 566, + 291, + 620 + ], + "type": "text", + "content": "Liwen Wang, Rumei Li, Yang Yan, Yuanmeng Yan, Sirui Wang, Wei Wu, and Weiran Xu. 2022. Instructionner: A multi-task instruction-based generative framework for few-shot ner. arXiv preprint arXiv:2203.03903." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 630, + 291, + 685 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 630, + 291, + 685 + ], + "spans": [ + { + "bbox": [ + 69, + 630, + 291, + 685 + ], + "type": "text", + "content": "Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 694, + 291, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 694, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 694, + 291, + 772 + ], + "type": "text", + "content": "Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto. 2020. LUKE: Deep contextualized entity representations with entity-aware self-attention. 
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6442-6454, Online. Association for Computational Linguistics." + } + ] + } + ], + "index": 9 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 526, + 553 + ], + "type": "list", + "angle": 0, + "index": 17, + "blocks": [ + { + "bbox": [ + 304, + 72, + 526, + 160 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 72, + 526, + 160 + ], + "spans": [ + { + "bbox": [ + 304, + 72, + 526, + 160 + ], + "type": "text", + "content": "Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, and Xipeng Qiu. 2021. A unified generative framework for various NER subtasks. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5808-5822, Online. Association for Computational Linguistics." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 304, + 168, + 526, + 234 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 168, + 526, + 234 + ], + "spans": [ + { + "bbox": [ + 304, + 168, + 526, + 234 + ], + "type": "text", + "content": "Jie Yang, Shuailong Liang, and Yue Zhang. 2018. Design challenges and misconceptions in neural sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3879-3889, Santa Fe, New Mexico, USA. Association for Computational Linguistics." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 242, + 526, + 309 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 242, + 526, + 309 + ], + "spans": [ + { + "bbox": [ + 304, + 242, + 526, + 309 + ], + "type": "text", + "content": "Yi Yang and Arzoo Katiyar. 2020. Simple and effective few-shot named entity recognition with structured nearest neighbor learning. 
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6365-6375, Online. Association for Computational Linguistics." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 317, + 526, + 383 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 317, + 526, + 383 + ], + "spans": [ + { + "bbox": [ + 304, + 317, + 526, + 383 + ], + "type": "text", + "content": "Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020a. Named entity recognition as dependency parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6470-6476, Online. Association for Computational Linguistics." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 391, + 526, + 457 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 391, + 526, + 457 + ], + "spans": [ + { + "bbox": [ + 304, + 391, + 526, + 457 + ], + "type": "text", + "content": "Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020b. Named entity recognition as dependency parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6470-6476, Online. Association for Computational Linguistics." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 465, + 526, + 553 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 465, + 526, + 553 + ], + "spans": [ + { + "bbox": [ + 304, + 465, + 526, + 553 + ], + "type": "text", + "content": "Ningyu Zhang, Shumin Deng, Zhen Bi, Haiyang Yu, Jiacheng Yang, Mosha Chen, Fei Huang, Wei Zhang, and Huajun Chen. 2020. OpenUE: An open toolkit of universal extraction from text. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 1-8, Online. Association for Computational Linguistics." 
+ } + ] + } + ], + "index": 16 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 563, + 377, + 577 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 563, + 377, + 577 + ], + "spans": [ + { + "bbox": [ + 304, + 563, + 377, + 577 + ], + "type": "text", + "content": "A Appendix" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 584, + 526, + 677 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 584, + 526, + 677 + ], + "spans": [ + { + "bbox": [ + 304, + 584, + 526, + 677 + ], + "type": "text", + "content": "In this section, we will discuss the remaining two NER settings: nested NER and discontinuous NER. Because the text-to-text structure of our proposed method can be easily adapted to all three NER settings, which will result in a unified structure for solving NER problems. Here, we mainly discuss standard NER scenarios with abundant data." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 678, + 526, + 745 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 678, + 526, + 745 + ], + "spans": [ + { + "bbox": [ + 304, + 678, + 526, + 745 + ], + "type": "text", + "content": "For data abundant nested NER, We conduct experiments on Genia (Kim et al., 2003). We follow BARTNER (Yan et al., 2021) to use five entities types and split the train, dev, test as 8.1:0.9:1.0. The results are in Table 6." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 304, + 746, + 526, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 746, + 526, + 772 + ], + "spans": [ + { + "bbox": [ + 304, + 746, + 526, + 772 + ], + "type": "text", + "content": "For data abundant discontinuous NER, we conduct experiments on CADEC (Karimi et al., 2015)." 
+ } + ] + } + ], + "index": 21 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 310, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 310, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 310, + 791 + ], + "type": "text", + "content": "3950" + } + ] + } + ], + "index": 22 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 69, + 68, + 299, + 139 + ], + "blocks": [ + { + "bbox": [ + 69, + 68, + 299, + 139 + ], + "lines": [ + { + "bbox": [ + 69, + 68, + 299, + 139 + ], + "spans": [ + { + "bbox": [ + 69, + 68, + 299, + 139 + ], + "type": "table", + "html": "
Genia: ModelPRF
(Li et al., 2020b)[BERT-Large]†81.2576.3678.72
(Yu et al., 2020b)[BERT-Large]†79.4378.3278.87
(Wang et al., 2020)[BERT-Large]79.4578.9479.19
BARTNER (Yan et al., 2021)78.8779.679.23
2INER82.980.7481.81
", + "image_path": "aa1ce3f42503e4314d4568f3745c8ccb113375eb4bdf63eb875421d10a6ab47a.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 69, + 194, + 307, + 264 + ], + "blocks": [ + { + "bbox": [ + 67, + 147, + 290, + 183 + ], + "lines": [ + { + "bbox": [ + 67, + 147, + 290, + 183 + ], + "spans": [ + { + "bbox": [ + 67, + 147, + 290, + 183 + ], + "type": "text", + "content": "Table 6: Span-F1 (%) on Genia Nested data abundant NER setting. The \"†\" mean the reproduction by (Yan et al., 2021)." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 69, + 194, + 307, + 264 + ], + "lines": [ + { + "bbox": [ + 69, + 194, + 307, + 264 + ], + "spans": [ + { + "bbox": [ + 69, + 194, + 307, + 264 + ], + "type": "table", + "html": "
CADEC: ModelPRF
(Metke-Jimenez and Karimi, 2016)64.456.560.2
(Tang et al., 2018)67.864.966.3
(Dai et al., 2020)[ELMo]68.969.069.0
BARTNER (Yan et al., 2021)70.0871.2170.64
2INER71.1875.2673.16
", + "image_path": "d493dd0845cbc10bf9c8e07a0261bca83ad302c5b4731f8980439a864f39fefe.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 272, + 289, + 296 + ], + "lines": [ + { + "bbox": [ + 67, + 272, + 289, + 296 + ], + "spans": [ + { + "bbox": [ + 67, + 272, + 289, + 296 + ], + "type": "text", + "content": "Table 7: Span-F1 (%) on CADEC discontinuous data abundant NER setting." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 67, + 317, + 290, + 371 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 317, + 290, + 371 + ], + "spans": [ + { + "bbox": [ + 67, + 317, + 290, + 371 + ], + "type": "text", + "content": "Following BARTNER (Yan et al., 2021), since only the Adverse Drug Events (ADEs) entities include discontinuous data, only these entities were considered. The results are in Table 7." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 372, + 291, + 493 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 372, + 291, + 493 + ], + "spans": [ + { + "bbox": [ + 67, + 372, + 291, + 493 + ], + "type": "text", + "content": "The experiment settings are the same as flat NER. We use T5-large as the backbone model and report span-level F1. The results show that in data abundant nested and discontinuous NER setting, our proposed method greatly outperforms BARTNER (Yan et al., 2021) and other SOTA methods, which demonstrates that our methods do have a potential to handle different NER settings in a unified framework." 
+ } + ] + } + ], + "index": 5 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 308, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 308, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 308, + 791 + ], + "type": "text", + "content": "3951" + } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 11 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2023/A Benchmark for Semi-Inductive Link Prediction in Knowledge Graphs/472ef1b2-669d-4b07-ae82-dafb730e88d4_content_list.json b/2023/A Benchmark for Semi-Inductive Link Prediction in Knowledge Graphs/472ef1b2-669d-4b07-ae82-dafb730e88d4_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..0a713bd2bd70855971df9fbad7f0cffc388d6c52 --- /dev/null +++ b/2023/A Benchmark for Semi-Inductive Link Prediction in Knowledge Graphs/472ef1b2-669d-4b07-ae82-dafb730e88d4_content_list.json @@ -0,0 +1,1219 @@ +[ + { + "type": "text", + "text": "A Benchmark for Semi-Inductive Link Prediction in Knowledge Graphs", + "text_level": 1, + "bbox": [ + 124, + 89, + 872, + 111 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Adrian Kochsiek", + "bbox": [ + 258, + 137, + 410, + 152 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "University of Mannheim", + "bbox": [ + 233, + 154, + 433, + 170 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Germany", + "bbox": [ + 295, + 171, + 371, + 187 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "akochsiek@uni-mannheim.de", + "bbox": [ + 206, + 187, + 460, + 202 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Rainer Gemulla", + "bbox": [ + 594, + 137, + 737, + 152 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "University of Mannheim", + "bbox": [ + 566, + 154, + 766, + 168 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Germany", + "bbox": [ + 626, + 
171, + 705, + 187 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "rgemulla@uni-mannheim.de", + "bbox": [ + 544, + 187, + 786, + 203 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 260, + 252, + 339, + 268 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Semi-inductive link prediction (LP) in knowledge graphs (KG) is the task of predicting facts for new, previously unseen entities based on context information. Although new entities can be integrated by retraining the model from scratch in principle, such an approach is infeasible for large-scale KGs, where retraining is expensive and new entities may arise frequently. In this paper, we propose and describe a large-scale benchmark to evaluate semi-inductive LP models. The benchmark is based on and extends Wikidata5M: It provides transductive, k-shot, and 0-shot LP tasks, each varying the available information from (i) only KG structure, to (ii) including textual mentions, and (iii) detailed descriptions of the entities. We report on a small study of recent approaches and found that semi-inductive LP performance is far from transductive performance on long-tail entities throughout all experiments. The benchmark provides a test bed for further research into integrating context and textual information in semi-inductive LP models.", + "bbox": [ + 144, + 281, + 458, + 620 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 114, + 634, + 258, + 650 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "A knowledge graph (KG) is a collection of facts describing relations between real-world entities. Facts are represented in the form of subject-relation-object triples such as (Dave Grohl, memberOf, Foo Fighters). In this paper, we consider link prediction (LP) tasks, i.e., the problem of inferring missing facts in the KG. 
LP can be transductive (TD; all entities known a priori), semi-inductive (SI; some entities known a priori), and inductive (no entities known a priori). We concentrate on semi-inductive and transductive LP.", + "bbox": [ + 112, + 661, + 489, + 835 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "SI-LP focuses on modeling entities that are unknown or unseen during LP, such as out-of-KG entities (not part or not yet part of the KG) or newly created entities, e.g., a new user, product, or event. Such previously unknown entities can be", + "bbox": [ + 112, + 838, + 489, + 917 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "handled by retraining in principle. For large-scale KGs, however, retraining is inherently expensive and new entities may arise frequently. Therefore, the goal of SI-LP is to avoid retraining and perform LP directly, i.e., to generalize beyond the entities seen during training.", + "bbox": [ + 507, + 252, + 884, + 348 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "To perform LP for unseen entities, context information about these entities is needed. The amount and form of context information varies widely and may take the form of facts and/or textual information, such as an entity mention and/or its description. For example, a new user in a social network may provide a name, basic facts such as gender or country of origin, and perhaps a textual self-description.", + "bbox": [ + 507, + 349, + 885, + 493 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "In this paper, we introduce the Wikidata5M-SI benchmark for SI-LP. Our benchmark is based on the popular Wikidata5M (Wang et al., 2021) benchmark and has four major design goals: (G1) It ensures that unseen entities are long tail entities since popular entities (such as, say, Foo Fighters) and/or types and taxons (such as human and organization) are unlikely to be unseen. 
(G2) It allows to evaluate each model with varying amounts of contextual facts (0-shot, few-shot, transductive), i.e., to explore individual models across a range of tasks. (G3) It provides a controlled amount of textual information (none, mention, full description), where each setting demands different modeling capabilities. Finally, (G4) the benchmark is large-scale so that retraining is not a suitable approach. All prior SI-LP benchmarks violate at least one of these criteria.", + "bbox": [ + 507, + 494, + 885, + 781 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "We report on a small experimental study with recent LP approaches. In general, we found that", + "bbox": [ + 507, + 784, + 882, + 816 + ], + "page_idx": 0 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. SI performance was far behind TD performance in all experiments for long-tail entities,", + "2. there was generally a trade-off between TD and SI performance,", + "3. textual information was highly valuable," + ], + "bbox": [ + 522, + 827, + 882, + 917 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "10634", + "bbox": [ + 475, + 927, + 524, + 940 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10634-10643", + "bbox": [ + 210, + 945, + 786, + 958 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "December 6-10, 2023 ©2023 Association for Computational Linguistics", + "bbox": [ + 277, + 958, + 719, + 971 + ], + "page_idx": 0 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "4. proper integration of context and textual information needs further exploration, and", + "5. facts involving less common relations provided more useful context." 
+ ], + "bbox": [ + 127, + 84, + 487, + 154 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Our benchmark provides directions and a test bed for further research into SI-LP.", + "bbox": [ + 112, + 171, + 485, + 200 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2 Related Work", + "text_level": 1, + "bbox": [ + 112, + 218, + 267, + 233 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Multiple SI-LP datasets have been proposed in the literature. The benchmarks of Daza et al. (2021), Albooyeh et al. (2020), and Galkin et al. (2021) are obtained by first merging the splits of smaller transductive LP datasets and subsequently sampling unseen entities uniformly to construct validation and test splits. These benchmarks do not satisfy goals G1-G4. Shi and Weninger (2018) follow a similar approach but focus on only 0-shot evaluation based on textual features. Xie et al. (2016) and Shah et al. (2019) select entities from Freebase with connection to entities in FB15k (Bordes et al., 2013), also focussing on 0-shot evaluation using rich textual descriptions. These approaches do not satisfy G2 and G3. Finally, Wang et al. (2019) and Hamaguchi et al. (2017) uniformly sample test triples and mark occurring entities as unseen. These approaches do not focus on long-tail entities (and, in fact, the accumulated context of unseen entities may be larger than the training graph itself) and they do not satisfy G1-G3.", + "bbox": [ + 112, + 246, + 487, + 583 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "There are also several of fully-inductive LP benchmarks (Teru et al., 2020; Wang et al., 2021) involving KGs. While SI-LP aims to connect unseen entities to an existing KG, fully-inductive LP reasons about a new KG with completely separate entities (but shared relations). 
We do not consider this task in this work.", + "bbox": [ + 112, + 586, + 487, + 697 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "3 The Wikidata5M-SI Benchmark", + "text_level": 1, + "bbox": [ + 112, + 713, + 421, + 728 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Wikidata5M-SI is based on the popular Wikidata5M (Wang et al., 2021) benchmark, which is induced by the 5M most common entities of Wikidata. Our benchmark contains transductive and semi-inductive valid/test splits; see Tab. 1 for an overview. Generally, we aimed to keep Wikidata5M-SI as close as possible to Wikidata5M. We did need to modify the original transductive valid and test splits, however, because they unintentionally contained both seen and unseen entities (i.e., these splits were not fully transductive). We", + "bbox": [ + 112, + 741, + 487, + 917 + ], + "page_idx": 1 + }, + { + "type": "table", + "img_path": "images/6df893522092a3826a3019d3e2f226989b49c48bb05359ce9afd18aaf33ac573.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
 | Train | Transductive Valid | Transductive Test | Semi-inductive Valid | Semi-inductive Test
Triples | 20,600,187 | 4,983 | 4,977 | 5,500 | 5,500
Entities | 4,593,103 | 7,768 | 7,760 | 3,722 | 3,793
Entities unseen | - | 0 | 0 | 500 | 500
Relations | 822 | 217 | 211 | 126 | 115
", + "bbox": [ + 510, + 80, + 878, + 171 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Table 1: Statistics of the Wikidata5M-SI splits.", + "bbox": [ + 534, + 180, + 853, + 195 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "did that by simply removing all triples involving unseen entities.", + "bbox": [ + 507, + 223, + 880, + 253 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Unseen entities. To ensure that unseen entities in the semi-inductive splits are from the long tail (G1), we only considered entities of degree 20 or less. To be able to provide sufficient context for few-shot tasks (G2), we further did not consider entities of degree 10 or less. In more detail, we sampled 500 entities of degrees 11-20 (stratified sampling grouped by degree) for each semi-inductive split. All sampled entities, along with their facts, were removed from the train split. Note that these entities (naturally) have a different class distribution than all entities; see Sec. A.1 for details.", + "bbox": [ + 507, + 256, + 882, + 448 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Tasks and metrics. For TD tasks, we follow the standard protocol of Wikidata5M. To construct SI tasks, we include 11 of the original facts of each unseen entity into its SI split; each split thus contains 5,500 triples. This enables up to 10-shot SI tasks (1 fact to test, up to 10 facts for context). For entities of degree larger than 11, we select the 11 facts with the most frequent relations; see Tab. 2 for an example. The rationale is that more common relations (such as instanceof or country) may be considered more likely to be provided for unseen entities than rare ones (such as militaryBranch or publisher). We then construct a single $k$ -shot task for each triple $(s,p,o)$ in the SI split as follows. When, say, $s$ is the unseen entity, we consider the LP task $(s,p,?)$ and provide $k$ additional facts of form $(s,p',o')$ as context. 
Context facts are selected by frequency as above, but we also explored random and infrequent-relation context in our study. Models are asked to provide a ranking of predicted answers, and we determine the filtered mean reciprocal rank (MRR) and Hits@K of the correct answer $(o)$.", + "bbox": [ + 507, + 451, + 882, + 819 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Textual information. For each entity, we provide its principal mention and a detailed description (both directly from Wikidata5M); see Tab. 2. This makes it possible to differentiate model evaluation by the amount of textual information per entity (G3): (A) atomic, i.e., no textual information, (M) men", + "bbox": [ + 507, + 822, + 882, + 917 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "10635", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 1 + }, + { + "type": "table", + "img_path": "images/cb2ab194c68802dab69337dd9266e6bedca5457458a49eb9115a16cea174c1bf.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
ID | Q18918
Mention | Sam Witwer
Description | Samuel Stewart Witwer (born October 20, 1977) is an American actor and musician. He is known for portraying Crashdown in Battlestar Galactica, Davis Bloome in Smallville, Aidan Waite in Being Human, and Ben Lockwood in Supergirl. He voiced the protagonist Galen Marek / Starkiller in Star Wars: The Force Unleashed, the Son in Star Wars: The Clone Wars and Emperor Palpatine in Star Wars Rebels, both of which he has also voiced Darth Maul.
Context triples | instance of | human | M: ○ D: ○
 | country of citizenship | United States of America | M: × D: ○
 | occupation | musician | M: × D: ✓
 | occupation | actor | M: × D: ✓
 | place of birth | Glenview | M: × D: ×
 | given name | Samuel | M: ○ D: ✓
 | given name | Sam | M: ✓ D: ○
 | cast member | Battlestar Galactica | M: × D: ✓
 | cast member | Being Human - supernatural drama television series | M: × D: ✓
 | cast member | Star Wars: The Force Unleashed II | M: × D: ○
 | cast member | The Mist | M: × D: ×
", + "bbox": [ + 126, + 80, + 870, + 418 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Table 2: Example of an entity from the semi-inductive validation set of Wikidata5M-SI. For each triple, we annotated whether the answer is contained in (✓), deducible from (○), or not contained in (×) mention (M) or description (D).", + "bbox": [ + 112, + 428, + 882, + 470 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "tions only, and (D) detailed textual descriptions as in (Kochsiek et al., 2023). This differentiation is especially important in the SI setting, as detailed text descriptions might not be provided for unseen entities and each setting demands different modeling capabilities. In fact, (A) performs reasoning only using graph structure, whereas (D) also benefits from information extraction to some extent. We discuss this further in Sec. 5.", + "bbox": [ + 112, + 495, + 487, + 640 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "4 Semi-Inductive Link Prediction Models", + "text_level": 1, + "bbox": [ + 112, + 652, + 420, + 683 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "We briefly summarize recent models for SI-LP; we considered these models in our experimental study.", + "bbox": [ + 112, + 694, + 487, + 725 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Graph-only models. ComplEx (Trouillon et al., 2016) is the best-performing transductive KGE model on Wikidata5M (Kochsiek et al., 2022). To use ComplEx for SI-LP, we follow an approach explored by Jambor et al. (2021). In particular, we represent each entity as the sum of a local embedding (one per entity) and a global bias embedding. For 0-shot, we solely use the global bias for the unseen entity. For k-shot, we obtain the local embedding for the unseen entity by performing a single training step on the context triples (keeping all other embeddings fixed). 
An alternative", + "bbox": [ + 112, + 726, + 489, + 917 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "approach is taken by oDistMult-ERAvg (Albooyeh et al., 2020), which represents unseen entities by aggregating the embeddings of the relations and entities in the context. A more direct approach is taken by HittER (Chen et al., 2021), which contextualizes the query entity with its neighborhood for TD-LP. The approach can be used for SI-LP directly by using a masking token (akin to the global bias above) for an unseen entity. We originally planned to consider NodePiece (Galkin et al., 2021) (entity represented by a combination of anchor embeddings) and NBFNet (Zhu et al., 2021) (a GNN-based LP model); both support SI-LP directly. However, the available implementations did not scale to Wikidata5M-SI (out of memory).", + "bbox": [ + 507, + 495, + 884, + 737 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Text-based models. As a baseline approach to integrate textual information directly into KGE models, we consider the approach explored in the", + "bbox": [ + 507, + 740, + 882, + 789 + ], + "page_idx": 2 + }, + { + "type": "page_footnote", + "text": "To address the high memory footprint (Galkin et al., 2021) of oDistMult-ERAvg, we extend it with neighborhood sampling.", + "bbox": [ + 507, + 808, + 882, + 845 + ], + "page_idx": 2 + }, + { + "type": "page_footnote", + "text": "For NBFNet (Zhu et al., 2021), the large memory footprint is inherent to the model; it is a full-graph GNN and hard to scale. For NodePiece (Galkin et al., 2021), however, the problem mainly lies in the expensive evaluation. All intermediate representations are precomputed, leading to a large memory overhead.", + "bbox": [ + 507, + 846, + 882, + 917 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "10636", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "WikiKG90M benchmark (Hu et al., 2021); see Sec. 
A.2 for details. The remaining approaches are purely textual. SimKGC (Wang et al., 2022) utilizes two pretrained BERT Transformers: one to embed query entities (and relations) based on their mention or description, and one for tail entities. Using a contrastive learning approach, it measures cosine similarity between both representations for ranking. KGT5 (Saxena et al., 2022) is a sequence-to-sequence link prediction approach, which is trained to generate the mention of the answer entity using the mention or description of the query entity and relation as input. Both approaches support 0-shot SI-LP when textual information is provided for the query entity. They do not utilize additional context, however, i.e., do not support k-shot SI-LP. KGT5-context (Kochsiek et al., 2023) extends the input of KGT5 with the one-hop neighborhood of the query entity and consequently supports k-shot LP directly.", + "bbox": [ + 115, + 82, + 490, + 420 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "5 Experimental Study", + "text_level": 1, + "bbox": [ + 112, + 435, + 321, + 451 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "We evaluated all presented baseline models in the TD and SI setting on the atomic, mentions, and descriptions dataset. Further, we evaluated in detail which context was most useful and what information was conveyed by textual mentions and descriptions.", + "bbox": [ + 112, + 462, + 487, + 557 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Setup. Source code, configuration, and the benchmark itself are available at https://github.com/uma-pi1/wikidata5m-si. For further details on hyperparameter tuning and training, see Sec. A.3.", + "bbox": [ + 112, + 560, + 489, + 638 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Main results. Transductive and SI performance in terms of MRR of all models is presented in Tab. 3; Hits@K in Tab. 7-9 (Sec. A). 
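The filtered MRR and Hits@K reported throughout can be computed as follows; a minimal sketch with assumed input conventions (a score per candidate entity, plus the set of known true answers to filter out).

```python
def filtered_metrics(scores, answer, known_answers, ks=(1, 3, 10)):
    """Sketch: filtered rank metrics for one LP task (s, p, ?).

    scores: mapping entity -> model score.
    answer: the correct answer entity for this task.
    known_answers: other entities known to answer (s, p, ?) in any
        split; they are filtered so they cannot worsen the rank.
    """
    target = scores[answer]
    # Rank = 1 + number of unfiltered entities scoring strictly higher.
    rank = 1 + sum(
        1
        for e, sc in scores.items()
        if sc > target and e != answer and e not in known_answers
    )
    return {"mrr": 1.0 / rank, **{f"hits@{k}": float(rank <= k) for k in ks}}
```

Per-task values are averaged over all test triples to obtain the numbers in Tab. 3.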
Note that overall transductive performance was often below the best reported SI performance. This is because the degrees of the query entities differ between the two settings. Typically, models perform better predicting new relations for an entity (e.g., the birthplace) than predicting additional objects for a known relation (e.g., additional awards won by a person) (Saxena et al., 2022; Kochsiek et al., 2023). For a direct comparison between both settings, we additionally report TD performance on long tail query entities. $^{3}$", + "bbox": [ + 112, + 640, + 487, + 848 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Atomic. TD performance on the long tail was considerably higher than SI performance. As no in", + "bbox": [ + 112, + 850, + 489, + 881 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "formation was provided for unseen entities, 0-shot was not reasonably possible. Without text-based information, context was a necessity. A simple neighborhood aggregation—entity-relation average (ERAvg)—offered the best integration of context.", + "bbox": [ + 507, + 84, + 882, + 164 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Mentions. Integrating mentions did not improve performance on its own, as the provided text information was still limited. However, additionally providing context information during inference (KGT5-context) simplified the learning problem and improved TD performance significantly. For 0-shot, however, even the limited text information provided by mentions allowed for reasonable performance. To analyze what information is conveyed for 0-shot, we annotated 100 valid triples; see Tab. 4. In $10\%$ of cases, the answer was already contained in the mention, and it was deducible in at least $7\%$ . This enabled basic reasoning without any further information. In contrast to the TD setting, KGT5 outperformed its context extension. KGT5-context relied on context, which was especially lacking in the 0-shot setting. 
This revealed a trade-off between best SI and best TD performance. This trade-off could be mitigated by applying (full and partial) context hiding. With such adapted training, KGT5-context reached a middle ground with a transductive MRR of 0.366 and 0-shot MRR of 0.283.$^{4}$ However, even with full context (10-shot), performance was still only on par with KGT5. Therefore, context information did not bring any further benefits when text was provided.", + "bbox": [ + 507, + 167, + 884, + 585 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Descriptions. Integrating descriptions improved performance considerably in both the TD and SI settings; see Tab. 3. Similar to the mentions-only setting, KGT5-context performed best in TD and KGT5 in the SI setting. Applying the same context-hiding strategy reached a middle ground with 0.418 TD-MRR and 0.449 SI-MRR.", + "bbox": [ + 507, + 588, + 882, + 700 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Descriptions were very detailed and, in many cases, contained the correct answer as well as the same information as the context triples; see Tab. 4. Therefore, performance did not further improve with context size. In such cases, models mainly benefit from information extraction capabilities. 
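The context-hiding scheme used above can be sketched as a small training-time sampler, following the proportions in footnote 4 (hide the full context in 25% of cases, sample 1-10 neighbors in 25%, keep the full context otherwise); the function and its signature are illustrative, not the paper's code.

```python
import random

def sample_training_context(neighbors, p_hide=0.25, p_partial=0.25, rng=random):
    """Sketch: context hiding for KGT5-context-style training."""
    u = rng.random()
    if u < p_hide:
        return []  # train this example 0-shot
    if u < p_hide + p_partial:
        if not neighbors:
            return []
        n = rng.randint(1, min(10, len(neighbors)))  # partial context
        return rng.sample(neighbors, n)
    return list(neighbors)  # full context
```

Training on such mixed contexts is what allows a single model to handle both the transductive and the 0-shot regime at inference time.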
To judge how much information extraction helps, we grouped performance of KGT5+description in the 0-shot setting on validation data into the groups contained, deducible and not contained in descrip", + "bbox": [ + 507, + 703, + 882, + 863 + ], + "page_idx": 3 + }, + { + "type": "page_footnote", + "text": "4In $25\\% / 25\\% / 50\\%$ of cases, we hid the full context/sampled between 1-10 neighbors/used the full context, respectively.", + "bbox": [ + 507, + 879, + 882, + 917 + ], + "page_idx": 3 + }, + { + "type": "page_footnote", + "text": "3We define long tail query entities as entities with degree $\\leq 10$ as in the SI setting.", + "bbox": [ + 112, + 891, + 487, + 917 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "10637", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 3 + }, + { + "type": "table", + "img_path": "images/4c597d069cebe5c1fb4baede01313e4f125e346507e24f70ac1253c37b7d6a27.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Model | Transductive All | Transductive Long tail | SI 0-shot | SI 1-shot | SI 3-shot | SI 5-shot | SI 10-shot | Pre-trained
ComplEx + Bias + Fold in (Jambor et al., 2021) | 0.308 | 0.523 | 0.124 | 0.151 | 0.176 | 0.190 | 0.206 | no
DistMult + ERAvg (Albooyeh et al., 2020) | 0.294 | 0.512 | - | 0.171 | 0.246 | 0.295 | 0.333 | no
HittER (Chen et al., 2021) | 0.284 | 0.512 | 0.019 | 0.105 | 0.153 | 0.179 | 0.221 | no
DistMult + ERAvg + Mentions | 0.299 | 0.535 | - | 0.187 | 0.235 | 0.258 | 0.280 | yes
SimKGC (mentions only) | 0.212 | 0.361 | 0.220 | - | - | - | - | yes
KGT5 (Saxena et al., 2022) | 0.281 | 0.542 | 0.310 | - | - | - | - | no
KGT5-context (Kochsiek et al., 2023) | 0.374 | 0.678 | 0.220 | 0.217 | 0.236 | 0.259 | 0.311 | no
DistMult + ERAvg + Descriptions | 0.313 | 0.585 | - | 0.278 | 0.281 | 0.285 | 0.292 | yes
SimKGC + Descriptions (Wang et al., 2022) | 0.353 | 0.663 | 0.403 | - | - | - | - | yes
KGT5 + Descriptions (Kochsiek et al., 2023) | 0.364 | 0.728 | 0.470 | - | - | - | - | no
KGT5-context + Descriptions (Kochsiek et al., 2023) | 0.420 | 0.777 | 0.417 | 0.420 | 0.416 | 0.420 | 0.437 | no
", + "bbox": [ + 112, + 80, + 880, + 282 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/31d45159c5ece98c1e1cd7e33d1abfd26cc42b9c75893ec7721d32e30d4244fe.jpg", + "table_caption": [ + "Table 3: Transductive and semi-inductive link prediction results in terms of MRR on the dataset Wikidata5M-SI. The first group presets results on the atomic, the second on the mentions and the third on the descriptions dataset. Best per TD/SI in bold. Best per group underlined." + ], + "table_footnote": [], + "table_body": "
 | Mention | Description
Contained | 10% | 44%
Deducible | 7% | 10%
Not contained | 83% | 46%
", + "bbox": [ + 139, + 359, + 463, + 434 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/43bba62dbfaca9436f58a64500a6c77ee103085817d0c9ccbe840ad1fecebc7f.jpg", + "table_caption": [ + "Table 4: Information about a query answer contained in mentions and descriptions. Annotated for 100 sampled triples from 0-shot valid. For an example, see Tab. 2." + ], + "table_footnote": [], + "table_body": "
Context selection | 1-shot | 3-shot | 5-shot
Most common | 0.217 | 0.236 | 0.259
Least common | 0.253 | 0.273 | 0.290
Random | 0.237 | 0.260 | 0.281
", + "bbox": [ + 137, + 504, + 463, + 575 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Table 5: Influence of context selection. Semi-inductive test MRR of KGT5-context.", + "bbox": [ + 112, + 583, + 485, + 612 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "tion; see Fig. 1 in Sec. A. When contained, the correct answer was extracted in $\\approx 70\\%$ of cases.", + "bbox": [ + 112, + 643, + 485, + 674 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Context selection. We selected the most common relations as context triples so far, as this may be a more realistic setting. To investigate the effect of this selection approach, we compared the default selection of choosing most common relations to least common and random. Results for KGT5-context are shown in Tab. 5; for all other models in Tab. 10 in Sec. A. We found that the less common the relations of the provided context, the better the SI performance. More common context relations often described high-level concepts, while less common provided further detail; see the example in Tab. 2. While more common context may be more readily available, less common context was more helpful to describe a new entity.", + "bbox": [ + 112, + 678, + 487, + 917 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "6 Conclusion", + "text_level": 1, + "bbox": [ + 507, + 362, + 640, + 376 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We proposed the new WikiData5M-SI large-scale benchmark for semi-supervised link prediction. The benchmark focuses on unseen entities from the long tail and allows to evaluate models with varying and controlled amounts of factual and textual context information. In our experimental evaluation, we found that semi-inductive LP performance fell behind transductive performance for long-tail entities in general, and that detailed textual information was often more valuable than factual context information. 
Moreover, current models did not integrate these two types of information adequately, suggesting a direction for future research.", + "bbox": [ + 507, + 388, + 882, + 596 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Limitations", + "text_level": 1, + "bbox": [ + 509, + 611, + 613, + 626 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "This study was performed on Wikidata5M-SI, i.e., a subset of a single knowledge graph. Model performance and insights may vary if the graph structure and/or the availability and usefulness of mentions and descriptions differ. In particular, the entity descriptions provided with Wikidata5M-SI partly contained information relevant for link prediction, so that models benefited from information extraction capabilities.", + "bbox": [ + 507, + 637, + 882, + 782 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Ethics Statement", + "text_level": 1, + "bbox": [ + 509, + 796, + 660, + 810 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "This research adapts publicly available data, benchmarks, and codebases for evaluation. We believe that this research was conducted in an ethical manner in compliance with all relevant laws and regulations.", + "bbox": [ + 507, + 822, + 882, + 902 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "10638", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 115, + 84, + 213, + 98 + ], + "page_idx": 5 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Marjan Albooyeh, Rishab Goel, and Seyed Mehran Kazemi. 2020. Out-of-sample representation learning for knowledge graphs. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2657-2666.", + "Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. 
In Neural Information Processing Systems (NIPS), pages 1-9.", + "Samuel Broscheit, Daniel Ruffinelli, Adrian Kochsiek, Patrick Betz, and Rainer Gemulla. 2020. LibKGE - A knowledge graph embedding library for reproducible research. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 165-174.", + "Sanxing Chen, Xiaodong Liu, Jianfeng Gao, Jian Jiao, Ruofei Zhang, and Yangfeng Ji. 2021. Hitter: Hierarchical transformers for knowledge graph embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10395-10407.", + "Daniel Daza, Michael Cochez, and Paul Groth. 2021. Inductive entity representations from text via link prediction. In Proceedings of the Web Conference 2021, pages 798-808.", + "Mikhail Galkin, Etienne Denis, Jiapeng Wu, and William L Hamilton. 2021. Nodepiece: Compositional and parameter-efficient representations of large knowledge graphs. In International Conference on Learning Representations.", + "Takuo Hamaguchi, Hidekazu Oiwa, Masashi Shimbo, and Yuji Matsumoto. 2017. Knowledge transfer for out-of-knowledge-base entities: a graph neural network approach. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 1802-1808.", + "Weihua Hu, Matthias Fey, Hongyu Ren, Maho Nakata, Yuxiao Dong, and Jure Leskovec. 2021. Ogb-lsc: A large-scale challenge for machine learning on graphs. Advances in Neural Information Processing Systems, 35.", + "Dora Jambor, Komal Teru, Joelle Pineau, and William L Hamilton. 2021. Exploring the limits of few-shot link prediction in knowledge graphs. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2816-2822.", + "Adrian Kochsiek, Fritz Niesel, and Rainer Gemulla. 2022. Start small, think big: On hyperparameter optimization for large-scale knowledge graph embeddings. 
In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2022, Grenoble, France, September" + ], + "bbox": [ + 115, + 105, + 489, + 917 + ], + "page_idx": 5 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "19-23, 2022, Proceedings, Part II, pages 138-154. Springer.", + "Adrian Kochsiek, Apoorv Saxena, Inderjeet Nair, and Rainer Gemulla. 2023. Friendly neighbors: Contextualized sequence-to-sequence link prediction. In Proceedings of the 8th Workshop on Representation Learning for NLP.", + "Apoorv Saxena, Adrian Kochsiek, and Rainer Gemulla. 2022. Sequence-to-sequence knowledge graph completion and question answering. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2814-2828.", + "Haseeb Shah, Johannes Villmow, Adrian Ulges, Ulrich Schwanecke, and Faisal Shafait. 2019. An open-world extension to knowledge graph completion models. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 3044-3051.", + "Baoxu Shi and Tim Weninger. 2018. Open-world knowledge graph completion. In Proceedings of the AAAI conference on artificial intelligence, volume 32.", + "Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and TieYan Liu. 2020. Mpnet: Masked and permuted pretraining for language understanding. Advances in Neural Information Processing Systems, 33:16857-16867.", + "Komal Teru, Etienne Denis, and Will Hamilton. 2020. Inductive relation prediction by subgraph reasoning. In International Conference on Machine Learning, pages 9448-9457.", + "Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In International conference on machine learning, pages 2071-2080.", + "Liang Wang, Wei Zhao, Zhuoyu Wei, and Jingming Liu. 2022. Simkgc: Simple contrastive knowledge graph completion with pre-trained language models. 
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4281-4294.", + "Peifeng Wang, Jialong Han, Chenliang Li, and Rong Pan. 2019. Logic attention based neighborhood aggregation for inductive knowledge graph embedding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7152-7159.", + "Xiaozhi Wang, Tianyu Gao, Zhaocheng Zhu, Zhengyan Zhang, Zhiyuan Liu, Juanzi Li, and Jian Tang. 2021. Kepler: A unified model for knowledge embedding and pre-trained language representation. Transactions of the Association for Computational Linguistics, 9:176-194." + ], + "bbox": [ + 510, + 85, + 882, + 917 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "10639", + "bbox": [ + 477, + 928, + 524, + 940 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Ruobing Xie, Zhiyuan Liu, Jia Jia, Huanbo Luan, and Maosong Sun. 2016. Representation learning of knowledge graphs with entity descriptions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30.", + "bbox": [ + 115, + 85, + 487, + 151 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Bishan Yang, Scott Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In Proceedings of the International Conference on Learning Representations (ICLR) 2015.", + "bbox": [ + 115, + 161, + 487, + 227 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Zhaocheng Zhu, Zuobai Zhang, Louis-Pascal Xhonneux, and Jian Tang. 2021. Neural bellman-ford networks: A general graph neural network framework for link prediction. 
Advances in Neural Information Processing Systems, 34.", + "bbox": [ + 114, + 237, + 487, + 302 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "A Appendix", + "text_level": 1, + "bbox": [ + 114, + 316, + 236, + 331 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "A.1 Distribution of Unseen Entities", + "text_level": 1, + "bbox": [ + 114, + 341, + 405, + 355 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Long-tail entities have a different distribution than entities from the whole KG; see Tab. 6 for an overview of the distribution shift for the top 10 entity types. This difference is natural. In particular, high-degree entities in a KG such as Wikidata often refer to types/taxons (e.g., human, organization, ...) as well as popular named entities (e.g., Albert Einstein, Germany, ...). These entities are fundamental to the KG and/or of high interest and have many facts associated with them. For this reason, they do not form suitable candidates for benchmarking unseen or new entities. In addition, removing high-degree entities for the purpose of evaluating SI-LP is likely to distort the KG (e.g., consider removing type \"human\" or \"Germany\"). In contrast, Wikidata5M-SI focuses on entities for which knowledge is not yet abundant: long-tail entities are accompanied by no or few facts (at least initially) and our SI-LP benchmark tests reasoning capabilities with this limited information.", + "bbox": [ + 115, + 361, + 487, + 682 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "A.2 Integrating Text into KGE Models", + "text_level": 1, + "bbox": [ + 509, + 84, + 828, + 99 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "To integrate text into traditional KGE models, we follow the baseline models of the WikiKG90M link prediction challenge (Hu et al., 2021). 
We embed mentions combined with descriptions using MPNet (Song et al., 2020), concatenate the resulting description embedding with the entity embedding, and project it with a linear layer for the final representation of the entity. In combination with oDistMult-ERAvg (Albooyeh et al., 2020), we apply the aggregation of neighboring entities and relations on the entity embedding part only. The resulting aggregation is then concatenated with its description and finally projected.", + "bbox": [ + 507, + 105, + 882, + 312 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "This approach is closely related to BLP (Daza et al., 2021). The main differences to BLP are:", + "bbox": [ + 507, + 312, + 880, + 342 + ], + "page_idx": 6 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. Hu et al. (2021) use MPNet, whereas BLP uses BERT.", + "2. In combination with DistMult-ERAvg, we concatenate a learnable \"structural embedding\" to the CLS embedding of the language model, whereas BLP does not." + ], + "bbox": [ + 522, + 348, + 882, + 439 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "A.3 Experimental Setup", + "text_level": 1, + "bbox": [ + 509, + 451, + 717, + 466 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "For hyperparameter optimization of ComplEx (Trouillon et al., 2016), DistMult (Yang et al., 2015), and HittER (Chen et al., 2021), we used the multi-fidelity approach GraSH (Kochsiek et al., 2022) implemented in LibKGE (Broscheit et al., 2020) with 64 initial trials and trained for up to 64 epochs. For fold-in, we reused training hyperparameters and trained for a single epoch on the provided context. For text-based approaches, we used the hyperparameters and architectures proposed by the authors for the transductive split of Wikidata5M. 
We trained on up to 5 A6000-GPUs with 49GB of VRAM.", + "bbox": [ + 507, + 470, + 882, + 678 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "10640", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/89e501f861d5ef0214152d97d53cfcea733d7ae14141c864209868ba5ff5a7c3.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Wikidata ID | Mention | All entities | Long-tail entities
Q5 | human | 39% | 61%
Q11424 | film | 3% | 8%
Q484170 | commune of France | 1% | 7%
Q482994 | album | 3% | 1%
Q16521 | taxon | 9% | 1%
Q134556 | single | 1% | 1%
Q747074 | commune of Italy | 0% | 1%
Q2074737 | municipality of Spain | 0% | 1%
Q571 | book | 1% | 1%
Q7889 | video game | 1% | 1%
", + "bbox": [ + 221, + 83, + 776, + 278 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/baae9bbd293af72cab83695b604ca04663ba5fd908d5419b7ced6cda5d4535df.jpg", + "image_caption": [ + "Figure 1: Number of correct (rank=1) and incorrect predictions by KGT5+descriptions on annotated examples per annotation label." + ], + "image_footnote": [], + "bbox": [ + 262, + 335, + 736, + 583 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/6026a441f3b98506f54bfb89cbc3672f5394b099a10392ef3cd43300a9c5328d.jpg", + "table_caption": [ + "Table 6: Distribution of top 10 entity types over long-tail entities with degree between 11 and 20 compared to all entities." + ], + "table_footnote": [], + "table_body": "
Model | Trans. | Semi-inductive (num. shots)
 | 0 | 1 | 3 | 5 | 10
ComplEx + Bias + Fold in (Jambor et al., 2021) | 0.260 | 0.058 | 0.097 | 0.118 | 0.124 | 0.132
DistMult + ERAvg (Albooyeh et al., 2020) | 0.237 | - | 0.115 | 0.151 | 0.185 | 0.209
HittER (Chen et al., 2021) | 0.234 | 0.005 | 0.076 | 0.115 | 0.132 | 0.153
DistMult + ERAvg + Mentions | 0.239 | - | 0.106 | 0.142 | 0.153 | 0.167
SimKGC (mentions only) | 0.182 | 0.187 | - | - | - | -
KGT5 (Saxena et al., 2022) | 0.249 | 0.263 | - | - | - | -
KGT5-context (Kochsiek et al., 2023) | 0.347 | 0.184 | 0.177 | 0.195 | 0.218 | 0.263
DistMult + ERAvg + Descriptions | 0.252 | - | 0.152 | 0.153 | 0.153 | 0.161
SimKGC + Descriptions (Wang et al., 2022) | 0.311 | 0.349 | - | - | - | -
KGT5 + Descriptions | 0.332 | 0.430 | - | - | - | -
KGT5-context + Descriptions | 0.400 | 0.379 | 0.382 | 0.373 | 0.378 | 0.393
", + "bbox": [ + 122, + 640, + 873, + 890 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Table 7: Transductive and semi-inductive link prediction results in terms of H@1 on the dataset Wikidata5M-SI.", + "bbox": [ + 117, + 898, + 875, + 913 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "10641", + "bbox": [ + 477, + 927, + 522, + 940 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/0d637e223560b8fe3fe0b4aa3b32b9e0fe281fae22778a0d6f9bdd23d08e37c0.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Model | Trans. | Semi-inductive (num. shots)
 | 0 | 1 | 3 | 5 | 10
ComplEx + Bias + Fold in (Jambor et al., 2021) | 0.337 | 0.165 | 0.180 | 0.202 | 0.219 | 0.242
DistMult + ERAvg (Albooyeh et al., 2020) | 0.328 | - | 0.190 | 0.292 | 0.352 | 0.401
HittER (Chen et al., 2021) | 0.309 | 0.013 | 0.109 | 0.158 | 0.188 | 0.242
DistMult + ERAvg + Mentions | 0.332 | - | 0.239 | 0.289 | 0.314 | 0.340
SimKGC (mentions only) | 0.223 | 0.227 | - | - | - | -
KGT5 (Saxena et al., 2022) | 0.296 | 0.332 | - | - | - | -
KGT5-context (Kochsiek et al., 2023) | 0.390 | 0.236 | 0.234 | 0.257 | 0.278 | 0.335
DistMult + ERAvg + Descriptions | 0.344 | - | 0.368 | 0.373 | 0.378 | 0.380
SimKGC (Wang et al., 2022) | 0.367 | 0.421 | - | - | - | -
KGT5 + Descriptions | 0.385 | 0.490 | - | - | - | -
KGT5-context + Descriptions | 0.432 | 0.441 | 0.443 | 0.443 | 0.447 | 0.463
", + "bbox": [ + 122, + 152, + 875, + 399 + ], + "page_idx": 8 + }, + { + "type": "table", + "img_path": "images/3fd2ce994ab5d7c80f165bf238ab17d6880c5876b2bd282cf885ac708209371a.jpg", + "table_caption": [ + "Table 8: Transductive and semi-inductive link prediction results in terms of H@3 on the dataset Wikidata5M-SI." + ], + "table_footnote": [], + "table_body": "
Model | Trans. | Semi-inductive (num. shots)
 | 0 | 1 | 3 | 5 | 10
ComplEx + Bias + Fold in (Jambor et al., 2021) | 0.387 | 0.231 | 0.245 | 0.282 | 0.309 | 0.336
DistMult + ERAvg (Albooyeh et al., 2020) | 0.389 | - | 0.270 | 0.409 | 0.493 | 0.564
HittER (Chen et al., 2021) | 0.376 | 0.050 | 0.157 | 0.226 | 0.270 | 0.359
DistMult + ERAvg + Mentions | 0.411 | - | 0.320 | 0.392 | 0.440 | 0.478
SimKGC (mentions only) | 0.266 | 0.283 | - | - | - | -
KGT5 (Saxena et al., 2022) | 0.344 | 0.398 | - | - | - | -
KGT5-context (Kochsiek et al., 2023) | 0.423 | 0.293 | 0.295 | 0.310 | 0.336 | 0.400
DistMult + ERAvg + Descriptions | 0.425 | - | 0.465 | 0.472 | 0.484 | 0.491
SimKGC (Wang et al., 2022) | 0.432 | 0.504 | - | - | - | -
KGT5 + Descriptions | 0.416 | 0.544 | - | - | - | -
KGT5-context + Descriptions | 0.455 | 0.484 | 0.489 | 0.489 | 0.495 | 0.516
", + "bbox": [ + 122, + 571, + 875, + 820 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Table 9: Transductive and semi-inductive link prediction results in terms of H@10 on the dataset Wikidata5M-SI.", + "bbox": [ + 114, + 829, + 878, + 843 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "10642", + "bbox": [ + 477, + 928, + 524, + 940 + ], + "page_idx": 8 + }, + { + "type": "table", + "img_path": "images/b6a0092f4fc56e10af7c6f2a33953ece3eeddc716ad6b3f162b8fd09290bf1d1.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Model | Context selection | 1 | 3 | 5
ComplEx + fold-in | Most common | 0.151 | 0.161 | 0.168
 | Least common | 0.166 | 0.185 | 0.195
 | Random | 0.164 | 0.187 | 0.196
DistMult + ERAvg | Most common | 0.171 | 0.246 | 0.295
 | Least common | 0.217 | 0.299 | 0.323
 | Random | 0.215 | 0.303 | 0.318
oDistMult + ERAvg + Mentions | Most common | 0.187 | 0.235 | 0.258
 | Least common | 0.237 | 0.274 | 0.279
 | Random | 0.232 | 0.265 | 0.272
HittER | Most common | 0.105 | 0.153 | 0.179
 | Least common | 0.151 | 0.195 | 0.216
 | Random | 0.136 | 0.190 | 0.206
KGT5-context | Most common | 0.217 | 0.236 | 0.259
 | Least common | 0.253 | 0.273 | 0.290
 | Random | 0.237 | 0.260 | 0.281
KGT5-context + Desc. | Most common | 0.420 | 0.416 | 0.420
 | Least common | 0.423 | 0.424 | 0.430
 | Random | 0.422 | 0.430 | 0.430
", + "bbox": [ + 196, + 319, + 796, + 651 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Table 10: Influence of context selection. Semi-inductive test MRR. Best per model in bold.", + "bbox": [ + 191, + 659, + 803, + 675 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "10643", + "bbox": [ + 477, + 928, + 524, + 940 + ], + "page_idx": 9 + } +] \ No newline at end of file diff --git a/2023/A Benchmark for Semi-Inductive Link Prediction in Knowledge Graphs/472ef1b2-669d-4b07-ae82-dafb730e88d4_model.json b/2023/A Benchmark for Semi-Inductive Link Prediction in Knowledge Graphs/472ef1b2-669d-4b07-ae82-dafb730e88d4_model.json new file mode 100644 index 0000000000000000000000000000000000000000..56fdc2f17418066aa71c65db89ac8b91f3ba402b --- /dev/null +++ b/2023/A Benchmark for Semi-Inductive Link Prediction in Knowledge Graphs/472ef1b2-669d-4b07-ae82-dafb730e88d4_model.json @@ -0,0 +1,1518 @@ +[ + [ + { + "type": "title", + "bbox": [ + 0.126, + 0.09, + 0.873, + 0.112 + ], + "angle": 0, + "content": "A Benchmark for Semi-Inductive Link Prediction in Knowledge Graphs" + }, + { + "type": "text", + "bbox": [ + 0.26, + 0.138, + 0.411, + 0.153 + ], + "angle": 0, + "content": "Adrian Kochsiek" + }, + { + "type": "text", + "bbox": [ + 0.235, + 0.155, + 0.435, + 0.171 + ], + "angle": 0, + "content": "University of Mannheim" + }, + { + "type": "text", + "bbox": [ + 0.296, + 0.172, + 0.373, + 0.188 + ], + "angle": 0, + "content": "Germany" + }, + { + "type": "text", + "bbox": [ + 0.207, + 0.189, + 0.462, + 0.203 + ], + "angle": 0, + "content": "akochsiek@uni-mannheim.de" + }, + { + "type": "text", + "bbox": [ + 0.595, + 0.138, + 0.738, + 0.153 + ], + "angle": 0, + "content": "Rainer Gemulla" + }, + { + "type": "text", + "bbox": [ + 0.567, + 0.155, + 0.767, + 0.17 + ], + "angle": 0, + "content": "University of Mannheim" + }, + { + "type": "text", + "bbox": [ + 0.628, + 0.172, + 0.706, + 0.188 + ], + "angle": 0, + "content": "Germany" + }, + { + "type": 
"text", + "bbox": [ + 0.545, + 0.189, + 0.788, + 0.204 + ], + "angle": 0, + "content": "rgemulla@uni-mannheim.de" + }, + { + "type": "title", + "bbox": [ + 0.261, + 0.253, + 0.341, + 0.269 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.145, + 0.282, + 0.46, + 0.621 + ], + "angle": 0, + "content": "Semi-inductive link prediction (LP) in knowledge graphs (KG) is the task of predicting facts for new, previously unseen entities based on context information. Although new entities can be integrated by retraining the model from scratch in principle, such an approach is infeasible for large-scale KGs, where retraining is expensive and new entities may arise frequently. In this paper, we propose and describe a large-scale benchmark to evaluate semi-inductive LP models. The benchmark is based on and extends Wikidata5M: It provides transductive, k-shot, and 0-shot LP tasks, each varying the available information from (i) only KG structure, to (ii) including textual mentions, and (iii) detailed descriptions of the entities. We report on a small study of recent approaches and found that semi-inductive LP performance is far from transductive performance on long-tail entities throughout all experiments. The benchmark provides a test bed for further research into integrating context and textual information in semi-inductive LP models." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.636, + 0.26, + 0.651 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.662, + 0.49, + 0.837 + ], + "angle": 0, + "content": "A knowledge graph (KG) is a collection of facts describing relations between real-world entities. Facts are represented in the form of subject-relation-object triples such as (Dave Grohl, memberOf, Foo Fighters). In this paper, we consider link prediction (LP) tasks, i.e., the problem of inferring missing facts in the KG. 
LP can be transductive (TD; all entities known a priori), semi-inductive (SI; some entities known a priori), and inductive (no entities known a priori). We concentrate on semi-inductive and transductive LP." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.839, + 0.49, + 0.919 + ], + "angle": 0, + "content": "SI-LP focuses on modeling entities that are unknown or unseen during LP, such as out-of-KG entities (not part or not yet part of the KG) or newly created entities, e.g., a new user, product, or event. Such previously unknown entities can be" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.253, + 0.885, + 0.349 + ], + "angle": 0, + "content": "handled by retraining in principle. For large-scale KGs, however, retraining is inherently expensive and new entities may arise frequently. Therefore, the goal of SI-LP is to avoid retraining and perform LP directly, i.e., to generalize beyond the entities seen during training." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.35, + 0.886, + 0.494 + ], + "angle": 0, + "content": "To perform LP for unseen entities, context information about these entities is needed. The amount and form of context information varies widely and may take the form of facts and/or textual information, such as an entity mention and/or its description. For example, a new user in a social network may provide a name, basic facts such as gender or country of origin, and perhaps a textual self-description." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.495, + 0.886, + 0.782 + ], + "angle": 0, + "content": "In this paper, we introduce the Wikidata5M-SI benchmark for SI-LP. Our benchmark is based on the popular Wikidata5M (Wang et al., 2021) benchmark and has four major design goals: (G1) It ensures that unseen entities are long tail entities since popular entities (such as, say, Foo Fighters) and/or types and taxons (such as human and organization) are unlikely to be unseen. 
(G2) It allows to evaluate each model with varying amounts of contextual facts (0-shot, few-shot, transductive), i.e., to explore individual models across a range of tasks. (G3) It provides a controlled amount of textual information (none, mention, full description), where each setting demands different modeling capabilities. Finally, (G4) the benchmark is large-scale so that retraining is not a suitable approach. All prior SI-LP benchmarks violate at least one of these criteria." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.785, + 0.884, + 0.817 + ], + "angle": 0, + "content": "We report on a small experimental study with recent LP approaches. In general, we found that" + }, + { + "type": "text", + "bbox": [ + 0.525, + 0.828, + 0.884, + 0.86 + ], + "angle": 0, + "content": "1. SI performance was far behind TD performance in all experiments for long-tail entities," + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.866, + 0.881, + 0.898 + ], + "angle": 0, + "content": "2. there was generally a trade-off between TD and SI performance," + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.903, + 0.847, + 0.919 + ], + "angle": 0, + "content": "3. textual information was highly valuable," + }, + { + "type": "list", + "bbox": [ + 0.524, + 0.828, + 0.884, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.477, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "10634" + }, + { + "type": "footer", + "bbox": [ + 0.211, + 0.946, + 0.788, + 0.959 + ], + "angle": 0, + "content": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10634-10643" + }, + { + "type": "footer", + "bbox": [ + 0.278, + 0.959, + 0.72, + 0.972 + ], + "angle": 0, + "content": "December 6-10, 2023 ©2023 Association for Computational Linguistics" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.129, + 0.085, + 0.488, + 0.116 + ], + "angle": 0, + "content": "4. 
proper integration of context and textual information needs further exploration, and" + }, + { + "type": "text", + "bbox": [ + 0.129, + 0.125, + 0.488, + 0.155 + ], + "angle": 0, + "content": "5. facts involving less common relations provided more useful context." + }, + { + "type": "list", + "bbox": [ + 0.129, + 0.085, + 0.488, + 0.155 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.172, + 0.486, + 0.202 + ], + "angle": 0, + "content": "Our benchmark provides directions and a test bed for further research into SI-LP." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.219, + 0.268, + 0.234 + ], + "angle": 0, + "content": "2 Related Work" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.247, + 0.489, + 0.584 + ], + "angle": 0, + "content": "Multiple SI-LP datasets have been proposed in the literature. The benchmarks of Daza et al. (2021), Albooyeh et al. (2020), and Galkin et al. (2021) are obtained by first merging the splits of smaller transductive LP datasets and subsequently sampling unseen entities uniformly to construct validation and test splits. These benchmarks do not satisfy goals G1-G4. Shi and Weninger (2018) follow a similar approach but focus on only 0-shot evaluation based on textual features. Xie et al. (2016) and Shah et al. (2019) select entities from Freebase with connection to entities in FB15k (Bordes et al., 2013), also focussing on 0-shot evaluation using rich textual descriptions. These approaches do not satisfy G2 and G3. Finally, Wang et al. (2019) and Hamaguchi et al. (2017) uniformly sample test triples and mark occurring entities as unseen. These approaches do not focus on long-tail entities (and, in fact, the accumulated context of unseen entities may be larger than the training graph itself) and they do not satisfy G1-G3." 
+ }, + { + "type": "text", + "bbox": [ + 0.113, + 0.587, + 0.489, + 0.698 + ], + "angle": 0, + "content": "There are also several of fully-inductive LP benchmarks (Teru et al., 2020; Wang et al., 2021) involving KGs. While SI-LP aims to connect unseen entities to an existing KG, fully-inductive LP reasons about a new KG with completely separate entities (but shared relations). We do not consider this task in this work." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.714, + 0.422, + 0.73 + ], + "angle": 0, + "content": "3 The Wikidata5M-SI Benchmark" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.743, + 0.489, + 0.919 + ], + "angle": 0, + "content": "Wikidata5M-SI is based on the popular Wikidata5M (Wang et al., 2021) benchmark, which is induced by the 5M most common entities of Wikidata. Our benchmark contains transductive and semi-inductive valid/test splits; see Tab. 1 for an overview. Generally, we aimed to keep Wikidata5M-SI as close as possible to Wikidata5M. We did need to modify the original transductive valid and test splits, however, because they unintentionally contained both seen and unseen entities (i.e., these splits were not fully transductive). We" + }, + { + "type": "table", + "bbox": [ + 0.512, + 0.082, + 0.88, + 0.172 + ], + "angle": 0, + "content": "
 | Train | Transductive | Semi-inductive
 | | Valid | Test | Valid | Test
Triples | 20,600,187 | 4,983 | 4,977 | 5,500 | 5,500
Entities | 4,593,103 | 7,768 | 7,760 | 3,722 | 3,793
Entities unseen | - | 0 | 0 | 500 | 500
Relations | 822 | 217 | 211 | 126 | 115
" + }, + { + "type": "table_caption", + "bbox": [ + 0.536, + 0.181, + 0.855, + 0.196 + ], + "angle": 0, + "content": "Table 1: Statistics of the Wikidata5M-SI splits." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.224, + 0.881, + 0.254 + ], + "angle": 0, + "content": "did that by simply removing all triples involving unseen entities." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.258, + 0.884, + 0.449 + ], + "angle": 0, + "content": "Unseen entities. To ensure that unseen entities in the semi-inductive splits are from the long tail (G1), we only considered entities of degree 20 or less. To be able to provide sufficient context for few-shot tasks (G2), we further did not consider entities of degree 10 or less. In more detail, we sampled 500 entities of degrees 11-20 (stratified sampling grouped by degree) for each semi-inductive split. All sampled entities, along with their facts, were removed from the train split. Note that these entities (naturally) have a different class distribution than all entities; see Sec. A.1 for details." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.452, + 0.884, + 0.82 + ], + "angle": 0, + "content": "Tasks and metrics. For TD tasks, we follow the standard protocol of Wikidata5M. To construct SI tasks, we include 11 of the original facts of each unseen entity into its SI split; each split thus contains 5,500 triples. This enables up to 10-shot SI tasks (1 fact to test, up to 10 facts for context). For entities of degree larger than 11, we select the 11 facts with the most frequent relations; see Tab. 2 for an example. The rationale is that more common relations (such as instanceof or country) may be considered more likely to be provided for unseen entities than rare ones (such as militaryBranch or publisher). We then construct a single \\( k \\)-shot task for each triple \\( (s,p,o) \\) in the SI split as follows. When, say, \\( s \\) is the unseen entity, we consider the LP task \\( (s,p,?) 
\\) and provide \\( k \\) additional facts of form \\( (s,p',o') \\) as context. Context facts are selected by frequency as above, but we also explored random and infrequent-relation context in our study. Models are asked to provide a ranking of predicted answers, and we determine the filtered mean reciprocal rank (MRR) and Hits@K of the correct answer \\( (o) \\)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.824, + 0.884, + 0.919 + ], + "angle": 0, + "content": "Textual information. For each entity, we provide its principal mention and a detailed description (both directly from Wikidata5M); see Tab. 2. This allows to differentiate model evaluation with varying amounts of textual information per entity (G3): (A) atomic, i.e., no textual information, (M) men" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.525, + 0.941 + ], + "angle": 0, + "content": "10635" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.127, + 0.082, + 0.871, + 0.419 + ], + "angle": 0, + "content": "
ID | Q18918
Mention | Sam Witwer
Description | Samuel Stewart Witwer (born October 20, 1977) is an American actor and musician. He is known for portraying Crashdown in Battlestar Galactica, Davis Bloome in Smallville, Aidan Waite in Being Human, and Ben Lockwood in Supergirl. He voiced the protagonist Galen Marek / Starkiller in Star Wars: The Force Unleashed, the Son in Star Wars: The Clone Wars and Emperor Palpatine in Star Wars Rebels, both of which he has also voiced Darth Maul.
Context triples | instance of | human | M: ○ D: ○
 | country of citizenship | United States of America | M: × D: ○
 | occupation | musician | M: × D: ✓
 | occupation | actor | M: × D: ✓
 | place of birth | Glenview | M: × D: ×
 | given name | Samuel | M: ○ D: ✓
 | given name | Sam | M: ✓ D: ○
 | cast member | Battlestar Galactica | M: × D: ✓
 | cast member | Being Human - supernatural drama television series | M: × D: ✓
 | cast member | Star Wars: The Force Unleashed II | M: × D: ○
 | cast member | The Mist | M: × D: ×
" + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.429, + 0.884, + 0.472 + ], + "angle": 0, + "content": "Table 2: Example of an entity from the semi-inductive validation set of Wikidata5M-SI. For each triple, we annotated whether the answer is contained in (✓), deducible from (○), or not contained in (×) mention (M) or description (D)." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.497, + 0.489, + 0.641 + ], + "angle": 0, + "content": "tions only, and (D) detailed textual descriptions as in (Kochsiek et al., 2023). This differentiation is especially important in the SI setting, as detailed text descriptions might not be provided for unseen entities and each setting demands different modeling capabilities. In fact, (A) performs reasoning only using graph structure, whereas (D) also benefits from information extraction to some extent. We discuss this further in Sec. 5." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.653, + 0.421, + 0.684 + ], + "angle": 0, + "content": "4 Semi-Inductive Link Prediction Models" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.695, + 0.489, + 0.726 + ], + "angle": 0, + "content": "We briefly summarize recent models for SI-LP; we considered these models in our experimental study." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.727, + 0.49, + 0.919 + ], + "angle": 0, + "content": "Graph-only models. ComplEx (Trouillon et al., 2016) is the best-performing transductive KGE model on Wikidata5M (Kochsiek et al., 2022). To use ComplEx for SI-LP, we follow an approach explored by Jambor et al. (2021). In particular, we represent each entity as the sum of a local embedding (one per entity) and a global bias embedding. For 0-shot, we solely use the global bias for the unseen entity. For k-shot, we obtain the local embedding for the unseen entity by performing a single training step on the context triples (keeping all other embeddings fixed). 
An alternative" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.497, + 0.885, + 0.738 + ], + "angle": 0, + "content": "approach is taken by oDistMult-ERAvg (Albooyeh et al., 2020), which represents unseen entities by aggregating the embeddings of the relations and entities in the context. A more direct approach is taken by HittER (Chen et al., 2021), which contextualizes the query entity with its neighborhood for TD-LP. The approach can be used for SI-LP directly by using a masking token (akin to the global bias above) for an unseen entity. We originally planned to consider NodePiece (Galkin et al., 2021) (entity represented by a combination of anchor embeddings) and NBFNet (Zhu et al., 2021) (a GNN-based LP model); both support SI-LP directly. However, the available implementations did not scale to Wikidata5M-SI (out of memory)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.741, + 0.884, + 0.79 + ], + "angle": 0, + "content": "Text-based models. As a baseline approach to integrate textual information directly into KGE models, we consider the approach explored in the" + }, + { + "type": "page_footnote", + "bbox": [ + 0.508, + 0.809, + 0.883, + 0.846 + ], + "angle": 0, + "content": "To address the high memory footprint (Galkin et al., 2021) of oDistMult-ERAvg, we extend it with neighborhood sampling." + }, + { + "type": "page_footnote", + "bbox": [ + 0.508, + 0.847, + 0.883, + 0.919 + ], + "angle": 0, + "content": "For NBFNet (Zhu et al., 2021), the large memory footprint is inherent to the model; it is a full-graph GNN and hard to scale. For NodePiece (Galkin et al., 2021), however, the problem mainly lies in the expensive evaluation. All intermediate representations are precomputed, leading to a large memory overhead." 
+ }, + { + "type": "list", + "bbox": [ + 0.508, + 0.809, + 0.883, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "10636" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.117, + 0.083, + 0.492, + 0.421 + ], + "angle": 0, + "content": "WikiKG90M benchmark (Hu et al., 2021); see Sec. A.2 for details. The remaining approaches are purely textual. SimKGC (Wang et al., 2022) utilizes two pretrained BERT Transformers: one to embed query entities (and relations) based on their mention or description, and one for tail entities. Using a contrastive learning approach, it measures cosine similarity between both representations for ranking. KGT5 (Saxena et al., 2022) is a sequence-to-sequence link prediction approach, which is trained to generate the mention of the answer entity using the mention or description of the query entity and relation as input. Both approaches support 0-shot SI-LP when textual information is provided for the query entity. They do not utilize additional context, however, i.e., do not support k-shot SI-LP. KGT5-context (Kochsiek et al., 2023) is an extension of KGT5, which extends the input of KGT5 by the one-hop neighborhood of the query entity and consequently supports k-shot LP directly." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.436, + 0.322, + 0.453 + ], + "angle": 0, + "content": "5 Experimental Study" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.463, + 0.489, + 0.558 + ], + "angle": 0, + "content": "We evaluated all presented baseline models in the TD and SI setting on the atomic, mentions, and descriptions dataset. Further, we evaluated in detail which context was most useful and what information was conveyed by textual mentions and descriptions." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.561, + 0.49, + 0.639 + ], + "angle": 0, + "content": "Setup. 
Source code, configuration, and the benchmark itself are available at https://github.com/uma-pi1/wikidata5m-si. For further details on hyperparameter tuning and training see Sec. A.3." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.641, + 0.489, + 0.85 + ], + "angle": 0, + "content": "Main results. Transductive and SI performance in terms of MRR of all models is presented in Tab. 3; Hits@K in Tab. 7-9 (Sec. A). Note that overall transductive performance was oftentimes below best reported SI performance. This is due to varying degrees of query entities between both settings. Typically, models perform better predicting new relations for an entity (e.g., the birthplace) than predicting additional objects for a known relation (e.g., additional awards won by a person) (Saxena et al., 2022; Kochsiek et al., 2023). For a direct comparison between both settings, we additionally report TD performance on long tail query entities.\\(^{3}\\)" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.851, + 0.49, + 0.882 + ], + "angle": 0, + "content": "Atomic. TD performance on the long tail was considerably higher than SI performance. As no in" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.883, + 0.165 + ], + "angle": 0, + "content": "formation was provided for unseen entities, 0-shot was not reasonably possible. Without text-based information, context was a necessity. A simple neighborhood aggregation—entity-relation average (ERAvg)—offered the best integration of context." + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.168, + 0.885, + 0.586 + ], + "angle": 0, + "content": "Mentions. Integrating mentions did not improve performance on its own, as provided text information was still limited. However, additionally providing context information during inference (KGT5-context) simplified the learning problem and improved TD performance significantly. But for 0-shot, the limited text information provided with mentions allowed for reasonable performance. 
To analyze what information is conveyed for 0-shot, we annotated 100 valid triples; see Tab. 4. In \\(10\\%\\) of cases, the answer was already contained in the mention, and it was deducible in at least \\(7\\%\\). This enabled basic reasoning without any further information. In contrast to the TD setting, KGT5 outperformed its context extension. KGT5-context was reliant on context which was lacking especially during 0-shot. This showed a trade-off between best performance in the SI and TD setting. This trade-off could be mitigated by applying (full and partial) context hiding. With such adapted training, KGT5-context reached a middle ground with a transductive MRR of 0.366 and 0-shot MRR of 0.283.4 However, even with full context (10-shot), performance was still only on par with KGT5. Therefore, context information did not bring any further benefits when text was provided." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.589, + 0.884, + 0.701 + ], + "angle": 0, + "content": "Descriptions. Further, integrating descriptions improved performance for both settings, TD and SI, considerably; see Tab. 3. Similar to the mentions-only setting, KGT5-context performed best in TD and KGT5 in the SI setting. Applying the same trade-off with context-hiding reached a middle ground with 0.418 TD-MRR and 0.449 SI-MRR." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.704, + 0.884, + 0.864 + ], + "angle": 0, + "content": "Descriptions were very detailed and partially contained the correct answer as well as the same information as contained in context triples; see Tab. 4. Therefore, performance did not further improve with context size. In such cases, models mainly benefit from information extraction capabilities. 
To judge how much information extraction helps, we grouped performance of KGT5+description in the 0-shot setting on validation data into the groups contained, deducible and not contained in descrip" + }, + { + "type": "page_footnote", + "bbox": [ + 0.508, + 0.881, + 0.884, + 0.919 + ], + "angle": 0, + "content": "4In \\(25\\% / 25\\% / 50\\%\\) of cases, we hid the full context/sampled between 1-10 neighbors/used the full context, respectively." + }, + { + "type": "page_footnote", + "bbox": [ + 0.114, + 0.892, + 0.488, + 0.919 + ], + "angle": 0, + "content": "3We define long tail query entities as entities with degree \\(\\leq 10\\) as in the SI setting." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "10637" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.114, + 0.082, + 0.881, + 0.284 + ], + "angle": 0, + "content": "
Model | Transductive | Semi-inductive (num. shots) | Pre-trained
 | All | Long tail | 0 | 1 | 3 | 5 | 10 |
ComplEx + Bias + Fold in (Jambor et al., 2021) | 0.308 | 0.523 | 0.124 | 0.151 | 0.176 | 0.190 | 0.206 | no
DistMult + ERAvg (Albooyeh et al., 2020) | 0.294 | 0.512 | - | 0.171 | 0.246 | 0.295 | 0.333 | no
HittER (Chen et al., 2021) | 0.284 | 0.512 | 0.019 | 0.105 | 0.153 | 0.179 | 0.221 | no
DistMult + ERAvg + Mentions | 0.299 | 0.535 | - | 0.187 | 0.235 | 0.258 | 0.280 | yes
SimKGC (mentions only) | 0.212 | 0.361 | 0.220 | - | - | - | - | yes
KGT5 (Saxena et al., 2022) | 0.281 | 0.542 | 0.310 | - | - | - | - | no
KGT5-context (Kochsiek et al., 2023) | 0.374 | 0.678 | 0.220 | 0.217 | 0.236 | 0.259 | 0.311 | no
DistMult + ERAvg + Descriptions | 0.313 | 0.585 | - | 0.278 | 0.281 | 0.285 | 0.292 | yes
SimKGC + Descriptions (Wang et al., 2022) | 0.353 | 0.663 | 0.403 | - | - | - | - | yes
KGT5 + Descriptions (Kochsiek et al., 2023) | 0.364 | 0.728 | 0.470 | - | - | - | - | no
KGT5-context + Descriptions (Kochsiek et al., 2023) | 0.420 | 0.777 | 0.417 | 0.420 | 0.416 | 0.420 | 0.437 | no
" + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.295, + 0.884, + 0.339 + ], + "angle": 0, + "content": "Table 3: Transductive and semi-inductive link prediction results in terms of MRR on the dataset Wikidata5M-SI. The first group presets results on the atomic, the second on the mentions and the third on the descriptions dataset. Best per TD/SI in bold. Best per group underlined." + }, + { + "type": "table", + "bbox": [ + 0.14, + 0.36, + 0.465, + 0.435 + ], + "angle": 0, + "content": "
MentionDescription
Contained10%44%
Deductible7%10%
Not contained83%46%
" + }, + { + "type": "table_caption", + "bbox": [ + 0.114, + 0.445, + 0.487, + 0.487 + ], + "angle": 0, + "content": "Table 4: Information about a query answer contained in mentions and descriptions. Annotated for 100 sampled triples from 0-shot valid. For an example, see Tab. 2." + }, + { + "type": "table", + "bbox": [ + 0.139, + 0.505, + 0.465, + 0.576 + ], + "angle": 0, + "content": "
Context selection135
Most common0.2170.2360.259
Least common0.2530.2730.290
Random0.2370.2600.281
" + }, + { + "type": "table_caption", + "bbox": [ + 0.114, + 0.585, + 0.487, + 0.613 + ], + "angle": 0, + "content": "Table 5: Influence of context selection. Semi-inductive test MRR of KGT5-context." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.644, + 0.486, + 0.675 + ], + "angle": 0, + "content": "tion; see Fig. 1 in Sec. A. When contained, the correct answer was extracted in \\(\\approx 70\\%\\) of cases." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.679, + 0.489, + 0.919 + ], + "angle": 0, + "content": "Context selection. We selected the most common relations as context triples so far, as this may be a more realistic setting. To investigate the effect of this selection approach, we compared the default selection of choosing most common relations to least common and random. Results for KGT5-context are shown in Tab. 5; for all other models in Tab. 10 in Sec. A. We found that the less common the relations of the provided context, the better the SI performance. More common context relations often described high-level concepts, while less common provided further detail; see the example in Tab. 2. While more common context may be more readily available, less common context was more helpful to describe a new entity." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.363, + 0.642, + 0.377 + ], + "angle": 0, + "content": "6 Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.389, + 0.884, + 0.598 + ], + "angle": 0, + "content": "We proposed the new WikiData5M-SI large-scale benchmark for semi-supervised link prediction. The benchmark focuses on unseen entities from the long tail and allows to evaluate models with varying and controlled amounts of factual and textual context information. In our experimental evaluation, we found that semi-inductive LP performance fell behind transductive performance for long-tail entities in general, and that detailed textual information was often more valuable than factual context information. 
Moreover, current models did not integrate these two types of information adequately, suggesting a direction for future research." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.612, + 0.615, + 0.627 + ], + "angle": 0, + "content": "Limitations" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.638, + 0.884, + 0.783 + ], + "angle": 0, + "content": "This study was performed on Wikidata5M-SI, i.e., a subset of a single knowledge graph. Model performance and insights may vary if graph structure and/or availability and usefulness of mentions and descriptions are different. In particular, the entity descriptions provided with Wikidata5M-SI partly contained information relevant for link prediction so that models benefited from information extraction capabilities." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.797, + 0.661, + 0.812 + ], + "angle": 0, + "content": "Ethics Statement" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.823, + 0.884, + 0.903 + ], + "angle": 0, + "content": "This research adapts publicly available data, benchmarks, and codebases for evaluation. We believe that this research was conducted in an ethical manner in compliance with all relevant laws and regulations." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "10638" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.116, + 0.085, + 0.214, + 0.099 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.107, + 0.49, + 0.174 + ], + "angle": 0, + "content": "Marjan Albooyeh, Rishab Goel, and Seyed Mehran Kazemi. 2020. Out-of-sample representation learning for knowledge graphs. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 2657-2666." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.183, + 0.49, + 0.25 + ], + "angle": 0, + "content": "Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. 
Translating embeddings for modeling multi-relational data. In Neural Information Processing Systems (NIPS), pages 1-9." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.258, + 0.49, + 0.35 + ], + "angle": 0, + "content": "Samuel Broscheit, Daniel Ruffinelli, Adrian Kochsiek, Patrick Betz, and Rainer Gemulla. 2020. LibKGE - A knowledge graph embedding library for reproducible research. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 165-174." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.36, + 0.49, + 0.44 + ], + "angle": 0, + "content": "Sanxing Chen, Xiaodong Liu, Jianfeng Gao, Jian Jiao, Ruofei Zhang, and Yangfeng Ji. 2021. Hitter: Hierarchical transformers for knowledge graph embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10395-10407." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.448, + 0.489, + 0.502 + ], + "angle": 0, + "content": "Daniel Daza, Michael Cochez, and Paul Groth. 2021. Inductive entity representations from text via link prediction. In Proceedings of the Web Conference 2021, pages 798-808." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.511, + 0.49, + 0.577 + ], + "angle": 0, + "content": "Mikhail Galkin, Etienne Denis, Jiapeng Wu, and William L Hamilton. 2021. Nodepiece: Compositional and parameter-efficient representations of large knowledge graphs. In International Conference on Learning Representations." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.586, + 0.49, + 0.666 + ], + "angle": 0, + "content": "Takuo Hamaguchi, Hidekazu Oiwa, Masashi Shimbo, and Yuji Matsumoto. 2017. Knowledge transfer for out-of-knowledge-base entities: a graph neural network approach. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 1802-1808." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.675, + 0.49, + 0.742 + ], + "angle": 0, + "content": "Weihua Hu, Matthias Fey, Hongyu Ren, Maho Nakata, Yuxiao Dong, and Jure Leskovec. 2021. Ogb-lsc: A large-scale challenge for machine learning on graphs. Advances in Neural Information Processing Systems, 35." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.751, + 0.49, + 0.83 + ], + "angle": 0, + "content": "Dora Jambor, Komal Teru, Joelle Pineau, and William L Hamilton. 2021. Exploring the limits of few-shot link prediction in knowledge graphs. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2816-2822." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.84, + 0.49, + 0.919 + ], + "angle": 0, + "content": "Adrian Kochsiek, Fritz Niesel, and Rainer Gemulla. 2022. Start small, think big: On hyperparameter optimization for large-scale knowledge graph embeddings. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2022, Grenoble, France, September" + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.107, + 0.49, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.53, + 0.086, + 0.884, + 0.114 + ], + "angle": 0, + "content": "19-23, 2022, Proceedings, Part II, pages 138-154. Springer." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.125, + 0.884, + 0.191 + ], + "angle": 0, + "content": "Adrian Kochsiek, Apoorv Saxena, Inderjeet Nair, and Rainer Gemulla. 2023. Friendly neighbors: Contextualized sequence-to-sequence link prediction. In Proceedings of the 8th Workshop on Representation Learning for NLP." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.203, + 0.884, + 0.282 + ], + "angle": 0, + "content": "Apoorv Saxena, Adrian Kochsiek, and Rainer Gemulla. 2022. Sequence-to-sequence knowledge graph completion and question answering. 
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2814-2828." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.293, + 0.884, + 0.372 + ], + "angle": 0, + "content": "Haseeb Shah, Johannes Villmow, Adrian Ulges, Ulrich Schwanecke, and Faisal Shafait. 2019. An open-world extension to knowledge graph completion models. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 3044-3051." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.384, + 0.884, + 0.436 + ], + "angle": 0, + "content": "Baoxu Shi and Tim Weninger. 2018. Open-world knowledge graph completion. In Proceedings of the AAAI conference on artificial intelligence, volume 32." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.449, + 0.884, + 0.514 + ], + "angle": 0, + "content": "Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and TieYan Liu. 2020. Mpnet: Masked and permuted pretraining for language understanding. Advances in Neural Information Processing Systems, 33:16857-16867." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.526, + 0.884, + 0.58 + ], + "angle": 0, + "content": "Komal Teru, Etienne Denis, and Will Hamilton. 2020. Inductive relation prediction by subgraph reasoning. In International Conference on Machine Learning, pages 9448-9457." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.592, + 0.884, + 0.658 + ], + "angle": 0, + "content": "Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In International conference on machine learning, pages 2071-2080." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.67, + 0.884, + 0.751 + ], + "angle": 0, + "content": "Liang Wang, Wei Zhao, Zhuoyu Wei, and Jingming Liu. 2022. Simkgc: Simple contrastive knowledge graph completion with pre-trained language models. 
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4281-4294." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.762, + 0.884, + 0.828 + ], + "angle": 0, + "content": "Peifeng Wang, Jialong Han, Chenliang Li, and Rong Pan. 2019. Logic attention based neighborhood aggregation for inductive knowledge graph embedding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7152-7159." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.84, + 0.884, + 0.918 + ], + "angle": 0, + "content": "Xiaozhi Wang, Tianyu Gao, Zhaocheng Zhu, Zhengyan Zhang, Zhiyuan Liu, Juanzi Li, and Jian Tang. 2021. Kepler: A unified model for knowledge embedding and pre-trained language representation. Transactions of the Association for Computational Linguistics, 9:176-194." + }, + { + "type": "list", + "bbox": [ + 0.512, + 0.086, + 0.884, + 0.918 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.525, + 0.941 + ], + "angle": 0, + "content": "10639" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.116, + 0.086, + 0.488, + 0.152 + ], + "angle": 0, + "content": "Ruobing Xie, Zhiyuan Liu, Jia Jia, Huanbo Luan, and Maosong Sun. 2016. Representation learning of knowledge graphs with entity descriptions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30." + }, + { + "type": "text", + "bbox": [ + 0.116, + 0.162, + 0.488, + 0.228 + ], + "angle": 0, + "content": "Bishan Yang, Scott Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In Proceedings of the International Conference on Learning Representations (ICLR) 2015." + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.238, + 0.488, + 0.303 + ], + "angle": 0, + "content": "Zhaocheng Zhu, Zuobai Zhang, Louis-Pascal Xhonneux, and Jian Tang. 2021. 
Neural bellman-ford networks: A general graph neural network framework for link prediction. Advances in Neural Information Processing Systems, 34." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.317, + 0.238, + 0.332 + ], + "angle": 0, + "content": "A Appendix" + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.342, + 0.406, + 0.356 + ], + "angle": 0, + "content": "A.1 Distribution of Unseen Entities" + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.362, + 0.489, + 0.683 + ], + "angle": 0, + "content": "Long-tail entities have a different distribution than entities from the whole KG; see Tab. 6 for an overview of the distribution shift for the top 10 entity types. This difference is natural. In particular, high-degree entities in a KG such as Wikidata often refer to types/taxons (e.g., human, organization, ...) as well as popular named entities (e.g., Albert Einstein, Germany, ...). These entities are fundamental to the KG and/or of high interest and have many facts associated with them. For this reason, they do not form suitable candidates for benchmarking unseen or new entities. In addition, removing high-degree entities for the purpose of evaluating SI-LP is likely to distort the KG (e.g., consider removing type \"human\" or \"Germany\"). In contrast, Wikidata5M-SI focuses on entities for which knowledge is not yet abundant: long-tail entities are accompanied by no or few facts (at least initially) and our SI-LP benchmark tests reasoning capabilities with this limited information." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.085, + 0.83, + 0.1 + ], + "angle": 0, + "content": "A.2 Integrating Text into KGE Models" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.106, + 0.884, + 0.313 + ], + "angle": 0, + "content": "To integrate text into traditional KGE models, we follow the baseline models of the WikiKG90M link prediction challenge (Hu et al., 2021). 
We embed mentions combined with descriptions using MPNet (Song et al., 2020), concatenate the resulting description embedding with the entity embedding, and project it with a linear layer for the final representation of the entity. In combination with oDistMult-ERAvg (Albooyeh et al., 2020), we apply the aggregation of neighboring entities and relations on the entity embedding part only. The resulting aggregation is then concatenated with its description and finally projected." + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.313, + 0.882, + 0.343 + ], + "angle": 0, + "content": "This approach is closely related to BLP (Daza et al., 2021). The main differences from BLP are:" + }, + { + "type": "text", + "bbox": [ + 0.525, + 0.349, + 0.884, + 0.365 + ], + "angle": 0, + "content": "1. Hu et al. (2021) use MPNet, whereas BLP uses BERT." + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.377, + 0.882, + 0.44 + ], + "angle": 0, + "content": "2. In combination with DistMult-ERAvg, we concatenate a learnable \"structural embedding\" to the CLS embedding of the language model, whereas BLP does not." + }, + { + "type": "list", + "bbox": [ + 0.524, + 0.349, + 0.884, + 0.44 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.452, + 0.719, + 0.467 + ], + "angle": 0, + "content": "A.3 Experimental Setup" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.472, + 0.884, + 0.68 + ], + "angle": 0, + "content": "For hyperparameter optimization for ComplEx (Trouillon et al., 2016), DistMult (Yang et al., 2015), and HittER (Chen et al., 2021), we used the multi-fidelity approach GraSH (Kochsiek et al., 2022) implemented in LibKGE (Broscheit et al., 2020) with 64 initial trials and trained for up to 64 epochs. For fold-in, we reused training hyperparameters and trained for a single epoch on the provided context. For text-based approaches, we used the hyperparameters and architectures proposed by the authors for the transductive split of Wikidata5M. 
We trained on up to 5 A6000-GPUs with 49GB of VRAM." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "10640" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.223, + 0.084, + 0.778, + 0.279 + ], + "angle": 0, + "content": "
WikidataIDMentionAll entitiesLong-tail entities
Q5human39%61%
Q11424film3%8%
Q484170commune of France1%7%
Q482994album3%1%
Q16521taxon9%1%
Q134556single1%1%
Q747074commune of Italy0%1%
Q2074737municipality of Spain0%1%
Q571book1%1%
Q7889video game1%1%
" + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.288, + 0.884, + 0.318 + ], + "angle": 0, + "content": "Table 6: Distribution of top 10 entity types over long-tail entities with degree between 11 and 20 compared to all entities." + }, + { + "type": "image", + "bbox": [ + 0.263, + 0.336, + 0.737, + 0.584 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.603, + 0.884, + 0.631 + ], + "angle": 0, + "content": "Figure 1: Number of correct (rank=1) and incorrect predictions by KGT5+descriptions on annotated examples per annotation label." + }, + { + "type": "table", + "bbox": [ + 0.124, + 0.642, + 0.875, + 0.891 + ], + "angle": 0, + "content": "
ModelTrans.Semi-inductive (num. shots)
013510
Complex + Bias + Fold in (Jambor et al., 2021)0.2600.0580.0970.1180.1240.132
DistMult + ERAvg (Albooyeh et al., 2020)0.237-0.1150.1510.1850.209
HittER (Chen et al., 2021)0.2340.0050.0760.1150.1320.153
DistMult + ERAvg + Mentions0.239-0.1060.1420.1530.167
SimKGC (mentions only)0.1820.187----
KGT5 (Saxena et al., 2022)0.2490.263----
KGT5-context (Kochsiek et al., 2023)0.3470.1840.1770.1950.2180.263
DistMult + ERAvg + Descriptions0.252-0.1520.1530.1530.161
SimKGC + Descriptions (Wang et al., 2022)0.3110.349----
KGT5 + Descriptions0.3320.430----
KGT5-context + Descriptions0.4000.3790.3820.3730.3780.393
" + }, + { + "type": "table_caption", + "bbox": [ + 0.119, + 0.899, + 0.877, + 0.914 + ], + "angle": 0, + "content": "Table 7: Transductive and semi-inductive link prediction results in terms of H@1 on the dataset Wikidata5M-SI." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.524, + 0.941 + ], + "angle": 0, + "content": "10641" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.123, + 0.153, + 0.877, + 0.4 + ], + "angle": 0, + "content": "
ModelTrans.Semi-inductive (num. shots)
013510
ComplEx + Bias + Fold in (Jambor et al., 2021)0.3370.1650.1800.2020.2190.242
DistMult + ERAvg (Albooyeh et al., 2020)0.328-0.1900.2920.3520.401
HittER (Chen et al., 2021)0.3090.0130.1090.1580.1880.242
DistMult + ERAvg + Mentions0.332-0.2390.2890.3140.340
SimKGC (mentions only)0.2230.227----
KGT5 (Saxena et al., 2022)0.2960.332----
KGT5-context (Kochsiek et al., 2023)0.3900.2360.2340.2570.2780.335
DistMult + ERAvg + Descriptions0.344-0.3680.3730.3780.380
SimKGC (Wang et al., 2022)0.3670.421----
KGT5 + Descriptions0.3850.490----
KGT5-context + Descriptions0.4320.4410.4430.4430.4470.463
" + }, + { + "type": "table_caption", + "bbox": [ + 0.12, + 0.409, + 0.877, + 0.424 + ], + "angle": 0, + "content": "Table 8: Transductive and semi-inductive link prediction results in terms of H@3 on the dataset Wikidata5M-SI." + }, + { + "type": "table", + "bbox": [ + 0.123, + 0.573, + 0.877, + 0.821 + ], + "angle": 0, + "content": "
ModelTrans.Semi-inductive (num. shots)
013510
ComplEx + Bias + Fold in (Jambor et al., 2021)0.3870.2310.2450.2820.3090.336
DistMult + ERAvg (Albooyeh et al., 2020)0.389-0.2700.4090.4930.564
HittER (Chen et al., 2021)0.3760.0500.1570.2260.2700.359
DistMult + ERAvg + Mentions0.411-0.3200.3920.4400.478
SimKGC (mentions only)0.2660.283----
KGT5 (Saxena et al., 2022)0.3440.398----
KGT5-context (Kochsiek et al., 2023)0.4230.2930.2950.3100.3360.400
DistMult + ERAvg + Descriptions0.425-0.4650.4720.4840.491
SimKGC (Wang et al., 2022)0.4320.504----
KGT5 + Descriptions0.4160.544----
KGT5-context + Descriptions0.4550.4840.4890.4890.4950.516
" + }, + { + "type": "table_caption", + "bbox": [ + 0.115, + 0.83, + 0.88, + 0.844 + ], + "angle": 0, + "content": "Table 9: Transductive and semi-inductive link prediction results in terms of H@10 on the dataset Wikidata5M-SI." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.526, + 0.941 + ], + "angle": 0, + "content": "10642" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.197, + 0.32, + 0.798, + 0.652 + ], + "angle": 0, + "content": "
ModelContext selection135
ComplEx + fold-inMost common0.1510.1610.168
Least common0.1660.1850.195
Random0.1640.1870.196
DistMult + ERAvgMost common0.1710.2460.295
Least common0.2170.2990.323
Random0.2150.3030.318
oDistMult + ERAvg + MentionsMost common0.1870.2350.258
Least common0.2370.2740.279
Random0.2320.2650.272
HittERMost common0.1050.1530.179
Least common0.1510.1950.216
Random0.1360.1900.206
KGT5-contextMost common0.2170.2360.259
Least common0.2530.2730.290
Random0.2370.2600.281
KGT5-context + Desc.Most common0.4200.4160.420
Least common0.4230.4240.430
Random0.4220.4300.430
" + }, + { + "type": "table_caption", + "bbox": [ + 0.192, + 0.661, + 0.804, + 0.676 + ], + "angle": 0, + "content": "Table 10: Influence of context selection. Semi-inductive test MRR. Best per model in bold." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.525, + 0.941 + ], + "angle": 0, + "content": "10643" + } + ] +] \ No newline at end of file diff --git a/2023/A Benchmark for Semi-Inductive Link Prediction in Knowledge Graphs/472ef1b2-669d-4b07-ae82-dafb730e88d4_origin.pdf b/2023/A Benchmark for Semi-Inductive Link Prediction in Knowledge Graphs/472ef1b2-669d-4b07-ae82-dafb730e88d4_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d8107ff26c4b5a1ea859a745b86db7c6ed417462 --- /dev/null +++ b/2023/A Benchmark for Semi-Inductive Link Prediction in Knowledge Graphs/472ef1b2-669d-4b07-ae82-dafb730e88d4_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c0bcc909175c997fdcf394be70be0f3d49a831844a93b248d07e56456a42ab5e +size 221562 diff --git a/2023/A Benchmark for Semi-Inductive Link Prediction in Knowledge Graphs/full.md b/2023/A Benchmark for Semi-Inductive Link Prediction in Knowledge Graphs/full.md new file mode 100644 index 0000000000000000000000000000000000000000..16ade4e5b4d489dc06f098410709ba4a6f1e0b80 --- /dev/null +++ b/2023/A Benchmark for Semi-Inductive Link Prediction in Knowledge Graphs/full.md @@ -0,0 +1,203 @@ +# A Benchmark for Semi-Inductive Link Prediction in Knowledge Graphs + +Adrian Kochsiek + +University of Mannheim + +Germany + +akochsiek@uni-mannheim.de + +Rainer Gemulla + +University of Mannheim + +Germany + +rgemulla@uni-mannheim.de + +# Abstract + +Semi-inductive link prediction (LP) in knowledge graphs (KG) is the task of predicting facts for new, previously unseen entities based on context information. 
Although new entities can be integrated by retraining the model from scratch in principle, such an approach is infeasible for large-scale KGs, where retraining is expensive and new entities may arise frequently. In this paper, we propose and describe a large-scale benchmark to evaluate semi-inductive LP models. The benchmark is based on and extends Wikidata5M: It provides transductive, k-shot, and 0-shot LP tasks, each varying the available information from (i) only KG structure, to (ii) including textual mentions, and (iii) detailed descriptions of the entities. We report on a small study of recent approaches and find that semi-inductive LP performance falls far short of transductive performance on long-tail entities throughout all experiments. The benchmark provides a test bed for further research into integrating context and textual information in semi-inductive LP models.

# 1 Introduction

A knowledge graph (KG) is a collection of facts describing relations between real-world entities. Facts are represented in the form of subject-relation-object triples such as (Dave Grohl, memberOf, Foo Fighters). In this paper, we consider link prediction (LP) tasks, i.e., the problem of inferring missing facts in the KG. LP can be transductive (TD; all entities known a priori), semi-inductive (SI; some entities known a priori), or inductive (no entities known a priori). We concentrate on semi-inductive and transductive LP.

SI-LP focuses on modeling entities that are unknown or unseen during LP, such as out-of-KG entities (not or not yet part of the KG) or newly created entities, e.g., a new user, product, or event. Such previously unknown entities can be handled by retraining in principle. For large-scale KGs, however, retraining is inherently expensive and new entities may arise frequently. Therefore, the goal of SI-LP is to avoid retraining and perform LP directly, i.e., to generalize beyond the entities seen during training.
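The transductive/semi-inductive distinction can be made concrete with a small sketch. The triples and entity names below are toy illustrations, not data from the benchmark:

```python
# Toy sketch of the TD/SI distinction: a KG is a set of
# (subject, relation, object) triples, and a semi-inductive query asks about
# an entity that never occurred in the training triples.
train = {
    ("dave_grohl", "memberOf", "foo_fighters"),
    ("foo_fighters", "genre", "rock"),
    ("nirvana", "genre", "rock"),
}
# Entities seen during training appear as some triple's subject or object.
seen = {e for s, _, o in train for e in (s, o)}

def task_type(query_subject):
    """Classify an LP query (s, p, ?) by whether s was seen during training."""
    return "transductive" if query_subject in seen else "semi-inductive"

print(task_type("dave_grohl"))  # transductive
print(task_type("new_entity"))  # semi-inductive
```

In the k-shot SI tasks of the benchmark, a query about an unseen entity additionally comes with k context triples about that entity; in the 0-shot case only textual information (if any) is available.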
+ +To perform LP for unseen entities, context information about these entities is needed. The amount and form of context information varies widely and may take the form of facts and/or textual information, such as an entity mention and/or its description. For example, a new user in a social network may provide a name, basic facts such as gender or country of origin, and perhaps a textual self-description. + +In this paper, we introduce the Wikidata5M-SI benchmark for SI-LP. Our benchmark is based on the popular Wikidata5M (Wang et al., 2021) benchmark and has four major design goals: (G1) It ensures that unseen entities are long tail entities since popular entities (such as, say, Foo Fighters) and/or types and taxons (such as human and organization) are unlikely to be unseen. (G2) It allows to evaluate each model with varying amounts of contextual facts (0-shot, few-shot, transductive), i.e., to explore individual models across a range of tasks. (G3) It provides a controlled amount of textual information (none, mention, full description), where each setting demands different modeling capabilities. Finally, (G4) the benchmark is large-scale so that retraining is not a suitable approach. All prior SI-LP benchmarks violate at least one of these criteria. + +We report on a small experimental study with recent LP approaches. In general, we found that + +1. SI performance was far behind TD performance in all experiments for long-tail entities, +2. there was generally a trade-off between TD and SI performance, +3. textual information was highly valuable, + +4. proper integration of context and textual information needs further exploration, and +5. facts involving less common relations provided more useful context. + +Our benchmark provides directions and a test bed for further research into SI-LP. + +# 2 Related Work + +Multiple SI-LP datasets have been proposed in the literature. The benchmarks of Daza et al. (2021), Albooyeh et al. (2020), and Galkin et al. 
(2021) are obtained by first merging the splits of smaller transductive LP datasets and subsequently sampling unseen entities uniformly to construct validation and test splits. These benchmarks do not satisfy goals G1-G4. Shi and Weninger (2018) follow a similar approach but focus only on 0-shot evaluation based on textual features. Xie et al. (2016) and Shah et al. (2019) select entities from Freebase with connections to entities in FB15k (Bordes et al., 2013), also focusing on 0-shot evaluation using rich textual descriptions. These approaches do not satisfy G2 and G3. Finally, Wang et al. (2019) and Hamaguchi et al. (2017) uniformly sample test triples and mark occurring entities as unseen. These approaches do not focus on long-tail entities (and, in fact, the accumulated context of unseen entities may be larger than the training graph itself) and they do not satisfy G1-G3.

There are also several fully-inductive LP benchmarks (Teru et al., 2020; Wang et al., 2021) involving KGs. While SI-LP aims to connect unseen entities to an existing KG, fully-inductive LP reasons about a new KG with completely separate entities (but shared relations). We do not consider this task in this work.

# 3 The Wikidata5M-SI Benchmark

Wikidata5M-SI is based on the popular Wikidata5M (Wang et al., 2021) benchmark, which is induced by the 5M most common entities of Wikidata. Our benchmark contains transductive and semi-inductive valid/test splits; see Tab. 1 for an overview. Generally, we aimed to keep Wikidata5M-SI as close as possible to Wikidata5M. We did need to modify the original transductive valid and test splits, however, because they unintentionally contained both seen and unseen entities (i.e., these splits were not fully transductive). We
TrainTransductiveSemi-inductive
ValidTestValidTest
Triples20,600,1874,9834,9775,5005,500
Entities4,593,1037,7687,7603,7223,793
Entities unseen-00500500
Relations822217211126115
+ +Table 1: Statistics of the Wikidata5M-SI splits. + +did that by simply removing all triples involving unseen entities. + +Unseen entities. To ensure that unseen entities in the semi-inductive splits are from the long tail (G1), we only considered entities of degree 20 or less. To be able to provide sufficient context for few-shot tasks (G2), we further did not consider entities of degree 10 or less. In more detail, we sampled 500 entities of degrees 11-20 (stratified sampling grouped by degree) for each semi-inductive split. All sampled entities, along with their facts, were removed from the train split. Note that these entities (naturally) have a different class distribution than all entities; see Sec. A.1 for details. + +Tasks and metrics. For TD tasks, we follow the standard protocol of Wikidata5M. To construct SI tasks, we include 11 of the original facts of each unseen entity into its SI split; each split thus contains 5,500 triples. This enables up to 10-shot SI tasks (1 fact to test, up to 10 facts for context). For entities of degree larger than 11, we select the 11 facts with the most frequent relations; see Tab. 2 for an example. The rationale is that more common relations (such as instanceof or country) may be considered more likely to be provided for unseen entities than rare ones (such as militaryBranch or publisher). We then construct a single $k$ -shot task for each triple $(s,p,o)$ in the SI split as follows. When, say, $s$ is the unseen entity, we consider the LP task $(s,p,?)$ and provide $k$ additional facts of form $(s,p',o')$ as context. Context facts are selected by frequency as above, but we also explored random and infrequent-relation context in our study. Models are asked to provide a ranking of predicted answers, and we determine the filtered mean reciprocal rank (MRR) and Hits@K of the correct answer $(o)$ . + +Textual information. 
For each entity, we provide its principal mention and a detailed description (both taken directly from Wikidata5M); see Tab. 2. This allows model evaluation to be differentiated by the amount of textual information available per entity (G3): (A) atomic, i.e., no textual information, (M) mentions only, and (D) detailed textual descriptions as in (Kochsiek et al., 2023). This differentiation is especially important in the SI setting, as detailed text descriptions might not be provided for unseen entities, and each setting demands different modeling capabilities. In fact, (A) permits reasoning only over the graph structure, whereas (D) also benefits from information extraction to some extent. We discuss this further in Sec. 5.

| | |
|---|---|
| ID | Q18918 |
| Mention | Sam Witwer |
| Description | Samuel Stewart Witwer (born October 20, 1977) is an American actor and musician. He is known for portraying Crashdown in Battlestar Galactica, Davis Bloome in Smallville, Aidan Waite in Being Human, and Ben Lockwood in Supergirl. He voiced the protagonist Galen Marek / Starkiller in Star Wars: The Force Unleashed, the Son in Star Wars: The Clone Wars, and Emperor Palpatine in Star Wars Rebels, in both of which he has also voiced Darth Maul. |

| Context triple | M | D |
|---|---|---|
| instance of \| human | ○ | ○ |
| country of citizenship \| United States of America | × | ○ |
| occupation \| musician | × | ✓ |
| occupation \| actor | × | ✓ |
| place of birth \| Glenview | × | × |
| given name \| Samuel | ○ | ✓ |
| given name \| Sam | ✓ | ○ |
| cast member \| Battlestar Galactica | × | ✓ |
| cast member \| Being Human (supernatural drama television series) | × | ✓ |
| cast member \| Star Wars: The Force Unleashed II | × | ○ |
| cast member \| The Mist | × | × |

Table 2: Example of an entity from the semi-inductive validation set of Wikidata5M-SI. For each context triple, we annotated whether the answer is contained in (✓), deducible from (○), or not contained in (×) the mention (M) or description (D).

# 4 Semi-Inductive Link Prediction Models

We briefly summarize the recent SI-LP models that we considered in our experimental study.

Graph-only models. ComplEx (Trouillon et al., 2016) is the best-performing transductive KGE model on Wikidata5M (Kochsiek et al., 2022). To use ComplEx for SI-LP, we follow the approach explored by Jambor et al. (2021): we represent each entity as the sum of a local embedding (one per entity) and a global bias embedding. For 0-shot, we use only the global bias for the unseen entity. For k-shot, we obtain the local embedding of the unseen entity by performing a single training step on the context triples (keeping all other embeddings fixed). An alternative approach is taken by oDistMult-ERAvg (Albooyeh et al., 2020), which represents unseen entities by aggregating the embeddings of the relations and entities in their context. A more direct approach is taken by HittER (Chen et al., 2021), which contextualizes the query entity with its neighborhood for TD-LP; it can be used for SI-LP directly by using a masking token (akin to the global bias above) for an unseen entity.
We originally also planned to consider NodePiece (Galkin et al., 2021), which represents entities by a combination of anchor embeddings, and NBFNet (Zhu et al., 2021), a GNN-based LP model; both support SI-LP directly. However, the available implementations did not scale to Wikidata5M-SI (out of memory).

Text-based models. As a baseline approach that integrates textual information directly into KGE models, we consider the approach explored in the WikiKG90M benchmark (Hu et al., 2021); see Sec. A.2 for details. The remaining approaches are purely textual. SimKGC (Wang et al., 2022) uses two pretrained BERT Transformers: one to embed query entities (and relations) based on their mention or description, and one for tail entities. Trained with a contrastive learning objective, it ranks candidates by the cosine similarity between the two representations. KGT5 (Saxena et al., 2022) is a sequence-to-sequence link prediction approach trained to generate the mention of the answer entity, given the mention or description of the query entity and relation as input. Both approaches support 0-shot SI-LP when textual information is provided for the query entity; they do not utilize additional context, however, i.e., they do not support k-shot SI-LP. KGT5-context (Kochsiek et al., 2023) extends the input of KGT5 by the one-hop neighborhood of the query entity and consequently supports k-shot LP directly.

# 5 Experimental Study

We evaluated all presented baseline models in the TD and SI settings on the atomic, mentions, and descriptions datasets. Further, we evaluated in detail which context was most useful and what information was conveyed by textual mentions and descriptions.

Setup. Source code, configuration, and the benchmark itself are available at https://github.com/uma-pi1/wikidata5m-si. For further details on hyperparameter tuning and training, see Sec. A.3.

Main results.
Transductive and SI performance of all models in terms of MRR is presented in Tab. 3; Hits@K in Tab. 7-9 (Sec. A). Note that overall transductive performance was often below the best reported SI performance. This is due to the differing degrees of query entities in the two settings: models typically perform better when predicting new relations for an entity (e.g., its birthplace) than when predicting additional objects for a known relation (e.g., additional awards won by a person) (Saxena et al., 2022; Kochsiek et al., 2023). For a direct comparison between both settings, we additionally report TD performance on long-tail query entities.$^{3}$

Atomic. TD performance on the long tail was considerably higher than SI performance. As no information was provided for unseen entities, 0-shot prediction was not reasonably possible; without textual information, context was a necessity. A simple neighborhood aggregation, the entity-relation average (ERAvg), offered the best integration of context.

Mentions. Integrating mentions on its own did not improve performance, as the provided textual information was still limited. However, additionally providing context information during inference (KGT5-context) simplified the learning problem and improved TD performance significantly. For 0-shot, the limited textual information provided by mentions nevertheless allowed for reasonable performance. To analyze what information mentions convey for 0-shot, we annotated 100 validation triples; see Tab. 4. In $10\%$ of cases, the answer was already contained in the mention, and it was deducible in at least a further $7\%$. This enabled basic reasoning without any further information. In contrast to the TD setting, KGT5 outperformed its context extension: KGT5-context relied on context, which was lacking especially in the 0-shot setting. This reveals a trade-off between best performance in the SI and the TD setting, which could be mitigated by applying (full and partial) context hiding during training.
With such adapted training, KGT5-context reached a middle ground with a transductive MRR of 0.366 and a 0-shot MRR of 0.283.$^{4}$ However, even with full context (10-shot), performance was still only on par with KGT5. Context information thus brought no further benefit once text was provided.

Descriptions. Integrating descriptions improved performance considerably in both the TD and the SI setting; see Tab. 3. As in the mentions-only setting, KGT5-context performed best in TD and KGT5 in SI. Applying the same context-hiding trade-off reached a middle ground with 0.418 TD-MRR and 0.449 SI-MRR.

Descriptions were very detailed and partially contained the correct answer as well as the same information as the context triples; see Tab. 4. Consequently, performance did not improve further with context size. In such cases, models mainly benefit from information extraction capabilities. To judge how much information extraction helps, we grouped the 0-shot validation performance of KGT5+descriptions by whether the answer was contained in, deducible from, or not contained in the description; see Fig. 1 in Sec. A. When contained, the correct answer was extracted in $\approx 70\%$ of cases.

| Model | TD (all) | TD (long tail) | SI 0-shot | SI 1-shot | SI 3-shot | SI 5-shot | SI 10-shot | Pre-trained |
|---|---|---|---|---|---|---|---|---|
| ComplEx + Bias + Fold in (Jambor et al., 2021) | 0.308 | 0.523 | 0.124 | 0.151 | 0.176 | 0.190 | 0.206 | no |
| DistMult + ERAvg (Albooyeh et al., 2020) | 0.294 | 0.512 | – | 0.171 | 0.246 | 0.295 | 0.333 | no |
| HittER (Chen et al., 2021) | 0.284 | 0.512 | 0.019 | 0.105 | 0.153 | 0.179 | 0.221 | no |
| DistMult + ERAvg + Mentions | 0.299 | 0.535 | – | 0.187 | 0.235 | 0.258 | 0.280 | yes |
| SimKGC (mentions only) | 0.212 | 0.361 | 0.220 | – | – | – | – | yes |
| KGT5 (Saxena et al., 2022) | 0.281 | 0.542 | 0.310 | – | – | – | – | no |
| KGT5-context (Kochsiek et al., 2023) | 0.374 | 0.678 | 0.220 | 0.217 | 0.236 | 0.259 | 0.311 | no |
| DistMult + ERAvg + Descriptions | 0.313 | 0.585 | – | 0.278 | 0.281 | 0.285 | 0.292 | yes |
| SimKGC + Descriptions (Wang et al., 2022) | 0.353 | 0.663 | 0.403 | – | – | – | – | yes |
| KGT5 + Descriptions (Kochsiek et al., 2023) | 0.364 | 0.728 | 0.470 | – | – | – | – | no |
| KGT5-context + Descriptions (Kochsiek et al., 2023) | 0.420 | 0.777 | 0.417 | 0.420 | 0.416 | 0.420 | 0.437 | no |

Table 3: Transductive and semi-inductive link prediction results in terms of MRR on the dataset Wikidata5M-SI. The first group presents results on the atomic dataset, the second on the mentions dataset, and the third on the descriptions dataset. Best per TD/SI setting in bold; best per group underlined.

| | Mention | Description |
|---|---|---|
| Contained | 10% | 44% |
| Deducible | 7% | 10% |
| Not contained | 83% | 46% |

Table 4: Information about a query answer contained in mentions and descriptions, annotated for 100 sampled triples from the 0-shot validation split. For an example, see Tab. 2.

Context selection. So far, we selected the most common relations as context triples, as this may be the more realistic setting. To investigate the effect of this choice, we compared the default selection (most common relations) with least-common and random selection. Results for KGT5-context are shown in Tab. 5; results for all other models in Tab. 10 (Sec. A). We found that the less common the relations in the provided context, the better the SI performance. More common context relations often described high-level concepts, whereas less common ones provided further detail; see the example in Tab. 2. Thus, while more common context may be more readily available, less common context was more helpful for describing a new entity.

| Context selection | 1-shot | 3-shot | 5-shot |
|---|---|---|---|
| Most common | 0.217 | 0.236 | 0.259 |
| Least common | 0.253 | 0.273 | 0.290 |
| Random | 0.237 | 0.260 | 0.281 |

Table 5: Influence of context selection. Semi-inductive test MRR of KGT5-context.

# 6 Conclusion

We proposed Wikidata5M-SI, a new large-scale benchmark for semi-inductive link prediction. The benchmark focuses on unseen entities from the long tail and allows evaluating models with varying and controlled amounts of factual and textual context information. In our experimental evaluation, we found that semi-inductive LP performance generally fell behind transductive performance for long-tail entities, and that detailed textual information was often more valuable than factual context information. Moreover, current models did not integrate these two types of information adequately, suggesting a direction for future research.

# Limitations

This study was performed on Wikidata5M-SI, i.e., a subset of a single knowledge graph. Model performance and insights may vary if the graph structure and/or the availability and usefulness of mentions and descriptions differ.
In particular, the entity descriptions provided with Wikidata5M-SI partly contained information relevant for link prediction so that models benefited from information extraction capabilities. + +# Ethics Statement + +This research adapts publicly available data, benchmarks, and codebases for evaluation. We believe that this research was conducted in an ethical manner in compliance with all relevant laws and regulations. + +# References + +Marjan Albooyeh, Rishab Goel, and Seyed Mehran Kazemi. 2020. Out-of-sample representation learning for knowledge graphs. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 2657-2666. +Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Neural Information Processing Systems (NIPS), pages 1-9. +Samuel Broscheit, Daniel Ruffinelli, Adrian Kochsiek, Patrick Betz, and Rainer Gemulla. 2020. LibKGE - A knowledge graph embedding library for reproducible research. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 165-174. +Sanxing Chen, Xiaodong Liu, Jianfeng Gao, Jian Jiao, Ruofei Zhang, and Yangfeng Ji. 2021. Hitter: Hierarchical transformers for knowledge graph embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10395-10407. +Daniel Daza, Michael Cochez, and Paul Groth. 2021. Inductive entity representations from text via link prediction. In Proceedings of the Web Conference 2021, pages 798-808. +Mikhail Galkin, Etienne Denis, Jiapeng Wu, and William L Hamilton. 2021. Nodepiece: Compositional and parameter-efficient representations of large knowledge graphs. In International Conference on Learning Representations. +Takuo Hamaguchi, Hidekazu Oiwa, Masashi Shimbo, and Yuji Matsumoto. 2017. Knowledge transfer for out-of-knowledge-base entities: a graph neural network approach. 
In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 1802-1808.
Weihua Hu, Matthias Fey, Hongyu Ren, Maho Nakata, Yuxiao Dong, and Jure Leskovec. 2021. OGB-LSC: A large-scale challenge for machine learning on graphs. Advances in Neural Information Processing Systems, 35.
Dora Jambor, Komal Teru, Joelle Pineau, and William L Hamilton. 2021. Exploring the limits of few-shot link prediction in knowledge graphs. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2816-2822.
Adrian Kochsiek, Fritz Niesel, and Rainer Gemulla. 2022. Start small, think big: On hyperparameter optimization for large-scale knowledge graph embeddings. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2022, Grenoble, France, September 19-23, 2022, Proceedings, Part II, pages 138-154. Springer.
Adrian Kochsiek, Apoorv Saxena, Inderjeet Nair, and Rainer Gemulla. 2023. Friendly neighbors: Contextualized sequence-to-sequence link prediction. In Proceedings of the 8th Workshop on Representation Learning for NLP.
Apoorv Saxena, Adrian Kochsiek, and Rainer Gemulla. 2022. Sequence-to-sequence knowledge graph completion and question answering. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2814-2828.
Haseeb Shah, Johannes Villmow, Adrian Ulges, Ulrich Schwanecke, and Faisal Shafait. 2019. An open-world extension to knowledge graph completion models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3044-3051.
Baoxu Shi and Tim Weninger. 2018. Open-world knowledge graph completion. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2020. MPNet: Masked and permuted pre-training for language understanding.
Advances in Neural Information Processing Systems, 33:16857-16867. +Komal Teru, Etienne Denis, and Will Hamilton. 2020. Inductive relation prediction by subgraph reasoning. In International Conference on Machine Learning, pages 9448-9457. +Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In International conference on machine learning, pages 2071-2080. +Liang Wang, Wei Zhao, Zhuoyu Wei, and Jingming Liu. 2022. Simkgc: Simple contrastive knowledge graph completion with pre-trained language models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4281-4294. +Peifeng Wang, Jialong Han, Chenliang Li, and Rong Pan. 2019. Logic attention based neighborhood aggregation for inductive knowledge graph embedding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7152-7159. +Xiaozhi Wang, Tianyu Gao, Zhaocheng Zhu, Zhengyan Zhang, Zhiyuan Liu, Juanzi Li, and Jian Tang. 2021. Kepler: A unified model for knowledge embedding and pre-trained language representation. Transactions of the Association for Computational Linguistics, 9:176-194. + +Ruobing Xie, Zhiyuan Liu, Jia Jia, Huanbo Luan, and Maosong Sun. 2016. Representation learning of knowledge graphs with entity descriptions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30. + +Bishan Yang, Scott Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In Proceedings of the International Conference on Learning Representations (ICLR) 2015. + +Zhaocheng Zhu, Zuobai Zhang, Louis-Pascal Xhonneux, and Jian Tang. 2021. Neural bellman-ford networks: A general graph neural network framework for link prediction. Advances in Neural Information Processing Systems, 34. 
+ +# A Appendix + +# A.1 Distribution of Unseen Entities + +Long-tail entities have a different distribution than entities from the whole KG; see Tab. 6 for an overview of the distribution shift for the top 10 entity types. This difference is natural. In particular, high-degree entities in a KG such as Wikidata often refer to types/taxons (e.g., human, organization, ...) as well as popular named entities (e.g., Albert Einstein, Germany, ...). These entities are fundamental to the KG and/or of high interest and have many facts associated with them. For this reason, they do not form suitable candidates for benchmarking unseen or new entities. In addition, removing high-degree entities for the purpose of evaluating SI-LP is likely to distort the KG (e.g., consider removing type "human" or "Germany"). In contrast, Wikidata5M-SI focuses on entities for which knowledge is not yet abundant: long-tail entities are accompanied by no or few facts (at least initially) and our SI-LP benchmark tests reasoning capabilities with this limited information. + +# A.2 Integrating Text into KGE Models + +To integrate text into traditional KGE models, we follow the baseline models of the WikiKG90M link prediction challenge (Hu et al., 2021). We embed mentions combined with descriptions using MPNet (Song et al., 2020), concatenate the resulting descriptions embedding with the entity embedding, and project it with a linear layer for the final representation of the entity. In combination with oDistMult-ERAvg (Albooyeh et al., 2020), we apply the aggregation of neighboring entities and relations on the entity embedding part only. The resulting aggregation is then concatenated with its description and finally projected. + +This approach is closely related to BLP (Daza et al., 2021). The main differences to BLP are: + +1. Hu et al. (2021) use MPNet, BLP uses BERT. +2. 
In combination with DistMult-ERAvg, we concatenate a learnable "structural embedding" to the CLS embedding of the language model, whereas BLP does not.

# A.3 Experimental Setup

For hyperparameter optimization of ComplEx (Trouillon et al., 2016), DistMult (Yang et al., 2015), and HittER (Chen et al., 2021), we used the multi-fidelity approach GraSH (Kochsiek et al., 2022) implemented in LibKGE (Broscheit et al., 2020), with 64 initial trials and training for up to 64 epochs. For fold-in, we reused the training hyperparameters and trained for a single epoch on the provided context. For the text-based approaches, we used the hyperparameters and architectures proposed by the authors for the transductive split of Wikidata5M. We trained on up to 5 A6000 GPUs with 49 GB of VRAM.

| Wikidata ID | Mention | All entities | Long-tail entities |
|---|---|---|---|
| Q5 | human | 39% | 61% |
| Q11424 | film | 3% | 8% |
| Q484170 | commune of France | 1% | 7% |
| Q482994 | album | 3% | 1% |
| Q16521 | taxon | 9% | 1% |
| Q134556 | single | 1% | 1% |
| Q747074 | commune of Italy | 0% | 1% |
| Q2074737 | municipality of Spain | 0% | 1% |
| Q571 | book | 1% | 1% |
| Q7889 | video game | 1% | 1% |

Table 6: Distribution of the top 10 entity types over long-tail entities (degree between 11 and 20) compared to all entities.

![](images/baae9bbd293af72cab83695b604ca04663ba5fd908d5419b7ced6cda5d4535df.jpg)

Figure 1: Number of correct (rank=1) and incorrect predictions by KGT5+descriptions on annotated examples, per annotation label.
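To make the text-integration baseline of Sec. A.2 concrete, the following is a minimal plain-Python sketch. The dimensions are toy choices, the frozen text encoder (MPNet in the paper) is replaced by a stub, and all names are ours, not the reference implementation:

```python
import random

random.seed(0)
D_TEXT, D_STRUCT, D_OUT = 8, 4, 4   # toy sizes, not the real dimensions

# Linear layer projecting the concatenation [structural emb ; text emb].
W = [[random.uniform(-0.1, 0.1) for _ in range(D_STRUCT + D_TEXT)]
     for _ in range(D_OUT)]

def linear(x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def text_encoder(mention_and_description):
    """Stub for the frozen sentence encoder (MPNet in the paper)."""
    return [float(len(mention_and_description) % 7)] * D_TEXT

def entity_representation(struct_emb, text_emb):
    """Concatenate the KGE ('structural') embedding with the text
    embedding of mention + description, then project."""
    return linear(struct_emb + text_emb)

def unseen_entity_representation(context_struct_embs, text_emb):
    """For an unseen entity, the structural part is the ERAvg-style
    aggregate over context embeddings; concatenation and projection
    are then applied as for seen entities."""
    n = len(context_struct_embs)
    struct = [sum(v[i] for v in context_struct_embs) / n
              for i in range(D_STRUCT)]
    return entity_representation(struct, text_emb)
```

The projection is the only trained text-specific component here; the text embeddings themselves stay frozen, which is what makes the approach cheap to combine with existing KGE models.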
| Model | Trans. | 0-shot | 1-shot | 3-shot | 5-shot | 10-shot |
|---|---|---|---|---|---|---|
| ComplEx + Bias + Fold in (Jambor et al., 2021) | 0.260 | 0.058 | 0.097 | 0.118 | 0.124 | 0.132 |
| DistMult + ERAvg (Albooyeh et al., 2020) | 0.237 | – | 0.115 | 0.151 | 0.185 | 0.209 |
| HittER (Chen et al., 2021) | 0.234 | 0.005 | 0.076 | 0.115 | 0.132 | 0.153 |
| DistMult + ERAvg + Mentions | 0.239 | – | 0.106 | 0.142 | 0.153 | 0.167 |
| SimKGC (mentions only) | 0.182 | 0.187 | – | – | – | – |
| KGT5 (Saxena et al., 2022) | 0.249 | 0.263 | – | – | – | – |
| KGT5-context (Kochsiek et al., 2023) | 0.347 | 0.184 | 0.177 | 0.195 | 0.218 | 0.263 |
| DistMult + ERAvg + Descriptions | 0.252 | – | 0.152 | 0.153 | 0.153 | 0.161 |
| SimKGC + Descriptions (Wang et al., 2022) | 0.311 | 0.349 | – | – | – | – |
| KGT5 + Descriptions | 0.332 | 0.430 | – | – | – | – |
| KGT5-context + Descriptions | 0.400 | 0.379 | 0.382 | 0.373 | 0.378 | 0.393 |
+ +Table 7: Transductive and semi-inductive link prediction results in terms of H@1 on the dataset Wikidata5M-SI. + +
| Model | Trans. | 0-shot | 1-shot | 3-shot | 5-shot | 10-shot |
|---|---|---|---|---|---|---|
| ComplEx + Bias + Fold in (Jambor et al., 2021) | 0.337 | 0.165 | 0.180 | 0.202 | 0.219 | 0.242 |
| DistMult + ERAvg (Albooyeh et al., 2020) | 0.328 | – | 0.190 | 0.292 | 0.352 | 0.401 |
| HittER (Chen et al., 2021) | 0.309 | 0.013 | 0.109 | 0.158 | 0.188 | 0.242 |
| DistMult + ERAvg + Mentions | 0.332 | – | 0.239 | 0.289 | 0.314 | 0.340 |
| SimKGC (mentions only) | 0.223 | 0.227 | – | – | – | – |
| KGT5 (Saxena et al., 2022) | 0.296 | 0.332 | – | – | – | – |
| KGT5-context (Kochsiek et al., 2023) | 0.390 | 0.236 | 0.234 | 0.257 | 0.278 | 0.335 |
| DistMult + ERAvg + Descriptions | 0.344 | – | 0.368 | 0.373 | 0.378 | 0.380 |
| SimKGC (Wang et al., 2022) | 0.367 | 0.421 | – | – | – | – |
| KGT5 + Descriptions | 0.385 | 0.490 | – | – | – | – |
| KGT5-context + Descriptions | 0.432 | 0.441 | 0.443 | 0.443 | 0.447 | 0.463 |
+ +Table 8: Transductive and semi-inductive link prediction results in terms of H@3 on the dataset Wikidata5M-SI. + +
| Model | Trans. | 0-shot | 1-shot | 3-shot | 5-shot | 10-shot |
|---|---|---|---|---|---|---|
| ComplEx + Bias + Fold in (Jambor et al., 2021) | 0.387 | 0.231 | 0.245 | 0.282 | 0.309 | 0.336 |
| DistMult + ERAvg (Albooyeh et al., 2020) | 0.389 | – | 0.270 | 0.409 | 0.493 | 0.564 |
| HittER (Chen et al., 2021) | 0.376 | 0.050 | 0.157 | 0.226 | 0.270 | 0.359 |
| DistMult + ERAvg + Mentions | 0.411 | – | 0.320 | 0.392 | 0.440 | 0.478 |
| SimKGC (mentions only) | 0.266 | 0.283 | – | – | – | – |
| KGT5 (Saxena et al., 2022) | 0.344 | 0.398 | – | – | – | – |
| KGT5-context (Kochsiek et al., 2023) | 0.423 | 0.293 | 0.295 | 0.310 | 0.336 | 0.400 |
| DistMult + ERAvg + Descriptions | 0.425 | – | 0.465 | 0.472 | 0.484 | 0.491 |
| SimKGC (Wang et al., 2022) | 0.432 | 0.504 | – | – | – | – |
| KGT5 + Descriptions | 0.416 | 0.544 | – | – | – | – |
| KGT5-context + Descriptions | 0.455 | 0.484 | 0.489 | 0.489 | 0.495 | 0.516 |
+ +Table 9: Transductive and semi-inductive link prediction results in terms of H@10 on the dataset Wikidata5M-SI. + +
| Model | Context selection | 1-shot | 3-shot | 5-shot |
|---|---|---|---|---|
| ComplEx + fold-in | Most common | 0.151 | 0.161 | 0.168 |
| | Least common | 0.166 | 0.185 | 0.195 |
| | Random | 0.164 | 0.187 | 0.196 |
| DistMult + ERAvg | Most common | 0.171 | 0.246 | 0.295 |
| | Least common | 0.217 | 0.299 | 0.323 |
| | Random | 0.215 | 0.303 | 0.318 |
| oDistMult + ERAvg + Mentions | Most common | 0.187 | 0.235 | 0.258 |
| | Least common | 0.237 | 0.274 | 0.279 |
| | Random | 0.232 | 0.265 | 0.272 |
| HittER | Most common | 0.105 | 0.153 | 0.179 |
| | Least common | 0.151 | 0.195 | 0.216 |
| | Random | 0.136 | 0.190 | 0.206 |
| KGT5-context | Most common | 0.217 | 0.236 | 0.259 |
| | Least common | 0.253 | 0.273 | 0.290 |
| | Random | 0.237 | 0.260 | 0.281 |
| KGT5-context + Desc. | Most common | 0.420 | 0.416 | 0.420 |
| | Least common | 0.423 | 0.424 | 0.430 |
| | Random | 0.422 | 0.430 | 0.430 |
+ +Table 10: Influence of context selection. Semi-inductive test MRR. Best per model in bold. \ No newline at end of file diff --git a/2023/A Benchmark for Semi-Inductive Link Prediction in Knowledge Graphs/images.zip b/2023/A Benchmark for Semi-Inductive Link Prediction in Knowledge Graphs/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..0b07c80988839a547697ca57b649ce3eb9c9d66b --- /dev/null +++ b/2023/A Benchmark for Semi-Inductive Link Prediction in Knowledge Graphs/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e0e05a8644c0e03d830ae07a4cbb0b21e9777720a30ee2964c58cde9310fbe98 +size 716640 diff --git a/2023/A Benchmark for Semi-Inductive Link Prediction in Knowledge Graphs/layout.json b/2023/A Benchmark for Semi-Inductive Link Prediction in Knowledge Graphs/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..3769cd3c56189bb73292d3c061ffc7e36499c17e --- /dev/null +++ b/2023/A Benchmark for Semi-Inductive Link Prediction in Knowledge Graphs/layout.json @@ -0,0 +1,4868 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 74, + 75, + 519, + 94 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 74, + 75, + 519, + 94 + ], + "spans": [ + { + "bbox": [ + 74, + 75, + 519, + 94 + ], + "type": "text", + "content": "A Benchmark for Semi-Inductive Link Prediction in Knowledge Graphs" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 154, + 116, + 244, + 128 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 154, + 116, + 244, + 128 + ], + "spans": [ + { + "bbox": [ + 154, + 116, + 244, + 128 + ], + "type": "text", + "content": "Adrian Kochsiek" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 139, + 130, + 258, + 143 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 139, + 130, + 258, + 143 + ], + "spans": [ + { + "bbox": [ + 139, + 130, + 258, + 143 + ], + "type": "text", + "content": "University of Mannheim" + 
} + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 176, + 144, + 221, + 158 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 176, + 144, + 221, + 158 + ], + "spans": [ + { + "bbox": [ + 176, + 144, + 221, + 158 + ], + "type": "text", + "content": "Germany" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 123, + 158, + 274, + 170 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 123, + 158, + 274, + 170 + ], + "spans": [ + { + "bbox": [ + 123, + 158, + 274, + 170 + ], + "type": "text", + "content": "akochsiek@uni-mannheim.de" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 354, + 116, + 439, + 128 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 354, + 116, + 439, + 128 + ], + "spans": [ + { + "bbox": [ + 354, + 116, + 439, + 128 + ], + "type": "text", + "content": "Rainer Gemulla" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 337, + 130, + 456, + 142 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 337, + 130, + 456, + 142 + ], + "spans": [ + { + "bbox": [ + 337, + 130, + 456, + 142 + ], + "type": "text", + "content": "University of Mannheim" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 373, + 144, + 420, + 158 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 373, + 144, + 420, + 158 + ], + "spans": [ + { + "bbox": [ + 373, + 144, + 420, + 158 + ], + "type": "text", + "content": "Germany" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 324, + 158, + 468, + 171 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 324, + 158, + 468, + 171 + ], + "spans": [ + { + "bbox": [ + 324, + 158, + 468, + 171 + ], + "type": "text", + "content": "rgemulla@uni-mannheim.de" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 155, + 212, + 202, + 226 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 155, + 212, + 202, + 226 + ], + "spans": [ + { + "bbox": [ + 155, + 212, + 202, + 226 + ], + "type": "text", + "content": 
"Abstract" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 86, + 237, + 273, + 522 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 237, + 273, + 522 + ], + "spans": [ + { + "bbox": [ + 86, + 237, + 273, + 522 + ], + "type": "text", + "content": "Semi-inductive link prediction (LP) in knowledge graphs (KG) is the task of predicting facts for new, previously unseen entities based on context information. Although new entities can be integrated by retraining the model from scratch in principle, such an approach is infeasible for large-scale KGs, where retraining is expensive and new entities may arise frequently. In this paper, we propose and describe a large-scale benchmark to evaluate semi-inductive LP models. The benchmark is based on and extends Wikidata5M: It provides transductive, k-shot, and 0-shot LP tasks, each varying the available information from (i) only KG structure, to (ii) including textual mentions, and (iii) detailed descriptions of the entities. We report on a small study of recent approaches and found that semi-inductive LP performance is far from transductive performance on long-tail entities throughout all experiments. The benchmark provides a test bed for further research into integrating context and textual information in semi-inductive LP models." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 68, + 534, + 154, + 547 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 534, + 154, + 547 + ], + "spans": [ + { + "bbox": [ + 68, + 534, + 154, + 547 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 67, + 556, + 291, + 703 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 556, + 291, + 703 + ], + "spans": [ + { + "bbox": [ + 67, + 556, + 291, + 703 + ], + "type": "text", + "content": "A knowledge graph (KG) is a collection of facts describing relations between real-world entities. 
Facts are represented in the form of subject-relation-object triples such as (Dave Grohl, memberOf, Foo Fighters). In this paper, we consider link prediction (LP) tasks, i.e., the problem of inferring missing facts in the KG. LP can be transductive (TD; all entities known a priori), semi-inductive (SI; some entities known a priori), and inductive (no entities known a priori). We concentrate on semi-inductive and transductive LP." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 67, + 705, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 705, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 705, + 291, + 772 + ], + "type": "text", + "content": "SI-LP focuses on modeling entities that are unknown or unseen during LP, such as out-of-KG entities (not part or not yet part of the KG) or newly created entities, e.g., a new user, product, or event. Such previously unknown entities can be" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 212, + 526, + 293 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 212, + 526, + 293 + ], + "spans": [ + { + "bbox": [ + 302, + 212, + 526, + 293 + ], + "type": "text", + "content": "handled by retraining in principle. For large-scale KGs, however, retraining is inherently expensive and new entities may arise frequently. Therefore, the goal of SI-LP is to avoid retraining and perform LP directly, i.e., to generalize beyond the entities seen during training." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 294, + 527, + 415 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 294, + 527, + 415 + ], + "spans": [ + { + "bbox": [ + 302, + 294, + 527, + 415 + ], + "type": "text", + "content": "To perform LP for unseen entities, context information about these entities is needed. 
The amount and form of context information varies widely and may take the form of facts and/or textual information, such as an entity mention and/or its description. For example, a new user in a social network may provide a name, basic facts such as gender or country of origin, and perhaps a textual self-description." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 416, + 527, + 657 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 416, + 527, + 657 + ], + "spans": [ + { + "bbox": [ + 302, + 416, + 527, + 657 + ], + "type": "text", + "content": "In this paper, we introduce the Wikidata5M-SI benchmark for SI-LP. Our benchmark is based on the popular Wikidata5M (Wang et al., 2021) benchmark and has four major design goals: (G1) It ensures that unseen entities are long tail entities since popular entities (such as, say, Foo Fighters) and/or types and taxons (such as human and organization) are unlikely to be unseen. (G2) It allows to evaluate each model with varying amounts of contextual facts (0-shot, few-shot, transductive), i.e., to explore individual models across a range of tasks. (G3) It provides a controlled amount of textual information (none, mention, full description), where each setting demands different modeling capabilities. Finally, (G4) the benchmark is large-scale so that retraining is not a suitable approach. All prior SI-LP benchmarks violate at least one of these criteria." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 302, + 660, + 525, + 687 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 660, + 525, + 687 + ], + "spans": [ + { + "bbox": [ + 302, + 660, + 525, + 687 + ], + "type": "text", + "content": "We report on a small experimental study with recent LP approaches. 
In general, we found that" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 311, + 696, + 525, + 772 + ], + "type": "list", + "angle": 0, + "index": 21, + "blocks": [ + { + "bbox": [ + 312, + 696, + 525, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 696, + 525, + 723 + ], + "spans": [ + { + "bbox": [ + 312, + 696, + 525, + 723 + ], + "type": "text", + "content": "1. SI performance was far behind TD performance in all experiments for long-tail entities," + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 311, + 728, + 524, + 755 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 728, + 524, + 755 + ], + "spans": [ + { + "bbox": [ + 311, + 728, + 524, + 755 + ], + "type": "text", + "content": "2. there was generally a trade-off between TD and SI performance," + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 311, + 759, + 503, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 759, + 503, + 772 + ], + "spans": [ + { + "bbox": [ + 311, + 759, + 503, + 772 + ], + "type": "text", + "content": "3. 
textual information was highly valuable," + } + ] + } + ], + "index": 20 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 283, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 283, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 283, + 780, + 312, + 791 + ], + "type": "text", + "content": "10634" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 125, + 795, + 468, + 806 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 125, + 795, + 468, + 806 + ], + "spans": [ + { + "bbox": [ + 125, + 795, + 468, + 806 + ], + "type": "text", + "content": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10634-10643" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 165, + 806, + 428, + 817 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 165, + 806, + 428, + 817 + ], + "spans": [ + { + "bbox": [ + 165, + 806, + 428, + 817 + ], + "type": "text", + "content": "December 6-10, 2023 ©2023 Association for Computational Linguistics" + } + ] + } + ], + "index": 24 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 76, + 71, + 290, + 130 + ], + "type": "list", + "angle": 0, + "index": 2, + "blocks": [ + { + "bbox": [ + 76, + 71, + 290, + 97 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 71, + 290, + 97 + ], + "spans": [ + { + "bbox": [ + 76, + 71, + 290, + 97 + ], + "type": "text", + "content": "4. proper integration of context and textual information needs further exploration, and" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 76, + 105, + 290, + 130 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 105, + 290, + 130 + ], + "spans": [ + { + "bbox": [ + 76, + 105, + 290, + 130 + ], + "type": "text", + "content": "5. facts involving less common relations provided more useful context." 
+ } + ] + } + ], + "index": 1 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 67, + 144, + 289, + 169 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 144, + 289, + 169 + ], + "spans": [ + { + "bbox": [ + 67, + 144, + 289, + 169 + ], + "type": "text", + "content": "Our benchmark provides directions and a test bed for further research into SI-LP." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 184, + 159, + 196 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 184, + 159, + 196 + ], + "spans": [ + { + "bbox": [ + 67, + 184, + 159, + 196 + ], + "type": "text", + "content": "2 Related Work" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 207, + 290, + 491 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 207, + 290, + 491 + ], + "spans": [ + { + "bbox": [ + 67, + 207, + 290, + 491 + ], + "type": "text", + "content": "Multiple SI-LP datasets have been proposed in the literature. The benchmarks of Daza et al. (2021), Albooyeh et al. (2020), and Galkin et al. (2021) are obtained by first merging the splits of smaller transductive LP datasets and subsequently sampling unseen entities uniformly to construct validation and test splits. These benchmarks do not satisfy goals G1-G4. Shi and Weninger (2018) follow a similar approach but focus on only 0-shot evaluation based on textual features. Xie et al. (2016) and Shah et al. (2019) select entities from Freebase with connection to entities in FB15k (Bordes et al., 2013), also focussing on 0-shot evaluation using rich textual descriptions. These approaches do not satisfy G2 and G3. Finally, Wang et al. (2019) and Hamaguchi et al. (2017) uniformly sample test triples and mark occurring entities as unseen. These approaches do not focus on long-tail entities (and, in fact, the accumulated context of unseen entities may be larger than the training graph itself) and they do not satisfy G1-G3." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 493, + 290, + 587 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 493, + 290, + 587 + ], + "spans": [ + { + "bbox": [ + 67, + 493, + 290, + 587 + ], + "type": "text", + "content": "There are also several fully-inductive LP benchmarks (Teru et al., 2020; Wang et al., 2021) involving KGs. While SI-LP aims to connect unseen entities to an existing KG, fully-inductive LP reasons about a new KG with completely separate entities (but shared relations). We do not consider this task in this work." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 600, + 251, + 613 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 600, + 251, + 613 + ], + "spans": [ + { + "bbox": [ + 67, + 600, + 251, + 613 + ], + "type": "text", + "content": "3 The Wikidata5M-SI Benchmark" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 624, + 290, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 624, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 624, + 290, + 772 + ], + "type": "text", + "content": "Wikidata5M-SI is based on the popular Wikidata5M (Wang et al., 2021) benchmark, which is induced by the 5M most common entities of Wikidata. Our benchmark contains transductive and semi-inductive valid/test splits; see Tab. 1 for an overview. Generally, we aimed to keep Wikidata5M-SI as close as possible to Wikidata5M. We did need to modify the original transductive valid and test splits, however, because they unintentionally contained both seen and unseen entities (i.e., these splits were not fully transductive). We" + } + ] + } + ], + "index": 8 + }, + { + "type": "table", + "bbox": [ + 304, + 68, + 523, + 144 + ], + "blocks": [ + { + "bbox": [ + 304, + 68, + 523, + 144 + ], + "lines": [ + { + "bbox": [ + 304, + 68, + 523, + 144 + ], + "spans": [ + { + "bbox": [ + 304, + 68, + 523, + 144 + ], + "type": "table", + "html": "
TrainTransductiveSemi-inductive
ValidTestValidTest
Triples20,600,1874,9834,9775,5005,500
Entities4,593,1037,7687,7603,7223,793
Entities unseen-00500500
Relations822217211126115
", + "image_path": "6df893522092a3826a3019d3e2f226989b49c48bb05359ce9afd18aaf33ac573.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_body" + } + ], + "index": 9 + }, + { + "bbox": [ + 318, + 152, + 508, + 164 + ], + "lines": [ + { + "bbox": [ + 318, + 152, + 508, + 164 + ], + "spans": [ + { + "bbox": [ + 318, + 152, + 508, + 164 + ], + "type": "text", + "content": "Table 1: Statistics of the Wikidata5M-SI splits." + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 302, + 188, + 524, + 213 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 188, + 524, + 213 + ], + "spans": [ + { + "bbox": [ + 302, + 188, + 524, + 213 + ], + "type": "text", + "content": "did that by simply removing all triples involving unseen entities." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 216, + 525, + 377 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 216, + 525, + 377 + ], + "spans": [ + { + "bbox": [ + 302, + 216, + 525, + 377 + ], + "type": "text", + "content": "Unseen entities. To ensure that unseen entities in the semi-inductive splits are from the long tail (G1), we only considered entities of degree 20 or less. To be able to provide sufficient context for few-shot tasks (G2), we further did not consider entities of degree 10 or less. In more detail, we sampled 500 entities of degrees 11-20 (stratified sampling grouped by degree) for each semi-inductive split. All sampled entities, along with their facts, were removed from the train split. Note that these entities (naturally) have a different class distribution than all entities; see Sec. A.1 for details." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 380, + 525, + 689 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 380, + 525, + 689 + ], + "spans": [ + { + "bbox": [ + 302, + 380, + 525, + 689 + ], + "type": "text", + "content": "Tasks and metrics. 
For TD tasks, we follow the standard protocol of Wikidata5M. To construct SI tasks, we include 11 of the original facts of each unseen entity into its SI split; each split thus contains 5,500 triples. This enables up to 10-shot SI tasks (1 fact to test, up to 10 facts for context). For entities of degree larger than 11, we select the 11 facts with the most frequent relations; see Tab. 2 for an example. The rationale is that more common relations (such as instanceof or country) may be considered more likely to be provided for unseen entities than rare ones (such as militaryBranch or publisher). We then construct a single " + }, + { + "bbox": [ + 302, + 380, + 525, + 689 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 302, + 380, + 525, + 689 + ], + "type": "text", + "content": "-shot task for each triple " + }, + { + "bbox": [ + 302, + 380, + 525, + 689 + ], + "type": "inline_equation", + "content": "(s,p,o)" + }, + { + "bbox": [ + 302, + 380, + 525, + 689 + ], + "type": "text", + "content": " in the SI split as follows. When, say, " + }, + { + "bbox": [ + 302, + 380, + 525, + 689 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 302, + 380, + 525, + 689 + ], + "type": "text", + "content": " is the unseen entity, we consider the LP task " + }, + { + "bbox": [ + 302, + 380, + 525, + 689 + ], + "type": "inline_equation", + "content": "(s,p,?)" + }, + { + "bbox": [ + 302, + 380, + 525, + 689 + ], + "type": "text", + "content": " and provide " + }, + { + "bbox": [ + 302, + 380, + 525, + 689 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 302, + 380, + 525, + 689 + ], + "type": "text", + "content": " additional facts of form " + }, + { + "bbox": [ + 302, + 380, + 525, + 689 + ], + "type": "inline_equation", + "content": "(s,p',o')" + }, + { + "bbox": [ + 302, + 380, + 525, + 689 + ], + "type": "text", + "content": " as context. 
Context facts are selected by frequency as above, but we also explored random and infrequent-relation context in our study. Models are asked to provide a ranking of predicted answers, and we determine the filtered mean reciprocal rank (MRR) and Hits@K of the correct answer " + }, + { + "bbox": [ + 302, + 380, + 525, + 689 + ], + "type": "inline_equation", + "content": "(o)" + }, + { + "bbox": [ + 302, + 380, + 525, + 689 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 692, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 692, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 692, + 525, + 772 + ], + "type": "text", + "content": "Textual information. For each entity, we provide its principal mention and a detailed description (both directly from Wikidata5M); see Tab. 2. This allows to differentiate model evaluation with varying amounts of textual information per entity (G3): (A) atomic, i.e., no textual information, (M) men" + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "10635" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 75, + 68, + 518, + 352 + ], + "blocks": [ + { + "bbox": [ + 75, + 68, + 518, + 352 + ], + "lines": [ + { + "bbox": [ + 75, + 68, + 518, + 352 + ], + "spans": [ + { + "bbox": [ + 75, + 68, + 518, + 352 + ], + "type": "table", + "html": "
IDQ18918
MentionSam Witwer
DescriptionSamuel Stewart Witwer (born October 20, 1977) is an American actor and mu-sician. He is known for portraying Crashdown in Battlestar Galactica, Davis Bloome in Smallville, Aidan Waite in Being Human, and Ben Lockwood in Supergirl. He voiced the protagonist Galen Marek / Starkiller in Star Wars: The Force Unleashed, the Son in Star Wars: The Clone Wars and Emperor Palpatine in Star Wars Rebels, both of which he has also voiced Darth Maul.
Context triplesinstance of | humanM: ○ D: ○
country of citizenship | United States of AmericaM: × D: ○
occupation | musicianM: × D: ✓
occupation | actorM: × D: ✓
place of birth | GlenviewM: × D: ×
given name | SamuelM: ○ D: ✓
given name | SamM: ✓ D: ○
cast member | Battlestar GalacticaM: × D: ✓
cast member | Being Human - supernatural drama television seriesM: × D: ✓
cast member | Star Wars: The Force Unleashed IIM: × D: ○
cast member | The MistM: × D: ×
", + "image_path": "cb2ab194c68802dab69337dd9266e6bedca5457458a49eb9115a16cea174c1bf.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 360, + 525, + 396 + ], + "lines": [ + { + "bbox": [ + 67, + 360, + 525, + 396 + ], + "spans": [ + { + "bbox": [ + 67, + 360, + 525, + 396 + ], + "type": "text", + "content": "Table 2: Example of an entity from the semi-inductive validation set of Wikidata5M-SI. For each triple, we annotated whether the answer is contained in (✓), deducible from (○), or not contained in (×) mention (M) or description (D)." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 67, + 417, + 290, + 539 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 417, + 290, + 539 + ], + "spans": [ + { + "bbox": [ + 67, + 417, + 290, + 539 + ], + "type": "text", + "content": "tions only, and (D) detailed textual descriptions as in (Kochsiek et al., 2023). This differentiation is especially important in the SI setting, as detailed text descriptions might not be provided for unseen entities and each setting demands different modeling capabilities. In fact, (A) performs reasoning only using graph structure, whereas (D) also benefits from information extraction to some extent. We discuss this further in Sec. 5." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 549, + 250, + 575 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 549, + 250, + 575 + ], + "spans": [ + { + "bbox": [ + 67, + 549, + 250, + 575 + ], + "type": "text", + "content": "4 Semi-Inductive Link Prediction Models" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 584, + 290, + 610 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 584, + 290, + 610 + ], + "spans": [ + { + "bbox": [ + 67, + 584, + 290, + 610 + ], + "type": "text", + "content": "We briefly summarize recent models for SI-LP; we considered these models in our experimental study." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 611, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 611, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 611, + 291, + 772 + ], + "type": "text", + "content": "Graph-only models. ComplEx (Trouillon et al., 2016) is the best-performing transductive KGE model on Wikidata5M (Kochsiek et al., 2022). To use ComplEx for SI-LP, we follow an approach explored by Jambor et al. (2021). In particular, we represent each entity as the sum of a local embedding (one per entity) and a global bias embedding. For 0-shot, we solely use the global bias for the unseen entity. For k-shot, we obtain the local embedding for the unseen entity by performing a single training step on the context triples (keeping all other embeddings fixed). An alternative" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 417, + 526, + 620 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 417, + 526, + 620 + ], + "spans": [ + { + "bbox": [ + 302, + 417, + 526, + 620 + ], + "type": "text", + "content": "approach is taken by oDistMult-ERAvg (Albooyeh et al., 2020), which represents unseen entities by aggregating the embeddings of the relations and entities in the context. 
A more direct approach is taken by HittER (Chen et al., 2021), which contextualizes the query entity with its neighborhood for TD-LP. The approach can be used for SI-LP directly by using a masking token (akin to the global bias above) for an unseen entity. We originally planned to consider NodePiece (Galkin et al., 2021) (entity represented by a combination of anchor embeddings) and NBFNet (Zhu et al., 2021) (a GNN-based LP model); both support SI-LP directly. However, the available implementations did not scale to Wikidata5M-SI (out of memory)." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 623, + 525, + 664 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 623, + 525, + 664 + ], + "spans": [ + { + "bbox": [ + 302, + 623, + 525, + 664 + ], + "type": "text", + "content": "Text-based models. As a baseline approach to integrate textual information directly into KGE models, we consider the approach explored in the" + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 680, + 525, + 711 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 680, + 525, + 711 + ], + "spans": [ + { + "bbox": [ + 302, + 680, + 525, + 711 + ], + "type": "text", + "content": "To address the high memory footprint (Galkin et al., 2021) of oDistMult-ERAvg, we extend it with neighborhood sampling." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 712, + 525, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 712, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 712, + 525, + 772 + ], + "type": "text", + "content": "For NBFNet (Zhu et al., 2021), the large memory footprint is inherent to the model; it is a full-graph GNN and hard to scale. For NodePiece (Galkin et al., 2021), however, the problem mainly lies in the expensive evaluation. All intermediate representations are precomputed, leading to a large memory overhead." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "10636" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 69, + 292, + 354 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 69, + 292, + 354 + ], + "spans": [ + { + "bbox": [ + 69, + 69, + 292, + 354 + ], + "type": "text", + "content": "WikiKG90M benchmark (Hu et al., 2021); see Sec. A.2 for details. The remaining approaches are purely textual. SimKGC (Wang et al., 2022) utilizes two pretrained BERT Transformers: one to embed query entities (and relations) based on their mention or description, and one for tail entities. Using a contrastive learning approach, it measures cosine similarity between both representations for ranking. KGT5 (Saxena et al., 2022) is a sequence-to-sequence link prediction approach, which is trained to generate the mention of the answer entity using the mention or description of the query entity and relation as input. Both approaches support 0-shot SI-LP when textual information is provided for the query entity. They do not utilize additional context, however, i.e., do not support k-shot SI-LP. KGT5-context (Kochsiek et al., 2023) is an extension of KGT5, which extends the input of KGT5 by the one-hop neighborhood of the query entity and consequently supports k-shot LP directly." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 366, + 191, + 380 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 366, + 191, + 380 + ], + "spans": [ + { + "bbox": [ + 67, + 366, + 191, + 380 + ], + "type": "text", + "content": "5 Experimental Study" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 389, + 290, + 469 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 389, + 290, + 469 + ], + "spans": [ + { + "bbox": [ + 67, + 389, + 290, + 469 + ], + "type": "text", + "content": "We evaluated all presented baseline models in the TD and SI setting on the atomic, mentions, and descriptions datasets. Further, we evaluated in detail which context was most useful and what information was conveyed by textual mentions and descriptions." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 471, + 291, + 537 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 471, + 291, + 537 + ], + "spans": [ + { + "bbox": [ + 67, + 471, + 291, + 537 + ], + "type": "text", + "content": "Setup. Source code, configuration, and the benchmark itself are available at https://github.com/uma-pi1/wikidata5m-si. For further details on hyperparameter tuning and training, see Sec. A.3." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 539, + 290, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 539, + 290, + 714 + ], + "spans": [ + { + "bbox": [ + 67, + 539, + 290, + 714 + ], + "type": "text", + "content": "Main results. Transductive and SI performance in terms of MRR of all models is presented in Tab. 3; Hits@K in Tab. 7-9 (Sec. A). Note that overall transductive performance was oftentimes below the best reported SI performance. This is due to the differing degrees of query entities between the two settings. 
Typically, models perform better predicting new relations for an entity (e.g., the birthplace) than predicting additional objects for a known relation (e.g., additional awards won by a person) (Saxena et al., 2022; Kochsiek et al., 2023). For a direct comparison between both settings, we additionally report TD performance on long tail query entities." + }, + { + "bbox": [ + 67, + 539, + 290, + 714 + ], + "type": "inline_equation", + "content": "^{3}" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 715, + 291, + 741 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 715, + 291, + 741 + ], + "spans": [ + { + "bbox": [ + 67, + 715, + 291, + 741 + ], + "type": "text", + "content": "Atomic. TD performance on the long tail was considerably higher than SI performance. As no in" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 71, + 525, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 525, + 138 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 525, + 138 + ], + "type": "text", + "content": "formation was provided for unseen entities, 0-shot was not reasonably possible. Without text-based information, context was a necessity. A simple neighborhood aggregation—entity-relation average (ERAvg)—offered the best integration of context." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 141, + 526, + 492 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 141, + 526, + 492 + ], + "spans": [ + { + "bbox": [ + 302, + 141, + 526, + 492 + ], + "type": "text", + "content": "Mentions. Integrating mentions did not improve performance on its own, as provided text information was still limited. However, additionally providing context information during inference (KGT5-context) simplified the learning problem and improved TD performance significantly. But for 0-shot, the limited text information provided with mentions allowed for reasonable performance. 
To analyze what information is conveyed for 0-shot, we annotated 100 validation triples; see Tab. 4. In " + }, + { + "bbox": [ + 302, + 141, + 526, + 492 + ], + "type": "inline_equation", + "content": "10\\%" + }, + { + "bbox": [ + 302, + 141, + 526, + 492 + ], + "type": "text", + "content": " of cases, the answer was already contained in the mention, and it was deducible in at least " + }, + { + "bbox": [ + 302, + 141, + 526, + 492 + ], + "type": "inline_equation", + "content": "7\\%" + }, + { + "bbox": [ + 302, + 141, + 526, + 492 + ], + "type": "text", + "content": ". This enabled basic reasoning without any further information. In contrast to the TD setting, KGT5 outperformed its context extension. KGT5-context was reliant on context, which was especially lacking for 0-shot. This showed a trade-off between best performance in the SI and TD settings. This trade-off could be mitigated by applying (full and partial) context hiding. With such adapted training, KGT5-context reached a middle ground with a transductive MRR of 0.366 and 0-shot MRR of 0.283 (see footnote 4). However, even with full context (10-shot), performance was still only on par with KGT5. Therefore, context information did not bring any further benefits when text was provided." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 495, + 525, + 589 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 495, + 525, + 589 + ], + "spans": [ + { + "bbox": [ + 302, + 495, + 525, + 589 + ], + "type": "text", + "content": "Descriptions. Further, integrating descriptions improved performance for both settings, TD and SI, considerably; see Tab. 3. Similar to the mentions-only setting, KGT5-context performed best in TD and KGT5 in the SI setting. Applying the same trade-off with context hiding reached a middle ground with 0.418 TD-MRR and 0.449 SI-MRR." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 592, + 525, + 726 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 592, + 525, + 726 + ], + "spans": [ + { + "bbox": [ + 302, + 592, + 525, + 726 + ], + "type": "text", + "content": "Descriptions were very detailed and partially contained the correct answer as well as the same information as contained in context triples; see Tab. 4. Therefore, performance did not further improve with context size. In such cases, models mainly benefit from information extraction capabilities. To judge how much information extraction helps, we grouped performance of KGT5+description in the 0-shot setting on validation data into the groups contained, deducible and not contained in descrip" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 740, + 525, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 740, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 740, + 525, + 772 + ], + "type": "text", + "content": "4In " + }, + { + "bbox": [ + 302, + 740, + 525, + 772 + ], + "type": "inline_equation", + "content": "25\\% / 25\\% / 50\\%" + }, + { + "bbox": [ + 302, + 740, + 525, + 772 + ], + "type": "text", + "content": " of cases, we hid the full context/sampled between 1-10 neighbors/used the full context, respectively." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 67, + 750, + 290, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 750, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 750, + 290, + 772 + ], + "type": "text", + "content": "3We define long tail query entities as entities with degree " + }, + { + "bbox": [ + 67, + 750, + 290, + 772 + ], + "type": "inline_equation", + "content": "\\leq 10" + }, + { + "bbox": [ + 67, + 750, + 290, + 772 + ], + "type": "text", + "content": " as in the SI setting." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "10637" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 67, + 68, + 524, + 238 + ], + "blocks": [ + { + "bbox": [ + 67, + 68, + 524, + 238 + ], + "lines": [ + { + "bbox": [ + 67, + 68, + 524, + 238 + ], + "spans": [ + { + "bbox": [ + 67, + 68, + 524, + 238 + ], + "type": "table", + "html": "
ModelTransductiveSemi-inductive (num. shots)Pre-trained
AllLong tail013510
ComplEx + Bias + Fold in (Jambor et al., 2021)0.3080.5230.1240.1510.1760.1900.206no
DistMult + ERAvg (Albooyeh et al., 2020)0.2940.512-0.1710.2460.2950.333no
HittER (Chen et al., 2021)0.2840.5120.0190.1050.1530.1790.221no
DistMult + ERAvg + Mentions0.2990.535-0.1870.2350.2580.280yes
SimKGC (mentions only)0.2120.3610.220----yes
KGT5 (Saxena et al., 2022)0.2810.5420.310----no
KGT5-context (Kochsiek et al., 2023)0.3740.6780.2200.2170.2360.2590.311no
DistMult + ERAvg + Descriptions0.3130.585-0.2780.2810.2850.292yes
SimKGC + Descriptions (Wang et al., 2022)0.3530.6630.403----yes
KGT5 + Descriptions (Kochsiek et al., 2023)0.3640.7280.470----no
KGT5-context + Descriptions (Kochsiek et al., 2023)0.4200.7770.4170.4200.4160.4200.437no
", + "image_path": "4c597d069cebe5c1fb4baede01313e4f125e346507e24f70ac1253c37b7d6a27.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 83, + 302, + 276, + 365 + ], + "blocks": [ + { + "bbox": [ + 67, + 248, + 525, + 285 + ], + "lines": [ + { + "bbox": [ + 67, + 248, + 525, + 285 + ], + "spans": [ + { + "bbox": [ + 67, + 248, + 525, + 285 + ], + "type": "text", + "content": "Table 3: Transductive and semi-inductive link prediction results in terms of MRR on the dataset Wikidata5M-SI. The first group presets results on the atomic, the second on the mentions and the third on the descriptions dataset. Best per TD/SI in bold. Best per group underlined." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 83, + 302, + 276, + 365 + ], + "lines": [ + { + "bbox": [ + 83, + 302, + 276, + 365 + ], + "spans": [ + { + "bbox": [ + 83, + 302, + 276, + 365 + ], + "type": "table", + "html": "
MentionDescription
Contained10%44%
Deducible7%10%
Not contained83%46%
", + "image_path": "31d45159c5ece98c1e1cd7e33d1abfd26cc42b9c75893ec7721d32e30d4244fe.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 82, + 424, + 276, + 484 + ], + "blocks": [ + { + "bbox": [ + 67, + 374, + 289, + 409 + ], + "lines": [ + { + "bbox": [ + 67, + 374, + 289, + 409 + ], + "spans": [ + { + "bbox": [ + 67, + 374, + 289, + 409 + ], + "type": "text", + "content": "Table 4: Information about a query answer contained in mentions and descriptions. Annotated for 100 sampled triples from 0-shot valid. For an example, see Tab. 2." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 82, + 424, + 276, + 484 + ], + "lines": [ + { + "bbox": [ + 82, + 424, + 276, + 484 + ], + "spans": [ + { + "bbox": [ + 82, + 424, + 276, + 484 + ], + "type": "table", + "html": "
Context selection135
Most common0.2170.2360.259
Least common0.2530.2730.290
Random0.2370.2600.281
", + "image_path": "43bba62dbfaca9436f58a64500a6c77ee103085817d0c9ccbe840ad1fecebc7f.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_body" + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 491, + 289, + 515 + ], + "lines": [ + { + "bbox": [ + 67, + 491, + 289, + 515 + ], + "spans": [ + { + "bbox": [ + 67, + 491, + 289, + 515 + ], + "type": "text", + "content": "Table 5: Influence of context selection. Semi-inductive test MRR of KGT5-context." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 67, + 541, + 289, + 567 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 541, + 289, + 567 + ], + "spans": [ + { + "bbox": [ + 67, + 541, + 289, + 567 + ], + "type": "text", + "content": "tion; see Fig. 1 in Sec. A. When contained, the correct answer was extracted in " + }, + { + "bbox": [ + 67, + 541, + 289, + 567 + ], + "type": "inline_equation", + "content": "\\approx 70\\%" + }, + { + "bbox": [ + 67, + 541, + 289, + 567 + ], + "type": "text", + "content": " of cases." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 571, + 290, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 571, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 571, + 290, + 772 + ], + "type": "text", + "content": "Context selection. We selected the most common relations as context triples so far, as this may be a more realistic setting. To investigate the effect of this selection approach, we compared the default selection of choosing most common relations to least common and random. Results for KGT5-context are shown in Tab. 5; for all other models in Tab. 10 in Sec. A. We found that the less common the relations of the provided context, the better the SI performance. More common context relations often described high-level concepts, while less common provided further detail; see the example in Tab. 2. 
While more common context may be more readily available, less common context was more helpful to describe a new entity." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 305, + 381, + 317 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 305, + 381, + 317 + ], + "spans": [ + { + "bbox": [ + 302, + 305, + 381, + 317 + ], + "type": "text", + "content": "6 Conclusion" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 327, + 525, + 502 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 327, + 525, + 502 + ], + "spans": [ + { + "bbox": [ + 302, + 327, + 525, + 502 + ], + "type": "text", + "content": "We proposed the new Wikidata5M-SI large-scale benchmark for semi-inductive link prediction. The benchmark focuses on unseen entities from the long tail and allows evaluating models with varying and controlled amounts of factual and textual context information. In our experimental evaluation, we found that semi-inductive LP performance fell behind transductive performance for long-tail entities in general, and that detailed textual information was often more valuable than factual context information. Moreover, current models did not integrate these two types of information adequately, suggesting a direction for future research." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 303, + 514, + 365, + 527 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 514, + 365, + 527 + ], + "spans": [ + { + "bbox": [ + 303, + 514, + 365, + 527 + ], + "type": "text", + "content": "Limitations" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 536, + 525, + 658 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 536, + 525, + 658 + ], + "spans": [ + { + "bbox": [ + 302, + 536, + 525, + 658 + ], + "type": "text", + "content": "This study was performed on Wikidata5M-SI, i.e., a subset of a single knowledge graph. 
Model performance and insights may vary if the graph structure and/or the availability and usefulness of mentions and descriptions differ. In particular, the entity descriptions provided with Wikidata5M-SI partly contained information relevant for link prediction, so that models benefited from information extraction capabilities." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 303, + 670, + 393, + 682 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 670, + 393, + 682 + ], + "spans": [ + { + "bbox": [ + 303, + 670, + 393, + 682 + ], + "type": "text", + "content": "Ethics Statement" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 692, + 525, + 759 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 692, + 525, + 759 + ], + "spans": [ + { + "bbox": [ + 302, + 692, + 525, + 759 + ], + "type": "text", + "content": "This research adapts publicly available data, benchmarks, and codebases for evaluation. We believe that this research was conducted in an ethical manner in compliance with all relevant laws and regulations." 
+ } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "10638" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 71, + 127, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 71, + 127, + 83 + ], + "spans": [ + { + "bbox": [ + 69, + 71, + 127, + 83 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 89, + 291, + 772 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 69, + 89, + 291, + 146 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 89, + 291, + 146 + ], + "spans": [ + { + "bbox": [ + 69, + 89, + 291, + 146 + ], + "type": "text", + "content": "Marjan Albooyeh, Rishab Goel, and Seyed Mehran Kazemi. 2020. Out-of-sample representation learning for knowledge graphs. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 2657-2666." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 153, + 291, + 210 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 153, + 291, + 210 + ], + "spans": [ + { + "bbox": [ + 69, + 153, + 291, + 210 + ], + "type": "text", + "content": "Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Neural Information Processing Systems (NIPS), pages 1-9." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 216, + 291, + 294 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 216, + 291, + 294 + ], + "spans": [ + { + "bbox": [ + 69, + 216, + 291, + 294 + ], + "type": "text", + "content": "Samuel Broscheit, Daniel Ruffinelli, Adrian Kochsiek, Patrick Betz, and Rainer Gemulla. 2020. LibKGE - A knowledge graph embedding library for reproducible research. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 165-174." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 302, + 291, + 370 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 302, + 291, + 370 + ], + "spans": [ + { + "bbox": [ + 69, + 302, + 291, + 370 + ], + "type": "text", + "content": "Sanxing Chen, Xiaodong Liu, Jianfeng Gao, Jian Jiao, Ruofei Zhang, and Yangfeng Ji. 2021. Hitter: Hierarchical transformers for knowledge graph embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10395-10407." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 376, + 290, + 422 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 376, + 290, + 422 + ], + "spans": [ + { + "bbox": [ + 69, + 376, + 290, + 422 + ], + "type": "text", + "content": "Daniel Daza, Michael Cochez, and Paul Groth. 2021. Inductive entity representations from text via link prediction. In Proceedings of the Web Conference 2021, pages 798-808." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 429, + 291, + 485 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 429, + 291, + 485 + ], + "spans": [ + { + "bbox": [ + 69, + 429, + 291, + 485 + ], + "type": "text", + "content": "Mikhail Galkin, Etienne Denis, Jiapeng Wu, and William L Hamilton. 2021. Nodepiece: Compositional and parameter-efficient representations of large knowledge graphs. 
In International Conference on Learning Representations." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 492, + 291, + 560 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 492, + 291, + 560 + ], + "spans": [ + { + "bbox": [ + 69, + 492, + 291, + 560 + ], + "type": "text", + "content": "Takuo Hamaguchi, Hidekazu Oiwa, Masashi Shimbo, and Yuji Matsumoto. 2017. Knowledge transfer for out-of-knowledge-base entities: a graph neural network approach. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 1802-1808." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 567, + 291, + 624 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 567, + 291, + 624 + ], + "spans": [ + { + "bbox": [ + 69, + 567, + 291, + 624 + ], + "type": "text", + "content": "Weihua Hu, Matthias Fey, Hongyu Ren, Maho Nakata, Yuxiao Dong, and Jure Leskovec. 2021. Ogb-lsc: A large-scale challenge for machine learning on graphs. Advances in Neural Information Processing Systems, 35." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 631, + 291, + 698 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 631, + 291, + 698 + ], + "spans": [ + { + "bbox": [ + 69, + 631, + 291, + 698 + ], + "type": "text", + "content": "Dora Jambor, Komal Teru, Joelle Pineau, and William L Hamilton. 2021. Exploring the limits of few-shot link prediction in knowledge graphs. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2816-2822." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 69, + 706, + 291, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 706, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 706, + 291, + 772 + ], + "type": "text", + "content": "Adrian Kochsiek, Fritz Niesel, and Rainer Gemulla. 2022. 
Start small, think big: On hyperparameter optimization for large-scale knowledge graph embeddings. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2022, Grenoble, France, September" + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 525, + 772 + ], + "type": "list", + "angle": 0, + "index": 23, + "blocks": [ + { + "bbox": [ + 315, + 72, + 525, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 72, + 525, + 95 + ], + "spans": [ + { + "bbox": [ + 315, + 72, + 525, + 95 + ], + "type": "text", + "content": "19-23, 2022, Proceedings, Part II, pages 138-154. Springer." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 105, + 525, + 160 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 105, + 525, + 160 + ], + "spans": [ + { + "bbox": [ + 304, + 105, + 525, + 160 + ], + "type": "text", + "content": "Adrian Kochsiek, Apoorv Saxena, Inderjeet Nair, and Rainer Gemulla. 2023. Friendly neighbors: Contextualized sequence-to-sequence link prediction. In Proceedings of the 8th Workshop on Representation Learning for NLP." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 170, + 525, + 237 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 170, + 525, + 237 + ], + "spans": [ + { + "bbox": [ + 304, + 170, + 525, + 237 + ], + "type": "text", + "content": "Apoorv Saxena, Adrian Kochsiek, and Rainer Gemulla. 2022. Sequence-to-sequence knowledge graph completion and question answering. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2814-2828." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 246, + 525, + 312 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 246, + 525, + 312 + ], + "spans": [ + { + "bbox": [ + 304, + 246, + 525, + 312 + ], + "type": "text", + "content": "Haseeb Shah, Johannes Villmow, Adrian Ulges, Ulrich Schwanecke, and Faisal Shafait. 2019. An open-world extension to knowledge graph completion models. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 3044-3051." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 322, + 525, + 366 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 322, + 525, + 366 + ], + "spans": [ + { + "bbox": [ + 304, + 322, + 525, + 366 + ], + "type": "text", + "content": "Baoxu Shi and Tim Weninger. 2018. Open-world knowledge graph completion. In Proceedings of the AAAI conference on artificial intelligence, volume 32." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 377, + 525, + 432 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 377, + 525, + 432 + ], + "spans": [ + { + "bbox": [ + 304, + 377, + 525, + 432 + ], + "type": "text", + "content": "Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and TieYan Liu. 2020. Mpnet: Masked and permuted pretraining for language understanding. Advances in Neural Information Processing Systems, 33:16857-16867." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 442, + 525, + 487 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 442, + 525, + 487 + ], + "spans": [ + { + "bbox": [ + 304, + 442, + 525, + 487 + ], + "type": "text", + "content": "Komal Teru, Etienne Denis, and Will Hamilton. 2020. Inductive relation prediction by subgraph reasoning. In International Conference on Machine Learning, pages 9448-9457." 
+ } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 497, + 525, + 553 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 497, + 525, + 553 + ], + "spans": [ + { + "bbox": [ + 304, + 497, + 525, + 553 + ], + "type": "text", + "content": "Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In International conference on machine learning, pages 2071-2080." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 563, + 525, + 631 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 563, + 525, + 631 + ], + "spans": [ + { + "bbox": [ + 304, + 563, + 525, + 631 + ], + "type": "text", + "content": "Liang Wang, Wei Zhao, Zhuoyu Wei, and Jingming Liu. 2022. Simkgc: Simple contrastive knowledge graph completion with pre-trained language models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4281-4294." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 304, + 640, + 525, + 696 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 640, + 525, + 696 + ], + "spans": [ + { + "bbox": [ + 304, + 640, + 525, + 696 + ], + "type": "text", + "content": "Peifeng Wang, Jialong Han, Chenliang Li, and Rong Pan. 2019. Logic attention based neighborhood aggregation for inductive knowledge graph embedding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7152-7159." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 304, + 706, + 525, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 706, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 304, + 706, + 525, + 772 + ], + "type": "text", + "content": "Xiaozhi Wang, Tianyu Gao, Zhaocheng Zhu, Zhengyan Zhang, Zhiyuan Liu, Juanzi Li, and Jian Tang. 2021. 
Kepler: A unified model for knowledge embedding and pre-trained language representation. Transactions of the Association for Computational Linguistics, 9:176-194." + } + ] + } + ], + "index": 22 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "text", + "content": "10639" + } + ] + } + ], + "index": 24 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 290, + 127 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 72, + 290, + 127 + ], + "spans": [ + { + "bbox": [ + 69, + 72, + 290, + 127 + ], + "type": "text", + "content": "Ruobing Xie, Zhiyuan Liu, Jia Jia, Huanbo Luan, and Maosong Sun. 2016. Representation learning of knowledge graphs with entity descriptions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 136, + 290, + 191 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 136, + 290, + 191 + ], + "spans": [ + { + "bbox": [ + 69, + 136, + 290, + 191 + ], + "type": "text", + "content": "Bishan Yang, Scott Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In Proceedings of the International Conference on Learning Representations (ICLR) 2015." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 68, + 200, + 290, + 254 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 200, + 290, + 254 + ], + "spans": [ + { + "bbox": [ + 68, + 200, + 290, + 254 + ], + "type": "text", + "content": "Zhaocheng Zhu, Zuobai Zhang, Louis-Pascal Xhonneux, and Jian Tang. 2021. 
Neural bellman-ford networks: A general graph neural network framework for link prediction. Advances in Neural Information Processing Systems, 34." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 68, + 266, + 141, + 279 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 266, + 141, + 279 + ], + "spans": [ + { + "bbox": [ + 68, + 266, + 141, + 279 + ], + "type": "text", + "content": "A Appendix" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 68, + 287, + 241, + 299 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 287, + 241, + 299 + ], + "spans": [ + { + "bbox": [ + 68, + 287, + 241, + 299 + ], + "type": "text", + "content": "A.1 Distribution of Unseen Entities" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 304, + 290, + 574 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 304, + 290, + 574 + ], + "spans": [ + { + "bbox": [ + 69, + 304, + 290, + 574 + ], + "type": "text", + "content": "Long-tail entities have a different distribution than entities from the whole KG; see Tab. 6 for an overview of the distribution shift for the top 10 entity types. This difference is natural. In particular, high-degree entities in a KG such as Wikidata often refer to types/taxons (e.g., human, organization, ...) as well as popular named entities (e.g., Albert Einstein, Germany, ...). These entities are fundamental to the KG and/or of high interest and have many facts associated with them. For this reason, they do not form suitable candidates for benchmarking unseen or new entities. In addition, removing high-degree entities for the purpose of evaluating SI-LP is likely to distort the KG (e.g., consider removing type \"human\" or \"Germany\"). In contrast, Wikidata5M-SI focuses on entities for which knowledge is not yet abundant: long-tail entities are accompanied by no or few facts (at least initially) and our SI-LP benchmark tests reasoning capabilities with this limited information." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 303, + 71, + 493, + 84 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 71, + 493, + 84 + ], + "spans": [ + { + "bbox": [ + 303, + 71, + 493, + 84 + ], + "type": "text", + "content": "A.2 Integrating Text into KGE Models" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 89, + 525, + 263 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 89, + 525, + 263 + ], + "spans": [ + { + "bbox": [ + 302, + 89, + 525, + 263 + ], + "type": "text", + "content": "To integrate text into traditional KGE models, we follow the baseline models of the WikiKG90M link prediction challenge (Hu et al., 2021). We embed mentions combined with descriptions using MPNet (Song et al., 2020), concatenate the resulting descriptions embedding with the entity embedding, and project it with a linear layer for the final representation of the entity. In combination with oDistMult-ERAvg (Albooyeh et al., 2020), we apply the aggregation of neighboring entities and relations on the entity embedding part only. The resulting aggregation is then concatenated with its description and finally projected." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 263, + 524, + 288 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 263, + 524, + 288 + ], + "spans": [ + { + "bbox": [ + 302, + 263, + 524, + 288 + ], + "type": "text", + "content": "This approach is closely related to BLP (Daza et al., 2021). The main differences to BLP are:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 311, + 293, + 525, + 370 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 312, + 293, + 525, + 306 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 293, + 525, + 306 + ], + "spans": [ + { + "bbox": [ + 312, + 293, + 525, + 306 + ], + "type": "text", + "content": "1. Hu et al. (2021) use MPNet, BLP uses BERT." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 311, + 317, + 524, + 370 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 317, + 524, + 370 + ], + "spans": [ + { + "bbox": [ + 311, + 317, + 524, + 370 + ], + "type": "text", + "content": "2. In combination with DistMult-ERAvg, we concatenate a learnable \"structural embedding\" to the CLS embedding of the language model, whereas BLP does not." + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 303, + 380, + 427, + 392 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 380, + 427, + 392 + ], + "spans": [ + { + "bbox": [ + 303, + 380, + 427, + 392 + ], + "type": "text", + "content": "A.3 Experimental Setup" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 396, + 525, + 571 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 396, + 525, + 571 + ], + "spans": [ + { + "bbox": [ + 302, + 396, + 525, + 571 + ], + "type": "text", + "content": "For hyperparameter optimization for ComplEx (Trouillon et al., 2016), DistMult (Yang et al., 2015), and HittER (Chen et al., 2021), we used the multi-fidelity approach GraSH (Kochsiek et al., 2022) implemented in LibKGE (Broscheit et al., 2020) with 64 initial trials and trained for up to 64 epochs. For fold-in, we reused training hyperparameters and trained for a single epoch on the provided context. For text-based approaches, we used the hyperparameters and architectures proposed by the authors for the transductive split of Wikidata5M. We trained on up to 5 A6000-GPUs with 49GB of VRAM." 
+ } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "10640" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 132, + 70, + 462, + 234 + ], + "blocks": [ + { + "bbox": [ + 132, + 70, + 462, + 234 + ], + "lines": [ + { + "bbox": [ + 132, + 70, + 462, + 234 + ], + "spans": [ + { + "bbox": [ + 132, + 70, + 462, + 234 + ], + "type": "table", + "html": "
WikidataIDMentionAll entitiesLong-tail entities
Q5human39%61%
Q11424film3%8%
Q484170commune of France1%7%
Q482994album3%1%
Q16521taxon9%1%
Q134556single1%1%
Q747074commune of Italy0%1%
Q2074737municipality of Spain0%1%
Q571book1%1%
Q7889video game1%1%
", + "image_path": "89e501f861d5ef0214152d97d53cfcea733d7ae14141c864209868ba5ff5a7c3.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 156, + 282, + 438, + 491 + ], + "blocks": [ + { + "bbox": [ + 156, + 282, + 438, + 491 + ], + "lines": [ + { + "bbox": [ + 156, + 282, + 438, + 491 + ], + "spans": [ + { + "bbox": [ + 156, + 282, + 438, + 491 + ], + "type": "image", + "image_path": "baae9bbd293af72cab83695b604ca04663ba5fd908d5419b7ced6cda5d4535df.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 507, + 525, + 530 + ], + "lines": [ + { + "bbox": [ + 67, + 507, + 525, + 530 + ], + "spans": [ + { + "bbox": [ + 67, + 507, + 525, + 530 + ], + "type": "text", + "content": "Figure 1: Number of correct (rank=1) and incorrect predictions by KGT5+descriptions on annotated examples per annotation label." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 73, + 539, + 520, + 749 + ], + "blocks": [ + { + "bbox": [ + 67, + 242, + 525, + 267 + ], + "lines": [ + { + "bbox": [ + 67, + 242, + 525, + 267 + ], + "spans": [ + { + "bbox": [ + 67, + 242, + 525, + 267 + ], + "type": "text", + "content": "Table 6: Distribution of top 10 entity types over long-tail entities with degree between 11 and 20 compared to all entities." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 73, + 539, + 520, + 749 + ], + "lines": [ + { + "bbox": [ + 73, + 539, + 520, + 749 + ], + "spans": [ + { + "bbox": [ + 73, + 539, + 520, + 749 + ], + "type": "table", + "html": "
ModelTrans.Semi-inductive (num. shots)
013510
Complex + Bias + Fold in (Jambor et al., 2021)0.2600.0580.0970.1180.1240.132
DistMult + ERAvg (Albooyeh et al., 2020)0.237-0.1150.1510.1850.209
HittER (Chen et al., 2021)0.2340.0050.0760.1150.1320.153
DistMult + ERAvg + Mentions0.239-0.1060.1420.1530.167
SimKGC (mentions only)0.1820.187----
KGT5 (Saxena et al., 2022)0.2490.263----
KGT5-context (Kochsiek et al., 2023)0.3470.1840.1770.1950.2180.263
DistMult + ERAvg + Descriptions0.252-0.1520.1530.1530.161
SimKGC + Descriptions (Wang et al., 2022)0.3110.349----
KGT5 + Descriptions0.3320.430----
KGT5-context + Descriptions0.4000.3790.3820.3730.3780.393
", + "image_path": "6026a441f3b98506f54bfb89cbc3672f5394b099a10392ef3cd43300a9c5328d.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_body" + } + ], + "index": 4 + }, + { + "bbox": [ + 70, + 756, + 521, + 768 + ], + "lines": [ + { + "bbox": [ + 70, + 756, + 521, + 768 + ], + "spans": [ + { + "bbox": [ + 70, + 756, + 521, + 768 + ], + "type": "text", + "content": "Table 7: Transductive and semi-inductive link prediction results in terms of H@1 on the dataset Wikidata5M-SI." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 311, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 311, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 311, + 791 + ], + "type": "text", + "content": "10641" + } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 73, + 128, + 521, + 336 + ], + "blocks": [ + { + "bbox": [ + 73, + 128, + 521, + 336 + ], + "lines": [ + { + "bbox": [ + 73, + 128, + 521, + 336 + ], + "spans": [ + { + "bbox": [ + 73, + 128, + 521, + 336 + ], + "type": "table", + "html": "
ModelTrans.Semi-inductive (num. shots)
013510
ComplEx + Bias + Fold in (Jambor et al., 2021)0.3370.1650.1800.2020.2190.242
DistMult + ERAvg (Albooyeh et al., 2020)0.328-0.1900.2920.3520.401
HittER (Chen et al., 2021)0.3090.0130.1090.1580.1880.242
DistMult + ERAvg + Mentions0.332-0.2390.2890.3140.340
SimKGC (mentions only)0.2230.227----
KGT5 (Saxena et al., 2022)0.2960.332----
KGT5-context (Kochsiek et al., 2023)0.3900.2360.2340.2570.2780.335
DistMult + ERAvg + Descriptions0.344-0.3680.3730.3780.380
SimKGC (Wang et al., 2022)0.3670.421----
KGT5 + Descriptions0.3850.490----
KGT5-context + Descriptions0.4320.4410.4430.4430.4470.463
", + "image_path": "0d637e223560b8fe3fe0b4aa3b32b9e0fe281fae22778a0d6f9bdd23d08e37c0.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 73, + 481, + 521, + 690 + ], + "blocks": [ + { + "bbox": [ + 71, + 343, + 521, + 356 + ], + "lines": [ + { + "bbox": [ + 71, + 343, + 521, + 356 + ], + "spans": [ + { + "bbox": [ + 71, + 343, + 521, + 356 + ], + "type": "text", + "content": "Table 8: Transductive and semi-inductive link prediction results in terms of H@3 on the dataset Wikidata5M-SI." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 73, + 481, + 521, + 690 + ], + "lines": [ + { + "bbox": [ + 73, + 481, + 521, + 690 + ], + "spans": [ + { + "bbox": [ + 73, + 481, + 521, + 690 + ], + "type": "table", + "html": "
ModelTrans.Semi-inductive (num. shots)
013510
ComplEx + Bias + Fold in (Jambor et al., 2021)0.3870.2310.2450.2820.3090.336
DistMult + ERAvg (Albooyeh et al., 2020)0.389-0.2700.4090.4930.564
HittER (Chen et al., 2021)0.3760.0500.1570.2260.2700.359
DistMult + ERAvg + Mentions0.411-0.3200.3920.4400.478
SimKGC (mentions only)0.2660.283----
KGT5 (Saxena et al., 2022)0.3440.398----
KGT5-context (Kochsiek et al., 2023)0.4230.2930.2950.3100.3360.400
DistMult + ERAvg + Descriptions0.425-0.4650.4720.4840.491
SimKGC (Wang et al., 2022)0.4320.504----
KGT5 + Descriptions0.4160.544----
KGT5-context + Descriptions0.4550.4840.4890.4890.4950.516
", + "image_path": "3fd2ce994ab5d7c80f165bf238ab17d6880c5876b2bd282cf885ac708209371a.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 68, + 698, + 523, + 709 + ], + "lines": [ + { + "bbox": [ + 68, + 698, + 523, + 709 + ], + "spans": [ + { + "bbox": [ + 68, + 698, + 523, + 709 + ], + "type": "text", + "content": "Table 9: Transductive and semi-inductive link prediction results in terms of H@10 on the dataset Wikidata5M-SI." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "text", + "content": "10642" + } + ] + } + ], + "index": 4 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 117, + 269, + 474, + 548 + ], + "blocks": [ + { + "bbox": [ + 117, + 269, + 474, + 548 + ], + "lines": [ + { + "bbox": [ + 117, + 269, + 474, + 548 + ], + "spans": [ + { + "bbox": [ + 117, + 269, + 474, + 548 + ], + "type": "table", + "html": "
ModelContext selection135
ComplEx + fold-inMost common0.1510.1610.168
Least common0.1660.1850.195
Random0.1640.1870.196
DistMult + ERAvgMost common0.1710.2460.295
Least common0.2170.2990.323
Random0.2150.3030.318
oDistMult + ERAvg + MentionsMost common0.1870.2350.258
Least common0.2370.2740.279
Random0.2320.2650.272
HittERMost common0.1050.1530.179
Least common0.1510.1950.216
Random0.1360.1900.206
KGT5-contextMost common0.2170.2360.259
Least common0.2530.2730.290
Random0.2370.2600.281
KGT5-context + Desc.Most common0.4200.4160.420
Least common0.4230.4240.430
Random0.4220.4300.430
", + "image_path": "b6a0092f4fc56e10af7c6f2a33953ece3eeddc716ad6b3f162b8fd09290bf1d1.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 114, + 555, + 478, + 568 + ], + "lines": [ + { + "bbox": [ + 114, + 555, + 478, + 568 + ], + "spans": [ + { + "bbox": [ + 114, + 555, + 478, + 568 + ], + "type": "text", + "content": "Table 10: Influence of context selection. Semi-inductive test MRR. Best per model in bold." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "text", + "content": "10643" + } + ] + } + ], + "index": 2 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 9 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2023/A Black-Box Attack on Code Models via Representation Nearest Neighbor Search/21af0af7-6fd8-4600-8ad6-54b767b85a85_content_list.json b/2023/A Black-Box Attack on Code Models via Representation Nearest Neighbor Search/21af0af7-6fd8-4600-8ad6-54b767b85a85_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..84c9b073324468e6fa9f08bead785ffa34b0d343 --- /dev/null +++ b/2023/A Black-Box Attack on Code Models via Representation Nearest Neighbor Search/21af0af7-6fd8-4600-8ad6-54b767b85a85_content_list.json @@ -0,0 +1,1527 @@ +[ + { + "type": "text", + "text": "A Black-Box Attack on Code Models via Representation Nearest Neighbor Search", + "text_level": 1, + "bbox": [ + 114, + 83, + 884, + 122 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Jie Zhang $^{1*}$ , Wei Ma $^{2\\dagger}$ , Qiang Hu $^{3}$ , Shangqing Liu $^{2}$ , Xiaofei Xie $^{4}$ , Yves Le Traon $^{3}$ , and Yang Liu $^{2}$", + "bbox": [ + 109, + 131, + 892, + 151 + 
], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{1}$ Noah's Ark Lab, Huawei", + "bbox": [ + 393, + 162, + 608, + 178 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{2}$ School of Computer Science and Engineering, Nanyang Technological University", + "bbox": [ + 161, + 179, + 836, + 196 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{3}$ The Interdisciplinary Centre for Security, Reliability and Trust, University of Luxembourg", + "bbox": [ + 131, + 196, + 870, + 212 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{4}$ School of Computing and Information Systems, Singapore Management University", + "bbox": [ + 157, + 212, + 842, + 230 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 260, + 252, + 342, + 268 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Existing methods for generating adversarial code examples face several challenges: limited availability of substitute variables, high verification costs for these substitutes, and the creation of adversarial samples with noticeable perturbations. To address these concerns, our proposed approach, RNNS, uses a search seed based on historical attacks to find potential adversarial substitutes. Rather than being used directly, the discrete substitutes are mapped to a continuous vector space using a pre-trained variable name encoder. Based on the vector representation, RNNS predicts and selects better substitutes for attacks. We evaluated the performance of RNNS across six coding tasks encompassing three programming languages: Java, Python, and C. We employed three pre-trained code models (CodeBERT, GraphCodeBERT, and CodeT5), resulting in a total of 18 victim models. The results demonstrate that RNNS outperforms the baselines in terms of attack success rate (ASR) and query times (QT). 
Furthermore, the perturbation of adversarial examples introduced by RNNS is smaller compared to the baselines in terms of the number of replaced variables and the change in variable length. Lastly, our experiments indicate that RNNS is efficient in attacking defended models and can be employed for adversarial training.", + "bbox": [ + 142, + 278, + 460, + 690 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 114, + 700, + 260, + 715 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Recently, since programming language can be seen as one kind of textual data and also inspired by the success of deep learning for text processing and understanding, researchers have tried to pretrain code models such as CodeBERT (Feng et al., 2020), GraphCodeBERT (Guo et al., 2020), ContrabERT (Liu et al., 2023a) to help developers to solve multiple programming tasks, e.g., code search (Gu et al., 2018; Liu et al., 2023b), code clone detection (White et al., 2016; Li et al., 2017), code sum", + "bbox": [ + 112, + 725, + 490, + 885 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "marization (Ahmad et al., 2020; Liu et al., 2020), and vulnerability detection (Zhou et al., 2019). Although these code models have achieved good performance on many code tasks, they are still suffering from robustness issues. A few adversarial attack methods have emerged to evaluate and improve the robustness of code models.", + "bbox": [ + 507, + 253, + 885, + 363 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "There are certain considerations to be made. Firstly, code pre-training models are frequently deployed remotely, which limits access to the model parameters and renders white-box attacks infeasible. 
Secondly, among the numerous code-equivalent transformation methods, variable substitution exerts the most significant influence on the resilience of large code models while being the least detectable transformation (Li et al., 2022). As a result, black-box attack techniques based on variable substitution have emerged as a valuable avenue for research and multiple works have been proposed such as ALERT (Yang et al., 2022) and MHM (Zhang et al., 2020).", + "bbox": [ + 507, + 368, + 885, + 594 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "However, these works have three limitations: 1) The number of substitute variables is limited and lacks diversity, which lowers the upper bound of the attack success rate. For example, ALERT employs 60 substitute variables for each variable, which are generated by a pre-trained model, and the substitute variables lack diversity. MHM also randomly selects 1500 words from a fixed dictionary as substitute variables. 2) The verification cost of substitute variables is high. To verify the attack effect of each substitute, it is necessary to replace the source variable with an adversarial sample and perform an actual attack on the victim model. ALERT uses a traversal method to select substitute variables, and in order to reduce the number of attacks, it limits the number of substitute variables; MHM uses a random sampling method to select substitute variables in order to reduce the number of attacks. Neither method is conducive to cost-effective attacks. 
3) The generated adversarial samples have", + "bbox": [ + 507, + 596, + 885, + 920 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "* clark.zhang@huawei.com", + "bbox": [ + 139, + 891, + 310, + 904 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "† corresponding author: ma_wei@ntu.edu.sg", + "bbox": [ + 139, + 904, + 416, + 917 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "9706", + "bbox": [ + 478, + 927, + 521, + 940 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 9706-9716 December 6-10, 2023 ©2023 Association for Computational Linguistics", + "bbox": [ + 216, + 945, + 779, + 972 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "large perturbations. Each adversarial sample usually needs to replace multiple original variables to succeed in attacking, and MHM easily generates semantically incoherent and excessively long variable names.", + "bbox": [ + 112, + 84, + 487, + 162 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To address the aforementioned challenges, in this paper, we propose a search-based black-box adversarial attack method to create challenging adversarial samples based on the search seed vector in the variable representation space, namely Representation Nearest Neighbor Search (RNNS). Specifically, RNNS, first utilizes publicly available real code datasets to construct a large original substitute set, denoted as $subs_{original}$ . Then, based on the previous attack results, RNNS predicts the search seed vector required for the next round of attacks and efficiently searches for the $k$ nearest substitutes to the seed vector from the large-scale original substitute set to form the $subs_{topk}$ , where $k$ is much smaller than the size of the original substitute set. The generation process of the $subs_{topk}$ does not involve attacking the victim model even once. 
Furthermore, the length and similarity of the substitute must adhere to specific perturbation constraints to prevent excessive deviations from $var$ .", + "bbox": [ + 115, + 165, + 489, + 501 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To evaluate the effectiveness of RNNS, we investigate three pre-trained code models, CodeBERT (Feng et al., 2020), GraphCodeBERT (Guo et al., 2020) and CodeT5 (Wang et al., 2021), and perform the attack on six code tasks in three programming languages, i.e., Java, Python, and C. The results on 18 victim models demonstrate that compared to the approaches MHM and ALERT, RNNS achieves a higher attack success rate (ASR) with a maximum of about $100\\%$ improvement and 18/18 times as the winner. Meanwhile, RNNS needs fewer query times (QT) with 8/18 times as the winners. Furthermore, we analyze the quality of adversarial examples statistically and find that RNNS introduces minor perturbations. In the end, we apply RNNS to attack three defended models and find that our approach outperforms the baselines by up to $32.07\\%$ ASR. We also use adversarial examples to improve the model's robustness through contrastive adversarial training.", + "bbox": [ + 115, + 504, + 489, + 825 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2 Preliminaries", + "text_level": 1, + "bbox": [ + 112, + 838, + 265, + 853 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.1 Textual Code Processing", + "text_level": 1, + "bbox": [ + 112, + 865, + 352, + 881 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The nature of code data (in text format with discrete input space) makes it impossible to feed one", + "bbox": [ + 112, + 887, + 489, + 917 + ], + "page_idx": 1 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{c c} x = \\left( \\begin{array}{c} S _ {0} \\\\ ... \\\\ S _ {i} \\\\ ... \\\\ S _ {j} \\\\ ... 
\\\\ S _ {l} \\end{array} \\right) \\longrightarrow R ^ {l \\times d} = \\left( \\begin{array}{c} \\boldsymbol {v} _ {0} \\\\ ... \\\\ \\boldsymbol {v} _ {i} \\\\ ... \\\\ \\boldsymbol {v} _ {j} \\\\ ... \\\\ \\boldsymbol {v} _ {l} \\end{array} \\right) \\longrightarrow \\boxed {f (\\theta)} \\\\ \\text {Model} & \\longrightarrow \\left( \\begin{array}{c} p _ {0} \\\\ ... \\\\ p _ {g} \\\\ ... \\\\ p _ {k} \\end{array} \\right) \\\\ \\text {Domain Probability} \\\\ \\text {Space} & \\text {Space} \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 512, + 87, + 887, + 186 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Figure 1: One code model demo on the downstream task.", + "bbox": [ + 507, + 204, + 882, + 231 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "code input $x$ directly into deep learning models. Thus, transferring code data to learnable continuous vectors is the first step in source code learning. Dense encoding (Zhelezniak et al., 2020) is one common method used to vectorize textual code data. To do so, first, we need to learn a tokenizer that splits the code text into a token sequence; this step is called Tokenization. After tokenization, code $x$ is represented by a sequence of tokens, namely, $x = (s_0, \\dots, s_j, \\dots, s_l)$ where $s_i$ is one token. Then, the code vocabulary dictionary is built from all the tokens $s_i$ that appear, denoted $\\mathbb{V}$ . After that, every word (token) in $\\mathbb{V}$ is embedded by learned vectors $\\boldsymbol{v}_i$ with dimension $d$ . Here, we use $E^{|\\mathbb{V}| \\times d}$ to represent the embedding matrix for $\\mathbb{V}$ . Finally, $x$ can be converted into an embedding matrix $R^{l \\times d} = (v_0, \\dots, v_j, \\dots, v_l)$ . 
After this code encoding, pre-trained code models based on the transformer take the matrix $R^{l \\times d}$ as inputs and learn the contextual representation of $x$ for downstream tasks via pre-training such as Masked Language Modeling (MLM) and Causal Language Modeling (CLM).", + "bbox": [ + 507, + 239, + 884, + 608 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Figure 1 illustrates the main steps of the code processing models for the downstream classification tasks. First, we tokenize the textual code $x$ into a token sequence that is represented in a discrete integer space. Then, we map the discrete sequence ids into the token vector space $R^{l \\times d}$ . Next, we feed the token vectors into the task model $f(\\theta)$ . $f(\\theta)$ is built on top of pre-trained models. Finally, we can predict the domain probabilities after fine-tuning.", + "bbox": [ + 507, + 609, + 882, + 755 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.2 Problem Statement", + "text_level": 1, + "bbox": [ + 507, + 768, + 707, + 782 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Many critical code tasks, e.g., defect prediction and code clone detection, are classification problems. In this paper, we therefore focus on adversarial attacks for code classification tasks. Considering a code classification task, we use $f(x; \\theta) \\to y: R^{l \\times d} \\to \\mathbb{C} = \\{i | 0 \\leq i \\leq n\\}$ to denote the victim model that maps a code token sequence $x$ to a label $y$ from a label set $\\mathbb{C}$ with size $n$ , where", + "bbox": [ + 507, + 790, + 882, + 917 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "9707", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "$l$ is the sequence length and $d$ is the token vector dimension, and $i$ is one integer. 
By querying the dictionary dense embedding $\\pmb{E}^{|\\mathbb{V}|\\times d}$ , a code token sequence $x = (s_0,\\dots,s_j,\\dots,s_l)$ is vectorized into $\\pmb{R}^{l\\times d}$ . Adversarial attacks for code models create an adversarial example $x^{\\prime}$ by modifying some vulnerable tokens of $x$ with a limited maximum perturbation $\\epsilon$ to change the correct label $y$ to a wrong label $y^\\prime$ . Simply, we get a perturbed $x^{\\prime}$ by modifying some tokens in $(s_0,\\dots,s_j,\\dots,s_l)$ such that $f(x^{\\prime};\\theta)\\neq f(x;\\theta)$ where $x^{\\prime} = x + \\sigma$ and $x^{\\prime}$ has to have the same behavior as $x$ , $+$ represents the perturbation execution, $\\sigma$ is the perturbation code transformation for $(s_0,\\dots,s_j,\\dots,s_l)$ , and $\\sigma \\leq \\epsilon$ . We target the more practical attack scenario, the black-box attack, which requires less information. We assume we cannot access the model parameters and can only utilize the final output of model $f(x;\\theta)$ to conduct the attack.", + "bbox": [ + 112, + 84, + 492, + 390 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3 Methodology", + "text_level": 1, + "bbox": [ + 112, + 403, + 263, + 420 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1 Motivation", + "text_level": 1, + "bbox": [ + 112, + 430, + 247, + 444 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "As mentioned in the introduction, the current methods face three limitations: 1) there is a limited number of substitute variables; 2) there is a high verification cost associated with substitute variables; and 3) the generated adversarial samples often exhibit large perturbations. Among these limitations, the second one holds the utmost significance as it significantly impacts both the first and third limitations. Due to the high cost involved, it becomes challenging to generate diverse adversarial examples within a reasonable budget. 
Additionally, attackers tend to introduce large perturbations without employing any perturbation constraints in order to maximize their attacks.", + "bbox": [ + 112, + 451, + 489, + 675 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "To address these limitations, the first question arises: \"Could we substantially reduce the verification cost while allowing for unrestricted diversity of substitute variables and minimizing perturbations?\" To delve into the reasons behind the second limitation, we need to analyze its underlying factors. The low verification efficiency of the substitute set stems from the fact that each substitute can only be verified by constructing an adversarial sample to replace the original variable and then launching an actual attack on the victim model. This realization leads to the second question: \"Is it feasible to predict the attack effect of a substitute instead of constructing an adversarial sample to attack the victim model?\"", + "bbox": [ + 112, + 677, + 489, + 917 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/30b4588241ab9a49cce19898cce8d7c4104bc2ea7b84b8d6c8b49fd8dc7c39bc.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 522, + 85, + 890, + 370 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Given input code $x$ and one of its variables $var$ , different substitutes can be used to replace it to obtain different adversarial samples. After attacking the victim model, the probability of the label will also change. 
Conversely, if we want to reduce the probability of this label, the third question follows: \"How can we choose relatively better substitutes, i.e., ones that reduce the model confidence, from a large-scale original substitute set?\" If we can forecast their effect, good substitutes can be selected without an actual attack; this is what RNNS implements.", + "bbox": [ + 507, + 400, + 884, + 576 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The core idea of RNNS is to maintain a search seed that is updated based on the attack history. The search seed is employed to search for the next adversarial substitutes that are likely to attack successfully. Since substitutes are discrete and cannot be directly involved in calculations, we first use a pre-trained variable name encoder denoted as $E$ to map substitutes to a unified continuous representation vector space. Then, based on the representation vectors of substitutes that have participated in the attack, we predict the search seed vector $e_{seed}$ for the next round of the substitute selection. Finally, we calculate the similarity between $e_{seed}$ and the representation vectors of substitutes and then select relatively better substitutes. For specific details, please refer to Section 3.2.3.", + "bbox": [ + 507, + 577, + 884, + 835 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.2 Representation Nearest Neighbor Search", + "text_level": 1, + "bbox": [ + 507, + 848, + 877, + 865 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Algorithm 1 shows the workflow of our approach. First, we collect the original substitute set from public real code, following the process described
We extract variables from the input code and sort them according to their uncertainty, referring to Section 3.2.2 (Line 3-4). We replace variables in sequence to form attack samples (Line 5). For a given $var$ , we first initialize the optimal substitute for this current iteration $sub_{cur}$ and the optimal substitute for the previous iteration $sub_{pre}$ to the $var$ . Then, we initialize the accumulated smooth increment of the representation vector $\\Delta e_{smo}$ to a zero vector. $\\Delta e_{smo}$ is used to record the historical representation change of the search seed $e_{seed}$ . We now commence the iterative attack process, as delineated in Line 11. We predict the search seed vector $e_{seed}$ with the process described in Section 3.2.3 (Line 12), and then extract topk substitutes based on $e_{seed}$ to form the candidate substitutes $subs_{topk}$ with the process described in Section 3.2.4 (Line 13). Subsequently, we replace $sub_{cur}$ in $x'$ with each substitute in $subs_{topk}$ to obtain the corresponding temporary adversarial sample $x'_tmp$ (Line 14-15). $x'$ is the current code that we are trying to attack and it is initialized with the original code $x$ . We use $x'_tmp$ to attack the victim model and obtain the probability $prob_y$ of the ground-truth label $y$ and predicted label $y'$ (Line 16). If the probability of the ground-truth label $y$ hits a new low ( $< prob_{min}$ ), we update $x'$ , $sub_{pre}$ , $sub_{cur}$ and $prob_{min}$ (Line 17-22). $prob_{min}$ records the minimum probability of label $y$ during the attack process. 
If $x'_{tmp}$ causes the victim model to predict an incorrect label, the attack is successful and we return the adversarial sample (Line 23-26); otherwise, we proceed to the next iteration until all variables have been iterated over and return the final adversarial sample and attack result (Line 30).", + "bbox": [ + 112, + 84, + 490, + 664 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2.1 Collecting Large Original Substitute Set", + "text_level": 1, + "bbox": [ + 112, + 671, + 487, + 687 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "We have developed a tool for variable extraction that leverages the tree-sitter framework$^{1}$. This tool, henceforth denoted as ExtractVar (see Line 3), operates in three distinct steps. In the first step, we extract all variables from the current dataset and then filter out duplicates. During the second step, each valid variable is tokenized, and we compute the embedding for each token using the variable-name encoder $E$ that is pre-trained on CodeSearchNet$^{2}$. We then apply a mean pooling operation on these tokens to determine the variable's embedding. In the third step, we retain all the chosen variables", + "bbox": [ + 112, + 690, + 490, + 883 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "along with their associated embeddings as the initial substitute set, represented as $subs_{original}$ .", + "bbox": [ + 507, + 84, + 885, + 118 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2.2 Computing Uncertainty", + "text_level": 1, + "bbox": [ + 507, + 124, + 757, + 140 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Given a specific code $x$ , we replace each instance of $var \\in x$ with a set of predefined fixed variables $VarArray$ , resulting in a set of mutated codes denoted as $X_{var}^{mutated}$ . These mutated codes are subsequently utilized to query the victim model, allowing us to obtain the probability distribution for each class. 
A greater variance in the distribution signifies increased uncertainty for $var$ , suggesting that $var$ should be prioritized for replacement. The uncertainty associated with $var$ is defined as follows:", + "bbox": [ + 507, + 143, + 885, + 319 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathrm {uncertainty} _ {var} = \\frac {1}{C} \\sum_ {i = 1} ^ {C} \\mathrm {variance} (P _ {var} ^ {i})\n$$\n", + "text_format": "latex", + "bbox": [ + 534, + 326, + 857, + 370 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $P_{var}^{i} = \\{p_{var}^{i}(x) | \\forall x \\in X_{var}^{mutated}\\}$ , $C$ is the number of labels, $p_{var}^{i}(x)$ is the model probability for label $i$ given the mutated code $x$ , and variance denotes the standard variance function. A larger and more diverse $X_{var}^{mutated}$ ensures a closer approximation of uncertainty to the true value. It is important to note, however, that the magnitude of the change length must not be excessively large, as this would result in all probability changes converging to a single point. This is because samples subjected to large changes deviate significantly from the original, leading to a substantial decrease in the model confidence levels. Subsequently, we arrange the variables in descending order based on their uncertainties. The greater the uncertainty of a variable, the more valuable it is for attack. This process is denoted as RankVarsWithUncertainty at line 4. 
In our implementation, the size of this variable array VarArray is 16, and the variable length ranges from 1 to 5.", + "bbox": [ + 507, + 380, + 885, + 703 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2.3 Predicting Search Seed", + "text_level": 1, + "bbox": [ + 507, + 712, + 752, + 727 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "To filter out superior substitutes from the substantial $subs_{original}$ , it becomes necessary to predict the search seed within the substitute representation vector space. Given the optimal substitute $sub_{cur}$ of the current round, the optimal substitute $sub_{pre}$ from the previous round, and the accumulated smooth increment of the representation vector, denoted as $\\Delta e_{smo}$ , from all preceding rounds of iteration, we initially compute the increment of the representation vector in the current round, $\\Delta e$ :", + "bbox": [ + 507, + 731, + 885, + 891 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\Delta \\boldsymbol {e} = E (sub _ {cur}) - E (sub _ {pre})\n$$\n", + "text_format": "latex", + "bbox": [ + 579, + 902, + 811, + 920 + ], + "page_idx": 3 + }, + { + "type": "page_footnote", + "text": "1https://tree-sitter.github.io/tree-sitter", + "bbox": [ + 134, + 890, + 389, + 904 + ], + "page_idx": 3 + }, + { + "type": "page_footnote", + "text": "2https://huggingface.co/datasets/code_search_net", + "bbox": [ + 136, + 904, + 421, + 917 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "9709", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 3 + }, + { + "type": "table", + "img_path": "images/17b33fb314e463991cd8f0c9d307ef116209e0b26bc4871ae46163d3010a654f.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table><tr><td>Task</td><td>Train / Val / Test</td><td>CodeBERT</td><td>GraphCodeBERT</td><td>CodeT5</td></tr>
<tr><td>Defect</td><td>21,854 / 2,732 / 2,732</td><td>63.76</td><td>63.65</td><td>67.02</td></tr>
<tr><td>Clone</td><td>90,102 / 4,000 / 4,000</td><td>96.97</td><td>97.36</td><td>97.84</td></tr>
<tr><td>Authorship</td><td>528 / - / 132</td><td>82.57</td><td>77.27</td><td>88.63</td></tr>
<tr><td>C1000</td><td>320,000 / 80,000 / 100,000</td><td>82.53</td><td>83.79</td><td>84.46</td></tr>
<tr><td>Python800</td><td>153,600 / 38,400 / 48,000</td><td>96.39</td><td>96.29</td><td>96.79</td></tr>
<tr><td>Java250</td><td>48,000 / 11,909 / 15,000</td><td>96.91</td><td>97.27</td><td>97.72</td></tr></table>
", + "bbox": [ + 117, + 82, + 489, + 143 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Table 1: Datasets and Victim Model Performance (Accuracy, %).", + "bbox": [ + 112, + 159, + 487, + 187 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $E$ is the variable name encoder, trained independently on CodeSearchNet by masked language modelling so that RNNS does not depend on the victim downstream-task models. Then we update $\\Delta e_{smo}$ :", + "bbox": [ + 112, + 214, + 487, + 294 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\Delta \\mathbf {e} _ {smo} = (1 - \\alpha) \\Delta \\mathbf {e} _ {smo} + \\alpha \\Delta \\mathbf {e}\n$$\n", + "text_format": "latex", + "bbox": [ + 174, + 309, + 426, + 325 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\\alpha$ is a smoothing rate between 0 and 1. Finally, we predict the search seed $e_{\\text{seed}}$ :", + "bbox": [ + 112, + 338, + 487, + 370 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol {e} _ {\\text {seed}} = E \\left(\\operatorname {sub} _ {\\text {cur}}\\right) + \\Delta \\boldsymbol {e} _ {\\text {smo}}\n$$\n", + "text_format": "latex", + "bbox": [ + 189, + 385, + 411, + 401 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "This process is denoted as PredictSeed at line 12.", + "bbox": [ + 112, + 416, + 487, + 431 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.2.4 Searching Top-K Substitutes", + "text_level": 1, + "bbox": [ + 112, + 442, + 400, + 457 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Initially, we select substitutes from $subs_{original}$ that comply with two constraints: 1) $1 - sim(E(sub), E(var)) < \\epsilon$ and 2) $|len(sub) - len(var)| < \\delta$ , where $var$ refers to the original variable in the input code that is to be replaced, $sim(.)$ is the similarity calculation function. 
$E(.)$ is the variable name encoder, and $len(.)$ is used to calculate the length of the variable name. Then, we calculate the similarity between the search seed $e_{seed}$ and the substitutes that are filtered by the two constraints and select the $k$ most similar substitutes to form $subs_{topk}$ . This process is denoted as SearchTopkSub at line 13. In our experiment, $\\epsilon = 0.15$ , $\\delta = 4$ , $k = 60$ , $sim(.)$ is cosine similarity.", + "bbox": [ + 112, + 462, + 489, + 703 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4 Experimental Setup", + "text_level": 1, + "bbox": [ + 112, + 715, + 321, + 733 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Dataset and Model. To study the effectiveness and efficiency of RNNS, we conduct experiments on three popular programming languages (C, Python, and Java). For the datasets, we employed six widely studied open-source datasets that cover four important code tasks. Specifically, BigCloneBench (Wang et al., 2020) is one code clone detection dataset named Clone. Devign (Zhou et al., 2019) is a dataset used for vulnerability detection, named Defect. For authorship prediction, we use the dataset provided by (Alsulami et al., 2017).", + "bbox": [ + 112, + 741, + 489, + 917 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Besides, we utilize three problem-solving classification tasks, Java250, Python800, and C1000, provided by ProjectCodeNet (Puri et al., 2021). For all the datasets (except for authorship prediction which does not have enough data samples), we follow the original papers to split the data into the training set, validation set, and test set. 
Authorship prediction only has two split parts, training data and test data.", + "bbox": [ + 507, + 84, + 882, + 227 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "For the code models, we follow the previous work (Yang et al., 2022) and investigate two pretrained models CodeBERT (Feng et al., 2020), and GraphCodeBERT (Guo et al., 2020). Besides, we add one more powerful model CodeT5 (Wang et al., 2021) in our study. Table 1 summarizes the details of our employed datasets and fine-tuned models.", + "bbox": [ + 507, + 252, + 882, + 365 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Evaluation Metric. To evaluate the effectiveness of adversarial attack methods, we employ the commonly used attack success rate (ASR) (Yang et al., 2022) as the measurement. To evaluate the efficiency of the attack methods, we use query times (QT) to check the average number of querying the victim model for one input code. Finally, we use the change of replaced-variable length and the number of replaced variables to study the quality/perturbation of adversarial examples. A smaller score means the attack method can generate adversarial examples with less perturbation injection.", + "bbox": [ + 507, + 388, + 882, + 581 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Baseline. We compare RNNS with two black-box attack baselines, MHM (Zhang et al., 2020) and NaturalAttack (ALERT) (Yang et al., 2022). MHM is a sampling search-based black-box attack that generates the substitutes from the vocabulary based on lexical rules for identifiers. MHM employs synthesized tokens as the candidates of substitutes, which could introduce meaningless variable names. ALERT is a recently proposed attack method that combines greedy attack and genetic algorithm to find the substitutes. 
We also use two textual attack algorithms PSO (Zang et al., 2020) and LSH (Maheshwary et al., 2021) as minor baselines, since they are not designed for code models.", + "bbox": [ + 507, + 605, + 882, + 829 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Implementation. We implement our approach in PyTorch and run all experiments on 32G-v100 GPUs. We reuse the source code from the baselines. We make our implementation publicly available.", + "bbox": [ + 507, + 854, + 882, + 917 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "9710", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/0e919ba263daf8956316645eed9a72f94dc933a90beb1976bd24e049559a6608.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table><tr><td rowspan="2">Task+Model</td><td colspan="2">ALERT</td><td colspan="2">MHM</td><td colspan="2">RNNS</td></tr>
<tr><td>ASR</td><td>QT</td><td>ASR</td><td>QT</td><td>ASR</td><td>QT</td></tr>
<tr><td>Clone+CodeBert</td><td>28.67</td><td>2155.39</td><td>39.66</td><td>972.15</td><td>46.50</td><td>666.48</td></tr>
<tr><td>Clone+GraphCodeBert</td><td>10.40</td><td>1466.68</td><td>9.58</td><td>490.99</td><td>41.28</td><td>1122.01</td></tr>
<tr><td>Clone+CodeT5</td><td>29.20</td><td>2359.70</td><td>38.79</td><td>1069.06</td><td>39.61</td><td>895.79</td></tr>
<tr><td>Defect+CodeBert</td><td>52.29</td><td>1079.68</td><td>50.51</td><td>862.18</td><td>69.18</td><td>588.35</td></tr>
<tr><td>Defect+GraphCodeBert</td><td>74.29</td><td>621.77</td><td>75.19</td><td>539.93</td><td>81.63</td><td>404.73</td></tr>
<tr><td>Defect+CodeT5</td><td>76.66</td><td>721.02</td><td>86.51</td><td>344.08</td><td>89.45</td><td>344.29</td></tr>
<tr><td>Authorship+CodeBert</td><td>34.98</td><td>682.57</td><td>64.70</td><td>775.11</td><td>73.39</td><td>1029.59</td></tr>
<tr><td>Authorship+GraphCodeBert</td><td>58.82</td><td>1227.36</td><td>75.49</td><td>632.10</td><td>80.39</td><td>696.64</td></tr>
<tr><td>Authorship+CodeT5</td><td>64.95</td><td>1078.40</td><td>66.97</td><td>715.89</td><td>71.79</td><td>970.44</td></tr>
<tr><td>Java250+CodeBert</td><td>50.50</td><td>958.96</td><td>74.03</td><td>961.60</td><td>75.12</td><td>815.91</td></tr>
<tr><td>Java250+GraphCodeBert</td><td>46.74</td><td>1026.15</td><td>46.05</td><td>946.52</td><td>72.30</td><td>853.74</td></tr>
<tr><td>Java250+CodeT5</td><td>52.04</td><td>1189.42</td><td>30.59</td><td>1107.95</td><td>63.80</td><td>1049.46</td></tr>
<tr><td>Python800+CodeBert</td><td>58.30</td><td>513.63</td><td>56.67</td><td>919.37</td><td>77.88</td><td>514.19</td></tr>
<tr><td>Python800+GraphCodeBert</td><td>51.87</td><td>577.70</td><td>54.15</td><td>917.92</td><td>71.42</td><td>730.14</td></tr>
<tr><td>Python800+CodeT5</td><td>52.84</td><td>777.20</td><td>36.95</td><td>1127.44</td><td>69.07</td><td>662.28</td></tr>
<tr><td>C1000+CodeBert</td><td>53.50</td><td>525.43</td><td>59.75</td><td>340.88</td><td>72.96</td><td>537.76</td></tr>
<tr><td>C1000+GraphCodeBert</td><td>52.68</td><td>566.18</td><td>45.93</td><td>837.09</td><td>72.23</td><td>634.27</td></tr>
<tr><td>C1000+CodeT5</td><td>47.86</td><td>843.33</td><td>36.45</td><td>668.15</td><td>59.00</td><td>697.06</td></tr>
<tr><td>Count</td><td>0/18</td><td>4/18</td><td>0/18</td><td>6/18</td><td>18/18</td><td>8/18</td></tr></table>
", + "bbox": [ + 268, + 80, + 731, + 322 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Table 2: Comparison results with MHM and ALERT, ASR %. Count: the number of best results achieved.", + "bbox": [ + 136, + 331, + 858, + 346 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "5 Results Analysis", + "text_level": 1, + "bbox": [ + 112, + 357, + 290, + 373 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "5.1 Attack Effectiveness and Efficiency", + "text_level": 1, + "bbox": [ + 112, + 385, + 435, + 401 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We compare RNNS with two methods, MHM (Zhang et al., 2020) and NaturalAttack (ALERT) (Yang et al., 2022), on six datasets and 18 victim models that have been fine-tuned for the downstream tasks. Table 2 shows the comparison results, where the last row, Count, indicates how many times each method achieves the best result. RNNS achieves the best ASR in 18/18 cases and the lowest QT in 8/18 cases; both indicators are better than those of the baselines. The two baselines never achieve the best ASR on any victim model or dataset, and ALERT and MHM achieve the lowest QT only 4 and 6 times out of 18, respectively. We conclude that, for effectiveness and efficiency, RNNS outperforms ALERT and MHM in all cases. In particular, MHM and ALERT largely fail to attack GraphCodeBERT on the BigClone dataset, with only $9.58\\%$ and $10.4\\%$ ASR respectively, while RNNS achieves more than $40\\%$ ASR. RNNS also achieves almost twice the ASR of MHM on Java250+CodeT5 and Python800+CodeT5.", + "bbox": [ + 112, + 409, + 487, + 778 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "It should be noted that high ASR is not due to large QT.
As shown in Table 2, the three groups of experiments with the largest QT are Clone+GraphCodeBert, Java250+CodeT5, and Authorship+CodeBert, with ASRs of $41.28\\%$, $63.80\\%$, and $73.39\\%$, respectively, which are not the highest. On the contrary, Defect+CodeT5 has the highest", + "bbox": [ + 112, + 780, + 489, + 892 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "ASR of $89.45\\%$, but the smallest QT. Therefore, there is no absolute causal relationship between QT and ASR.", + "bbox": [ + 507, + 357, + 882, + 405 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "5.2 Perturbation of Adversarial Example", + "text_level": 1, + "bbox": [ + 507, + 425, + 848, + 441 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We study the quality of the adversarial examples to check whether RNNS can generate normal-looking code, e.g., avoiding naively increasing variable name length. To do so, we first count the average lengths of the original and adversarial variables, as shown in Table 3, and compute the mean and variance of their difference. We also compute the average number of replaced variables for successful attacks, as shown in Table 4. Low values mean the inputs are modified less, i.e., higher quality.", + "bbox": [ + 505, + 450, + 882, + 626 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "In Table 3, the 2nd, 5th, and 8th columns give the average length of the original variables that are replaced (Var Len). The 3rd, 6th, and 9th columns give the average length of the adversarial variables (Adv Var Len). The 4th, 7th, and 10th columns give the average and variance (mean $\\pm$ variance) of the absolute length difference between original and adversarial variables (Difference). Comparing the 2nd and 5th columns, we observe that MHM prefers to replace longer variables while RNNS tends to replace shorter ones.
Meanwhile, the change in variable length from RNNS is smaller than that from MHM: MHM introduces an average length difference of 3.39-6.82, while RNNS only 2.02-2.54, and MHM has much higher variances in the length change. ALERT uses shorter adversarial variable names than RNNS", + "bbox": [ + 507, + 629, + 882, + 917 + ], + "page_idx": 5 + }, + { + "type": "page_footnote", + "text": "3https://github.com/18682922316/RNNS-for-code-attack", + "bbox": [ + 134, + 903, + 447, + 917 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "9711", + "bbox": [ + 480, + 927, + 517, + 940 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/972dbb7978d3bef7c97b734d46e3c3771bbc4db71d27ef228eb675815d0a1b11.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table>
<tr><td>Task+Model</td><td>RNNS Var Len</td><td>RNNS Adv Var Len</td><td>RNNS Difference</td><td>MHM Var Len</td><td>MHM Adv Var Len</td><td>MHM Difference</td><td>ALERT Var Len</td><td>ALERT Adv Var Len</td><td>ALERT Difference</td></tr>
<tr><td>Clone+CodeBert</td><td>6.12</td><td>6.79</td><td>2.35 ± 4.50</td><td>6.47</td><td>10.6</td><td>6.34 ± 10.98</td><td>5.91</td><td>6.21</td><td>1.32 ± 2.02</td></tr>
<tr><td>Clone+GraphCodeBert</td><td>6.32</td><td>6.97</td><td>2.54 ± 6.43</td><td>6.58</td><td>10.41</td><td>6.82 ± 21.67</td><td>5.50</td><td>5.93</td><td>1.45 ± 2.49</td></tr>
<tr><td>Clone+CodeT5</td><td>6.45</td><td>6.69</td><td>2.51 ± 8.30</td><td>6.46</td><td>10.46</td><td>6.17 ± 25.78</td><td>6.25</td><td>6.61</td><td>1.32 ± 2.72</td></tr>
<tr><td>Defect+CodeBert</td><td>4.64</td><td>5.44</td><td>2.08 ± 2.49</td><td>4.44</td><td>9.59</td><td>6.57 ± 28.78</td><td>4.85</td><td>5.06</td><td>1.36 ± 1.93</td></tr>
<tr><td>Defect+GraphCodeBert</td><td>4.08</td><td>5.34</td><td>2.13 ± 1.83</td><td>4.37</td><td>9.73</td><td>6.48 ± 26.51</td><td>4.47</td><td>5.22</td><td>1.33 ± 1.83</td></tr>
<tr><td>Defect+CodeT5</td><td>3.95</td><td>5.17</td><td>2.03 ± 1.93</td><td>4.33</td><td>9.81</td><td>6.59 ± 29.98</td><td>4.36</td><td>5.01</td><td>1.27 ± 1.57</td></tr>
<tr><td>Authorship+CodeBert</td><td>3.81</td><td>5.18</td><td>2.28 ± 1.56</td><td>3.97</td><td>7.94</td><td>5.45 ± 16.72</td><td>4.42</td><td>5.35</td><td>1.40 ± 2.25</td></tr>
<tr><td>Authorship+GraphCodeBert</td><td>3.69</td><td>5.23</td><td>2.36 ± 1.71</td><td>4.39</td><td>7.64</td><td>5.24 ± 15.38</td><td>3.74</td><td>4.46</td><td>1.22 ± 1.82</td></tr>
<tr><td>Authorship+CodeT5</td><td>3.95</td><td>5.18</td><td>2.03 ± 2.66</td><td>3.95</td><td>7.98</td><td>5.59 ± 20.94</td><td>3.81</td><td>4.50</td><td>1.22 ± 1.62</td></tr>
<tr><td>Java250+CodeBert</td><td>2.35</td><td>4.22</td><td>2.11 ± 1.02</td><td>3.21</td><td>6.50</td><td>4.34 ± 15.20</td><td>3.22</td><td>3.65</td><td>0.94 ± 1.63</td></tr>
<tr><td>Java250+GraphCodeBert</td><td>2.48</td><td>4.31</td><td>2.13 ± 1.07</td><td>3.13</td><td>6.59</td><td>4.42 ± 14.84</td><td>3.05</td><td>3.50</td><td>0.98 ± 1.54</td></tr>
<tr><td>Java250+CodeT5</td><td>2.76</td><td>4.47</td><td>2.10 ± 1.17</td><td>3.20</td><td>6.54</td><td>4.33 ± 14.60</td><td>3.16</td><td>7.31</td><td>4.41 ± 18.73</td></tr>
<tr><td>Python800+CodeBert</td><td>1.50</td><td>3.54</td><td>2.21 ± 1.02</td><td>1.97</td><td>5.11</td><td>3.64 ± 9.06</td><td>1.78</td><td>2.27</td><td>0.64 ± 1.34</td></tr>
<tr><td>Python800+GraphCodeBert</td><td>1.88</td><td>3.90</td><td>2.18 ± 0.78</td><td>1.99</td><td>6.01</td><td>4.46 ± 16.52</td><td>1.80</td><td>2.33</td><td>0.76 ± 1.30</td></tr>
<tr><td>Python800+CodeT5</td><td>1.65</td><td>3.59</td><td>2.13 ± 0.95</td><td>1.97</td><td>4.95</td><td>3.49 ± 8.18</td><td>1.88</td><td>5.84</td><td>4.10 ± 12.64</td></tr>
<tr><td>C1000+CodeBert</td><td>1.58</td><td>3.44</td><td>2.08 ± 0.88</td><td>2.41</td><td>5.05</td><td>3.65 ± 12.02</td><td>2.13</td><td>2.52</td><td>0.67 ± 1.17</td></tr>
<tr><td>C1000+GraphCodeBert</td><td>1.60</td><td>3.59</td><td>2.10 ± 0.85</td><td>2.39</td><td>5.35</td><td>3.90 ± 12.98</td><td>2.18</td><td>2.67</td><td>0.66 ± 1.23</td></tr>
<tr><td>C1000+CodeT5</td><td>1.38</td><td>3.33</td><td>2.02 ± 0.85</td><td>2.36</td><td>4.82</td><td>3.39 ± 10.98</td><td>2.10</td><td>6.56</td><td>4.74 ± 13.24</td></tr>
</table>
", + "bbox": [ + 147, + 80, + 848, + 294 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Table 3: Replaced-variable length comparison, mean $\\pm$ variance.", + "bbox": [ + 270, + 304, + 724, + 318 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "with less change, because it uses a pre-trained model to generate replacements that are close to the replaced variables.", + "bbox": [ + 112, + 331, + 487, + 378 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Table 4 reports statistics on the number of replaced variables. RNNS replaces an average of about 3.6 variables with a smaller variance (3.4-4.6), while MHM modifies an average of about 5.4 variables with a larger variance $(\\geq 11.14)$. ALERT replaces even more variables than RNNS and MHM. Overall, RNNS introduces perturbation less than or equal to the baselines in terms of both length change and the number of changes.", + "bbox": [ + 112, + 380, + 489, + 539 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Figure 2 shows one example from the Java250 dataset where RNNS, MHM, and ALERT all attack successfully. The changes are highlighted with shaded markers. RNNS renames only one variable, $\\mathbf{b}$ to $\\mathbf{h}$; ALERT renames two variables; MHM renames almost all variables and also prefers longer names.", + "bbox": [ + 112, + 542, + 489, + 653 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.3 Ablation Study", + "text_level": 1, + "bbox": [ + 112, + 670, + 280, + 686 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We remove the two search constraints in Section 3.2.4 and denote this variant of RNNS as RNNS-Unlimited. Table 5 shows the comparison between RNNS-Unlimited and RNNS. RNNS-Unlimited ranks first on all tasks in terms of ASR; removing the constraints improves ASR by a maximum of $8.35\\%$ and a minimum of about $2\\%$.
For QT, RNNS-Unlimited loses only 3 times out of 18 evaluations. The improvement of RNNS-Unlimited in ASR and QT is not surprising, because it can search for adversarial examples among non-similar real names and use very long variable names.", + "bbox": [ + 112, + 694, + 490, + 917 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.4 Attack Defended Model and Retraining", + "text_level": 1, + "bbox": [ + 507, + 331, + 867, + 347 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Attack Defended Model. We employ RNNS and MHM to attack the three defended models provided by ALERT (Yang et al., 2022). These models are hardened by adversarial fine-tuning. Table 6 presents the results: RNNS outperforms MHM on two tasks, and MHM is better on one. This experimental setting is actually unfavorable to RNNS, because ALERT (Yang et al., 2022) uses replacements from pre-trained models, which implicitly impose a semantic constraint.", + "bbox": [ + 505, + 351, + 885, + 513 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Retraining. We use the adversarial examples from RNNS to retrain the CodeBERT victim models by contrastive adversarial learning, on three datasets: Defect, Authorship, and Java250. We generate the adversarial examples on their whole training sets. Table 7 presents the results: all approaches achieve much lower ASR than before, showing that RNNS adversarial examples can improve model robustness through contrastive adversarial retraining.
If we compare Defect/Authorship+CodeBERT in Table 7 and Table 6, we find that the models retrained with RNNS are more robust than the models from ALERT, since they have much lower ASRs.", + "bbox": [ + 507, + 514, + 885, + 739 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.5 RNNS vs Textual Attack Methods", + "text_level": 1, + "bbox": [ + 507, + 752, + 821, + 766 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "To compare RNNS with textual attack methods, we conducted attack experiments on three datasets using PSO (Zang et al., 2020) and LSH (Maheshwary et al., 2021). The three datasets, Defect, Authorship, and Java250, represent three languages: C, Python, and Java, respectively. For fairness, PSO and LSH use the same search space as RNNS.", + "bbox": [ + 507, + 774, + 884, + 901 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "As shown in Table 8, the QT of the PSO algorithm", + "bbox": [ + 527, + 903, + 880, + 917 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "9712", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/cbe72181236b7b03ac2707b60e04bdf414d0b44d39c985b525b4408dbd2f25d7.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table>
<tr><td>Task</td><td>CodeBERT RNNS</td><td>CodeBERT MHM</td><td>CodeBERT ALERT</td><td>GraphCodeBERT RNNS</td><td>GraphCodeBERT MHM</td><td>GraphCodeBERT ALERT</td><td>CodeT5 RNNS</td><td>CodeT5 MHM</td><td>CodeT5 ALERT</td></tr>
<tr><td>Clone</td><td>3.55 ± 4.60</td><td>6.72 ± 16.57</td><td>6.86 ± 18.85</td><td>4.12 ± 4.94</td><td>6.21 ± 15.13</td><td>6.95 ± 18.99</td><td>3.43 ± 5.00</td><td>5.68 ± 14.01</td><td>7.65 ± 25.57</td></tr>
<tr><td>Defect</td><td>3.39 ± 4.96</td><td>2.78 ± 7.89</td><td>3.49 ± 3.99</td><td>2.67 ± 1.75</td><td>2.84 ± 9.50</td><td>4.10 ± 11.05</td><td>2.51 ± 1.45</td><td>2.16 ± 3.58</td><td>3.49 ± 3.99</td></tr>
<tr><td>Authorship</td><td>4.24 ± 7.47</td><td>7.52 ± 25.82</td><td>6.60 ± 22.96</td><td>3.65 ± 3.32</td><td>6.67 ± 22.29</td><td>7.75 ± 33.12</td><td>4.39 ± 9.00</td><td>5.72 ± 13.02</td><td>6.06 ± 18.74</td></tr>
<tr><td>Java250</td><td>3.87 ± 4.70</td><td>7.11 ± 21.18</td><td>7.82 ± 28.96</td><td>3.87 ± 4.25</td><td>6.41 ± 16.24</td><td>7.83 ± 25.06</td><td>4.71 ± 6.87</td><td>7.04 ± 15.29</td><td>8.92 ± 25.97</td></tr>
<tr><td>Python800</td><td>3.06 ± 1.87</td><td>5.21 ± 12.28</td><td>4.96 ± 8.47</td><td>4.12 ± 3.68</td><td>5.00 ± 10.83</td><td>4.63 ± 6.76</td><td>3.57 ± 3.04</td><td>5.29 ± 13.51</td><td>6.18 ± 11.45</td></tr>
<tr><td>C1000</td><td>3.00 ± 1.86</td><td>4.42 ± 7.49</td><td>4.13 ± 5.59</td><td>3.37 ± 2.38</td><td>5.14 ± 7.30</td><td>4.88 ± 6.24</td><td>3.39 ± 2.48</td><td>5.20 ± 7.43</td><td>5.43 ± 6.99</td></tr>
<tr><td>mean</td><td>3.52 ± 4.24</td><td>5.63 ± 15.21</td><td>5.65 ± 14.80</td><td>3.63 ± 3.39</td><td>5.38 ± 13.55</td><td>6.02 ± 16.87</td><td>3.67 ± 4.64</td><td>5.18 ± 11.14</td><td>6.29 ± 15.45</td></tr>
</table>
", + "bbox": [ + 129, + 80, + 867, + 180 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/4e8ab953e457c12148e74c5d4481000d71071bf77ba1781d8277c663e7ff9cea.jpg", + "table_caption": [ + "Table 4: Replaced-variable number comparison, mean $\\pm$ variance" + ], + "table_footnote": [], + "table_body": "
Original Code:
public static void main(String[] args) {
Scanner obj = new Scanner(System.in);
int a = obj.nextInt();
int b = obj.nextInt();
int out = 1;
int ans = 0;
while (out < b) {
out--;
out = out + a;
ans++;
}
System.out.println(ans);
}
Adversarial Code from RNNS: variable b is renamed to h, i.e., while (out < h).
Adversarial Code from MHM: variables out, b, and ans are renamed, i.e., while (tempOp < colArr) and System.out.println(number_array).
Adversarial Code from ALERT: two variables are renamed.
", + "bbox": [ + 129, + 217, + 860, + 332 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/771539b6beaf6d7585994be7cb1aeb67e9d57accf249b095eeaef2a5a545fe47.jpg", + "table_caption": [ + "Figure 2: Case study. Original vs. RNNS vs. MHM vs. ALERT" + ], + "table_footnote": [], + "table_body": "
<table>
<tr><td>Task</td><td>CodeBERT RNNS-Unlimited ASR</td><td>CodeBERT RNNS-Unlimited QT</td><td>CodeBERT RNNS ASR</td><td>CodeBERT RNNS QT</td><td>GraphCodeBERT RNNS-Unlimited ASR</td><td>GraphCodeBERT RNNS-Unlimited QT</td><td>GraphCodeBERT RNNS ASR</td><td>GraphCodeBERT RNNS QT</td><td>CodeT5 RNNS-Unlimited ASR</td><td>CodeT5 RNNS-Unlimited QT</td><td>CodeT5 RNNS ASR</td><td>CodeT5 RNNS QT</td></tr>
<tr><td>Defect</td><td>72.29</td><td>590.98</td><td>69.18</td><td>588.35</td><td>87.77</td><td>381.82</td><td>81.63</td><td>404.73</td><td>91.64</td><td>338.41</td><td>89.45</td><td>344.29</td></tr>
<tr><td>Clone</td><td>50.66</td><td>955.97</td><td>46.50</td><td>666.48</td><td>48.16</td><td>1105.11</td><td>41.28</td><td>1122.01</td><td>41.38</td><td>920.65</td><td>39.61</td><td>895.79</td></tr>
<tr><td>Authorship</td><td>91.74</td><td>447.68</td><td>73.39</td><td>1029.59</td><td>91.17</td><td>438.69</td><td>80.39</td><td>696.64</td><td>88.88</td><td>620.56</td><td>71.79</td><td>970.44</td></tr>
<tr><td>C1000</td><td>74.70</td><td>502.02</td><td>72.96</td><td>537.76</td><td>76.82</td><td>498.64</td><td>72.23</td><td>634.27</td><td>61.96</td><td>704.95</td><td>59.00</td><td>697.06</td></tr>
<tr><td>Python800</td><td>83.90</td><td>460.92</td><td>77.88</td><td>514.19</td><td>79.00</td><td>496.30</td><td>71.42</td><td>730.14</td><td>72.69</td><td>646.59</td><td>69.07</td><td>662.28</td></tr>
<tr><td>Java250</td><td>79.70</td><td>760.97</td><td>75.12</td><td>815.91</td><td>81.94</td><td>744.57</td><td>72.30</td><td>853.74</td><td>75.52</td><td>910.97</td><td>63.80</td><td>1049.46</td></tr>
<tr><td>Count</td><td>6/6</td><td>4/6</td><td>0/6</td><td>2/6</td><td>6/6</td><td>6/6</td><td>0/6</td><td>0/6</td><td>6/6</td><td>5/6</td><td>0/6</td><td>1/6</td></tr>
</table>
", + "bbox": [ + 154, + 376, + 842, + 495 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/7860bc92b68fbb77ab00f2beece6397a0ec64ac8885eb875e9adf450ed8b2bc2.jpg", + "table_caption": [ + "Table 5: Results of ablation study, before and after removing constraints, ASR %." + ], + "table_footnote": [], + "table_body": "
<table>
<tr><td>Defended Model</td><td>RNNS ASR</td><td>RNNS QT</td><td>MHM ASR</td><td>MHM QT</td></tr>
<tr><td>Clone+CodeBert</td><td>12.90</td><td>958.35</td><td>28.17</td><td>1245.75</td></tr>
<tr><td>Defect+CodeBert</td><td>95.37</td><td>282.20</td><td>92.23</td><td>283.66</td></tr>
<tr><td>Authorship+CodeBert</td><td>51.88</td><td>1524.40</td><td>43.26</td><td>1026.08</td></tr>
</table>
", + "bbox": [ + 137, + 533, + 467, + 594 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/206b7875bdf32f367983f1f679ef90c30f57bb0719d1fb24d992aca8532fb2eb.jpg", + "table_caption": [ + "Table 6: Attack defended models, ASR %." + ], + "table_footnote": [], + "table_body": "
<table>
<tr><td>Task</td><td>ACC</td><td>ASR(RNNS)</td><td>ASR(MHM)</td><td>ASR(ALERT)</td></tr>
<tr><td>Authorship</td><td>90.62</td><td>19.81</td><td>23.58</td><td>14.28</td></tr>
<tr><td>Defect</td><td>65.14</td><td>40.46</td><td>23.69</td><td>24.53</td></tr>
<tr><td>Java250</td><td>97.63</td><td>19.67</td><td>6.65</td><td>42.91</td></tr>
</table>
", + "bbox": [ + 117, + 630, + 489, + 678 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "is 4.22-6.7 times that of RNNS, and the ASR of the PSO algorithm is $5.55\\% - 27.82\\%$ lower than that of RNNS. It can be inferred that, for code variable attacks, combinatorial optimization is inefficient when the substitute set of variables is relatively large. There are two main reasons. Firstly, code segments are generally long, and the substitute set of code variables is much larger than the synonym set of natural language words. Secondly, the impact of variable replacement on code semantics is smaller than that", + "bbox": [ + 112, + 741, + 487, + 917 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/1bc38d7614c0281243716bf757f4a186790439153b00aace795c5a254f5daba8.jpg", + "table_caption": [ + "Table 7: Results of contrastive adversarial retraining, model: CodeBERT." + ], + "table_footnote": [], + "table_body": "
<table>
<tr><td>Task+Model</td><td>RNNS ASR</td><td>RNNS QT</td><td>PSO ASR</td><td>PSO QT</td><td>LSH ASR</td><td>LSH QT</td></tr>
<tr><td>Defect+CodeBert</td><td>69.18</td><td>588.35</td><td>63.63</td><td>3945.04</td><td>26.62</td><td>321.78</td></tr>
<tr><td>Authorship+CodeBert</td><td>73.39</td><td>1029.59</td><td>52.29</td><td>4350.00</td><td>19.26</td><td>458.55</td></tr>
<tr><td>Java250+CodeBert</td><td>75.12</td><td>815.91</td><td>47.3</td><td>5076.02</td><td>31.58</td><td>397.05</td></tr>
</table>
", + "bbox": [ + 515, + 533, + 878, + 586 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Table 8: RNNS vs PSO and LSH, ASR %.", + "bbox": [ + 549, + 596, + 838, + 609 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "of word replacement on natural language semantics.", + "bbox": [ + 507, + 623, + 882, + 639 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "RNNS's QT is 1.8-2.2 times that of LSH, i.e., LSH reduces QT significantly. However, LSH's ASR is $42.56\\% - 54.13\\%$ lower than that of RNNS. For code variable attacks, LSH is highly efficient, but its effectiveness is relatively low. One possible reason for LSH's low ASR is that the distribution of adversarial samples across buckets is uneven.", + "bbox": [ + 507, + 655, + 882, + 766 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6 Related Work", + "text_level": 1, + "bbox": [ + 507, + 780, + 665, + 795 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Adversarial attacks for code models have been widely studied (Yang et al., 2022; Liu et al., 2023a; Li et al., 2023; Jha and Reddy, 2023). These works can be generally categorized into black-box attacks and white-box attacks. A black-box attack for code models queries the model outputs and selects the substitutes using a score function. For example,", + "bbox": [ + 505, + 806, + 882, + 917 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "9713", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/771b4e0040f235590205eb87e09d7749767d61c59325e1d876bb32b805c7b3f4.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table>
<tr><td>Algorithm</td><td>Substitutes Size</td><td>Substitutes Source</td><td>Replacement Position</td><td>Substitutes Selection</td></tr>
<tr><td>MHM</td><td>medium</td><td>vocabulary</td><td>random</td><td>random sample</td></tr>
<tr><td>ALERT</td><td>small</td><td>model generation</td><td>importance score</td><td>traverse</td></tr>
<tr><td>RNNS</td><td>large</td><td>real public variables</td><td>uncertainty score</td><td>efficient constrained search</td></tr>
</table>
", + "bbox": [ + 144, + 80, + 458, + 171 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Table 9: Differences between RNNS and the others.", + "bbox": [ + 129, + 181, + 467, + 193 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "ALERT (Yang et al., 2022) finds the adversarial examples using variable-name substitutes generated by pre-trained masked models. MHM (Zhang et al., 2020) uses Metropolis-Hastings to sample the replacement of code identifiers. STRATA (Springer et al., 2020) generates adversarial examples by replacing the code tokens based on the token distribution. Chen et al. (2022) apply pre-defined semantics-preserving code transformations to attack code models. CodeAttack (Jha and Reddy, 2023) uses code structure to generate adversarial data. White-box attacks require the code model gradient to modify inputs for adversarial example generation. CARROT (Zhang et al., 2022) selects mutated code variants based on the model gradient. Henkel et al. (2022) attack code models by gradient-based optimization of the abstract syntax tree transformation. Srikant et al. (2021) use optimized program obfuscations to modify the code. DAMP (Yefet et al., 2020) derives the desired wrong prediction by changing inputs guided by the model gradient.", + "bbox": [ + 115, + 206, + 489, + 558 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Table 9 summarizes the differences among RNNS, MHM (Zhang et al., 2020), and ALERT (Yang et al., 2022). MHM and ALERT represent the two methodologies most closely aligned with our research. Like MHM and ALERT, our approach considers identifier replacements, ensuring that the adversarial example keeps the same semantics as the original one. Our substitute set is scalable and can be substantial, and RNNS searches this substitute space for the next possible adversarial example.
In our approach, we locate vulnerable variables based on uncertainty and search $\\text{sub}_{\\text{topk}}$ without constructing adversarial samples or performing actual attacks. Our goal is to obtain high ASRs by searching real variable names. MHM has the same goal as ours but synthesizes variable names. ALERT sacrifices ASR to make variable names readable.", + "bbox": [ + 115, + 561, + 485, + 848 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "7 Conclusion", + "text_level": 1, + "bbox": [ + 112, + 862, + 243, + 876 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "We propose a novel black-box adversarial search-based attack for variable replacement. RNNS has", + "bbox": [ + 112, + 887, + 487, + 917 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "three main contributions: 1) This work proposes a non-generation search-based black-box attack method that predicts the attack effect of a substitute. This method can greatly reduce the verification cost of substitutes, remove the restrictions on the size and diversity of the substitute set, and achieve a significant improvement in ASR without increasing QT. 2) This work proposes a simple and efficient method for constructing a substitute set, which yields a large-scale, diverse, and real substitute set at low cost. 3) The adversarial examples from RNNS can be used to improve model robustness.", + "bbox": [ + 507, + 84, + 882, + 292 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "8 Limitations", + "text_level": 1, + "bbox": [ + 507, + 307, + 643, + 321 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "There are some limitations of RNNS. Firstly, RNNS does not revert to the preceding step to persist with the search upon an increase in the model probability of the ground truth label. While incorporating this step may bolster the attack success rate (ASR), it could potentially compromise the query times (QT).
Secondly, the size and diversity of the substitute set significantly influence RNNS; a small and homogeneous set can lead to a diminished attack success rate. Thirdly, RNNS involves multiple hyperparameters whose values need to be set manually. One of the most important parameters is the moving parameter $\\alpha$; the number of attack iterations max_itr is also significant. We set $\\alpha$ to 0.2 and max_itr to 6 based on some small experimental trials. Fourthly, RNNS currently only targets untargeted attack scenarios; for targeted attacks, ASR will be very low when there are many category labels. For example, when performing targeted attacks on Authorship+CodeBERT with 66 labels, the ASR only reaches $6.4\\%$. Migrating to targeted attacks is a direction we need to study in the future.", + "bbox": [ + 507, + 332, + 882, + 702 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Acknowledgment", + "text_level": 1, + "bbox": [ + 509, + 715, + 663, + 732 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "This work is supported by NRF and the CSA under its National Cybersecurity R&D Programme (NCRP25-P04-TAICeN), NRF and DSO National Laboratories under the AI Singapore Programme (AISG Award No: AISG2-RP-2020-019), and NRF Investigatorship NRF-NRFI06-2020-0001. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of NRF and CSA Singapore.", + "bbox": [ + 507, + 741, + 882, + 903 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "9714", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 8 + }
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4998-5007.", + "Bander Alsulami, Edwin Dauber, Richard Harang, Spiros Mancoridis, and Rachel Greenstadt. 2017. Source code authorship attribution using long short-term memory based networks. In Computer Security - ESORICS 2017, pages 65-82, Cham. Springer International Publishing.", + "Penglong Chen, Zhen Li, Yu Wen, and Lili Liu. 2022. Generating adversarial source programs using important tokens-based structural transformations. In 2022 26th International Conference on Engineering of Complex Computer Systems (ICECCS), pages 173-182.", + "Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. 2020. Codebert: A pre-trained model for programming and natural languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1536-1547.", + "Xiaodong Gu, Hongyu Zhang, and Sunghun Kim. 2018. Deep code search. In 2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE), pages 933-944. IEEE.", + "Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, LIU Shujie, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, et al. 2020. Graphcodebert: Pre-training code representations with data flow. In International Conference on Learning Representations.", + "Jordan Henkel, Goutham Ramakrishnan, Zi Wang, Aws Albarghouthi, Somesh Jha, and Thomas Reps. 2022. Semantic robustness of models of source code. In 2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER), pages 526-537.", + "Akshita Jha and Chandan K Reddy. 2023. Codeattack: Code-based adversarial attacks for pre-trained programming language models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 14892-14900.", + "Liuqing Li, He Feng, Wenjie Zhuang, Na Meng, and Barbara Ryder. 2017. Cclearner: A deep learning-based clone detection approach. 
In 2017 IEEE International Conference on Software Maintenance and Evolution (ICSME), pages 249-260. IEEE.", + "Yanzhou Li, Shangqing Liu, Kangjie Chen, Xiaofei Xie, Tianwei Zhang, and Yang Liu. 2023. Multi-target backdoor attacks for code pre-trained models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7236-7254, Toronto, Canada. Association for Computational Linguistics." + ], + "bbox": [ + 115, + 105, + 487, + 917 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Yaoxian Li, Shiyi Qi, Cuiyun Gao, Yun Peng, David Lo, Zenglin Xu, and Michael R Lyu. 2022. A closer look into transformer-based code intelligence through code transformation: Challenges and opportunities. arXiv preprint arXiv:2207.04285.", + "Shangqing Liu, Yu Chen, Xiaofei Xie, Jing Kai Siow, and Yang Liu. 2020. Retrieval-augmented generation for code summarization via hybrid gnn. In International Conference on Learning Representations.", + "Shangqing Liu, Bozhi Wu, Xiaofei Xie, Guozhu Meng, and Yang Liu. 2023a. Contrabert: Enhancing code pre-trained models via contrastive learning. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), pages 2476-2487.", + "Shangqing Liu, Xiaofei Xie, Jingkai Siow, Lei Ma, Guozhu Meng, and Yang Liu. 2023b. Graphsearch-net: Enhancing gnns via capturing global dependencies for semantic code search. IEEE Transactions on Software Engineering.", + "Rishabh Maheshwary, Saket Maheshwary, and Vikram Pudi. 2021. A strong baseline for query efficient attacks in a black box setting. 
In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8396-8409.", + "Ruchir Puri, David Kung, Geert Janssen, Wei Zhang, Giacomo Domeniconi, Vladimir Zolotov, Julian Dolby, Jie Chen, Mihir Choudhury, Lindsey Decker, Veronika Thost, Luca Buratti, Saurabh Pajar, Shyam Ramji, Ulrich Finkler, Susan Malaika, and Frederick Reiss. 2021. Codenet: A large-scale ai for code dataset for learning a diversity of coding tasks.", + "Jacob M Springer, Bryn Marie Reinstadler, and Una-May O'Reilly. 2020. Strata: Simple, gradient-free attacks for models of code. arXiv preprint arXiv:2009.13562.", + "Shashank Srikant, Sijia Liu, Tamara Mitrovska, Shiyu Chang, Quanfu Fan, Gaoyuan Zhang, and Una-May O'Reilly. 2021. Generating adversarial computer programs using optimized obfuscations. In International Conference on Learning Representations.", + "Wenhan Wang, Ge Li, Bo Ma, Xin Xia, and Zhi Jin. 2020. Detecting code clones with graph neural network and flow-augmented abstract syntax tree. In 2020 IEEE 27th International Conference on Software Analysis, Evolution and Reengineering (SANER), pages 261-271. IEEE.", + "Yue Wang, Weishi Wang, Shafiq Joty, and Steven C.H. Hoi. 2021. CodeT5: Identifier-aware unified pretrained encoder-decoder models for code understanding and generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8696-8708, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics." + ], + "bbox": [ + 510, + 85, + 880, + 917 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "9715", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Martin White, Michele Tufano, Christopher Vendome, and Denys Poshyvanyk. 2016. Deep learning code fragments for code clone detection. In 2016 31st IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 87-98. 
IEEE.", + "Zhou Yang, Jieke Shi, Junda He, and David Lo. 2022. Natural attack for pre-trained models of code. In Proceedings of the 44th International Conference on Software Engineering, ICSE '22, page 1482-1493, New York, NY, USA. Association for Computing Machinery.", + "Noam Yefet, Uri Alon, and Eran Yahav. 2020. Adversarial examples for models of code. Proceedings of the ACM on Programming Languages, 4(OOPSLA):1-30.", + "Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020. Word-level textual adversarial attacking as combinatorial optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6066-6080.", + "Huangzhao Zhang, Zhiyi Fu, Ge Li, Lei Ma, Zhehao Zhao, Hua'an Yang, Yizhe Sun, Yang Liu, and Zhi Jin. 2022. Towards robustness of deep program processing models—detection, estimation, and enhancement. ACM Trans. Softw. Eng. Methodol., 31(3).", + "Huangzhao Zhang, Zhuo Li, Ge Li, Lei Ma, Yang Liu, and Zhi Jin. 2020. Generating adversarial examples for holding robustness of source code processing models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 1169-1176.", + "Vitalii Zhelezniak, Aleksandar Savkov, and Nils Hammerla. 2020. Estimating mutual information between dense word embeddings. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8361-8371, Online. Association for Computational Linguistics.", + "Yaqin Zhou, Shangqing Liu, Jingkai Siow, Xiaoning Du, and Yang Liu. 2019. Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks. Advances in neural information processing systems, 32." 
+ ], + "bbox": [ + 115, + 85, + 485, + 707 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "9716", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 10 + } +] \ No newline at end of file diff --git a/2023/A Black-Box Attack on Code Models via Representation Nearest Neighbor Search/21af0af7-6fd8-4600-8ad6-54b767b85a85_model.json b/2023/A Black-Box Attack on Code Models via Representation Nearest Neighbor Search/21af0af7-6fd8-4600-8ad6-54b767b85a85_model.json new file mode 100644 index 0000000000000000000000000000000000000000..9e9bd09e5012305af0c151746d8d0d9c9cb3ad22 --- /dev/null +++ b/2023/A Black-Box Attack on Code Models via Representation Nearest Neighbor Search/21af0af7-6fd8-4600-8ad6-54b767b85a85_model.json @@ -0,0 +1,1828 @@ +[ + [ + { + "type": "title", + "bbox": [ + 0.115, + 0.084, + 0.885, + 0.123 + ], + "angle": 0, + "content": "A Black-Box Attack on Code Models via Representation Nearest Neighbor Search" + }, + { + "type": "text", + "bbox": [ + 0.11, + 0.133, + 0.893, + 0.152 + ], + "angle": 0, + "content": "Jie Zhang\\(^{1*}\\), Wei Ma\\(^{2\\dagger}\\), Qiang Hu\\(^{3}\\), Shangqing Liu\\(^{2}\\), Xiaofei Xie\\(^{4}\\), Yves Le Traon\\(^{3}\\), and Yang Liu\\(^{2}\\)" + }, + { + "type": "text", + "bbox": [ + 0.394, + 0.163, + 0.61, + 0.179 + ], + "angle": 0, + "content": "1Noah's Ark Lab, Huawei" + }, + { + "type": "text", + "bbox": [ + 0.162, + 0.18, + 0.837, + 0.197 + ], + "angle": 0, + "content": "\\(^{2}\\)School of Computer Science and Engineering, Nanyang Technological University" + }, + { + "type": "text", + "bbox": [ + 0.132, + 0.197, + 0.871, + 0.214 + ], + "angle": 0, + "content": "3The Interdisciplinary Centre for Security, Reliability and Trust, University of Luxembourg" + }, + { + "type": "text", + "bbox": [ + 0.158, + 0.214, + 0.843, + 0.231 + ], + "angle": 0, + "content": "\\(^{4}\\)School of Computing and Information Systems, Singapore Management University" + }, + { + "type": "title", + "bbox": [ + 0.261, + 
0.253, + 0.343, + 0.269 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.144, + 0.279, + 0.461, + 0.691 + ], + "angle": 0, + "content": "Existing methods for generating adversarial code examples face several challenges: limited availability of substitute variables, high verification costs for these substitutes, and the creation of adversarial samples with noticeable perturbations. To address these concerns, our proposed approach, RNNS, uses a search seed based on historical attacks to find potential adversarial substitutes. Rather than using the discrete substitutes directly, RNNS maps them to a continuous vector space using a pre-trained variable name encoder. Based on the vector representation, RNNS predicts and selects better substitutes for attacks. We evaluated the performance of RNNS across six coding tasks encompassing three programming languages: Java, Python, and C. We employed three pre-trained code models (CodeBERT, GraphCodeBERT, and CodeT5), yielding a total of 18 victim models. The results demonstrate that RNNS outperforms the baselines in terms of attack success rate (ASR) and query times (QT). Furthermore, the perturbation of adversarial examples introduced by RNNS is smaller than that of the baselines in terms of the number of replaced variables and the change in variable length. Lastly, our experiments indicate that RNNS is efficient in attacking defended models and can be employed for adversarial training."
+ }, + { + "type": "title", + "bbox": [ + 0.115, + 0.701, + 0.262, + 0.716 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.726, + 0.491, + 0.887 + ], + "angle": 0, + "content": "Recently, since programming languages can be seen as a kind of textual data, and inspired by the success of deep learning for text processing and understanding, researchers have pre-trained code models such as CodeBERT (Feng et al., 2020), GraphCodeBERT (Guo et al., 2020), and ContraBERT (Liu et al., 2023a) to help developers solve multiple programming tasks, e.g., code search (Gu et al., 2018; Liu et al., 2023b), code clone detection (White et al., 2016; Li et al., 2017), code summarization" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.254, + 0.886, + 0.365 + ], + "angle": 0, + "content": "(Ahmad et al., 2020; Liu et al., 2020), and vulnerability detection (Zhou et al., 2019). Although these code models have achieved good performance on many code tasks, they still suffer from robustness issues. A few adversarial attack methods have emerged to evaluate and improve the robustness of code models." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.369, + 0.886, + 0.595 + ], + "angle": 0, + "content": "There are certain considerations to be made. Firstly, code pre-training models are frequently deployed remotely, which limits access to the model parameters and renders white-box attacks infeasible. Secondly, among the numerous code-equivalent transformation methods, variable substitution exerts the most significant influence on the resilience of large code models while being the least detectable transformation (Li et al., 2022). As a result, black-box attack techniques based on variable substitution have emerged as a valuable avenue for research, and multiple works have been proposed, such as ALERT (Yang et al., 2022) and MHM (Zhang et al., 2020)."
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.598, + 0.886, + 0.921 + ], + "angle": 0, + "content": "However, these works have three limitations: 1) The number of substitute variables is limited and lacks diversity, which lowers the upper bound of the attack success rate. For example, ALERT employs 60 substitute variables for each variable, which are generated by a pre-trained model, and the substitute variables lack diversity. MHM also randomly selects 1500 words from a fixed dictionary as substitute variables. 2) The verification cost of substitute variables is high. To verify the attack effect of each substitute, it is necessary to replace the source variable with an adversarial sample and perform an actual attack on the victim model. ALERT uses a traversal method to select substitute variables, and in order to reduce the number of attacks, it limits the number of substitute variables; MHM uses a random sampling method to select substitute variables in order to reduce the number of attacks. Neither method is conducive to cost-effective attacks. 
3) The generated adversarial samples have" + }, + { + "type": "page_footnote", + "bbox": [ + 0.141, + 0.892, + 0.312, + 0.905 + ], + "angle": 0, + "content": "* clark.zhang@huawei.com" + }, + { + "type": "page_footnote", + "bbox": [ + 0.141, + 0.905, + 0.417, + 0.919 + ], + "angle": 0, + "content": "† corresponding author: ma_wei@ntu.edu.sg" + }, + { + "type": "list", + "bbox": [ + 0.141, + 0.892, + 0.417, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.928, + 0.522, + 0.941 + ], + "angle": 0, + "content": "9706" + }, + { + "type": "footer", + "bbox": [ + 0.218, + 0.946, + 0.781, + 0.973 + ], + "angle": 0, + "content": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 9706-9716 December 6-10, 2023 ©2023 Association for Computational Linguistics" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.489, + 0.163 + ], + "angle": 0, + "content": "large perturbations. Each adversarial sample usually needs to replace multiple original variables to succeed in attacking, and MHM easily generates semantically incoherent and excessively long variable names." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.166, + 0.49, + 0.502 + ], + "angle": 0, + "content": "To address the aforementioned challenges, in this paper, we propose a search-based black-box adversarial attack method to create challenging adversarial samples based on the search seed vector in the variable representation space, namely Representation Nearest Neighbor Search (RNNS). Specifically, RNNS first utilizes publicly available real code datasets to construct a large original substitute set, denoted as \\( subs_{original} \\). 
Then, based on the previous attack results, RNNS predicts the search seed vector required for the next round of attacks and efficiently searches for the \\( k \\) nearest substitutes to the seed vector from the large-scale original substitute set to form the \\( subs_{topk} \\), where \\( k \\) is much smaller than the size of the original substitute set. The generation process of the \\( subs_{topk} \\) does not involve attacking the victim model even once. Furthermore, the length and similarity of the substitute must adhere to specific perturbation constraints to prevent excessive deviations from \\( var \\)." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.505, + 0.49, + 0.826 + ], + "angle": 0, + "content": "To evaluate the effectiveness of RNNS, we investigate three pre-trained code models, CodeBERT (Feng et al., 2020), GraphCodeBERT (Guo et al., 2020) and CodeT5 (Wang et al., 2021), and perform the attack on six code tasks in three programming languages, i.e., Java, Python, and C. The results on 18 victim models demonstrate that, compared to the approaches MHM and ALERT, RNNS achieves a higher attack success rate (ASR), with a maximum improvement of about \\(100\\%\\), winning in 18/18 cases. Meanwhile, RNNS requires fewer query times (QT), winning in 8/18 cases. Furthermore, we analyze the quality of adversarial examples statistically and find that RNNS introduces minor perturbations. In the end, we apply RNNS to attack three defended models and find that our approach outperforms the baselines by up to \\(32.07\\%\\) ASR. We also use adversarial examples to improve the model's robustness through contrastive adversarial training."
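RNNS's substitute search operates on continuous representations of variable names: a name is tokenized and its token embeddings are mean-pooled (Section 3.2.1). A minimal sketch of that mapping; the tiny embedding table and the camelCase/underscore splitting rule are illustrative assumptions, not the paper's actual pre-trained encoder:

```python
# Sketch: map a discrete variable name to a continuous vector by
# mean-pooling token embeddings (cf. Section 3.2.1). The embedding
# table below is a toy stand-in for the pre-trained encoder E.
import re

EMB_DIM = 4
TOKEN_EMBEDDINGS = {  # hypothetical learned token vectors
    "count": [0.1, 0.2, 0.3, 0.4],
    "item":  [0.5, 0.1, 0.0, 0.2],
    "idx":   [0.3, 0.3, 0.1, 0.1],
}

def tokenize_variable(name):
    """Split a variable name on underscores and camelCase boundaries."""
    spaced = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", name.replace("_", " "))
    return [p.lower() for p in spaced.split() if p]

def embed_variable(name):
    """Mean-pool token embeddings; unknown tokens contribute zeros."""
    zeros = [0.0] * EMB_DIM
    vecs = [TOKEN_EMBEDDINGS.get(t, zeros) for t in tokenize_variable(name)] or [zeros]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

print(embed_variable("itemCount"))  # componentwise mean of "item" and "count"
```

With such a mapping in place, finding promising substitutes in \( subs_{original} \) reduces to vector similarity search rather than per-candidate queries to the victim model.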
+ }, + { + "type": "title", + "bbox": [ + 0.114, + 0.839, + 0.267, + 0.854 + ], + "angle": 0, + "content": "2 Preliminaries" + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.866, + 0.353, + 0.882 + ], + "angle": 0, + "content": "2.1 Textual Code Processing" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.888, + 0.49, + 0.919 + ], + "angle": 0, + "content": "The nature of code data (in text format with discrete input space) makes it impossible to feed one" + }, + { + "type": "equation", + "bbox": [ + 0.514, + 0.089, + 0.888, + 0.187 + ], + "angle": 0, + "content": "\\[\n\\underset{\\text{Domain Space}}{x = \\left( \\begin{array}{c} S_{0} \\\\ \\vdots \\\\ S_{i} \\\\ \\vdots \\\\ S_{j} \\\\ \\vdots \\\\ S_{l} \\end{array} \\right)} \\longrightarrow R^{l \\times d} = \\left( \\begin{array}{c} \\boldsymbol{v}_{0} \\\\ \\vdots \\\\ \\boldsymbol{v}_{i} \\\\ \\vdots \\\\ \\boldsymbol{v}_{j} \\\\ \\vdots \\\\ \\boldsymbol{v}_{l} \\end{array} \\right) \\longrightarrow \\underset{\\text{Model}}{\\boxed{f(\\theta)}} \\longrightarrow \\underset{\\text{Probability Space}}{\\left( \\begin{array}{c} p_{0} \\\\ \\vdots \\\\ p_{g} \\\\ \\vdots \\\\ p_{k} \\end{array} \\right)}\n\\]" + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.205, + 0.883, + 0.232 + ], + "angle": 0, + "content": "Figure 1: One code model demo on the downstream task." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.24, + 0.885, + 0.609 + ], + "angle": 0, + "content": "code input \\( x \\) directly into deep learning models. Thus, transferring code data to learnable continuous vectors is the first step in source code learning. Dense encoding (Zhelezniak et al., 2020) is one common method used to vectorize textual code data. To do so, first, we need to learn a tokenizer that splits the code text into a token sequence which is called Tokenization. 
After tokenization, code \\( x \\) is represented by a sequence of tokens, namely, \\( x = (s_0, \\dots, s_j, \\dots, s_l) \\) where \\( s_i \\) is one token. Then, the code vocabulary dictionary is built from all the tokens \\( s_i \\) that appear, and is denoted \\( \\mathbb{V} \\). After that, every word (token) in \\( \\mathbb{V} \\) is embedded by learned vectors \\( \\boldsymbol{v}_i \\) with dimension \\( d \\). Here, we use \\( E^{|\\mathbb{V}| \\times d} \\) to represent the embedding matrix for \\( \\mathbb{V} \\). Finally, \\( x \\) can be converted into an embedding matrix \\( R^{l \\times d} = (v_0, \\dots, v_j, \\dots, v_l) \\). After this code encoding, pre-trained code models based on the transformer take the matrix \\( R^{l \\times d} \\) as inputs and learn the contextual representation of \\( x \\) for downstream tasks via pre-training such as Masked Language Modeling (MLM) and Causal Language Modeling (CLM)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.611, + 0.884, + 0.756 + ], + "angle": 0, + "content": "Figure 1 illustrates the main steps of the code processing models for the downstream classification tasks. First, we tokenize the textual code \\( x \\) into a token sequence that is represented in a discrete integer space. Then, we map the discrete sequence ids into the token vector space \\( R^{l \\times d} \\). Next, we feed the token vectors into the task model \\( f(\\theta) \\). \\( f(\\theta) \\) is built on top of pre-trained models. Finally, we can predict the domain probabilities after fine-tuning." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.769, + 0.708, + 0.783 + ], + "angle": 0, + "content": "2.2 Problem Statement" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.791, + 0.884, + 0.919 + ], + "angle": 0, + "content": "Many critical code tasks, e.g., defect prediction and code clone detection, are classification problems. In this paper, we therefore focus on adversarial attacks for code classification tasks. 
Considering a code classification task, we use \\( f(x; \\theta) \\to y: R^{l \\times d} \\to \\mathbb{C} = \\{i | 0 \\leq i \\leq n\\} \\) to denote the victim model that maps a code token sequence \\( x \\) to a label \\( y \\) from a label set \\( \\mathbb{C} \\) with size \\( n \\), where" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.521, + 0.941 + ], + "angle": 0, + "content": "9707" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.493, + 0.391 + ], + "angle": 0, + "content": "\\(l\\) is the sequence length and \\(d\\) is the token vector dimension, and \\(i\\) is one integer. By querying dictionary dense embedding \\(\\pmb{E}^{|\\mathbb{V}\\times d|}\\), a code token sequence \\(x = (s_0,\\dots,s_j,\\dots,s_l)\\), is vectorized into \\(\\pmb{R}^{l\\times d}\\). Adversarial attacks for code models create an adversarial example \\(x^{\\prime}\\) by modifying some vulnerable tokens of \\(x\\) with a limited maximum perturbation \\(\\epsilon\\) to change the correct label \\(y\\) to a wrong label \\(y^\\prime\\). Simply, we get a perturbed \\(x^{\\prime}\\) by modifying some tokens in \\((s_0,\\dots,s_j,\\dots,s_l)\\) such that \\(f(x^{\\prime};\\theta)\\neq f(x;\\theta)\\) where \\(x^{\\prime} = x + \\sigma\\) and \\(x^{\\prime}\\) has to have the same behavior with \\(x\\), + represent perturbation execution, \\(\\sigma\\) is the perturbation code transformation for \\((s_0,\\dots,s_j,\\dots,s_l)\\), and \\(\\sigma \\leq \\epsilon\\). We target the more practical attacking scenario - black-box attack that requires less information. We assume we cannot access the model parameters and can only utilize the final output of model \\(f(x;\\theta)\\) to conduct the attack." 
+ }, + { + "type": "title", + "bbox": [ + 0.114, + 0.404, + 0.265, + 0.421 + ], + "angle": 0, + "content": "3 Methodology" + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.431, + 0.248, + 0.445 + ], + "angle": 0, + "content": "3.1 Motivation" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.453, + 0.49, + 0.676 + ], + "angle": 0, + "content": "As mentioned in the introduction, the current methods face three limitations: 1) there is a limited number of substitute variables; 2) there is a high verification cost associated with substitute variables; and 3) the generated adversarial samples often exhibit large perturbations. Among these limitations, the second one holds the utmost significance as it significantly impacts both the first and third limitations. Due to the high cost involved, it becomes challenging to generate diverse adversarial examples within a reasonable budget. Additionally, attackers tend to introduce large perturbations without employing any perturbation constraints in order to maximize their attacks." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.678, + 0.49, + 0.919 + ], + "angle": 0, + "content": "To address these limitations, the first question arises: \"Could we substantially reduce the verification cost while allowing for unrestricted diversity of substitute variables and minimizing perturbations?\" To delve into the reasons behind the second limitation, we need to analyze its underlying factors. The low verification efficiency of the substitute set stems from the fact that each substitute can only be verified by constructing an adversarial sample to replace the original variable and then launching an actual attack on the victim model. 
This realization leads to the second question: \"Is it feasible to predict the attack effect of a substitute instead of constructing an adversarial sample to attack the victim model?\"" + }, + { + "type": "image", + "bbox": [ + 0.523, + 0.086, + 0.891, + 0.372 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.401, + 0.885, + 0.577 + ], + "angle": 0, + "content": "Given input code \\( x \\) and one of its variables \\( var \\), different substitutes can be used to replace it to obtain different adversarial samples. After attacking the victim model, the probability of the label will also change. Conversely, if we want to reduce the probability of this label, the third question follows: \"How can we choose relatively better substitutes, i.e., ones that reduce the model confidence, from a large-scale original substitute set?\" It is possible to select good substitutes without an actual attack if we can forecast their effect, which is what RNNS implements." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.579, + 0.885, + 0.837 + ], + "angle": 0, + "content": "The core idea of RNNS is to maintain a search seed that is updated based on the attack history. The search seed is employed to search for the next adversarial substitutes that are likely to attack successfully. Since substitutes are discrete and cannot be directly involved in calculations, we first use a pre-trained variable name encoder, denoted as \\( E \\), to map substitutes to a unified continuous representation vector space. Then, based on the representation vectors of substitutes that have participated in the attack, we predict the search seed vector \\( e_{seed} \\) for the next round of substitute selection. Finally, we calculate the similarity between \\( e_{seed} \\) and the representation vectors of substitutes and then select relatively better substitutes. For specific details, please refer to Section 3.2.3."
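The seed maintenance described here (formalized in Section 3.2.3) amounts to an exponentially smoothed history of embedding increments. A minimal sketch with plain-list vectors; the smoothing rate is an illustrative value, and encoder outputs are passed in directly rather than computed by the paper's encoder:

```python
# Sketch of search-seed prediction (Section 3.2.3):
#   delta     = E(sub_cur) - E(sub_pre)
#   delta_smo = (1 - alpha) * delta_smo + alpha * delta
#   e_seed    = E(sub_cur) + delta_smo
def update_seed(e_cur, e_pre, delta_smo, alpha=0.3):
    """e_cur/e_pre: encoder vectors of the current/previous best substitute."""
    delta = [c - p for c, p in zip(e_cur, e_pre)]
    delta_smo = [(1 - alpha) * s + alpha * d for s, d in zip(delta_smo, delta)]
    e_seed = [c + s for c, s in zip(e_cur, delta_smo)]
    return e_seed, delta_smo

# First round: sub_pre == sub_cur and delta_smo is the zero vector,
# so the predicted seed coincides with E(sub_cur).
e_seed, delta_smo = update_seed([1.0, 2.0], [1.0, 2.0], [0.0, 0.0])
print(e_seed)
```

The smoothing keeps the seed moving in directions that historically lowered the victim's confidence, while damping noise from any single round.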
+ }, + { + "type": "title", + "bbox": [ + 0.509, + 0.849, + 0.878, + 0.866 + ], + "angle": 0, + "content": "3.2 Representation Nearest Neighbor Search" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.872, + 0.885, + 0.92 + ], + "angle": 0, + "content": "Algorithm 1 shows the workflow of our approach. First, we collect the original substitute set from public real code, following the process described" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.521, + 0.941 + ], + "angle": 0, + "content": "9708" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.492, + 0.665 + ], + "angle": 0, + "content": "in Section 3.2.1. We extract variables from the input code and sort them according to their uncertainty, referring to Section 3.2.2 (Line 3-4). We replace variables in sequence to form attack samples (Line 5). For a given \\( var \\), we first initialize the optimal substitute for the current iteration \\( sub_{cur} \\) and the optimal substitute for the previous iteration \\( sub_{pre} \\) to \\( var \\). Then, we initialize the accumulated smooth increment of the representation vector \\( \\Delta e_{smo} \\) to a zero vector. \\( \\Delta e_{smo} \\) is used to record the historical representation change of the search seed \\( e_{seed} \\). We now commence the iterative attack process, as delineated in Line 11. We predict the search seed vector \\( e_{seed} \\) with the process described in Section 3.2.3 (Line 12), and then extract top-k substitutes based on \\( e_{seed} \\) to form the candidate substitutes \\( subs_{topk} \\) with the process described in Section 3.2.4 (Line 13). Subsequently, we replace \\( sub_{cur} \\) in \\( x' \\) with each substitute in \\( subs_{topk} \\) to obtain the corresponding temporary adversarial sample \\( x'_{tmp} \\) (Line 14-15). \\( x' \\) is the current code that we are trying to attack, and it is initialized with the original code \\( x \\). 
We use \\( x'_{tmp} \\) to attack the victim model and obtain the probability \\( prob_y \\) of the ground-truth label \\( y \\) and the predicted label \\( y' \\) (Line 16). If the probability of the ground-truth label \\( y \\) hits a new low (\\(< prob_{min}\\)), we update \\( x' \\), \\( sub_{pre} \\), \\( sub_{cur} \\) and \\( prob_{min} \\) (Line 17-22). \\( prob_{min} \\) records the minimum probability of label \\( y \\) during the attack process. If \\( x'_{tmp} \\) causes the victim model to predict an incorrect label, the attack is successful and we return the successful adversarial sample (Line 23-26); otherwise, we proceed to the next iteration until all variables have been iterated over, and return the final adversarial sample and attack result (Line 30)." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.673, + 0.489, + 0.688 + ], + "angle": 0, + "content": "3.2.1 Collecting Large Original Substitute Set" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.692, + 0.491, + 0.884 + ], + "angle": 0, + "content": "We have developed a tool for variable extraction that leverages the tree-sitter framework1. This tool, henceforth denoted as ExtractVar (see Line 3), operates in three distinct steps. In the first step, we extract all variables from the current dataset and then filter out duplicates. During the second step, each valid variable is tokenized, and we compute the embedding for each token using the variable-name encoder \\( E \\) that is pre-trained on CodeSearchNet2. We then apply a mean pooling operation on these tokens to determine the variable's embedding. In the third step, we retain all the chosen variables"
+ }, + { + "type": "title", + "bbox": [ + 0.509, + 0.126, + 0.759, + 0.141 + ], + "angle": 0, + "content": "3.2.2 Computing Uncertainty" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.145, + 0.886, + 0.32 + ], + "angle": 0, + "content": "Given a specific code \\( x \\), we replace each instance of \\( var \\in x \\) with a set of predefined fixed variables \\( VarArray \\), resulting in a set of mutated codes denoted as \\( X_{var}^{mutated} \\). These mutated codes are subsequently utilized to query the victim model, allowing us to obtain the probability distribution for each class. A greater variance in the distribution signifies increased uncertainty for \\( var \\), suggesting that \\( var \\) should be prioritized for replacement. The uncertainty associated with \\( var \\) is defined as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.536, + 0.328, + 0.858, + 0.372 + ], + "angle": 0, + "content": "\\[\nu n c e r t a i n t y _ {v a r} = \\frac {1}{C} \\sum_ {i = 1} ^ {C} v a r i a n c e (P _ {v a r} ^ {i})\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.381, + 0.886, + 0.705 + ], + "angle": 0, + "content": "where \\( P_{var}^{i} = \\{p_{var}^{i}(x) | \\forall x \\in X_{var}^{mutated}\\} \\), \\( C \\) is the number of labels, \\( p_{var}^{i}(x) \\) is the model probability for label \\( i \\) given the mutated code \\( x \\), and variance denotes the standard variance function. A larger and more diverse \\( X_{var}^{mutated} \\) ensures a closer approximation of uncertainty to the true value. It is important to note, however, that the magnitude of the change length must not be excessively large, as this would result in all probability changes converging to a single point. This is because samples subjected to large changes deviate significantly from the original, leading to a substantial decrease in the model confidence levels. Subsequently, we arrange the variables in descending order based on their uncertainties. 
The greater the uncertainty of a variable, the more valuable it is for attack. This process is denoted as RankVarsWithUncertainty at line 4. In our implementation, the size of this variable array VarArray is 16, and the variable length ranges from 1 to 5." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.713, + 0.753, + 0.728 + ], + "angle": 0, + "content": "3.2.3 Predicting Search Seed" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.732, + 0.886, + 0.892 + ], + "angle": 0, + "content": "To filter out superior substitutes from the substantial \\( subs_{original} \\), it becomes necessary to predict the search seed within the substitute representation vector space. Given the optimal substitute \\( sub_{cur} \\) of the current round, the optimal substitute \\( sub_{pre} \\) from the previous round, and the accumulated smooth increment of the representation vector, denoted as \\( \\Delta e_{smo} \\), from all preceding rounds of iteration, we initially compute the increment of the representation vector in the current round, \\( \\Delta e \\):" + }, + { + "type": "page_footnote", + "bbox": [ + 0.136, + 0.891, + 0.391, + 0.905 + ], + "angle": 0, + "content": "1https://tree-sitter.github.io/tree-sitter" + }, + { + "type": "page_footnote", + "bbox": [ + 0.137, + 0.905, + 0.422, + 0.918 + ], + "angle": 0, + "content": "2https://huggingface.co/datasets/code_search_net" + }, + { + "type": "list", + "bbox": [ + 0.136, + 0.891, + 0.422, + 0.918 + ], + "angle": 0, + "content": null + }, + { + "type": "equation", + "bbox": [ + 0.58, + 0.903, + 0.813, + 0.921 + ], + "angle": 0, + "content": "\\[\n\\Delta \\boldsymbol {e} = E (s u b _ {c u r}) - E (s u b _ {p r e})\n\\]" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.521, + 0.941 + ], + "angle": 0, + "content": "9709" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.119, + 0.083, + 0.49, + 0.145 + ], + "angle": 0, + "content": "
Task | Train / Val / Test | CodeBERT | GraphCodeBERT | CodeT5
Defect | 21,854 / 2,732 / 2,732 | 63.76 | 63.65 | 67.02
Clone | 90,102 / 4,000 / 4,000 | 96.97 | 97.36 | 97.84
Authorship | 528 / - / 132 | 82.57 | 77.27 | 88.63
C1000 | 320,000 / 80,000 / 100,000 | 82.53 | 83.79 | 84.46
Python800 | 153,600 / 38,400 / 48,000 | 96.39 | 96.29 | 96.79
Java250 | 48,000 / 11,909 / 15,000 | 96.91 | 97.27 | 97.72
" + }, + { + "type": "table_caption", + "bbox": [ + 0.114, + 0.16, + 0.489, + 0.189 + ], + "angle": 0, + "content": "Table 1: Datasets and Victim Model Performance (Accuracy, %)." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.215, + 0.489, + 0.295 + ], + "angle": 0, + "content": ", where \\(E\\) is variable name encoder, trained on CodeSearchNet by masked language modelling independently so that RNNS is independent of victim downstream-task models. Then we update the \\(\\Delta e_{smo}\\)" + }, + { + "type": "equation", + "bbox": [ + 0.175, + 0.31, + 0.427, + 0.326 + ], + "angle": 0, + "content": "\\[\n\\Delta \\mathbf {e} _ {s m o} = (1 - \\alpha) \\Delta \\mathbf {e} _ {s m o} + \\alpha \\Delta \\mathbf {e}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.34, + 0.489, + 0.372 + ], + "angle": 0, + "content": ", where \\(\\alpha\\) is a smooth rate limited 0 to 1, Finally, we predict the search seed \\(e_{\\text{seed}}\\):" + }, + { + "type": "equation", + "bbox": [ + 0.19, + 0.386, + 0.412, + 0.403 + ], + "angle": 0, + "content": "\\[\n\\boldsymbol {e} _ {\\text {s e e d}} = E \\left(\\operatorname {s u b} _ {\\text {c u r}}\\right) + \\Delta \\boldsymbol {e} _ {\\text {s m o}}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.417, + 0.489, + 0.432 + ], + "angle": 0, + "content": "This process is denoted as PredictSeed at line 12." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.443, + 0.4, + 0.458 + ], + "angle": 0, + "content": "3.2.4 Searching Top-K Substitutes" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.463, + 0.49, + 0.704 + ], + "angle": 0, + "content": "Initially, we filter out substitutes from \\( subs_{original} \\) that comply with two constraints: 1) \\( 1 - sim(E(sub), E(var)) < \\epsilon \\) and 2) \\( |len(sub) - len(var)| < \\delta \\), where \\( var \\) refers to the original variable in the input code that is to be replaced, \\( sim(.) \\) is the similarity calculation function. \\( E(.) 
\\) is the variable name encoder, and \\( len(.) \\) is used to calculate the length of the variable name. Then, we calculate the similarity between the search seed \\( e_{seed} \\) and the substitutes that are filtered by the two constraints and select the \\( k \\) most similar substitutes to form \\( subs_{topk} \\). This process is denoted as SearchTopkSub at line 13. In our experiment, \\( \\epsilon = 0.15 \\), \\( \\delta = 4 \\), \\( k = 60 \\), \\( sim(.) \\) is cosine similarity." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.717, + 0.322, + 0.734 + ], + "angle": 0, + "content": "4 Experimental Setup" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.743, + 0.49, + 0.919 + ], + "angle": 0, + "content": "Dataset and Model. To study the effectiveness and efficiency of RNNS, we conduct experiments on three popular programming languages (C, Python, and Java). For the datasets, we employed six widely studied open-source datasets that cover four important code tasks. Specifically, BigCloneBench (Wang et al., 2020) is one code clone detection dataset named Clone. Devign (Zhou et al., 2019) is a dataset used for vulnerability detection, named Defect. For authorship prediction, we use the dataset provided by (Alsulami et al., 2017)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.883, + 0.228 + ], + "angle": 0, + "content": "Besides, we utilize three problem-solving classification tasks, Java250, Python800, and C1000, provided by ProjectCodeNet (Puri et al., 2021). For all the datasets (except for authorship prediction which does not have enough data samples), we follow the original papers to split the data into the training set, validation set, and test set. Authorship prediction only has two split parts, training data and test data." 
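The SearchTopkSub step (Section 3.2.4) can be sketched as follows. The constraint thresholds mirror the stated values (ε = 0.15, δ = 4), but k is shrunk from the paper's 60 and the candidate substitutes with their embeddings are invented for illustration:

```python
# Sketch of SearchTopkSub (Section 3.2.4): keep candidates that satisfy
# the cosine-distance and length constraints, then take the k candidates
# most similar to the predicted seed vector.
import math

EPS, DELTA = 0.15, 4  # thresholds from the paper; k below is shrunk for the demo

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def search_topk(e_seed, var, e_var, candidates, k=2):
    """candidates: list of (name, embedding); returns up to k substitute names."""
    kept = [
        (name, emb) for name, emb in candidates
        if 1 - cosine(emb, e_var) < EPS           # constraint 1: stay close to var
        and abs(len(name) - len(var)) < DELTA     # constraint 2: similar length
    ]
    kept.sort(key=lambda ne: cosine(ne[1], e_seed), reverse=True)
    return [name for name, _ in kept[:k]]

cands = [("count", [1.0, 0.1]), ("x_axis_label", [1.0, 0.0]), ("zz", [0.0, 1.0])]
print(search_topk([1.0, 0.0], "cnt", [1.0, 0.0], cands))  # ['count']
```

Because the filtering and ranking run entirely in the embedding space, no query to the victim model is spent on candidates that are later discarded.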
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.253, + 0.884, + 0.366 + ], + "angle": 0, + "content": "For the code models, we follow the previous work (Yang et al., 2022) and investigate two pretrained models CodeBERT (Feng et al., 2020), and GraphCodeBERT (Guo et al., 2020). Besides, we add one more powerful model CodeT5 (Wang et al., 2021) in our study. Table 1 summarizes the details of our employed datasets and fine-tuned models." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.39, + 0.884, + 0.582 + ], + "angle": 0, + "content": "Evaluation Metric. To evaluate the effectiveness of adversarial attack methods, we employ the commonly used attack success rate (ASR) (Yang et al., 2022) as the measurement. To evaluate the efficiency of the attack methods, we use query times (QT) to check the average number of querying the victim model for one input code. Finally, we use the change of replaced-variable length and the number of replaced variables to study the quality/perturbation of adversarial examples. A smaller score means the attack method can generate adversarial examples with less perturbation injection." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.606, + 0.884, + 0.831 + ], + "angle": 0, + "content": "Baseline. We compare RNNS with two black-box attack baselines, MHM (Zhang et al., 2020) and NaturalAttack (ALERT) (Yang et al., 2022). MHM is a sampling search-based black-box attack that generates the substitutes from the vocabulary based on lexical rules for identifiers. MHM employs synthesized tokens as the candidates of substitutes, which could introduce meaningless variable names. ALERT is a recently proposed attack method that combines greedy attack and genetic algorithm to find the substitutes. We also use two textual attack algorithms PSO (Zang et al., 2020) and LSH (Maheshwary et al., 2021) as minor baselines, since they are not designed for code models." 
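The two evaluation metrics above (ASR and QT) can be computed from per-input attack records; this is a hedged sketch with an invented (success, query-count) record format, not the authors' evaluation scripts:

```python
# Sketch: attack success rate (ASR, %) over attacked inputs and average
# query times (QT) per input. The record format (succeeded, queries) is
# invented for illustration.
def asr_and_qt(records):
    """records: list of (succeeded: bool, queries: int), one per attacked input."""
    n = len(records)
    asr = 100.0 * sum(1 for ok, _ in records if ok) / n
    qt = sum(q for _, q in records) / n
    return asr, qt

asr, qt = asr_and_qt([(True, 120), (False, 800), (True, 40), (True, 240)])
print(f"ASR = {asr:.2f}%, QT = {qt:.1f}")  # ASR = 75.00%, QT = 300.0
```

A higher ASR with a lower QT means the attacker both succeeds more often and pays less per input, which is the combination the paper reports for RNNS.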
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.855, + 0.884, + 0.919 + ], + "angle": 0, + "content": "Implementation. We implement our approach in PyTorch and run all experiments on 32G-v100 GPUs. We reuse the source code from the baselines. We make our implementation publicly available." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.521, + 0.941 + ], + "angle": 0, + "content": "9710" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.27, + 0.082, + 0.732, + 0.323 + ], + "angle": 0, + "content": "
Task+Model | ALERT ASR | ALERT QT | MHM ASR | MHM QT | RNNS ASR | RNNS QT
Clone+CodeBert | 28.67 | 2155.39 | 39.66 | 972.15 | 46.50 | 666.48
Clone+GraphCodeBert | 10.40 | 1466.68 | 9.58 | 490.99 | 41.28 | 1122.01
Clone+CodeT5 | 29.20 | 2359.70 | 38.79 | 1069.06 | 39.61 | 895.79
Defect+CodeBert | 52.29 | 1079.68 | 50.51 | 862.18 | 69.18 | 588.35
Defect+GraphCodeBert | 74.29 | 621.77 | 75.19 | 539.93 | 81.63 | 404.73
Defect+CodeT5 | 76.66 | 721.02 | 86.51 | 344.08 | 89.45 | 344.29
Authorship+CodeBert | 34.98 | 682.57 | 64.70 | 775.11 | 73.39 | 1029.59
Authorship+GraphCodeBert | 58.82 | 1227.36 | 75.49 | 632.10 | 80.39 | 696.64
Authorship+CodeT5 | 64.95 | 1078.40 | 66.97 | 715.89 | 71.79 | 970.44
Java250+CodeBert | 50.50 | 958.96 | 74.03 | 961.60 | 75.12 | 815.91
Java250+GraphCodeBert | 46.74 | 1026.15 | 46.05 | 946.52 | 72.30 | 853.74
Java250+CodeT5 | 52.04 | 1189.42 | 30.59 | 1107.95 | 63.80 | 1049.46
Python800+CodeBert | 58.30 | 513.63 | 56.67 | 919.37 | 77.88 | 514.19
Python800+GraphCodeBert | 51.87 | 577.70 | 54.15 | 917.92 | 71.42 | 730.14
Python800+CodeT5 | 52.84 | 777.20 | 36.95 | 1127.44 | 69.07 | 662.28
C1000+CodeBert | 53.50 | 525.43 | 59.75 | 340.88 | 72.96 | 537.76
C1000+GraphCodeBert | 52.68 | 566.18 | 45.93 | 837.09 | 72.23 | 634.27
C1000+CodeT5 | 47.86 | 843.33 | 36.45 | 668.15 | 59.00 | 697.06
Count | 0/18 | 4/18 | 0/18 | 6/18 | 18/18 | 8/18
" + }, + { + "type": "table_caption", + "bbox": [ + 0.137, + 0.332, + 0.859, + 0.347 + ], + "angle": 0, + "content": "Table 2: Comparison results with MHM, and ALERT, ASR %. Count: the number of best results achieved." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.359, + 0.291, + 0.374 + ], + "angle": 0, + "content": "5 Results Analysis" + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.386, + 0.436, + 0.402 + ], + "angle": 0, + "content": "5.1 Attack Effectiveness and Efficiency" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.41, + 0.489, + 0.779 + ], + "angle": 0, + "content": "We compare RNNS with two methods MHM (Zhang et al., 2020) and NaturalAttack (ALERT) (Yang et al., 2022) on six datasets and 18 victim models that have been fine-tuned for the downstream tasks. Table 2 shows the comparison results where the last row Count indicates how many times this method achieves the best results. We can see that RNNS achieves the best performance for 18/18 times in terms of ASR, and the lowest cost for 8/18 times in terms of QT in Table 2. Both of the indicators are better than the baselines. The two baselines have zero best ASR for all victim models and all datasets. The lowest QTs achieved by ALERT and MHM are 4 and 6, respectively. We conclude that for effectiveness and efficiency, RNNS outperforms ALERT and MHM in all cases. Especially, MHM and ALERT fail to attack GraphCodeBERT on BigClone dataset, and only have \\(9.58\\%\\) and \\(10.4\\%\\) ASR respectively, while RNNS has more than \\(40\\%\\) ASR. RNNS has almost two times larger ASR than MHM on Java250+CodeT5 and Python800+CodeT5." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.781, + 0.49, + 0.893 + ], + "angle": 0, + "content": "It should be noted that high ASR is not due to large QT. 
As shown in Table 2, the three groups of experiments with the most QTs are Clone+GraphCodeBert, Java250+CodeT5, and Authorship+CodeBert, with ASRs of \\(41.28\\%\\), \\(63.80\\%\\), and \\(73.39\\%\\), respectively, which are not the highest. Conversely, Defect+CodeT5 has the highest" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.359, + 0.883, + 0.406 + ], + "angle": 0, + "content": "ASR of \\(89.45\\%\\) but the smallest QT. Therefore, there is no absolute causal relationship between QT and ASR." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.426, + 0.85, + 0.442 + ], + "angle": 0, + "content": "5.2 Perturbation of Adversarial Example" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.451, + 0.884, + 0.627 + ], + "angle": 0, + "content": "We study the quality of the adversarial examples to check whether RNNS generates natural-looking code, e.g., avoids naively increasing variable name length. To this end, we first count the average lengths of the original and adversarial variables, as shown in Table 3, and compute the mean and variance of their difference. We also compute the average number of replaced variables per successful attack, as shown in Table 4. Lower values mean the inputs are modified less and thus the adversarial examples are of higher quality." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.63, + 0.884, + 0.919 + ], + "angle": 0, + "content": "In Table 3, the 2nd, 5th, and 8th columns give the average length of the replaced original variables (Var Len). The 3rd, 6th, and 9th columns give the average length of the adversarial variables (Adv Var Len). The 4th, 7th, and 10th columns give the mean \\(\\pm\\) variance of the absolute length difference between original and adversarial variables (Difference). Comparing the 2nd and 5th columns, we observe that MHM tends to replace longer variables while RNNS tends to replace shorter ones.
Meanwhile, the change in variable length introduced by RNNS is smaller than that of MHM: MHM introduces an average length difference of 3.39-6.82, while RNNS introduces only 2.02-2.54, with much lower variance. ALERT uses shorter adversarial variable names than RNNS" + }, + { + "type": "page_footnote", + "bbox": [ + 0.136, + 0.904, + 0.448, + 0.918 + ], + "angle": 0, + "content": "3https://github.com/18682922316/RNNS-for-code-attack" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.519, + 0.941 + ], + "angle": 0, + "content": "9711" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.149, + 0.082, + 0.849, + 0.296 + ], + "angle": 0, + "content": "
Task+Model | RNNS Var Len | RNNS Adv Var Len | RNNS Difference | MHM Var Len | MHM Adv Var Len | MHM Difference | ALERT Var Len | ALERT Adv Var Len | ALERT Difference
Clone+CodeBert | 6.12 | 6.79 | 2.35 ± 4.50 | 6.47 | 10.6 | 6.34 ± 10.98 | 5.91 | 6.21 | 1.32 ± 2.02
Clone+GraphCodeBert | 6.32 | 6.97 | 2.54 ± 6.43 | 6.58 | 10.41 | 6.82 ± 21.67 | 5.50 | 5.93 | 1.45 ± 2.49
Clone+CodeT5 | 6.45 | 6.69 | 2.51 ± 8.30 | 6.46 | 10.46 | 6.17 ± 25.78 | 6.25 | 6.61 | 1.32 ± 2.72
Defect+CodeBert | 4.64 | 5.44 | 2.08 ± 2.49 | 4.44 | 9.59 | 6.57 ± 28.78 | 4.85 | 5.06 | 1.36 ± 1.93
Defect+GraphCodeBert | 4.08 | 5.34 | 2.13 ± 1.83 | 4.37 | 9.73 | 6.48 ± 26.51 | 4.47 | 5.22 | 1.33 ± 1.83
Defect+CodeT5 | 3.95 | 5.17 | 2.03 ± 1.93 | 4.33 | 9.81 | 6.59 ± 29.98 | 4.36 | 5.01 | 1.27 ± 1.57
Authorship+CodeBert | 3.81 | 5.18 | 2.28 ± 1.56 | 3.97 | 7.94 | 5.45 ± 16.72 | 4.42 | 5.35 | 1.40 ± 2.25
Authorship+GraphCodeBert | 3.69 | 5.23 | 2.36 ± 1.71 | 4.39 | 7.64 | 5.24 ± 15.38 | 3.74 | 4.46 | 1.22 ± 1.82
Authorship+CodeT5 | 3.95 | 5.18 | 2.03 ± 2.66 | 3.95 | 7.98 | 5.59 ± 20.94 | 3.81 | 4.50 | 1.22 ± 1.62
Java250+CodeBert | 2.35 | 4.22 | 2.11 ± 1.02 | 3.21 | 6.50 | 4.34 ± 15.20 | 3.22 | 3.65 | 0.94 ± 1.63
Java250+GraphCodeBert | 2.48 | 4.31 | 2.13 ± 1.07 | 3.13 | 6.59 | 4.42 ± 14.84 | 3.05 | 3.50 | 0.98 ± 1.54
Java250+CodeT5 | 2.76 | 4.47 | 2.10 ± 1.17 | 3.20 | 6.54 | 4.33 ± 14.60 | 3.16 | 7.31 | 4.41 ± 18.73
Python800+CodeBert | 1.50 | 3.54 | 2.21 ± 1.02 | 1.97 | 5.11 | 3.64 ± 9.06 | 1.78 | 2.27 | 0.64 ± 1.34
Python800+GraphCodeBert | 1.88 | 3.90 | 2.18 ± 0.78 | 1.99 | 6.01 | 4.46 ± 16.52 | 1.80 | 2.33 | 0.76 ± 1.30
Python800+CodeT5 | 1.65 | 3.59 | 2.13 ± 0.95 | 1.97 | 4.95 | 3.49 ± 8.18 | 1.88 | 5.84 | 4.10 ± 12.64
C1000+CodeBert | 1.58 | 3.44 | 2.08 ± 0.88 | 2.41 | 5.05 | 3.65 ± 12.02 | 2.13 | 2.52 | 0.67 ± 1.17
C1000+GraphCodeBert | 1.60 | 3.59 | 2.10 ± 0.85 | 2.39 | 5.35 | 3.90 ± 12.98 | 2.18 | 2.67 | 0.66 ± 1.23
C1000+CodeT5 | 1.38 | 3.33 | 2.02 ± 0.85 | 2.36 | 4.82 | 3.39 ± 10.98 | 2.10 | 6.56 | 4.74 ± 13.24
" + }, + { + "type": "table_caption", + "bbox": [ + 0.271, + 0.305, + 0.725, + 0.319 + ], + "angle": 0, + "content": "Table 3: Replaced-variable length comparison, mean \\( \\pm \\) variance." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.332, + 0.489, + 0.379 + ], + "angle": 0, + "content": "with less change because it uses the pre-trained model to generate the replacements that are close to the replaced variables." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.381, + 0.49, + 0.541 + ], + "angle": 0, + "content": "Table 4 statistically shows the number of replaced variables. It can be seen that RNNS replaces around an average of 3.6 variables with a smaller variance of around (3.4-4.6) while MHM needs to modify about an average of 5.4 variables with a larger variance \\((\\geq 11.14)\\). ALERT also replaces more variables to attack models than RNNS and MHM. RNNS introduces less or equal perturbation than the baselines in terms of length change and change number." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.543, + 0.49, + 0.655 + ], + "angle": 0, + "content": "Figure 2 shows one example of RNNS, MHM, and ALERT attack successfully from the Java250 dataset. The changes are highlighted by shadow markers. RNNS only renames one variable \\(\\mathbf{b}\\) to \\(\\mathbf{h}\\), ALERT renames two variables, while MHM almost renames all variables and also prefers longer names." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.671, + 0.281, + 0.687 + ], + "angle": 0, + "content": "5.3 Ablation Study" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.695, + 0.491, + 0.919 + ], + "angle": 0, + "content": "We remove the two search constraints in Section 3.2.4, denoted this variant of RNNS as RNNS-Unlimited. Table 5 shows the comparing results between RNNS-Unlimited and RNNS. RNNS-Unlimited gets the first place for all the tasks in terms of ASR. 
ASR improves by at most \\(8.35\\%\\) and at least about \\(2\\%\\) after removing the limitations. For QT, RNNS-Unlimited loses only 3 times among the 18 evaluations. The improvement of RNNS-Unlimited in ASR and QT is not surprising, because it can search for adversarial examples among non-similar real names and use very long variable names." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.332, + 0.868, + 0.348 + ], + "angle": 0, + "content": "5.4 Attack Defended Model and Retraining" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.353, + 0.886, + 0.514 + ], + "angle": 0, + "content": "Attack Defended Model. We employ RNNS and MHM to attack the three defended models provided by ALERT (Yang et al., 2022), which are prepared by adversarial fine-tuning. Table 6 presents the results: RNNS outperforms MHM on two tasks, and MHM is better on one. This setting is actually unfavorable for RNNS because ALERT (Yang et al., 2022) uses replacements from pre-trained models, which implicitly carry a semantic constraint." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.515, + 0.886, + 0.74 + ], + "angle": 0, + "content": "Retraining. We use the adversarial examples from RNNS to retrain the CodeBERT victim models via contrastive adversarial learning on three datasets: Defect, Authorship, and Java250, generating adversarial examples on each whole training set. Table 7 presents the results: all approaches achieve much lower ASR than before retraining, so RNNS adversarial examples can improve model robustness through contrastive adversarial retraining. Comparing Defect/Authorship+CodeBERT in Table 7 and Table 6, both models retrained via RNNS are more robust than the defended models from ALERT, as they yield much lower ASRs."
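+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.741, + 0.884, + 0.752 + ], + "angle": 0, + "content": "The retraining step can be sketched as follows (our minimal illustration, not the paper's code, and omitting the contrastive loss term): each training input that was attacked successfully contributes its adversarial variant with the original label before the model is fine-tuned again:

```python
# Hypothetical augmentation step before adversarial retraining:
# an adversarial variant keeps the label of the code it was derived from.
def augment_with_adversarials(train_set, adversarials):
    # train_set: list of (code, label); adversarials: dict code -> adv_code
    augmented = list(train_set)
    for code, label in train_set:
        adv = adversarials.get(code)
        if adv is not None:
            augmented.append((adv, label))
    return augmented

clean = [('int b = 0;', 1), ('int x = 1;', 0)]
advs = {'int b = 0;': 'int h = 0;'}  # e.g. an attack renamed b to h
print(augment_with_adversarials(clean, advs))
```

Fine-tuning on the augmented set pushes the model to assign the original label to both versions, which is consistent with the much lower ASRs after retraining."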
+ }, + { + "type": "title", + "bbox": [ + 0.509, + 0.753, + 0.822, + 0.767 + ], + "angle": 0, + "content": "5.5 RNNS vs Textual Attack Methods" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.775, + 0.885, + 0.902 + ], + "angle": 0, + "content": "To compare RNNS with textual attack methods, we conduct attack experiments on three datasets using PSO (Zang et al., 2020) and LSH (Maheshwary et al., 2021). The three datasets, Defect, Authorship, and Java250, represent three languages: C, Python, and Java, respectively. For fairness, the search space of PSO and LSH is the same as that of RNNS." + }, + { + "type": "text", + "bbox": [ + 0.528, + 0.904, + 0.882, + 0.919 + ], + "angle": 0, + "content": "As shown in Table 8, the QT of the PSO algorithm" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.521, + 0.941 + ], + "angle": 0, + "content": "9712" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.131, + 0.082, + 0.868, + 0.181 + ], + "angle": 0, + "content": "
Task | CodeBERT RNNS | CodeBERT MHM | CodeBERT ALERT | GraphCodeBERT RNNS | GraphCodeBERT MHM | GraphCodeBERT ALERT | CodeT5 RNNS | CodeT5 MHM | CodeT5 ALERT
Clone | 3.55 ± 4.60 | 6.72 ± 16.57 | 6.86 ± 18.85 | 4.12 ± 4.94 | 6.21 ± 15.13 | 6.95 ± 18.99 | 3.43 ± 5.00 | 5.68 ± 14.01 | 7.65 ± 25.57
Defect | 3.39 ± 4.96 | 2.78 ± 7.89 | 3.49 ± 3.99 | 2.67 ± 1.75 | 2.84 ± 9.50 | 4.10 ± 11.05 | 2.51 ± 1.45 | 2.16 ± 3.58 | 3.49 ± 3.99
Authorship | 4.24 ± 7.47 | 7.52 ± 25.82 | 6.60 ± 22.96 | 3.65 ± 3.32 | 6.67 ± 22.29 | 7.75 ± 33.12 | 4.39 ± 9.00 | 5.72 ± 13.02 | 6.06 ± 18.74
Java250 | 3.87 ± 4.70 | 7.11 ± 21.18 | 7.82 ± 28.96 | 3.87 ± 4.25 | 6.41 ± 16.24 | 7.83 ± 25.06 | 4.71 ± 6.87 | 7.04 ± 15.29 | 8.92 ± 25.97
Python800 | 3.06 ± 1.87 | 5.21 ± 12.28 | 4.96 ± 8.47 | 4.12 ± 3.68 | 5.00 ± 10.83 | 4.63 ± 6.76 | 3.57 ± 3.04 | 5.29 ± 13.51 | 6.18 ± 11.45
C1000 | 3.00 ± 1.86 | 4.42 ± 7.49 | 4.13 ± 5.59 | 3.37 ± 2.38 | 5.14 ± 7.30 | 4.88 ± 6.24 | 3.39 ± 2.48 | 5.20 ± 7.43 | 5.43 ± 6.99
mean | 3.52 ± 4.24 | 5.63 ± 15.21 | 5.65 ± 14.80 | 3.63 ± 3.39 | 5.38 ± 13.55 | 6.02 ± 16.87 | 3.67 ± 4.64 | 5.18 ± 11.14 | 6.29 ± 15.45
" + }, + { + "type": "table_caption", + "bbox": [ + 0.268, + 0.191, + 0.726, + 0.204 + ], + "angle": 0, + "content": "Table 4: Replaced-variable number comparison, mean \\( \\pm \\) variance" + }, + { + "type": "table", + "bbox": [ + 0.131, + 0.218, + 0.861, + 0.334 + ], + "angle": 0, + "content": "
Figure 2 (flattened): one Java250 example attacked successfully by all three methods.
Original Code:
public static void main(String[] args) {
    Scanner obj = new Scanner(System.in);
    int a = obj.nextInt();
    int b = obj.nextInt();
    int out = 1;
    int ans = 0;
    while (out < b) {
        out--;
        out = out + a;
        ans++;
    }
    System.out.println(ans);
}
Adversarial Code from RNNS: renames only b to h, giving while (out < h).
Adversarial Code from MHM: renames nearly all variables, e.g. while (tempOp < colArr) and System.out.println(number_array).
Adversarial Code from ALERT: renames two variables.
" + }, + { + "type": "table_caption", + "bbox": [ + 0.281, + 0.345, + 0.715, + 0.358 + ], + "angle": 0, + "content": "Figure 2: Case study. Original vs. RNNS vs. MHM vs. ALERT" + }, + { + "type": "table", + "bbox": [ + 0.156, + 0.378, + 0.843, + 0.496 + ], + "angle": 0, + "content": "
Task | CodeBERT RNNS-Unlimited ASR | QT | CodeBERT RNNS ASR | QT | GraphCodeBERT RNNS-Unlimited ASR | QT | GraphCodeBERT RNNS ASR | QT | CodeT5 RNNS-Unlimited ASR | QT | CodeT5 RNNS ASR | QT
Defect | 72.29 | 590.98 | 69.18 | 588.35 | 87.77 | 381.82 | 81.63 | 404.73 | 91.64 | 338.41 | 89.45 | 344.29
Clone | 50.66 | 955.97 | 46.50 | 666.48 | 48.16 | 1105.11 | 41.28 | 1122.01 | 41.38 | 920.65 | 39.61 | 895.79
Authorship | 91.74 | 447.68 | 73.39 | 1029.59 | 91.17 | 438.69 | 80.39 | 696.64 | 88.88 | 620.56 | 71.79 | 970.44
C1000 | 74.70 | 502.02 | 72.96 | 537.76 | 76.82 | 498.64 | 72.23 | 634.27 | 61.96 | 704.95 | 59.00 | 697.06
Python800 | 83.90 | 460.92 | 77.88 | 514.19 | 79.00 | 496.30 | 71.42 | 730.14 | 72.69 | 646.59 | 69.07 | 662.28
Java250 | 79.70 | 760.97 | 75.12 | 815.91 | 81.94 | 744.57 | 72.30 | 853.74 | 75.52 | 910.97 | 63.80 | 1049.46
Count | 6/6 | 4/6 | 0/6 | 2/6 | 6/6 | 6/6 | 0/6 | 0/6 | 6/6 | 5/6 | 0/6 | 1/6
" + }, + { + "type": "table_caption", + "bbox": [ + 0.221, + 0.505, + 0.772, + 0.519 + ], + "angle": 0, + "content": "Table 5: Results of ablation study, before and after removing constraints, ASR %." + }, + { + "type": "table", + "bbox": [ + 0.139, + 0.535, + 0.468, + 0.595 + ], + "angle": 0, + "content": "
Defended Model | RNNS ASR | RNNS QT | MHM ASR | MHM QT
Clone+CodeBert | 12.90 | 958.35 | 28.17 | 1245.75
Defect+CodeBert | 95.37 | 282.20 | 92.23 | 283.66
Authorship+CodeBert | 51.88 | 1524.40 | 43.26 | 1026.08
" + }, + { + "type": "table_caption", + "bbox": [ + 0.155, + 0.605, + 0.444, + 0.618 + ], + "angle": 0, + "content": "Table 6: Attack defended models, ASR %." + }, + { + "type": "table", + "bbox": [ + 0.119, + 0.631, + 0.49, + 0.68 + ], + "angle": 0, + "content": "
Task | ACC | ASR(RNNS) | ASR(MHM) | ASR(ALERT)
Authorship | 90.62 | 19.81 | 23.58 | 14.28
Defect | 65.14 | 40.46 | 23.69 | 24.53
Java250 | 97.63 | 19.67 | 6.65 | 42.91
" + }, + { + "type": "table_caption", + "bbox": [ + 0.114, + 0.69, + 0.489, + 0.717 + ], + "angle": 0, + "content": "Table 7: Results of contrastive adversarial retraining, model: CodeBERT." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.743, + 0.489, + 0.919 + ], + "angle": 0, + "content": "is 4.22-6.7 times that of RNNS, and the ASR of PSQ algorithm is \\(5.55\\% - 27.82\\%\\) lower than that of RNNS algorithm. It can be inferred that for code variable attacks, combinatorial optimization is inefficient when the substitute set of variables is relatively large. The main reasons are the following two points. Firstly, code segments are generally longer, and the substitute set of code variables is much larger than the synonym set of natural language words. Secondly, the impact of variable replacement on code semantics is smaller than that" + }, + { + "type": "table", + "bbox": [ + 0.517, + 0.535, + 0.879, + 0.587 + ], + "angle": 0, + "content": "
Task+Model | RNNS ASR | RNNS QT | PSO ASR | PSO QT | LSH ASR | LSH QT
Defect+CodeBert | 69.18 | 588.35 | 63.63 | 3945.04 | 26.62 | 321.78
Authorship+CodeBert | 73.39 | 1029.59 | 52.29 | 4350.00 | 19.26 | 458.55
Java250+CodeBert | 75.12 | 815.91 | 47.3 | 5076.02 | 31.58 | 397.05
" + }, + { + "type": "table_caption", + "bbox": [ + 0.55, + 0.597, + 0.84, + 0.61 + ], + "angle": 0, + "content": "Table 8: RNNS vs PSO and LSH, ASR %." + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.624, + 0.884, + 0.64 + ], + "angle": 0, + "content": "of word replacement on natural language semantics." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.656, + 0.883, + 0.768 + ], + "angle": 0, + "content": "RNNS's QT is 1.8-2.2 times that of LSH, and the QT has dropped significantly. However, LSH's ASR is inferior to RNNS by \\(42.56\\% - 54.13\\%\\). For code variable attacks, LSH has high efficiency, but its effectiveness is relatively low. One possible reason for LSH causing low ASR is the distribution of adversarial samples in each bucket is uneven." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.781, + 0.666, + 0.796 + ], + "angle": 0, + "content": "6 Related Work" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.807, + 0.884, + 0.919 + ], + "angle": 0, + "content": "Adversarial attacks for code models have been widely studied (Yang et al., 2022; Liu et al., 2023a; Li et al., 2023; Jha and Reddy, 2023). These works can be generally categorized into black-box attacks and white-box attacks. A black-box attack for code models queries the model outputs and selects the substitutes using a score function. For example," + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.52, + 0.941 + ], + "angle": 0, + "content": "9713" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.145, + 0.082, + 0.46, + 0.172 + ], + "angle": 0, + "content": "
Algorithm | Substitutes Size | Substitutes Source | Replacement Position | Substitutes Selection
MHM | medium | vocabulary | random | random sample
ALERT | small | model generation | importance score | traverse
RNNS | large | real public variables | uncertainty score | efficient constrained search
" + }, + { + "type": "table_caption", + "bbox": [ + 0.131, + 0.183, + 0.468, + 0.195 + ], + "angle": 0, + "content": "Table 9: Difference between RNNS to the others." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.208, + 0.49, + 0.56 + ], + "angle": 0, + "content": "ALERT (Yang et al., 2022) finds the adversarial examples using variable-name substitutes generated by pre-trained masked models. MHM (Zhang et al., 2020) uses Metropolis-Hastings to sample the replacement of code identifiers. STRATA (Springer et al., 2020) generates adversarial examples by replacing the code tokens based on the token distribution. Chen et al. (2022) apply pre-defined semantics-preserving code transformations to attack code models. CodeAttack (Jha and Reddy, 2023) uses code structure to generate adversarial data. White-box attacks require the code model gradient to modify inputs for adversarial example generation. CARROT (Zhang et al., 2022) selects code mutated variants based on the model gradient. Henkel et al. (2022) attack code models by gradient-based optimization of the abstract syntax tree transformation. Srikant et al. (2021) uses optimized program obfuscations to modify the code. DAMP (Yefet et al., 2020) derives the desired wrong prediction by changing inputs guided by the model gradient." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.562, + 0.486, + 0.849 + ], + "angle": 0, + "content": "Table 9 demonstrates the differences among RNNS, MHM (Zhang et al., 2020) and ALERT (Yang et al., 2022). MHM and ALERT represent the two methodologies most closely aligned with our research. Our approach considers identifier replacements like MHM and ALERT, ensuring that the adversarial example keeps the same semantics as the original one. Our substitute size is scalable and can be substantial, and RNNS searches the possible next adversarial example in the substitute space. 
In our approach, we locate vulnerable variables based on uncertainty and search \\( \\text{sub}_{\\text{topk}} \\) without building adversarial samples or performing actual attacks. Our goal is to obtain high ASR by searching real variable names. MHM shares this goal but synthesizes variable names, while ALERT sacrifices ASR to keep variable names readable." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.863, + 0.245, + 0.877 + ], + "angle": 0, + "content": "7 Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.888, + 0.489, + 0.918 + ], + "angle": 0, + "content": "We propose a novel black-box adversarial search-based attack for variable replacement. RNNS has" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.884, + 0.293 + ], + "angle": 0, + "content": "three main contributions: 1) This work proposes a non-generation, search-based black-box attack method that predicts the attack effect of a substitute. This greatly reduces the verification cost of substitutes, removes restrictions on the size and diversity of the substitute set, and achieves a significant improvement in ASR without increasing QT. 2) This work proposes a simple and efficient method for constructing a substitute set, which yields a large-scale, diverse, and real substitute set at low cost. 3) The adversarial examples from RNNS can be used to improve model robustness." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.308, + 0.644, + 0.322 + ], + "angle": 0, + "content": "8 Limitations" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.334, + 0.884, + 0.703 + ], + "angle": 0, + "content": "There are some limitations to RNNS. Firstly, RNNS does not revert to the preceding step to continue the search when the model probability of the ground-truth label increases; incorporating such a step might improve ASR but could worsen QT.
Secondly, the size and diversity of the substitute set significantly influence RNNS; a small and homogeneous set can lead to a reduced attack success rate. Thirdly, RNNS involves multiple hyperparameters whose values must be set manually; the most important are the moving parameter \\(\\alpha\\) and the number of attack iterations max_itr, which we set to 0.2 and 6, respectively, after a few small experimental trials. Fourthly, RNNS currently supports only untargeted attacks; for targeted attacks, ASR will be very low when there are many category labels. For example, when performing targeted attacks on Authorship+CodeBert with 66 labels, ASR reaches only \\(6.4\\%\\). Migrating RNNS to targeted attacks is a direction for future work." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.717, + 0.665, + 0.733 + ], + "angle": 0, + "content": "Acknowledgment" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.743, + 0.883, + 0.904 + ], + "angle": 0, + "content": "This work is supported by NRF and the CSA under its National Cybersecurity R&D Programme (NCRP25-P04-TAICeN), NRF and DSO National Laboratories under the AI Singapore Programme (AISG Award No: AISG2-RP-2020-019), and NRF Investigatorship NRF-NRFI06-2020-0001. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of NRF and CSA Singapore." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.521, + 0.941 + ], + "angle": 0, + "content": "9714" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.116, + 0.085, + 0.214, + 0.099 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.107, + 0.487, + 0.173 + ], + "angle": 0, + "content": "Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2020. A transformer-based approach for source code summarization.
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4998-5007." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.181, + 0.488, + 0.26 + ], + "angle": 0, + "content": "Bander Alsulami, Edwin Dauber, Richard Harang, Spiros Mancoridis, and Rachel Greenstadt. 2017. Source code authorship attribution using long short-term memory based networks. In Computer Security - ESORICS 2017, pages 65-82, Cham. Springer International Publishing." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.268, + 0.488, + 0.346 + ], + "angle": 0, + "content": "Penglong Chen, Zhen Li, Yu Wen, and Lili Liu. 2022. Generating adversarial source programs using important tokens-based structural transformations. In 2022 26th International Conference on Engineering of Complex Computer Systems (ICECCS), pages 173-182." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.355, + 0.488, + 0.434 + ], + "angle": 0, + "content": "Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. 2020. Codebert: A pre-trained model for programming and natural languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1536-1547." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.443, + 0.488, + 0.495 + ], + "angle": 0, + "content": "Xiaodong Gu, Hongyu Zhang, and Sunghun Kim. 2018. Deep code search. In 2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE), pages 933-944. IEEE." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.504, + 0.488, + 0.581 + ], + "angle": 0, + "content": "Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, LIU Shujie, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, et al. 2020. Graphcodebert: Pre-training code representations with data flow. In International Conference on Learning Representations." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.591, + 0.488, + 0.669 + ], + "angle": 0, + "content": "Jordan Henkel, Goutham Ramakrishnan, Zi Wang, Aws Albarghouthi, Somesh Jha, and Thomas Reps. 2022. Semantic robustness of models of source code. In 2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER), pages 526-537." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.678, + 0.488, + 0.744 + ], + "angle": 0, + "content": "Akshita Jha and Chandan K Reddy. 2023. Codeattack: Code-based adversarial attacks for pre-trained programming language models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 14892-14900." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.753, + 0.488, + 0.819 + ], + "angle": 0, + "content": "Liuqing Li, He Feng, Wenjie Zhuang, Na Meng, and Barbara Ryder. 2017. Cclearner: A deep learning-based clone detection approach. In 2017 IEEE International Conference on Software Maintenance and Evolution (ICSME), pages 249-260. IEEE." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.826, + 0.488, + 0.919 + ], + "angle": 0, + "content": "Yanzhou Li, Shangqing Liu, Kangjie Chen, Xiaofei Xie, Tianwei Zhang, and Yang Liu. 2023. Multi-target backdoor attacks for code pre-trained models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7236-7254, Toronto, Canada. Association for Computational Linguistics." + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.107, + 0.488, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.513, + 0.086, + 0.882, + 0.152 + ], + "angle": 0, + "content": "Yaoxian Li, Shiyi Qi, Cuiyun Gao, Yun Peng, David Lo, Zenglin Xu, and Michael R Lyu. 2022. A closer look into transformer-based code intelligence through code transformation: Challenges and opportunities. arXiv preprint arXiv:2207.04285." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.166, + 0.882, + 0.219 + ], + "angle": 0, + "content": "Shangqing Liu, Yu Chen, Xiaofei Xie, Jing Kai Siow, and Yang Liu. 2020. Retrieval-augmented generation for code summarization via hybrid gnn. In International Conference on Learning Representations." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.232, + 0.882, + 0.298 + ], + "angle": 0, + "content": "Shangqing Liu, Bozhi Wu, Xiaofei Xie, Guozhu Meng, and Yang Liu. 2023a. Contrabert: Enhancing code pre-trained models via contrastive learning. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), pages 2476-2487." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.311, + 0.882, + 0.377 + ], + "angle": 0, + "content": "Shangqing Liu, Xiaofei Xie, Jingkai Siow, Lei Ma, Guozhu Meng, and Yang Liu. 2023b. Graphsearch-net: Enhancing gnns via capturing global dependencies for semantic code search. IEEE Transactions on Software Engineering." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.391, + 0.882, + 0.457 + ], + "angle": 0, + "content": "Rishabh Maheshwary, Saket Maheshwary, and Vikram Pudi. 2021. A strong baseline for query efficient attacks in a black box setting. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8396-8409." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.47, + 0.882, + 0.562 + ], + "angle": 0, + "content": "Ruchir Puri, David Kung, Geert Janssen, Wei Zhang, Giacomo Domeniconi, Vladimir Zolotov, Julian Dolby, Jie Chen, Mihir Choudhury, Lindsey Decker, Veronika Thost, Luca Buratti, Saurabh Pajar, Shyam Ramji, Ulrich Finkler, Susan Malaika, and Frederick Reiss. 2021. Codenet: A large-scale ai for code dataset for learning a diversity of coding tasks." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.576, + 0.882, + 0.627 + ], + "angle": 0, + "content": "Jacob M Springer, Bryn Marie Reinstadler, and Una-May O'Reilly. 2020. 
Strata: Simple, gradient-free attacks for models of code. arXiv preprint arXiv:2009.13562." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.642, + 0.882, + 0.708 + ], + "angle": 0, + "content": "Shashank Srikant, Sijia Liu, Tamara Mitrovska, Shiyu Chang, Quanfu Fan, Gaoyuan Zhang, and Una-May O'Reilly. 2021. Generating adversarial computer programs using optimized obfuscations. In International Conference on Learning Representations." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.721, + 0.882, + 0.8 + ], + "angle": 0, + "content": "Wenhan Wang, Ge Li, Bo Ma, Xin Xia, and Zhi Jin. 2020. Detecting code clones with graph neural network and flow-augmented abstract syntax tree. In 2020 IEEE 27th International Conference on Software Analysis, Evolution and Reengineering (SANER), pages 261-271. IEEE." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.814, + 0.882, + 0.919 + ], + "angle": 0, + "content": "Yue Wang, Weishi Wang, Shafiq Joty, and Steven C.H. Hoi. 2021. CodeT5: Identifier-aware unified pretrained encoder-decoder models for code understanding and generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8696-8708, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics." + }, + { + "type": "list", + "bbox": [ + 0.512, + 0.086, + 0.882, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.521, + 0.941 + ], + "angle": 0, + "content": "9715" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.086, + 0.487, + 0.152 + ], + "angle": 0, + "content": "Martin White, Michele Tufano, Christopher Vendome, and Denys Poshyvanyk. 2016. Deep learning code fragments for code clone detection. In 2016 31st IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 87-98. IEEE." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.162, + 0.487, + 0.241 + ], + "angle": 0, + "content": "Zhou Yang, Jieke Shi, Junda He, and David Lo. 2022. Natural attack for pre-trained models of code. In Proceedings of the 44th International Conference on Software Engineering, ICSE '22, page 1482-1493, New York, NY, USA. Association for Computing Machinery." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.251, + 0.487, + 0.302 + ], + "angle": 0, + "content": "Noam Yefet, Uri Alon, and Eran Yahav. 2020. Adversarial examples for models of code. Proceedings of the ACM on Programming Languages, 4(OOPSLA):1-30." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.313, + 0.487, + 0.392 + ], + "angle": 0, + "content": "Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020. Word-level textual adversarial attacking as combinatorial optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6066-6080." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.402, + 0.487, + 0.468 + ], + "angle": 0, + "content": "Huangzhao Zhang, Zhiyi Fu, Ge Li, Lei Ma, Zhehao Zhao, Hua'an Yang, Yizhe Sun, Yang Liu, and Zhi Jin. 2022. Towards robustness of deep program processing models—detection, estimation, and enhancement. ACM Trans. Softw. Eng. Methodol., 31(3)." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.478, + 0.487, + 0.544 + ], + "angle": 0, + "content": "Huangzhao Zhang, Zhuo Li, Ge Li, Lei Ma, Yang Liu, and Zhi Jin. 2020. Generating adversarial examples for holding robustness of source code processing models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 1169-1176." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.554, + 0.487, + 0.633 + ], + "angle": 0, + "content": "Vitalii Zhelezniak, Aleksandar Savkov, and Nils Hammerla. 2020. Estimating mutual information between dense word embeddings. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8361-8371, Online. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.642, + 0.487, + 0.708 + ], + "angle": 0, + "content": "Yaqin Zhou, Shangqing Liu, Jingkai Siow, Xiaoning Du, and Yang Liu. 2019. Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks. Advances in neural information processing systems, 32." + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.487, + 0.708 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.521, + 0.941 + ], + "angle": 0, + "content": "9716" + } + ] +] \ No newline at end of file diff --git a/2023/A Black-Box Attack on Code Models via Representation Nearest Neighbor Search/21af0af7-6fd8-4600-8ad6-54b767b85a85_origin.pdf b/2023/A Black-Box Attack on Code Models via Representation Nearest Neighbor Search/21af0af7-6fd8-4600-8ad6-54b767b85a85_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7c05f8b62b80ef484cfea31c4d51a0a9e23e5f85 --- /dev/null +++ b/2023/A Black-Box Attack on Code Models via Representation Nearest Neighbor Search/21af0af7-6fd8-4600-8ad6-54b767b85a85_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f5fefb94e85c1f0fdd8d963b9a4726be3c2c7b8b0329323569f5bab42311e69a +size 404359 diff --git a/2023/A Black-Box Attack on Code Models via Representation Nearest Neighbor Search/full.md b/2023/A Black-Box Attack on Code Models via Representation Nearest Neighbor Search/full.md new file mode 100644 index 0000000000000000000000000000000000000000..0c608b458b9d0a291c4f051b60bef2cdf36bb266 --- /dev/null +++ b/2023/A Black-Box Attack on Code Models via Representation Nearest Neighbor Search/full.md @@ -0,0 +1,268 @@ +# A Black-Box Attack on Code Models via Representation Nearest Neighbor Search + +Jie 
Zhang $^{1*}$ , Wei Ma $^{2\dagger}$ , Qiang Hu $^{3}$ , Shangqing Liu $^{2}$ , Xiaofei Xie $^{4}$ , Yves Le Traon $^{3}$ , and Yang Liu $^{2}$

$^{1}$ Noah's Ark Lab, Huawei

$^{2}$ School of Computer Science and Engineering, Nanyang Technological University

$^{3}$ The Interdisciplinary Centre for Security, Reliability and Trust, University of Luxembourg

$^{4}$ School of Computing and Information Systems, Singapore Management University

# Abstract

Existing methods for generating adversarial code examples face several challenges: the limited availability of substitute variables, the high cost of verifying those substitutes, and the creation of adversarial samples with noticeable perturbations. To address these concerns, our proposed approach, RNNS (Representation Nearest Neighbor Search), uses a search seed based on historical attacks to find potential adversarial substitutes. Rather than using the discrete substitutes directly, it maps them to a continuous vector space with a pre-trained variable-name encoder. Based on this vector representation, RNNS predicts and selects better substitutes for attacks. We evaluate RNNS on six coding tasks spanning three programming languages: Java, Python, and C. We employ three pre-trained code models (CodeBERT, GraphCodeBERT, and CodeT5), yielding a total of 18 victim models. The results demonstrate that RNNS outperforms the baselines in terms of attack success rate (ASR) and query times (QT). Furthermore, the perturbation that RNNS introduces into adversarial examples is smaller than that of the baselines in terms of the number of replaced variables and the change in variable length. Lastly, our experiments indicate that RNNS is effective at attacking defended models and can be employed for adversarial training.
# 1 Introduction

Since programming languages can be seen as one kind of textual data, and inspired by the success of deep learning for text processing and understanding, researchers have pre-trained code models such as CodeBERT (Feng et al., 2020), GraphCodeBERT (Guo et al., 2020), and ContraBERT (Liu et al., 2023a) to help developers solve multiple programming tasks, e.g., code search (Gu et al., 2018; Liu et al., 2023b), code clone detection (White et al., 2016; Li et al., 2017), code summarization (Ahmad et al., 2020; Liu et al., 2020), and vulnerability detection (Zhou et al., 2019). Although these code models have achieved good performance on many code tasks, they still suffer from robustness issues, and a few adversarial attack methods have emerged to evaluate and improve the robustness of code models.

There are certain considerations to be made. Firstly, code pre-training models are frequently deployed remotely, which limits access to the model parameters and renders white-box attacks infeasible. Secondly, among the numerous code-equivalent transformation methods, variable substitution exerts the most significant influence on the robustness of large code models while being the least detectable transformation (Li et al., 2022). As a result, black-box attack techniques based on variable substitution have emerged as a valuable avenue for research, and multiple works have been proposed, such as ALERT (Yang et al., 2022) and MHM (Zhang et al., 2020).

However, these works have three limitations: 1) The number of substitute variables is limited and lacks diversity, which lowers the upper bound of the attack success rate. For example, ALERT employs 60 substitute variables per variable, generated by a pre-trained model, and these substitutes lack diversity; MHM randomly selects 1,500 words from a fixed dictionary as substitutes. 2) The verification cost of substitute variables is high.
To verify the attack effect of each substitute, it is necessary to replace the source variable, build an adversarial sample, and perform an actual attack on the victim model. ALERT uses a traversal method to select substitute variables and limits their number in order to reduce the number of attacks; MHM selects substitute variables by random sampling for the same reason. Neither method is conducive to cost-effective attacks. 3) The generated adversarial samples have large perturbations. Each adversarial sample usually needs multiple original variables to be replaced before the attack succeeds, and MHM easily generates semantically incoherent and excessively long variable names.

To address the aforementioned challenges, in this paper we propose a search-based black-box adversarial attack method that creates challenging adversarial samples guided by a search seed vector in the variable representation space, namely Representation Nearest Neighbor Search (RNNS). Specifically, RNNS first utilizes publicly available real code datasets to construct a large original substitute set, denoted as $subs_{original}$ . Then, based on previous attack results, RNNS predicts the search seed vector required for the next round of attacks and efficiently searches the large-scale original substitute set for the $k$ nearest substitutes to the seed vector, forming $subs_{topk}$ , where $k$ is much smaller than the size of the original substitute set. Generating $subs_{topk}$ does not involve attacking the victim model even once. Furthermore, the length and similarity of each substitute must adhere to specific perturbation constraints to prevent excessive deviations from $var$ .

To evaluate the effectiveness of RNNS, we investigate three pre-trained code models, CodeBERT (Feng et al., 2020), GraphCodeBERT (Guo et al., 2020), and CodeT5 (Wang et al., 2021), and perform the attack on six code tasks in three programming languages, i.e., Java, Python, and C. The results on 18 victim models demonstrate that, compared to MHM and ALERT, RNNS achieves a higher attack success rate (ASR), with a maximum improvement of about $100\%$ and 18/18 wins. Meanwhile, RNNS needs fewer query times (QT), winning 8/18 comparisons. Furthermore, we analyze the quality of adversarial examples statistically and find that RNNS introduces smaller perturbations. In the end, we apply RNNS to attack three defended models and find that our approach outperforms the baselines by up to $32.07\%$ ASR. We also use adversarial examples to improve the models' robustness through contrastive adversarial training.

# 2 Preliminaries

# 2.1 Textual Code Processing

The nature of code data (in text format with a discrete input space) makes it impossible to feed one code input $x$ directly into deep learning models. Thus, transferring code data to learnable continuous vectors is the first step in source code learning.

$$
x = (s_0, \dots, s_i, \dots, s_l) \ \longrightarrow \ R^{l \times d} = (\boldsymbol{v}_0, \dots, \boldsymbol{v}_i, \dots, \boldsymbol{v}_l) \ \xrightarrow{f(\theta)} \ (p_0, \dots, p_g, \dots, p_k)
$$

Figure 1: One code model demo on the downstream task, mapping the discrete token space to the embedding space and, through the model $f(\theta)$ , to the domain probability space.
Dense encoding (Zhelezniak et al., 2020) is one common method used to vectorize textual code data. First, we need to learn a tokenizer that splits the code text into a token sequence; this step is called tokenization. After tokenization, code $x$ is represented by a sequence of tokens, namely $x = (s_0, \dots, s_j, \dots, s_l)$ , where each $s_i$ is one token. Then, the code vocabulary dictionary, denoted $\mathbb{V}$ , is built from all tokens that appear. After that, every token in $\mathbb{V}$ is embedded as a learned vector $\boldsymbol{v}_i$ with dimension $d$ . Here, we use $E^{|\mathbb{V}| \times d}$ to represent the embedding matrix for $\mathbb{V}$ . Finally, $x$ can be converted into an embedding matrix $R^{l \times d} = (v_0, \dots, v_j, \dots, v_l)$ . After this code encoding, transformer-based pre-trained code models take the matrix $R^{l \times d}$ as input and learn the contextual representation of $x$ for downstream tasks via pre-training objectives such as Masked Language Modeling (MLM) and Causal Language Modeling (CLM).

Figure 1 illustrates the main steps of code processing models for downstream classification tasks. First, we tokenize the textual code $x$ into a token sequence represented in a discrete integer space. Then, we map the discrete token ids into the token vector space $R^{l \times d}$ . Next, we feed the token vectors into the task model $f(\theta)$ , which is built on top of a pre-trained model. Finally, we can predict the domain probabilities after fine-tuning.

# 2.2 Problem Statement

Many critical code tasks are classification problems, e.g., defect prediction and code clone detection. In this paper, we therefore focus on adversarial attacks for code classification tasks.
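As a toy illustration of the tokenize-then-embed pipeline of Section 2.1 (a hypothetical whitespace tokenizer and a random matrix stand in for the learned tokenizer and the learned embedding matrix $E^{|\mathbb{V}| \times d}$ ; real code models use learned subword tokenizers):

```python
import numpy as np

# Hypothetical whitespace tokenizer; real code models learn subword tokenizers.
code = "int add ( int a , int b ) { return a + b ; }"
tokens = code.split()                      # x = (s_0, ..., s_l)

# Build the vocabulary dictionary V from the tokens that appear.
vocab = {tok: i for i, tok in enumerate(dict.fromkeys(tokens))}

# Random stand-in for the learned embedding matrix E^{|V| x d}, with d = 8.
rng = np.random.default_rng(0)
E = rng.normal(size=(len(vocab), 8))

# Convert x into its embedding matrix R^{l x d} by dictionary lookup.
ids = [vocab[tok] for tok in tokens]
R = E[ids]
print(R.shape)  # one d-dimensional vector per token
```

The model $f(\theta)$ then consumes `R` to produce the domain probabilities shown in Figure 1.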
Considering a code classification task, we use $f(x; \theta) \to y: R^{l \times d} \to \mathbb{C} = \{i \mid 0 \leq i \leq n\}$ to denote the victim model that maps a code token sequence $x$ to a label $y$ from a label set $\mathbb{C}$ of size $n$ , where $l$ is the sequence length, $d$ is the token vector dimension, and $i$ is an integer. By querying the dense embedding dictionary $E^{|\mathbb{V}| \times d}$ , a code token sequence $x = (s_0, \dots, s_j, \dots, s_l)$ is vectorized into $R^{l \times d}$ . Adversarial attacks for code models create an adversarial example $x^{\prime}$ by modifying some vulnerable tokens of $x$ , within a limited maximum perturbation $\epsilon$ , to change the correct label $y$ to a wrong label $y^{\prime}$ . Simply put, we obtain a perturbed $x^{\prime} = x + \sigma$ by modifying some tokens in $(s_0, \dots, s_j, \dots, s_l)$ such that $f(x^{\prime}; \theta) \neq f(x; \theta)$ , where $+$ represents applying the perturbation, $\sigma$ is the perturbing code transformation of $(s_0, \dots, s_j, \dots, s_l)$ with $\sigma \leq \epsilon$ , and $x^{\prime}$ has to exhibit the same behavior as $x$ . We target the more practical attack scenario, the black-box attack, which requires less information: we assume we cannot access the model parameters and can only utilize the final output of the model $f(x; \theta)$ to conduct the attack.

# 3 Methodology

# 3.1 Motivation

As mentioned in the introduction, the current methods face three limitations: 1) a limited number of substitute variables; 2) a high verification cost for substitute variables; and 3) generated adversarial samples that often exhibit large perturbations. Among these limitations, the second holds the utmost significance, as it strongly impacts both the first and the third. Due to the high cost involved, it becomes challenging to generate diverse adversarial examples within a reasonable budget.
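The black-box setting of Section 2.2 can be made concrete with a toy example. The "victim" below is a deliberately brittle, invented scorer (not a real fine-tuned model): the attacker only observes the output probabilities, and a behavior-preserving variable rename $x' = x + \sigma$ flips the prediction.

```python
import numpy as np

# Toy brittle "victim": it (wrongly) keys its decision on the variable name
# "buf". The attacker may only call predict, never inspect theta.
def victim_predict(code: str) -> np.ndarray:
    toks = code.split()
    logits = np.array([2.0 * toks.count("buf"), 1.0])
    e = np.exp(logits - logits.max())
    return e / e.sum()          # probabilities over labels {0, 1}

x = "strcpy ( buf , src ) ;"        # original code, predicted label y
x_adv = "strcpy ( data , src ) ;"   # x' = x + sigma: rename buf -> data

y = int(np.argmax(victim_predict(x)))
y_adv = int(np.argmax(victim_predict(x_adv)))
# The rename preserves program behavior, yet f(x') != f(x): attack succeeds.
print(y, y_adv)
```

This is exactly the brittleness that variable-substitution attacks such as MHM, ALERT, and RNNS exploit.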
Additionally, attackers tend to introduce large perturbations, without employing any perturbation constraints, in order to maximize attack success.

To address these limitations, the first question arises: "Could we substantially reduce the verification cost while allowing for unrestricted diversity of substitute variables and minimizing perturbations?" To delve into the reasons behind the second limitation, we need to analyze its underlying factors. The low verification efficiency of the substitute set stems from the fact that each substitute can only be verified by constructing an adversarial sample that replaces the original variable and then launching an actual attack on the victim model. This realization leads to the second question: "Is it feasible to predict the attack effect of a substitute instead of constructing an adversarial sample to attack the victim model?"

![](images/30b4588241ab9a49cce19898cce8d7c4104bc2ea7b84b8d6c8b49fd8dc7c39bc.jpg)

Given input code $x$ and one of its variables $var$ , different substitutes can be used to replace it, yielding different adversarial samples; after attacking the victim model, the probability of the label will also change. Conversely, if we want to reduce the probability of this label, the third question follows: "How can we choose relatively better substitutes, ones that reduce the model's confidence, from a large-scale original substitute set?" If we can forecast a substitute's effect, good substitutes can be selected without an actual attack; this is what RNNS implements.

The core idea of RNNS is to maintain a search seed that is updated based on the attack history. The search seed is employed to find the next adversarial substitutes that are likely to attack successfully. Since substitutes are discrete and cannot be directly involved in calculations, we first use a pre-trained variable-name encoder, denoted $E$ , to map substitutes to a unified continuous representation vector space.
Then, based on the representation vectors of the substitutes that have participated in the attack, we predict the search seed vector $e_{seed}$ for the next round of substitute selection. Finally, we calculate the similarity between $e_{seed}$ and the representation vectors of the substitutes and select the relatively better ones. For details, please refer to Section 3.2.3.

# 3.2 Representation Nearest Neighbor Search

Algorithm 1 shows the workflow of our approach. First, we collect the original substitute set from public real code, following the process described in Section 3.2.1. We extract variables from the input code and sort them by their uncertainty, as described in Section 3.2.2 (Lines 3-4). We replace variables in this order to form attack samples (Line 5). For a given $var$ , we first initialize both the optimal substitute of the current iteration, $sub_{cur}$ , and the optimal substitute of the previous iteration, $sub_{pre}$ , to $var$ . Then, we initialize the accumulated smooth increment of the representation vector, $\Delta e_{smo}$ , to a zero vector; $\Delta e_{smo}$ records the historical representation change of the search seed $e_{seed}$ . We then commence the iterative attack process, as delineated in Line 11. We predict the search seed vector $e_{seed}$ following Section 3.2.3 (Line 12), and extract the top-$k$ substitutes based on $e_{seed}$ to form the candidate set $subs_{topk}$ following Section 3.2.4 (Line 13). Subsequently, we replace $sub_{cur}$ in $x'$ with each substitute in $subs_{topk}$ to obtain the corresponding temporary adversarial sample $x'_{tmp}$ (Lines 14-15). $x'$ is the current code under attack, initialized with the original code $x$ . We use $x'_{tmp}$ to attack the victim model and obtain the probability $prob_y$ of the ground-truth label $y$ and the predicted label $y'$ (Line 16).
The uncertainty associated with $var$ is defined as follows: + +$$ +u n c e r t a i n t y _ {v a r} = \frac {1}{C} \sum_ {i = 1} ^ {C} v a r i a n c e (P _ {v a r} ^ {i}) +$$ + +where $P_{var}^{i} = \{p_{var}^{i}(x) | \forall x \in X_{var}^{mutated}\}$ , $C$ is the number of labels, $p_{var}^{i}(x)$ is the model probability for label $i$ given the mutated code $x$ , and variance denotes the standard variance function. A larger and more diverse $X_{var}^{mutated}$ ensures a closer approximation of uncertainty to the true value. It is important to note, however, that the magnitude of the change length must not be excessively large, as this would result in all probability changes converging to a single point. This is because samples subjected to large changes deviate significantly from the original, leading to a substantial decrease in the model confidence levels. Subsequently, we arrange the variables in descending order based on their uncertainties. The greater the uncertainty of a variable, the more valuable it is for attack. This process is denoted as RankVarsWithUncertainty at line 4. In our implementation, the size of this variable array VarArray is 16, and the variable length ranges from 1 to 5. + +# 3.2.3 Predicting Search Seed + +To filter out superior substitutes from the substantial $subs_{original}$ , it becomes necessary to predict the search seed within the substitute representation vector space. Given the optimal substitute $sub_{cur}$ of the current round, the optimal substitute $sub_{pre}$ from the previous round, and the accumulated smooth increment of the representation vector, denoted as $\Delta e_{smo}$ , from all preceding rounds of iteration, we initially compute the increment of the representation vector in the current round, $\Delta e$ : + +$$ +\Delta \boldsymbol {e} = E (s u b _ {c u r}) - E (s u b _ {p r e}) +$$ + +
| Task | Train / Val / Test | CodeBERT | GraphCodeBERT | CodeT5 |
| --- | --- | --- | --- | --- |
| Defect | 21,854 / 2,732 / 2,732 | 63.76 | 63.65 | 67.02 |
| Clone | 90,102 / 4,000 / 4,000 | 96.97 | 97.36 | 97.84 |
| Authorship | 528 / - / 132 | 82.57 | 77.27 | 88.63 |
| C1000 | 320,000 / 80,000 / 100,000 | 82.53 | 83.79 | 84.46 |
| Python800 | 153,600 / 38,400 / 48,000 | 96.39 | 96.29 | 96.79 |
| Java250 | 48,000 / 11,909 / 15,000 | 96.91 | 97.27 | 97.72 |
Table 1: Datasets and Victim Model Performance (Accuracy, %).

where $E$ is the variable-name encoder, trained independently on CodeSearchNet with masked language modelling, so that RNNS does not depend on the victim downstream-task models. Then we update $\Delta e_{smo}$ :

$$
\Delta \boldsymbol{e}_{smo} = (1 - \alpha) \Delta \boldsymbol{e}_{smo} + \alpha \Delta \boldsymbol{e}
$$

where $\alpha$ is a smoothing rate limited to the range 0 to 1. Finally, we predict the search seed $\boldsymbol{e}_{seed}$ :

$$
\boldsymbol{e}_{seed} = E(sub_{cur}) + \Delta \boldsymbol{e}_{smo}
$$

This process is denoted PredictSeed at Line 12.

# 3.2.4 Searching Top-K Substitutes

Initially, we filter substitutes from $subs_{original}$ that comply with two constraints: 1) $1 - sim(E(sub), E(var)) < \epsilon$ and 2) $|len(sub) - len(var)| < \delta$ , where $var$ is the original variable in the input code to be replaced, $sim(\cdot)$ is the similarity function, $E(\cdot)$ is the variable-name encoder, and $len(\cdot)$ computes the length of a variable name. Then, we calculate the similarity between the search seed $e_{seed}$ and the substitutes passing the two constraints, and select the $k$ most similar substitutes to form $subs_{topk}$ . This process is denoted SearchTopkSub at Line 13. In our experiments, $\epsilon = 0.15$ , $\delta = 4$ , $k = 60$ , and $sim(\cdot)$ is cosine similarity.

# 4 Experimental Setup

Dataset and Model. To study the effectiveness and efficiency of RNNS, we conduct experiments on three popular programming languages (C, Python, and Java). We employ six widely studied open-source datasets covering four important code tasks. Specifically, BigCloneBench (Wang et al., 2020) is a code clone detection dataset, named Clone. Devign (Zhou et al., 2019) is a dataset for vulnerability detection, named Defect.
For authorship prediction, we use the dataset provided by (Alsulami et al., 2017).

Besides, we utilize three problem-solving classification tasks, Java250, Python800, and C1000, provided by ProjectCodeNet (Puri et al., 2021). For all the datasets except authorship prediction (which does not have enough data samples), we follow the original papers to split the data into training, validation, and test sets. Authorship prediction has only two splits: training data and test data.

For the code models, we follow the previous work (Yang et al., 2022) and investigate two pre-trained models, CodeBERT (Feng et al., 2020) and GraphCodeBERT (Guo et al., 2020). Besides, we add one more powerful model, CodeT5 (Wang et al., 2021), to our study. Table 1 summarizes the details of our employed datasets and fine-tuned models.

Evaluation Metric. To evaluate the effectiveness of adversarial attack methods, we employ the commonly used attack success rate (ASR) (Yang et al., 2022). To evaluate their efficiency, we use query times (QT), the average number of queries to the victim model per input code. Finally, we use the change in replaced-variable length and the number of replaced variables to study the quality/perturbation of adversarial examples; a smaller score means the attack method generates adversarial examples with less injected perturbation.

Baseline. We compare RNNS with two black-box attack baselines, MHM (Zhang et al., 2020) and NaturalAttack (ALERT) (Yang et al., 2022). MHM is a sampling search-based black-box attack that generates substitutes from the vocabulary based on lexical rules for identifiers; it employs synthesized tokens as substitute candidates, which can introduce meaningless variable names. ALERT is a recently proposed attack method that combines a greedy attack and a genetic algorithm to find the substitutes.
We also use two textual attack algorithms, PSO (Zang et al., 2020) and LSH (Maheshwary et al., 2021), as minor baselines, since they were not designed for code models.

Implementation. We implement our approach in PyTorch and run all experiments on 32GB V100 GPUs. We reuse the source code of the baselines and make our implementation publicly available.
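The PredictSeed and SearchTopkSub steps of Sections 3.2.3 and 3.2.4 can be sketched together as follows. Random vectors stand in for the pre-trained variable-name encoder $E(\cdot)$ , the variable names are invented, and the toy $\epsilon$ is loosened so the similarity filter keeps some candidates (the paper uses $\epsilon = 0.15$ , $\delta = 4$ , $k = 60$ with the trained encoder):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16
names = ["idx", "cnt", "tmp", "total", "buffer_len", "i", "acc", "sum_val"]
emb = {n: rng.normal(size=d) for n in names}  # stand-in for E(.)

def E(name):
    return emb[name]

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# PredictSeed: exponentially smooth the representation increment, then
# step from the current best substitute in the smoothed direction.
def predict_seed(sub_cur, sub_pre, delta_smo, alpha=0.3):
    delta_smo = (1 - alpha) * delta_smo + alpha * (E(sub_cur) - E(sub_pre))
    return E(sub_cur) + delta_smo, delta_smo

# SearchTopkSub: keep substitutes close to var in similarity and length,
# then take the k nearest to the seed. eps is loosened for random vectors.
def search_topk(seed, var, subs, eps=1.9, delta_len=4, k=3):
    ok = [s for s in subs if s != var
          and 1 - cos(E(s), E(var)) < eps
          and abs(len(s) - len(var)) < delta_len]
    return sorted(ok, key=lambda s: cos(E(s), seed), reverse=True)[:k]

seed, delta_smo = predict_seed("cnt", "idx", np.zeros(d))
print(search_topk(seed, "idx", names))
```

Note that ranking candidates against the seed vector replaces per-substitute victim queries: only the selected top-$k$ substitutes are ever used to attack the model.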
| Task+Model | ALERT ASR | ALERT QT | MHM ASR | MHM QT | RNNS ASR | RNNS QT |
| --- | --- | --- | --- | --- | --- | --- |
| Clone+CodeBert | 28.67 | 2155.39 | 39.66 | 972.15 | 46.50 | 666.48 |
| Clone+GraphCodeBert | 10.40 | 1466.68 | 9.58 | 490.99 | 41.28 | 1122.01 |
| Clone+CodeT5 | 29.20 | 2359.70 | 38.79 | 1069.06 | 39.61 | 895.79 |
| Defect+CodeBert | 52.29 | 1079.68 | 50.51 | 862.18 | 69.18 | 588.35 |
| Defect+GraphCodeBert | 74.29 | 621.77 | 75.19 | 539.93 | 81.63 | 404.73 |
| Defect+CodeT5 | 76.66 | 721.02 | 86.51 | 344.08 | 89.45 | 344.29 |
| Authorship+CodeBert | 34.98 | 682.57 | 64.70 | 775.11 | 73.39 | 1029.59 |
| Authorship+GraphCodeBert | 58.82 | 1227.36 | 75.49 | 632.10 | 80.39 | 696.64 |
| Authorship+CodeT5 | 64.95 | 1078.40 | 66.97 | 715.89 | 71.79 | 970.44 |
| Java250+CodeBert | 50.50 | 958.96 | 74.03 | 961.60 | 75.12 | 815.91 |
| Java250+GraphCodeBert | 46.74 | 1026.15 | 46.05 | 946.52 | 72.30 | 853.74 |
| Java250+CodeT5 | 52.04 | 1189.42 | 30.59 | 1107.95 | 63.80 | 1049.46 |
| Python800+CodeBert | 58.30 | 513.63 | 56.67 | 919.37 | 77.88 | 514.19 |
| Python800+GraphCodeBert | 51.87 | 577.70 | 54.15 | 917.92 | 71.42 | 730.14 |
| Python800+CodeT5 | 52.84 | 777.20 | 36.95 | 1127.44 | 69.07 | 662.28 |
| C1000+CodeBert | 53.50 | 525.43 | 59.75 | 340.88 | 72.96 | 537.76 |
| C1000+GraphCodeBert | 52.68 | 566.18 | 45.93 | 837.09 | 72.23 | 634.27 |
| C1000+CodeT5 | 47.86 | 843.33 | 36.45 | 668.15 | 59.00 | 697.06 |
| Count | 0/18 | 4/18 | 0/18 | 6/18 | 18/18 | 8/18 |
Table 2: Comparison results with MHM and ALERT (ASR in %). Count: the number of best results achieved.

# 5 Results Analysis

# 5.1 Attack Effectiveness and Efficiency

We compare RNNS with MHM (Zhang et al., 2020) and NaturalAttack (ALERT) (Yang et al., 2022) on six datasets and 18 victim models fine-tuned for the downstream tasks. Table 2 shows the comparison results, where the last row, Count, indicates how many times each method achieves the best result. RNNS achieves the best ASR 18/18 times and the lowest QT 8/18 times, so both indicators are better than the baselines'. The two baselines achieve the best ASR on none of the victim models and datasets, and ALERT and MHM achieve the lowest QT only 4 and 6 times, respectively. We conclude that, in terms of effectiveness and efficiency, RNNS outperforms ALERT and MHM in all cases. In particular, MHM and ALERT largely fail to attack GraphCodeBERT on the BigCloneBench dataset, with only $9.58\%$ and $10.4\%$ ASR respectively, while RNNS reaches more than $40\%$ ASR. RNNS also achieves almost twice the ASR of MHM on Java250+CodeT5 and Python800+CodeT5.

It should be noted that the high ASR is not due to a large QT. As shown in Table 2, the three settings with the most queries are Clone+GraphCodeBert, Java250+CodeT5, and Authorship+CodeBert, with ASRs of $41.28\%$ , $63.80\%$ , and $73.39\%$ , respectively, which are not the highest. On the contrary, Defect+CodeT5 has the highest ASR of $89.45\%$ , but its QT is among the smallest. Therefore, there is no absolute causal relationship between QT and ASR.

# 5.2 Perturbation of Adversarial Example

We conduct a study of the quality of the adversarial examples to check whether RNNS can generate normal-looking code, e.g., by avoiding naively increasing the variable-name length. To do so, we first count the average lengths of the original and adversarial variables, as demonstrated by Table 3.
We also compute the mean and variance of their difference. Besides, we compute the average number of replaced variables per successful attack, as shown in Table 4. Low values mean the inputs are modified less, i.e., higher quality.

In Table 3, the 2nd, 5th, and 8th columns give the average length of the original variables that are replaced (Var Len). The 3rd, 6th, and 9th columns give the average length of the adversarial variables (Adv Var Len). The 4th, 7th, and 10th columns give the mean $\pm$ variance of the absolute length difference between original and adversarial variables (Difference). Comparing the 2nd and 5th columns, we observe that MHM prefers to replace long variables while RNNS tends to replace short ones. Meanwhile, the change in variable length from RNNS is smaller than that from MHM: MHM introduces an average length difference of 3.39-6.82, while RNNS only introduces 2.02-2.54, and MHM has much higher variances in the length change. ALERT uses shorter adversarial variable names than RNNS
| Task+Model | RNNS Var Len | RNNS Adv Var Len | RNNS Difference | MHM Var Len | MHM Adv Var Len | MHM Difference | ALERT Var Len | ALERT Adv Var Len | ALERT Difference |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Clone+CodeBert | 6.12 | 6.79 | 2.35 ± 4.50 | 6.47 | 10.6 | 6.34 ± 10.98 | 5.91 | 6.21 | 1.32 ± 2.02 |
| Clone+GraphCodeBert | 6.32 | 6.97 | 2.54 ± 6.43 | 6.58 | 10.41 | 6.82 ± 21.67 | 5.50 | 5.93 | 1.45 ± 2.49 |
| Clone+CodeT5 | 6.45 | 6.69 | 2.51 ± 8.30 | 6.46 | 10.46 | 6.17 ± 25.78 | 6.25 | 6.61 | 1.32 ± 2.72 |
| Defect+CodeBert | 4.64 | 5.44 | 2.08 ± 2.49 | 4.44 | 9.59 | 6.57 ± 28.78 | 4.85 | 5.06 | 1.36 ± 1.93 |
| Defect+GraphCodeBert | 4.08 | 5.34 | 2.13 ± 1.83 | 4.37 | 9.73 | 6.48 ± 26.51 | 4.47 | 5.22 | 1.33 ± 1.83 |
| Defect+CodeT5 | 3.95 | 5.17 | 2.03 ± 1.93 | 4.33 | 9.81 | 6.59 ± 29.98 | 4.36 | 5.01 | 1.27 ± 1.57 |
| Authorship+CodeBert | 3.81 | 5.18 | 2.28 ± 1.56 | 3.97 | 7.94 | 5.45 ± 16.72 | 4.42 | 5.35 | 1.40 ± 2.25 |
| Authorship+GraphCodeBert | 3.69 | 5.23 | 2.36 ± 1.71 | 4.39 | 7.64 | 5.24 ± 15.38 | 3.74 | 4.46 | 1.22 ± 1.82 |
| Authorship+CodeT5 | 3.95 | 5.18 | 2.03 ± 2.66 | 3.95 | 7.98 | 5.59 ± 20.94 | 3.81 | 4.50 | 1.22 ± 1.62 |
| Java250+CodeBert | 2.35 | 4.22 | 2.11 ± 1.02 | 3.21 | 6.50 | 4.34 ± 15.20 | 3.22 | 3.65 | 0.94 ± 1.63 |
| Java250+GraphCodeBert | 2.48 | 4.31 | 2.13 ± 1.07 | 3.13 | 6.59 | 4.42 ± 14.84 | 3.05 | 3.50 | 0.98 ± 1.54 |
| Java250+CodeT5 | 2.76 | 4.47 | 2.10 ± 1.17 | 3.20 | 6.54 | 4.33 ± 14.60 | 3.16 | 7.31 | 4.41 ± 18.73 |
| Python800+CodeBert | 1.50 | 3.54 | 2.21 ± 1.02 | 1.97 | 5.11 | 3.64 ± 9.06 | 1.78 | 2.27 | 0.64 ± 1.34 |
| Python800+GraphCodeBert | 1.88 | 3.90 | 2.18 ± 0.78 | 1.99 | 6.01 | 4.46 ± 16.52 | 1.80 | 2.33 | 0.76 ± 1.30 |
| Python800+CodeT5 | 1.65 | 3.59 | 2.13 ± 0.95 | 1.97 | 4.95 | 3.49 ± 8.18 | 1.88 | 5.84 | 4.10 ± 12.64 |
| C1000+CodeBert | 1.58 | 3.44 | 2.08 ± 0.88 | 2.41 | 5.05 | 3.65 ± 12.02 | 2.13 | 2.52 | 0.67 ± 1.17 |
| C1000+GraphCodeBert | 1.60 | 3.59 | 2.10 ± 0.85 | 2.39 | 5.35 | 3.90 ± 12.98 | 2.18 | 2.67 | 0.66 ± 1.23 |
| C1000+CodeT5 | 1.38 | 3.33 | 2.02 ± 0.85 | 2.36 | 4.82 | 3.39 ± 10.98 | 2.10 | 6.56 | 4.74 ± 13.24 |
Table 3: Replaced-variable length comparison, mean $\pm$ variance.

with less change because it uses a pre-trained model to generate replacements that are close to the replaced variables.

Table 4 statistically shows the number of replaced variables. RNNS replaces an average of around 3.6 variables with a smaller variance of about 3.4-4.6, while MHM needs to modify an average of about 5.4 variables with a larger variance ( $\geq 11.14$ ). ALERT replaces even more variables than RNNS and MHM. Thus, RNNS introduces less than or equal perturbation compared with the baselines in terms of both length change and the number of changes.

Figure 2 shows one successful attack example each for RNNS, MHM, and ALERT on the Java250 dataset, with the changes highlighted by shadow markers. RNNS only renames one variable $\mathbf{b}$ to $\mathbf{h}$ , ALERT renames two variables, while MHM renames almost all variables and also prefers longer names.

# 5.3 Ablation Study

We remove the two search constraints of Section 3.2.4 and denote this variant of RNNS as RNNS-Unlimited. Table 5 compares RNNS-Unlimited with RNNS. RNNS-Unlimited takes first place on all tasks in terms of ASR, improving ASR by a maximum of $8.35\%$ and a minimum of about $2\%$ after removing the limitations. For QT, RNNS-Unlimited loses only 3 of the 18 evaluations. This improvement is not surprising with respect to ASR and QT: RNNS-Unlimited can search for adversarial examples among non-similar real names and use very long variable names.

# 5.4 Attack Defended Model and Retraining

Attack Defended Model. We employ RNNS and MHM to attack the three defended models provided by ALERT (Yang et al., 2022), which are prepared by adversarial fine-tuning. Table 6 presents the results: RNNS outperforms MHM on two tasks, and MHM is better on one task.
This experimental setting is actually unfavorable for RNNS, because ALERT (Yang et al., 2022) draws its replacements from pre-trained models, which implicitly imposes a semantic constraint.

Retraining. We use the adversarial examples from RNNS to retrain the CodeBERT victim models via contrastive adversarial learning, on three datasets: Defect, Authorship, and Java250. For each dataset, we generate adversarial examples over the whole training set. Table 7 presents the results: all approaches achieve much lower ASR than before, i.e., RNNS adversarial examples improve model robustness through contrastive adversarial retraining. Comparing Defect/Authorship+CodeBERT across Table 7 and Table 6, both models retrained via RNNS are more robust than the models from ALERT, since they have much lower ASRs.

# 5.5 RNNS vs Textual Attack Methods

To compare RNNS with textual attack methods, we conducted attack experiments on three datasets using PSO (Zang et al., 2020) and LSH (Maheshwary et al., 2021). The three datasets, Defect, Authorship, and Java250, represent the three languages C, Python, and Java, respectively. For a fair comparison, the search space of PSO and LSH is the same as that of RNNS.

As shown in Table 8, the QT of the PSO algorithm
| Task | CodeBERT RNNS | CodeBERT MHM | CodeBERT ALERT | GraphCodeBERT RNNS | GraphCodeBERT MHM | GraphCodeBERT ALERT | CodeT5 RNNS | CodeT5 MHM | CodeT5 ALERT |
|---|---|---|---|---|---|---|---|---|---|
| Clone | 3.55 ± 4.60 | 6.72 ± 16.57 | 6.86 ± 18.85 | 4.12 ± 4.94 | 6.21 ± 15.13 | 6.95 ± 18.99 | 3.43 ± 5.00 | 5.68 ± 14.01 | 7.65 ± 25.57 |
| Defect | 3.39 ± 4.96 | 2.78 ± 7.89 | 3.49 ± 3.99 | 2.67 ± 1.75 | 2.84 ± 9.50 | 4.10 ± 11.05 | 2.51 ± 1.45 | 2.16 ± 3.58 | 3.49 ± 3.99 |
| Authorship | 4.24 ± 7.47 | 7.52 ± 25.82 | 6.60 ± 22.96 | 3.65 ± 3.32 | 6.67 ± 22.29 | 7.75 ± 33.12 | 4.39 ± 9.00 | 5.72 ± 13.02 | 6.06 ± 18.74 |
| Java250 | 3.87 ± 4.70 | 7.11 ± 21.18 | 7.82 ± 28.96 | 3.87 ± 4.25 | 6.41 ± 16.24 | 7.83 ± 25.06 | 4.71 ± 6.87 | 7.04 ± 15.29 | 8.92 ± 25.97 |
| Python800 | 3.06 ± 1.87 | 5.21 ± 12.28 | 4.96 ± 8.47 | 4.12 ± 3.68 | 5.00 ± 10.83 | 4.63 ± 6.76 | 3.57 ± 3.04 | 5.29 ± 13.51 | 6.18 ± 11.45 |
| C1000 | 3.00 ± 1.86 | 4.42 ± 7.49 | 4.13 ± 5.59 | 3.37 ± 2.38 | 5.14 ± 7.30 | 4.88 ± 6.24 | 3.39 ± 2.48 | 5.20 ± 7.43 | 5.43 ± 6.99 |
| mean | 3.52 ± 4.24 | 5.63 ± 15.21 | 5.65 ± 14.80 | 3.63 ± 3.39 | 5.38 ± 13.55 | 6.02 ± 16.87 | 3.67 ± 4.64 | 5.18 ± 11.14 | 6.29 ± 15.45 |

Table 4: Replaced-variable number comparison, mean $\pm$ variance.
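The mean $\pm$ variance entries in Tables 3 and 4 are plain sample statistics over the adversarial examples. A minimal sketch for clarity (the counts below are hypothetical illustration data, not the paper's; whether the paper uses population or sample variance is not stated, so population variance is assumed here):

```python
def mean_variance(xs):
    """Population mean and variance, as in the 'mean ± variance' cells
    of Tables 3-4. Population variance is an assumption for illustration."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, var

# hypothetical per-example counts of replaced variables
replaced_counts = [3, 4, 2, 5, 4]
m, v = mean_variance(replaced_counts)
print(f"{m:.2f} ± {v:.2f}")  # 3.60 ± 1.04
```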
Figure 2: Case study, original code vs. RNNS vs. MHM vs. ALERT (Java250). The original code:

```java
public static void main(String[] args) {
    Scanner obj = new Scanner(System.in);
    int a = obj.nextInt();
    int b = obj.nextInt();
    int out = 1;
    int ans = 0;
    while (out < b) {
        out--;
        out = out + a;
        ans++;
    }
    System.out.println(ans);
}
```

RNNS renames only `b` to `h` (`while (out < h)`); MHM renames almost every variable with longer names (e.g., `out` to `tempOp`, `b` to `colArr`, `ans` to `number_array`); ALERT renames two variables.
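The renamings in Figure 2 amount to whole-identifier substitution. A minimal sketch of such a rename, using a word-boundary regex so that e.g. `out` does not match inside a longer identifier like `output` (illustrative only, not the paper's tooling):

```python
import re

def rename_variable(code: str, old: str, new: str) -> str:
    """Replace a variable name only where it appears as a whole
    identifier, leaving longer identifiers that contain it intact."""
    return re.sub(rf"\b{re.escape(old)}\b", new, code)

java = "while (out < b) { out = out + a; }"
print(rename_variable(java, "b", "h"))
# the RNNS rename from Figure 2: only `b` becomes `h`
```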
| Task | CodeBERT RNNS-Unlimited ASR | CodeBERT RNNS-Unlimited QT | CodeBERT RNNS ASR | CodeBERT RNNS QT | GraphCodeBERT RNNS-Unlimited ASR | GraphCodeBERT RNNS-Unlimited QT | GraphCodeBERT RNNS ASR | GraphCodeBERT RNNS QT | CodeT5 RNNS-Unlimited ASR | CodeT5 RNNS-Unlimited QT | CodeT5 RNNS ASR | CodeT5 RNNS QT |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Defect | 72.29 | 590.98 | 69.18 | 588.35 | 87.77 | 381.82 | 81.63 | 404.73 | 91.64 | 338.41 | 89.45 | 344.29 |
| Clone | 50.66 | 955.97 | 46.50 | 666.48 | 48.16 | 1105.11 | 41.28 | 1122.01 | 41.38 | 920.65 | 39.61 | 895.79 |
| Authorship | 91.74 | 447.68 | 73.39 | 1029.59 | 91.17 | 438.69 | 80.39 | 696.64 | 88.88 | 620.56 | 71.79 | 970.44 |
| C1000 | 74.70 | 502.02 | 72.96 | 537.76 | 76.82 | 498.64 | 72.23 | 634.27 | 61.96 | 704.95 | 59.00 | 697.06 |
| Python800 | 83.90 | 460.92 | 77.88 | 514.19 | 79.00 | 496.30 | 71.42 | 730.14 | 72.69 | 646.59 | 69.07 | 662.28 |
| Java250 | 79.70 | 760.97 | 75.12 | 815.91 | 81.94 | 744.57 | 72.30 | 853.74 | 75.52 | 910.97 | 63.80 | 1049.46 |
| Count | 6/6 | 4/6 | 0/6 | 2/6 | 6/6 | 6/6 | 0/6 | 0/6 | 6/6 | 5/6 | 0/6 | 1/6 |

Table 5: Results of the ablation study, before and after removing the search constraints, ASR in %.
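The two search constraints removed in the RNNS-Unlimited ablation can be sketched as a simple candidate filter over variable names and their embeddings. A minimal illustration; the threshold values and the plain cosine similarity are hypothetical stand-ins for the exact constraints of Section 3.2.4:

```python
def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def passes_constraints(orig_name, orig_vec, cand_name, cand_vec,
                       max_len_diff=8, min_sim=0.4):
    """Keep a candidate substitute only if it stays close to the original
    variable both in name length and in embedding space.
    Thresholds here are illustrative assumptions, not the paper's values."""
    if abs(len(cand_name) - len(orig_name)) > max_len_diff:
        return False
    return cosine(orig_vec, cand_vec) >= min_sim

print(passes_constraints("out", [1.0, 0.0], "tmp", [0.9, 0.1]))  # True
```

Removing both checks (as RNNS-Unlimited does) enlarges the search space, which explains the higher ASR at the cost of unnatural, possibly very long names.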
| Defended Model | RNNS ASR | RNNS QT | MHM ASR | MHM QT |
|---|---|---|---|---|
| Clone+CodeBert | 12.90 | 958.35 | 28.17 | 1245.75 |
| Defect+CodeBert | 95.37 | 282.20 | 92.23 | 283.66 |
| Authorship+CodeBert | 51.88 | 1524.40 | 43.26 | 1026.08 |

Table 6: Attacking defended models, ASR in %.
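ASR and QT throughout these tables are simple aggregates over the attacked examples. A hypothetical sketch for clarity (the numbers are illustrative; whether the paper averages QT over all attempts or only successful ones is not stated, so averaging over all attempts is assumed here):

```python
def attack_metrics(results):
    """results: list of (succeeded: bool, queries: int), one per attacked
    example. Returns (ASR in %, mean query times over all attempts)."""
    succ = sum(1 for ok, _ in results if ok)
    asr = 100.0 * succ / len(results)
    qt = sum(q for _, q in results) / len(results)
    return asr, qt

# illustrative attack log: 3 successes out of 4 attempts
asr, qt = attack_metrics([(True, 100), (False, 300), (True, 200), (True, 200)])
print(asr, qt)  # 75.0 200.0
```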
| Task | ACC | ASR (RNNS) | ASR (MHM) | ASR (ALERT) |
|---|---|---|---|---|
| Authorship | 90.62 | 19.81 | 23.58 | 14.28 |
| Defect | 65.14 | 40.46 | 23.69 | 24.53 |
| Java250 | 97.63 | 19.67 | 6.65 | 42.91 |
Table 7: Results of contrastive adversarial retraining, model: CodeBERT.

is 4.22-6.7 times that of RNNS, and the ASR of the PSO algorithm is $5.55\%$-$27.82\%$ lower than that of RNNS. This suggests that, for code variable attacks, combinatorial optimization is inefficient when the substitute set of a variable is relatively large, for two main reasons. First, code segments are generally long, and the substitute set of a code variable is much larger than the synonym set of a natural language word. Second, the impact of variable replacement on code semantics is smaller than that
| Task+Model | RNNS ASR | RNNS QT | PSO ASR | PSO QT | LSH ASR | LSH QT |
|---|---|---|---|---|---|---|
| Defect+CodeBert | 69.18 | 588.35 | 63.63 | 3945.04 | 26.62 | 321.78 |
| Authorship+CodeBert | 73.39 | 1029.59 | 52.29 | 4350.00 | 19.26 | 458.55 |
| Java250+CodeBert | 75.12 | 815.91 | 47.3 | 5076.02 | 31.58 | 397.05 |
Table 8: RNNS vs. PSO and LSH, ASR in %.

of word replacement on natural language semantics.

RNNS's QT is 1.8-2.2 times that of LSH, so LSH reduces QT significantly. However, LSH's ASR is $42.56\%$-$54.13\%$ lower than that of RNNS: for code variable attacks, LSH is highly efficient but comparatively ineffective. One possible reason for LSH's low ASR is that adversarial samples are distributed unevenly across its buckets.

# 6 Related Work

Adversarial attacks on code models have been widely studied (Yang et al., 2022; Liu et al., 2023a; Li et al., 2023; Jha and Reddy, 2023). These works can be broadly categorized into black-box and white-box attacks. A black-box attack on a code model queries the model outputs and selects substitutes using a score function. For example,
| Algorithm | Substitutes Size | Substitutes Source | Replacement Position | Substitutes Selection |
|---|---|---|---|---|
| MHM | medium | vocabulary | random | random sample |
| ALERT | small | model generation | importance score | traverse |
| RNNS | large | real public variables | uncertainty score | efficient constrained search |
Table 9: Differences between RNNS and the other attacks.

ALERT (Yang et al., 2022) finds adversarial examples using variable-name substitutes generated by pre-trained masked models. MHM (Zhang et al., 2020) uses Metropolis-Hastings sampling to choose replacements for code identifiers. STRATA (Springer et al., 2020) generates adversarial examples by replacing code tokens based on the token distribution. Chen et al. (2022) apply pre-defined semantics-preserving code transformations to attack code models. CodeAttack (Jha and Reddy, 2023) uses code structure to generate adversarial data. White-box attacks require the code model's gradient to modify inputs for adversarial example generation. CARROT (Zhang et al., 2022) selects mutated code variants based on the model gradient. Henkel et al. (2022) attack code models by gradient-based optimization of abstract syntax tree transformations. Srikant et al. (2021) use optimized program obfuscations to modify the code. DAMP (Yefet et al., 2020) derives the desired wrong prediction by changing inputs guided by the model gradient.

Table 9 summarizes the differences among RNNS, MHM (Zhang et al., 2020), and ALERT (Yang et al., 2022); MHM and ALERT are the two methods most closely related to our work. Like MHM and ALERT, our approach considers identifier replacements, ensuring that the adversarial example keeps the same semantics as the original. Our substitute set is scalable and can be large, and RNNS searches this substitute space for the next candidate adversarial example. RNNS locates vulnerable variables based on uncertainty and searches $subs_{topk}$ without constructing adversarial samples or launching actual attacks. Our goal is to obtain high ASRs by searching real variable names; MHM shares this goal but synthesizes variable names, while ALERT sacrifices ASR to keep variable names readable.
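RNNS's core search step, selecting the k substitutes nearest to a search seed vector, can be sketched as a brute-force nearest-neighbor query. This is an illustration only: the embeddings are toy values, and an efficient index would presumably replace the linear scan:

```python
import heapq

def nearest_substitutes(seed_vec, substitutes, k=3):
    """Return the k (name, vector) substitutes whose vectors are
    closest (Euclidean) to the current search seed."""
    def dist(vec):
        return sum((a - b) ** 2 for a, b in zip(seed_vec, vec)) ** 0.5
    return heapq.nsmallest(k, substitutes, key=lambda nv: dist(nv[1]))

# toy variable-name embeddings
subs = [("count", [0.9, 0.1]), ("idx", [0.2, 0.8]),
        ("total", [0.8, 0.2]), ("flag", [0.1, 0.9])]
top = nearest_substitutes([1.0, 0.0], subs, k=2)
print([name for name, _ in top])  # ['count', 'total']
```

Because this step needs no query to the victim model, the candidate set can be large without inflating QT, which is the point of the comparison in Table 9.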
# 7 Conclusion

We propose RNNS, a novel search-based black-box adversarial attack based on variable replacement. It makes three main contributions: 1) a non-generative, search-based black-box attack method that predicts the attack effect of a substitute, which greatly reduces the verification cost per substitute, removes restrictions on the size and diversity of the substitute set, and significantly improves ASR without increasing QT; 2) a simple and efficient method for constructing a large-scale, diverse, and realistic substitute set at low cost; 3) adversarial examples from RNNS that can be used to improve model robustness.

# 8 Limitations

RNNS has several limitations. First, RNNS does not revert to the preceding step to continue the search when the model probability of the ground-truth label increases; incorporating such backtracking might raise the Attack Success Rate (ASR), but could increase the Query Times (QT). Second, the size and diversity of the substitute set significantly influence RNNS; a small, homogeneous set can lead to a diminished attack success rate. Third, RNNS involves several hyperparameters whose values must be set manually, most importantly the moving parameter $\alpha$ and the number of attack iterations $max_{itr}$; we set $\alpha$ to 0.2 and $max_{itr}$ to 6 based on small experimental trials. Fourth, RNNS currently targets only untargeted attack scenarios; for targeted attacks, ASR becomes very low when there are many class labels. For example, in targeted attacks on Authorship+CodeBERT with 66 labels, ASR reaches only $6.4\%$. Extending RNNS to targeted attacks is a direction for future work.
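The moving parameter $\alpha$ controls how quickly the search seed vector tracks recent attack feedback. A minimal sketch of an exponential-moving-average style update; the exact update rule RNNS uses is defined in Section 3, so this simple form is an assumption for illustration only:

```python
def update_seed(seed, new_vec, alpha=0.2):
    """Blend the previous search seed with the representation of the
    latest, more effective substitute. alpha controls how fast the seed
    moves (the paper sets alpha = 0.2); the EMA form itself is an
    assumed illustration, not the paper's exact rule."""
    return [(1 - alpha) * s + alpha * v for s, v in zip(seed, new_vec)]

seed = [1.0, 0.0]
seed = update_seed(seed, [0.0, 1.0])
print(seed)  # [0.8, 0.2]
```

A small $\alpha$ keeps the search stable, while a larger value lets the seed jump toward the newest promising region of the substitute space.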
# Acknowledgment

This work is supported by NRF and the CSA under its National Cybersecurity R&D Programme (NCRP25-P04-TAICeN), NRF and DSO National Laboratories under the AI Singapore Programme (AISG Award No: AISG2-RP-2020-019), and NRF Investigatorship NRF-NRFI06-2020-0001. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of NRF and CSA Singapore.

# References

Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2020. A transformer-based approach for source code summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4998-5007.

Bander Alsulami, Edwin Dauber, Richard Harang, Spiros Mancoridis, and Rachel Greenstadt. 2017. Source code authorship attribution using long short-term memory based networks. In Computer Security - ESORICS 2017, pages 65-82, Cham. Springer International Publishing.

Penglong Chen, Zhen Li, Yu Wen, and Lili Liu. 2022. Generating adversarial source programs using important tokens-based structural transformations. In 2022 26th International Conference on Engineering of Complex Computer Systems (ICECCS), pages 173-182.

Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. 2020. CodeBERT: A pre-trained model for programming and natural languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1536-1547.

Xiaodong Gu, Hongyu Zhang, and Sunghun Kim. 2018. Deep code search. In 2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE), pages 933-944. IEEE.

Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, et al. 2020. GraphCodeBERT: Pre-training code representations with data flow. In International Conference on Learning Representations.

Jordan Henkel, Goutham Ramakrishnan, Zi Wang, Aws Albarghouthi, Somesh Jha, and Thomas Reps. 2022. Semantic robustness of models of source code. In 2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER), pages 526-537.

Akshita Jha and Chandan K Reddy. 2023. CodeAttack: Code-based adversarial attacks for pre-trained programming language models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 14892-14900.

Liuqing Li, He Feng, Wenjie Zhuang, Na Meng, and Barbara Ryder. 2017. CCLearner: A deep learning-based clone detection approach. In 2017 IEEE International Conference on Software Maintenance and Evolution (ICSME), pages 249-260. IEEE.

Yanzhou Li, Shangqing Liu, Kangjie Chen, Xiaofei Xie, Tianwei Zhang, and Yang Liu. 2023. Multi-target backdoor attacks for code pre-trained models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7236-7254, Toronto, Canada. Association for Computational Linguistics.

Yaoxian Li, Shiyi Qi, Cuiyun Gao, Yun Peng, David Lo, Zenglin Xu, and Michael R Lyu. 2022. A closer look into transformer-based code intelligence through code transformation: Challenges and opportunities. arXiv preprint arXiv:2207.04285.

Shangqing Liu, Yu Chen, Xiaofei Xie, Jing Kai Siow, and Yang Liu. 2020. Retrieval-augmented generation for code summarization via hybrid GNN. In International Conference on Learning Representations.

Shangqing Liu, Bozhi Wu, Xiaofei Xie, Guozhu Meng, and Yang Liu. 2023a. ContraBERT: Enhancing code pre-trained models via contrastive learning. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), pages 2476-2487.

Shangqing Liu, Xiaofei Xie, Jingkai Siow, Lei Ma, Guozhu Meng, and Yang Liu. 2023b. GraphSearchNet: Enhancing GNNs via capturing global dependencies for semantic code search. IEEE Transactions on Software Engineering.

Rishabh Maheshwary, Saket Maheshwary, and Vikram Pudi. 2021. A strong baseline for query efficient attacks in a black box setting. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8396-8409.

Ruchir Puri, David Kung, Geert Janssen, Wei Zhang, Giacomo Domeniconi, Vladimir Zolotov, Julian Dolby, Jie Chen, Mihir Choudhury, Lindsey Decker, Veronika Thost, Luca Buratti, Saurabh Pajar, Shyam Ramji, Ulrich Finkler, Susan Malaika, and Frederick Reiss. 2021. CodeNet: A large-scale AI for code dataset for learning a diversity of coding tasks.

Jacob M Springer, Bryn Marie Reinstadler, and Una-May O'Reilly. 2020. STRATA: Simple, gradient-free attacks for models of code. arXiv preprint arXiv:2009.13562.

Shashank Srikant, Sijia Liu, Tamara Mitrovska, Shiyu Chang, Quanfu Fan, Gaoyuan Zhang, and Una-May O'Reilly. 2021. Generating adversarial computer programs using optimized obfuscations. In International Conference on Learning Representations.

Wenhan Wang, Ge Li, Bo Ma, Xin Xia, and Zhi Jin. 2020. Detecting code clones with graph neural network and flow-augmented abstract syntax tree. In 2020 IEEE 27th International Conference on Software Analysis, Evolution and Reengineering (SANER), pages 261-271. IEEE.

Yue Wang, Weishi Wang, Shafiq Joty, and Steven C.H. Hoi. 2021. CodeT5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8696-8708, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Martin White, Michele Tufano, Christopher Vendome, and Denys Poshyvanyk. 2016. Deep learning code fragments for code clone detection. In 2016 31st IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 87-98. IEEE.

Zhou Yang, Jieke Shi, Junda He, and David Lo. 2022. Natural attack for pre-trained models of code. In Proceedings of the 44th International Conference on Software Engineering, ICSE '22, pages 1482-1493, New York, NY, USA. Association for Computing Machinery.

Noam Yefet, Uri Alon, and Eran Yahav. 2020. Adversarial examples for models of code. Proceedings of the ACM on Programming Languages, 4(OOPSLA):1-30.

Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020. Word-level textual adversarial attacking as combinatorial optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6066-6080.

Huangzhao Zhang, Zhiyi Fu, Ge Li, Lei Ma, Zhehao Zhao, Hua'an Yang, Yizhe Sun, Yang Liu, and Zhi Jin. 2022. Towards robustness of deep program processing models—detection, estimation, and enhancement. ACM Trans. Softw. Eng. Methodol., 31(3).

Huangzhao Zhang, Zhuo Li, Ge Li, Lei Ma, Yang Liu, and Zhi Jin. 2020. Generating adversarial examples for holding robustness of source code processing models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 1169-1176.

Vitalii Zhelezniak, Aleksandar Savkov, and Nils Hammerla. 2020. Estimating mutual information between dense word embeddings. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8361-8371, Online. Association for Computational Linguistics.

Yaqin Zhou, Shangqing Liu, Jingkai Siow, Xiaoning Du, and Yang Liu. 2019. Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks. Advances in neural information processing systems, 32.
\ No newline at end of file diff --git a/2023/A Black-Box Attack on Code Models via Representation Nearest Neighbor Search/images.zip b/2023/A Black-Box Attack on Code Models via Representation Nearest Neighbor Search/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..d3a863f6d0548744c876ab296bfebc48475ad4e1 --- /dev/null +++ b/2023/A Black-Box Attack on Code Models via Representation Nearest Neighbor Search/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b6f7d9523eca19051d9a35331ea333b8a2b289b87e3c37a88d24bbdbb6f25030 +size 605697 diff --git a/2023/A Black-Box Attack on Code Models via Representation Nearest Neighbor Search/layout.json b/2023/A Black-Box Attack on Code Models via Representation Nearest Neighbor Search/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..12cb014f008df4b9cba28081dc304e9f7a96ff3d --- /dev/null +++ b/2023/A Black-Box Attack on Code Models via Representation Nearest Neighbor Search/layout.json @@ -0,0 +1,8613 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 68, + 70, + 526, + 103 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 70, + 526, + 103 + ], + "spans": [ + { + "bbox": [ + 68, + 70, + 526, + 103 + ], + "type": "text", + "content": "A Black-Box Attack on Code Models via Representation Nearest Neighbor Search" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 65, + 111, + 531, + 127 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 111, + 531, + 127 + ], + "spans": [ + { + "bbox": [ + 65, + 111, + 531, + 127 + ], + "type": "text", + "content": "Jie Zhang" + }, + { + "bbox": [ + 65, + 111, + 531, + 127 + ], + "type": "inline_equation", + "content": "^{1*}" + }, + { + "bbox": [ + 65, + 111, + 531, + 127 + ], + "type": "text", + "content": ", Wei Ma" + }, + { + "bbox": [ + 65, + 111, + 531, + 127 + ], + "type": "inline_equation", + "content": "^{2\\dagger}" + }, + { + "bbox": [ 
+ 65, + 111, + 531, + 127 + ], + "type": "text", + "content": ", Qiang Hu" + }, + { + "bbox": [ + 65, + 111, + 531, + 127 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 65, + 111, + 531, + 127 + ], + "type": "text", + "content": ", Shangqing Liu" + }, + { + "bbox": [ + 65, + 111, + 531, + 127 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 65, + 111, + 531, + 127 + ], + "type": "text", + "content": ", Xiaofei Xie" + }, + { + "bbox": [ + 65, + 111, + 531, + 127 + ], + "type": "inline_equation", + "content": "^{4}" + }, + { + "bbox": [ + 65, + 111, + 531, + 127 + ], + "type": "text", + "content": ", Yves Le Traon" + }, + { + "bbox": [ + 65, + 111, + 531, + 127 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 65, + 111, + 531, + 127 + ], + "type": "text", + "content": ", and Yang Liu" + }, + { + "bbox": [ + 65, + 111, + 531, + 127 + ], + "type": "inline_equation", + "content": "^{2}" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 234, + 137, + 362, + 150 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 234, + 137, + 362, + 150 + ], + "spans": [ + { + "bbox": [ + 234, + 137, + 362, + 150 + ], + "type": "text", + "content": "1Noah's Ark Lab, Huawei" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 96, + 151, + 498, + 165 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 96, + 151, + 498, + 165 + ], + "spans": [ + { + "bbox": [ + 96, + 151, + 498, + 165 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 96, + 151, + 498, + 165 + ], + "type": "text", + "content": "School of Computer Science and Engineering, Nanyang Technological University" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 78, + 165, + 518, + 179 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 78, + 165, + 518, + 179 + ], + "spans": [ + { + "bbox": [ + 78, + 165, + 518, + 179 + ], + "type": "text", + "content": "3The 
Interdisciplinary Centre for Security, Reliability and Trust, University of Luxembourg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 94, + 179, + 501, + 194 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 179, + 501, + 194 + ], + "spans": [ + { + "bbox": [ + 94, + 179, + 501, + 194 + ], + "type": "inline_equation", + "content": "^{4}" + }, + { + "bbox": [ + 94, + 179, + 501, + 194 + ], + "type": "text", + "content": "School of Computing and Information Systems, Singapore Management University" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 155, + 212, + 204, + 226 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 155, + 212, + 204, + 226 + ], + "spans": [ + { + "bbox": [ + 155, + 212, + 204, + 226 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 85, + 234, + 274, + 581 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 234, + 274, + 581 + ], + "spans": [ + { + "bbox": [ + 85, + 234, + 274, + 581 + ], + "type": "text", + "content": "Existing methods for generating adversarial code examples face several challenges: limited availability of substitute variables, high verification costs for these substitutes, and the creation of adversarial samples with noticeable perturbations. To address these concerns, our proposed approach, RNNS, uses a search seed based on historical attacks to find potential adversarial substitutes. Rather than directly using the discrete substitutes, they are mapped to a continuous vector space using a pre-trained variable name encoder. Based on the vector representation, RNNS predicts and selects better substitutes for attacks. We evaluated the performance of RNNS across six coding tasks encompassing three programming languages: Java, Python, and C. We employed three pre-trained code models (CodeBERT, GraphCodeBERT, and CodeT5) that resulted in a cumulative of 18 victim models. 
The results demonstrate that RNNS outperforms baselines in terms of ASR and QT. Furthermore, the perturbation of adversarial examples introduced by RNNS is smaller compared to the baselines in terms of the number of replaced variables and the change in variable length. Lastly, our experiments indicate that RNNS is efficient in attacking defended models and can be employed for adversarial training." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 68, + 589, + 155, + 602 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 589, + 155, + 602 + ], + "spans": [ + { + "bbox": [ + 68, + 589, + 155, + 602 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 67, + 610, + 292, + 745 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 610, + 292, + 745 + ], + "spans": [ + { + "bbox": [ + 67, + 610, + 292, + 745 + ], + "type": "text", + "content": "Recently, since programming language can be seen as one kind of textual data and also inspired by the success of deep learning for text processing and understanding, researchers have tried to pretrain code models such as CodeBERT (Feng et al., 2020), GraphCodeBERT (Guo et al., 2020), ContrabERT (Liu et al., 2023a) to help developers to solve multiple programming tasks, e.g., code search (Gu et al., 2018; Liu et al., 2023b), code clone detection (White et al., 2016; Li et al., 2017), code sum" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 213, + 527, + 306 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 213, + 527, + 306 + ], + "spans": [ + { + "bbox": [ + 302, + 213, + 527, + 306 + ], + "type": "text", + "content": "marization (Ahmad et al., 2020; Liu et al., 2020), and vulnerability detection (Zhou et al., 2019). Although these code models have achieved good performance on many code tasks, they are still suffering from robustness issues. 
A few adversarial attack methods have emerged to evaluate and improve the robustness of code models." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 310, + 527, + 500 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 310, + 527, + 500 + ], + "spans": [ + { + "bbox": [ + 302, + 310, + 527, + 500 + ], + "type": "text", + "content": "There are certain considerations to be made. Firstly, code pre-training models are frequently deployed remotely, which limits access to the model parameters and renders white-box attacks infeasible. Secondly, among the numerous code-equivalent transformation methods, variable substitution exerts the most significant influence on the resilience of large code models while being the least detectable transformation (Li et al., 2022). As a result, black-box attack techniques based on variable substitution have emerged as a valuable avenue for research and multiple works have been proposed such as ALERT (Yang et al., 2022) and MHM (Zhang et al., 2020)." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 502, + 527, + 774 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 502, + 527, + 774 + ], + "spans": [ + { + "bbox": [ + 302, + 502, + 527, + 774 + ], + "type": "text", + "content": "However, these works have three limitations: 1) The number of substitute variables is limited and lacks diversity, which lowers the upper bound of the attack success rate. For example, ALERT employs 60 substitute variables for each variable, which are generated by a pre-trained model, and the substitute variables lack diversity. MHM also randomly selects 1500 words from a fixed dictionary as substitute variables. 2) The verification cost of substitute variables is high. To verify the attack effect of each substitute, it is necessary to replace the source variable with an adversarial sample and perform an actual attack on the victim model. 
ALERT uses a traversal method to select substitute variables, and in order to reduce the number of attacks, it limits the number of substitute variables; MHM uses a random sampling method to select substitute variables in order to reduce the number of attacks. Neither method is conducive to cost-effective attacks. 3) The generated adversarial samples have" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 83, + 750, + 185, + 761 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 750, + 185, + 761 + ], + "spans": [ + { + "bbox": [ + 83, + 750, + 185, + 761 + ], + "type": "text", + "content": "* clark.zhang@huawei.com" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 83, + 761, + 248, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 761, + 248, + 772 + ], + "spans": [ + { + "bbox": [ + 83, + 761, + 248, + 772 + ], + "type": "text", + "content": "† corresponding author: ma_wei@ntu.edu.sg" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 285, + 780, + 310, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 285, + 780, + 310, + 791 + ], + "spans": [ + { + "bbox": [ + 285, + 780, + 310, + 791 + ], + "type": "text", + "content": "9706" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 129, + 795, + 464, + 818 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 129, + 795, + 464, + 818 + ], + "spans": [ + { + "bbox": [ + 129, + 795, + 464, + 818 + ], + "type": "text", + "content": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 9706-9716 December 6-10, 2023 ©2023 Association for Computational Linguistics" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 290, + 137 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 290, + 137 + ], + "spans": [ + { + 
"bbox": [ + 67, + 71, + 290, + 137 + ], + "type": "text", + "content": "large perturbations. Each adversarial sample usually needs to replace multiple original variables to succeed in attacking, and MHM easily generates semantically incoherent and excessively long variable names." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 139, + 291, + 422 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 139, + 291, + 422 + ], + "spans": [ + { + "bbox": [ + 69, + 139, + 291, + 422 + ], + "type": "text", + "content": "To address the aforementioned challenges, in this paper, we propose a search-based black-box adversarial attack method to create challenging adversarial samples based on the search seed vector in the variable representation space, namely Representation Nearest Neighbor Search (RNNS). Specifically, RNNS, first utilizes publicly available real code datasets to construct a large original substitute set, denoted as " + }, + { + "bbox": [ + 69, + 139, + 291, + 422 + ], + "type": "inline_equation", + "content": "subs_{original}" + }, + { + "bbox": [ + 69, + 139, + 291, + 422 + ], + "type": "text", + "content": ". 
Then, based on the previous attack results, RNNS predicts the search seed vector required for the next round of attacks and efficiently searches for the " + }, + { + "bbox": [ + 69, + 139, + 291, + 422 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 69, + 139, + 291, + 422 + ], + "type": "text", + "content": " nearest substitutes to the seed vector from the large-scale original substitute set to form the " + }, + { + "bbox": [ + 69, + 139, + 291, + 422 + ], + "type": "inline_equation", + "content": "subs_{topk}" + }, + { + "bbox": [ + 69, + 139, + 291, + 422 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 69, + 139, + 291, + 422 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 69, + 139, + 291, + 422 + ], + "type": "text", + "content": " is much smaller than the size of the original substitute set. The generation process of the " + }, + { + "bbox": [ + 69, + 139, + 291, + 422 + ], + "type": "inline_equation", + "content": "subs_{topk}" + }, + { + "bbox": [ + 69, + 139, + 291, + 422 + ], + "type": "text", + "content": " does not involve attacking the victim model even once. Furthermore, the length and similarity of the substitute must adhere to specific perturbation constraints to prevent excessive deviations from " + }, + { + "bbox": [ + 69, + 139, + 291, + 422 + ], + "type": "inline_equation", + "content": "var" + }, + { + "bbox": [ + 69, + 139, + 291, + 422 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 424, + 291, + 694 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 424, + 291, + 694 + ], + "spans": [ + { + "bbox": [ + 69, + 424, + 291, + 694 + ], + "type": "text", + "content": "To evaluate the effectiveness of RNNS, we investigate three pre-trained code models, CodeBERT (Feng et al., 2020), GraphCodeBERT (Guo et al., 2020) and CodeT5 (Wang et al., 2021), and perform the attack on six code tasks in three programming languages, i.e., Java, Python, and C. The results on 18 victim models demonstrate that compared to the approaches MHM and ALERT, RNNS achieves a higher attack success rate (ASR) with a maximum of about " + }, + { + "bbox": [ + 69, + 424, + 291, + 694 + ], + "type": "inline_equation", + "content": "100\\%" + }, + { + "bbox": [ + 69, + 424, + 291, + 694 + ], + "type": "text", + "content": " improvement and 18/18 times as the winner. Meanwhile, RNNS needs fewer query times (QT) with 8/18 times as the winner. Furthermore, we analyze the quality of adversarial examples statistically and find that RNNS introduces minor perturbations. Finally, we apply RNNS to attack three defended models and find that our approach outperforms the baselines by up to " + }, + { + "bbox": [ + 69, + 424, + 291, + 694 + ], + "type": "inline_equation", + "content": "32.07\\%" + }, + { + "bbox": [ + 69, + 424, + 291, + 694 + ], + "type": "text", + "content": " ASR. We also use adversarial examples to improve the model's robustness through contrastive adversarial training." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 705, + 158, + 718 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 705, + 158, + 718 + ], + "spans": [ + { + "bbox": [ + 67, + 705, + 158, + 718 + ], + "type": "text", + "content": "2 Preliminaries" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 728, + 210, + 741 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 728, + 210, + 741 + ], + "spans": [ + { + "bbox": [ + 67, + 728, + 210, + 741 + ], + "type": "text", + "content": "2.1 Textual Code Processing" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 746, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 746, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 746, + 291, + 772 + ], + "type": "text", + "content": "The nature of code data (in text format with discrete input space) makes it impossible to feed one" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 305, + 74, + 528, + 157 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 74, + 528, + 157 + ], + "spans": [ + { + "bbox": [ + 305, + 74, + 528, + 157 + ], + "type": "interline_equation", + "content": "\\begin{array}{ccc} x = \\left( \\begin{array}{c} S_{0} \\\\ \\vdots \\\\ S_{i} \\\\ \\vdots \\\\ S_{j} \\\\ \\vdots \\\\ S_{l} \\end{array} \\right) & \\longrightarrow R^{l \\times d} = \\left( \\begin{array}{c} \\boldsymbol{v}_{0} \\\\ \\vdots \\\\ \\boldsymbol{v}_{i} \\\\ \\vdots \\\\ \\boldsymbol{v}_{j} \\\\ \\vdots \\\\ \\boldsymbol{v}_{l} \\end{array} \\right) \\longrightarrow \\boxed{f(\\theta)} & \\longrightarrow \\left( \\begin{array}{c} p_{0} \\\\ \\vdots \\\\ p_{g} \\\\ \\vdots \\\\ p_{k} \\end{array} \\right) \\\\ \\text{Domain Space} & \\text{Model} & \\text{Probability Space} \\end{array}", + "image_path": "2e61c86309139283ec31eca2118525ddbe1765fd814f6ad8d4c3c4444c23a478.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 172, + 525, + 195 + ], + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 172, + 525, + 195 + ], + "spans": [ + { + "bbox": [ + 302, + 172, + 525, + 195 + ], + "type": "text", + "content": "Figure 1: One code model demo on the downstream task." + } + ] + } + ], + "index": 7, + "type": "text" + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "spans": [ + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "text", + "content": "code input " + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "text", + "content": " directly into deep learning models. Thus, transferring code data to learnable continuous vectors is the first step in source code learning. Dense encoding (Zhelezniak et al., 2020) is one common method used to vectorize textual code data. To do so, we first need to learn a tokenizer that splits the code text into a token sequence, a step called Tokenization. 
After tokenization, code " + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "text", + "content": " is represented by a sequence of tokens, namely, " + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "inline_equation", + "content": "x = (s_0, \\dots, s_j, \\dots, s_l)" + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "inline_equation", + "content": "s_i" + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "text", + "content": " is one token. Then, the code vocabulary dictionary is built from all the tokens " + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "inline_equation", + "content": "s_i" + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "text", + "content": " that appear, denoted " + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "inline_equation", + "content": "\\mathbb{V}" + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "text", + "content": ". After that, every word (token) in " + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "inline_equation", + "content": "\\mathbb{V}" + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "text", + "content": " is embedded as a learned vector " + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "inline_equation", + "content": "\\boldsymbol{v}_i" + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "text", + "content": " with dimension " + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "inline_equation", + "content": "d" + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "text", + "content": ". 
Here, we use " + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "inline_equation", + "content": "E^{|\\mathbb{V}| \\times d}" + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "text", + "content": " to represent the embedding matrix for " + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "inline_equation", + "content": "\\mathbb{V}" + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "text", + "content": ". Finally, " + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "text", + "content": " can be converted into an embedding matrix " + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "inline_equation", + "content": "R^{l \\times d} = (v_0, \\dots, v_j, \\dots, v_l)" + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "text", + "content": ". After this code encoding, pre-trained code models based on the transformer take the matrix " + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "inline_equation", + "content": "R^{l \\times d}" + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "text", + "content": " as input and learn the contextual representation of " + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 302, + 201, + 526, + 512 + ], + "type": "text", + "content": " for downstream tasks via pre-training such as Masked Language Modeling (MLM) and Causal Language Modeling (CLM)." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 513, + 525, + 635 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 513, + 525, + 635 + ], + "spans": [ + { + "bbox": [ + 302, + 513, + 525, + 635 + ], + "type": "text", + "content": "Figure 1 illustrates the main steps of the code processing models for the downstream classification tasks. 
First, we tokenize the textual code " + }, + { + "bbox": [ + 302, + 513, + 525, + 635 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 302, + 513, + 525, + 635 + ], + "type": "text", + "content": " into a token sequence that is represented in a discrete integer space. Then, we map the discrete sequence IDs into the token vector space " + }, + { + "bbox": [ + 302, + 513, + 525, + 635 + ], + "type": "inline_equation", + "content": "R^{l \\times d}" + }, + { + "bbox": [ + 302, + 513, + 525, + 635 + ], + "type": "text", + "content": ". Next, we feed the token vectors into the task model " + }, + { + "bbox": [ + 302, + 513, + 525, + 635 + ], + "type": "inline_equation", + "content": "f(\\theta)" + }, + { + "bbox": [ + 302, + 513, + 525, + 635 + ], + "type": "text", + "content": ". " + }, + { + "bbox": [ + 302, + 513, + 525, + 635 + ], + "type": "inline_equation", + "content": "f(\\theta)" + }, + { + "bbox": [ + 302, + 513, + 525, + 635 + ], + "type": "text", + "content": " is built on top of pre-trained models. Finally, we can predict the domain probabilities after fine-tuning." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 646, + 421, + 658 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 646, + 421, + 658 + ], + "spans": [ + { + "bbox": [ + 302, + 646, + 421, + 658 + ], + "type": "text", + "content": "2.2 Problem Statement" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 665, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 665, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 665, + 525, + 772 + ], + "type": "text", + "content": "Many critical code tasks, e.g., defect prediction and code clone detection, are classification problems. In this paper, we therefore focus on adversarial attacks on code classification tasks. 
Considering a code classification task, we use " + }, + { + "bbox": [ + 302, + 665, + 525, + 772 + ], + "type": "inline_equation", + "content": "f(x; \\theta) \\to y: R^{l \\times d} \\to \\mathbb{C} = \\{i | 0 \\leq i \\leq n\\}" + }, + { + "bbox": [ + 302, + 665, + 525, + 772 + ], + "type": "text", + "content": " to denote the victim model that maps a code token sequence " + }, + { + "bbox": [ + 302, + 665, + 525, + 772 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 302, + 665, + 525, + 772 + ], + "type": "text", + "content": " to a label " + }, + { + "bbox": [ + 302, + 665, + 525, + 772 + ], + "type": "inline_equation", + "content": "y" + }, + { + "bbox": [ + 302, + 665, + 525, + 772 + ], + "type": "text", + "content": " from a label set " + }, + { + "bbox": [ + 302, + 665, + 525, + 772 + ], + "type": "inline_equation", + "content": "\\mathbb{C}" + }, + { + "bbox": [ + 302, + 665, + 525, + 772 + ], + "type": "text", + "content": " with size " + }, + { + "bbox": [ + 302, + 665, + 525, + 772 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 302, + 665, + 525, + 772 + ], + "type": "text", + "content": ", where" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "text", + "content": "9707" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "inline_equation", + "content": "l" + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "text", + "content": " is the sequence length and " + }, + { + "bbox": [ + 67, + 71, + 
293, + 328 + ], + "type": "inline_equation", + "content": "d" + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "text", + "content": " is the token vector dimension, and " + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "text", + "content": " is one integer. By querying the dense embedding dictionary " + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "inline_equation", + "content": "\\pmb{E}^{|\\mathbb{V}|\\times d}" + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "text", + "content": ", a code token sequence " + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "inline_equation", + "content": "x = (s_0,\\dots,s_j,\\dots,s_l)" + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "text", + "content": " is vectorized into " + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "inline_equation", + "content": "\\pmb{R}^{l\\times d}" + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "text", + "content": ". 
Adversarial attacks for code models create an adversarial example " + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "inline_equation", + "content": "x^{\\prime}" + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "text", + "content": " by modifying some vulnerable tokens of " + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "text", + "content": " with a limited maximum perturbation " + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "inline_equation", + "content": "\\epsilon" + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "text", + "content": " to change the correct label " + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "inline_equation", + "content": "y" + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "text", + "content": " to a wrong label " + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "inline_equation", + "content": "y^\\prime" + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "text", + "content": ". 
Put simply, we obtain a perturbed " + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "inline_equation", + "content": "x^{\\prime}" + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "text", + "content": " by modifying some tokens in " + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "inline_equation", + "content": "(s_0,\\dots,s_j,\\dots,s_l)" + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "text", + "content": " such that " + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "inline_equation", + "content": "f(x^{\\prime};\\theta)\\neq f(x;\\theta)" + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "inline_equation", + "content": "x^{\\prime} = x + \\sigma" + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "inline_equation", + "content": "x^{\\prime}" + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "text", + "content": " has to have the same behavior as " + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "text", + "content": ", + represents the perturbation execution, " + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "inline_equation", + "content": "\\sigma" + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "text", + "content": " is the perturbation code transformation for " + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "inline_equation", + "content": "(s_0,\\dots,s_j,\\dots,s_l)" + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "inline_equation", + "content": "\\sigma \\leq \\epsilon" + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "text", 
"content": ". We target the more practical attack scenario, the black-box attack, which requires less information. We assume we cannot access the model parameters and can only utilize the final output of model " + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "inline_equation", + "content": "f(x;\\theta)" + }, + { + "bbox": [ + 67, + 71, + 293, + 328 + ], + "type": "text", + "content": " to conduct the attack." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 339, + 157, + 354 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 339, + 157, + 354 + ], + "spans": [ + { + "bbox": [ + 67, + 339, + 157, + 354 + ], + "type": "text", + "content": "3 Methodology" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 362, + 147, + 374 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 362, + 147, + 374 + ], + "spans": [ + { + "bbox": [ + 67, + 362, + 147, + 374 + ], + "type": "text", + "content": "3.1 Motivation" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 380, + 291, + 568 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 380, + 291, + 568 + ], + "spans": [ + { + "bbox": [ + 67, + 380, + 291, + 568 + ], + "type": "text", + "content": "As mentioned in the introduction, the current methods face three limitations: 1) there is a limited number of substitute variables; 2) there is a high verification cost associated with substitute variables; and 3) the generated adversarial samples often exhibit large perturbations. Among these limitations, the second is the most significant, as it directly affects both the first and the third. Due to the high cost involved, it becomes challenging to generate diverse adversarial examples within a reasonable budget. Additionally, attackers tend to introduce large perturbations without employing any perturbation constraints in order to maximize attack success." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 570, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 570, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 570, + 291, + 772 + ], + "type": "text", + "content": "To address these limitations, the first question arises: \"Could we substantially reduce the verification cost while allowing for unrestricted diversity of substitute variables and minimizing perturbations?\" To delve into the reasons behind the second limitation, we need to analyze its underlying factors. The low verification efficiency of the substitute set stems from the fact that each substitute can only be verified by constructing an adversarial sample to replace the original variable and then launching an actual attack on the victim model. This realization leads to the second question: \"Is it feasible to predict the attack effect of a substitute instead of constructing an adversarial sample to attack the victim model?\"" + } + ] + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 311, + 72, + 530, + 312 + ], + "blocks": [ + { + "bbox": [ + 311, + 72, + 530, + 312 + ], + "lines": [ + { + "bbox": [ + 311, + 72, + 530, + 312 + ], + "spans": [ + { + "bbox": [ + 311, + 72, + 530, + 312 + ], + "type": "image", + "image_path": "30b4588241ab9a49cce19898cce8d7c4104bc2ea7b84b8d6c8b49fd8dc7c39bc.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 337, + 526, + 485 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 337, + 526, + 485 + ], + "spans": [ + { + "bbox": [ + 302, + 337, + 526, + 485 + ], + "type": "text", + "content": "Given input code " + }, + { + "bbox": [ + 302, + 337, + 526, + 485 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 302, + 337, + 526, + 485 + ], + "type": "text", + "content": " and one of its variables " + }, + { + "bbox": [ + 302, + 337, + 526, + 485 
+ ], + "type": "inline_equation", + "content": "var" + }, + { + "bbox": [ + 302, + 337, + 526, + 485 + ], + "type": "text", + "content": ", different substitutes can be used to replace it to obtain different adversarial samples. After attacking the victim model, the probability of the label will also change. Conversely, if we want to reduce the probability of this label, the third question follows: \"How can we choose relatively better substitutes, i.e., substitutes that reduce the model confidence, from a large-scale original substitute set?\" If we can forecast a substitute's attack effect, it is possible to select good substitutes without an actual attack, which is exactly what RNNS implements." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 486, + 526, + 703 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 486, + 526, + 703 + ], + "spans": [ + { + "bbox": [ + 302, + 486, + 526, + 703 + ], + "type": "text", + "content": "The core idea of RNNS is maintaining a search seed that is updated based on the attack history. The search seed is employed to search for the next adversarial substitutes that are likely to attack successfully. Since substitutes are discrete and cannot be directly involved in calculations, we first use a pre-trained variable-name encoder denoted as " + }, + { + "bbox": [ + 302, + 486, + 526, + 703 + ], + "type": "inline_equation", + "content": "E" + }, + { + "bbox": [ + 302, + 486, + 526, + 703 + ], + "type": "text", + "content": " to map substitutes to a unified continuous representation vector space. Then, based on the representation vectors of substitutes that have participated in the attack, we predict the search seed vector " + }, + { + "bbox": [ + 302, + 486, + 526, + 703 + ], + "type": "inline_equation", + "content": "e_{seed}" + }, + { + "bbox": [ + 302, + 486, + 526, + 703 + ], + "type": "text", + "content": " for the next round of substitute selection. 
Finally, we calculate the similarity between " + }, + { + "bbox": [ + 302, + 486, + 526, + 703 + ], + "type": "inline_equation", + "content": "e_{seed}" + }, + { + "bbox": [ + 302, + 486, + 526, + 703 + ], + "type": "text", + "content": " and the representation vectors of the substitutes and then select relatively better substitutes. For specific details, please refer to Section 3.2.3." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 714, + 522, + 728 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 714, + 522, + 728 + ], + "spans": [ + { + "bbox": [ + 302, + 714, + 522, + 728 + ], + "type": "text", + "content": "3.2 Representation Nearest Neighbor Search" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 733, + 526, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 733, + 526, + 773 + ], + "spans": [ + { + "bbox": [ + 302, + 733, + 526, + 773 + ], + "type": "text", + "content": "Algorithm 1 shows the workflow of our approach. First, we collect the original substitute set from publicly available real code, following the process described" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "text", + "content": "9708" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": "in Section 3.2.1. We extract variables from the input code and sort them according to their uncertainty, referring to Section 3.2.2 (Line 3-4). We replace variables in sequence to form attack samples (Line 5). 
For a given " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "var" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": ", we first initialize the optimal substitute for this current iteration " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "sub_{cur}" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": " and the optimal substitute for the previous iteration " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "sub_{pre}" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": " to the " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "var" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": ". Then, we initialize the accumulated smooth increment of the representation vector " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "\\Delta e_{smo}" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": " to a zero vector. " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "\\Delta e_{smo}" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": " is used to record the historical representation change of the search seed " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "e_{seed}" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": ". We now commence the iterative attack process, as delineated in Line 11. 
We predict the search seed vector " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "e_{seed}" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": " with the process described in Section 3.2.3 (Line 12), and then extract topk substitutes based on " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "e_{seed}" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": " to form the candidate substitutes " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "subs_{topk}" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": " with the process described in Section 3.2.4 (Line 13). Subsequently, we replace " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "sub_{cur}" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": " in " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "x'" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": " with each substitute in " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "subs_{topk}" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": " to obtain the corresponding temporary adversarial sample " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "x'_{tmp}" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": " (Line 14-15). 
" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "x'" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": " is the current code that we are trying to attack and it is initialized with the original code " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": ". We use " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "x'_{tmp}" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": " to attack the victim model and obtain the probability " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "prob_y" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": " of the ground-truth label " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "y" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": " and predicted label " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "y'" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": " (Line 16). 
If the probability of the ground-truth label " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "y" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": " hits a new low (" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "< prob_{min}" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": "), we update " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "x'" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "sub_{pre}" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "sub_{cur}" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "prob_{min}" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": " (Line 17-22). " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "prob_{min}" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": " records the minimum probability of label " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "y" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": " during the attack process. 
If " + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "inline_equation", + "content": "x'_{tmp}" + }, + { + "bbox": [ + 67, + 71, + 292, + 559 + ], + "type": "text", + "content": " causes the victim model to predict an incorrect label, the attack is successful and we return the adversarial sample (Line 23-26); otherwise, we proceed to the next iteration until all variables have been iterated over, and return the final adversarial sample and attack result (Line 30)." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 565, + 290, + 578 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 565, + 290, + 578 + ], + "spans": [ + { + "bbox": [ + 67, + 565, + 290, + 578 + ], + "type": "text", + "content": "3.2.1 Collecting Large Original Substitute Set" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 581, + 292, + 743 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 581, + 292, + 743 + ], + "spans": [ + { + "bbox": [ + 67, + 581, + 292, + 743 + ], + "type": "text", + "content": "We have developed a tool for variable extraction that leverages the tree-sitter framework1. This tool, henceforth denoted as ExtractVar (see Line 3), operates in three distinct steps. In the first step, we extract all variables from the current dataset and then filter out duplicates. During the second step, each valid variable is tokenized, and we compute the embedding for each token using the variable-name encoder " + }, + { + "bbox": [ + 67, + 581, + 292, + 743 + ], + "type": "inline_equation", + "content": "E" + }, + { + "bbox": [ + 67, + 581, + 292, + 743 + ], + "type": "text", + "content": " that is pre-trained on CodeSearchNet2. We then apply a mean pooling operation on these token embeddings to determine the variable's embedding. 
In the third step, we retain all the chosen variables" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 302, + 71, + 527, + 100 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 527, + 100 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 527, + 100 + ], + "type": "text", + "content": "along with their associated embeddings as the initial substitute set, represented as " + }, + { + "bbox": [ + 302, + 71, + 527, + 100 + ], + "type": "inline_equation", + "content": "subs_{original}" + }, + { + "bbox": [ + 302, + 71, + 527, + 100 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 302, + 105, + 451, + 118 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 105, + 451, + 118 + ], + "spans": [ + { + "bbox": [ + 302, + 105, + 451, + 118 + ], + "type": "text", + "content": "3.2.2 Computing Uncertainty" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 302, + 121, + 527, + 269 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 121, + 527, + 269 + ], + "spans": [ + { + "bbox": [ + 302, + 121, + 527, + 269 + ], + "type": "text", + "content": "Given a specific code " + }, + { + "bbox": [ + 302, + 121, + 527, + 269 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 302, + 121, + 527, + 269 + ], + "type": "text", + "content": ", we replace each instance of " + }, + { + "bbox": [ + 302, + 121, + 527, + 269 + ], + "type": "inline_equation", + "content": "var \\in x" + }, + { + "bbox": [ + 302, + 121, + 527, + 269 + ], + "type": "text", + "content": " with a set of predefined fixed variables " + }, + { + "bbox": [ + 302, + 121, + 527, + 269 + ], + "type": "inline_equation", + "content": "VarArray" + }, + { + "bbox": [ + 302, + 121, + 527, + 269 + ], + "type": "text", + "content": ", resulting in a set of mutated codes denoted as " + }, + { + "bbox": [ + 302, + 121, + 527, + 269 + ], + "type": "inline_equation", + "content": 
"X_{var}^{mutated}" + }, + { + "bbox": [ + 302, + 121, + 527, + 269 + ], + "type": "text", + "content": ". These mutated codes are subsequently utilized to query the victim model, allowing us to obtain the probability distribution for each class. A greater variance in the distribution signifies increased uncertainty for " + }, + { + "bbox": [ + 302, + 121, + 527, + 269 + ], + "type": "inline_equation", + "content": "var" + }, + { + "bbox": [ + 302, + 121, + 527, + 269 + ], + "type": "text", + "content": ", suggesting that " + }, + { + "bbox": [ + 302, + 121, + 527, + 269 + ], + "type": "inline_equation", + "content": "var" + }, + { + "bbox": [ + 302, + 121, + 527, + 269 + ], + "type": "text", + "content": " should be prioritized for replacement. The uncertainty associated with " + }, + { + "bbox": [ + 302, + 121, + 527, + 269 + ], + "type": "inline_equation", + "content": "var" + }, + { + "bbox": [ + 302, + 121, + 527, + 269 + ], + "type": "text", + "content": " is defined as follows:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 318, + 275, + 510, + 312 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 318, + 275, + 510, + 312 + ], + "spans": [ + { + "bbox": [ + 318, + 275, + 510, + 312 + ], + "type": "interline_equation", + "content": "uncertainty_{var} = \\frac{1}{C} \\sum_{i=1}^{C} variance(P_{var}^{i})", + "image_path": "80509e982502866a0abd70a517a56600b85fd69b75fc027d9096e1da1b6ba0e6.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 320, + 527, + 592 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 320, + 527, + 592 + ], + "spans": [ + { + "bbox": [ + 302, + 320, + 527, + 592 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 302, + 320, + 527, + 592 + ], + "type": "inline_equation", + "content": "P_{var}^{i} = \\{p_{var}^{i}(x) | \\forall x \\in X_{var}^{mutated}\\}" + }, + { + "bbox": [ + 302, + 320, + 527, + 592 + ], + "type": 
"text", + "content": ", " + }, + { + "bbox": [ + 302, + 320, + 527, + 592 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 302, + 320, + 527, + 592 + ], + "type": "text", + "content": " is the number of labels, " + }, + { + "bbox": [ + 302, + 320, + 527, + 592 + ], + "type": "inline_equation", + "content": "p_{var}^{i}(x)" + }, + { + "bbox": [ + 302, + 320, + 527, + 592 + ], + "type": "text", + "content": " is the model probability for label " + }, + { + "bbox": [ + 302, + 320, + 527, + 592 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 302, + 320, + 527, + 592 + ], + "type": "text", + "content": " given the mutated code " + }, + { + "bbox": [ + 302, + 320, + 527, + 592 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 302, + 320, + 527, + 592 + ], + "type": "text", + "content": ", and variance(.) denotes the standard variance function. A larger and more diverse " + }, + { + "bbox": [ + 302, + 320, + 527, + 592 + ], + "type": "inline_equation", + "content": "X_{var}^{mutated}" + }, + { + "bbox": [ + 302, + 320, + 527, + 592 + ], + "type": "text", + "content": " ensures a closer approximation of uncertainty to the true value. It is important to note, however, that the length of the mutated variable names must not be excessively large, as this would result in all probability changes converging to a single point. This is because samples subjected to large changes deviate significantly from the original, leading to a substantial decrease in the model's confidence. Subsequently, we arrange the variables in descending order based on their uncertainties. The greater the uncertainty of a variable, the more valuable it is for attack. This process is denoted as RankVarsWithUncertainty at line 4. In our implementation, the size of this variable array VarArray is 16, and the variable length ranges from 1 to 5." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 599, + 448, + 612 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 599, + 448, + 612 + ], + "spans": [ + { + "bbox": [ + 302, + 599, + 448, + 612 + ], + "type": "text", + "content": "3.2.3 Predicting Search Seed" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 615, + 527, + 750 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 615, + 527, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 615, + 527, + 750 + ], + "type": "text", + "content": "To filter out superior substitutes from the substantial " + }, + { + "bbox": [ + 302, + 615, + 527, + 750 + ], + "type": "inline_equation", + "content": "subs_{original}" + }, + { + "bbox": [ + 302, + 615, + 527, + 750 + ], + "type": "text", + "content": ", it becomes necessary to predict the search seed within the substitute representation vector space. Given the optimal substitute " + }, + { + "bbox": [ + 302, + 615, + 527, + 750 + ], + "type": "inline_equation", + "content": "sub_{cur}" + }, + { + "bbox": [ + 302, + 615, + 527, + 750 + ], + "type": "text", + "content": " of the current round, the optimal substitute " + }, + { + "bbox": [ + 302, + 615, + 527, + 750 + ], + "type": "inline_equation", + "content": "sub_{pre}" + }, + { + "bbox": [ + 302, + 615, + 527, + 750 + ], + "type": "text", + "content": " from the previous round, and the accumulated smooth increment of the representation vector, denoted as " + }, + { + "bbox": [ + 302, + 615, + 527, + 750 + ], + "type": "inline_equation", + "content": "\\Delta e_{smo}" + }, + { + "bbox": [ + 302, + 615, + 527, + 750 + ], + "type": "text", + "content": ", from all preceding rounds of iteration, we initially compute the increment of the representation vector in the current round, " + }, + { + "bbox": [ + 302, + 615, + 527, + 750 + ], + "type": "inline_equation", + "content": "\\Delta e" + }, + { + "bbox": [ + 302, + 615, + 527, + 750 + ], + "type": 
"text", + "content": ":" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 345, + 759, + 483, + 774 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 345, + 759, + 483, + 774 + ], + "spans": [ + { + "bbox": [ + 345, + 759, + 483, + 774 + ], + "type": "interline_equation", + "content": "\\Delta \\boldsymbol{e} = E(sub_{cur}) - E(sub_{pre})", + "image_path": "a0566f8b16b9b33218b2817b048069ae3c3aa99d3eb2112f1fc7cd0f50bcac02.jpg" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 80, + 749, + 232, + 761 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 749, + 232, + 761 + ], + "spans": [ + { + "bbox": [ + 80, + 749, + 232, + 761 + ], + "type": "text", + "content": "1https://tree-sitter.github.io/tree-sitter" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 81, + 761, + 251, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 761, + 251, + 772 + ], + "spans": [ + { + "bbox": [ + 81, + 761, + 251, + 772 + ], + "type": "text", + "content": "2https://huggingface.co/datasets/code_search_net" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "text", + "content": "9709" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 70, + 69, + 291, + 121 + ], + "blocks": [ + { + "bbox": [ + 70, + 69, + 291, + 121 + ], + "lines": [ + { + "bbox": [ + 70, + 69, + 291, + 121 + ], + "spans": [ + { + "bbox": [ + 70, + 69, + 291, + 121 + ], + "type": "table", + "html": "
<table><tr><td>Task</td><td>Train / Val / Test</td><td>CodeBERT</td><td>GraphCodeBERT</td><td>CodeT5</td></tr>
<tr><td>Defect</td><td>21,854 / 2,732 / 2,732</td><td>63.76</td><td>63.65</td><td>67.02</td></tr>
<tr><td>Clone</td><td>90,102 / 4,000 / 4,000</td><td>96.97</td><td>97.36</td><td>97.84</td></tr>
<tr><td>Authorship</td><td>528 / - / 132</td><td>82.57</td><td>77.27</td><td>88.63</td></tr>
<tr><td>C1000</td><td>320,000 / 80,000 / 100,000</td><td>82.53</td><td>83.79</td><td>84.46</td></tr>
<tr><td>Python800</td><td>153,600 / 38,400 / 48,000</td><td>96.39</td><td>96.29</td><td>96.79</td></tr>
<tr><td>Java250</td><td>48,000 / 11,909 / 15,000</td><td>96.91</td><td>97.27</td><td>97.72</td></tr>
</table>
", + "image_path": "17b33fb314e463991cd8f0c9d307ef116209e0b26bc4871ae46163d3010a654f.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 134, + 290, + 158 + ], + "lines": [ + { + "bbox": [ + 67, + 134, + 290, + 158 + ], + "spans": [ + { + "bbox": [ + 67, + 134, + 290, + 158 + ], + "type": "text", + "content": "Table 1: Datasets and Victim Model Performance (Accuracy, %)." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 67, + 180, + 290, + 248 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 180, + 290, + 248 + ], + "spans": [ + { + "bbox": [ + 67, + 180, + 290, + 248 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 67, + 180, + 290, + 248 + ], + "type": "inline_equation", + "content": "E" + }, + { + "bbox": [ + 67, + 180, + 290, + 248 + ], + "type": "text", + "content": " is variable name encoder, trained on CodeSearchNet by masked language modelling independently so that RNNS is independent of victim downstream-task models. 
Then we update the " + }, + { + "bbox": [ + 67, + 180, + 290, + 248 + ], + "type": "inline_equation", + "content": "\\Delta e_{smo}" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 260, + 254, + 274 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 260, + 254, + 274 + ], + "spans": [ + { + "bbox": [ + 104, + 260, + 254, + 274 + ], + "type": "interline_equation", + "content": "\\Delta \\mathbf{e}_{smo} = (1 - \\alpha) \\Delta \\mathbf{e}_{smo} + \\alpha \\Delta \\mathbf{e}", + "image_path": "6efbd32ab0e6d8dbe99de9a8ec0d3252767caa96841ea84765e345d8e88c389a.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 285, + 290, + 312 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 285, + 290, + 312 + ], + "spans": [ + { + "bbox": [ + 67, + 285, + 290, + 312 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 67, + 285, + 290, + 312 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 67, + 285, + 290, + 312 + ], + "type": "text", + "content": " is a smoothing rate limited to the range 0 to 1. Finally, we predict the search seed " + }, + { + "bbox": [ + 67, + 285, + 290, + 312 + ], + "type": "inline_equation", + "content": "e_{\\text{seed}}" + }, + { + "bbox": [ + 67, + 285, + 290, + 312 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 113, + 324, + 245, + 338 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 113, + 324, + 245, + 338 + ], + "spans": [ + { + "bbox": [ + 113, + 324, + 245, + 338 + ], + "type": "interline_equation", + "content": "\\boldsymbol{e}_{seed} = E(sub_{cur}) + \\Delta \\boldsymbol{e}_{smo}", + "image_path": "0a9935b3b75a2a4682ae1555fcc43b8a6131215e47aab4bfd9e59177a0541159.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 350, + 290, + 363 + ], + "type": "text", + 
"angle": 0, + "lines": [ + { + "bbox": [ + 67, + 350, + 290, + 363 + ], + "spans": [ + { + "bbox": [ + 67, + 350, + 290, + 363 + ], + "type": "text", + "content": "This process is denoted as PredictSeed at line 12." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 372, + 238, + 385 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 372, + 238, + 385 + ], + "spans": [ + { + "bbox": [ + 67, + 372, + 238, + 385 + ], + "type": "text", + "content": "3.2.4 Searching Top-K Substitutes" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "spans": [ + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "text", + "content": "Initially, we select substitutes from " + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "inline_equation", + "content": "subs_{original}" + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "text", + "content": " that comply with two constraints: 1) " + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "inline_equation", + "content": "1 - sim(E(sub), E(var)) < \\epsilon" + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "text", + "content": " and 2) " + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "inline_equation", + "content": "|len(sub) - len(var)| < \\delta" + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "inline_equation", + "content": "var" + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "text", + "content": " refers to the original variable in the input code that is to be replaced, " + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "inline_equation", + "content": "sim(.)" + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "text", + "content": " is the similarity calculation 
function. " + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "inline_equation", + "content": "E(.)" + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "text", + "content": " is the variable name encoder, and " + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "inline_equation", + "content": "len(.)" + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "text", + "content": " is used to calculate the length of the variable name. Then, we calculate the similarity between the search seed " + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "inline_equation", + "content": "e_{seed}" + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "text", + "content": " and the substitutes that are filtered by the two constraints and select the " + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "text", + "content": " most similar substitutes to form " + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "inline_equation", + "content": "subs_{topk}" + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "text", + "content": ". This process is denoted as SearchTopkSub at line 13. 
In our experiment, " + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "inline_equation", + "content": "\\epsilon = 0.15" + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "inline_equation", + "content": "\\delta = 4" + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "inline_equation", + "content": "k = 60" + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "inline_equation", + "content": "sim(.)" + }, + { + "bbox": [ + 67, + 389, + 291, + 592 + ], + "type": "text", + "content": " is cosine similarity." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 67, + 602, + 191, + 617 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 602, + 191, + 617 + ], + "spans": [ + { + "bbox": [ + 67, + 602, + 191, + 617 + ], + "type": "text", + "content": "4 Experimental Setup" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 624, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 624, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 624, + 291, + 772 + ], + "type": "text", + "content": "Dataset and Model. To study the effectiveness and efficiency of RNNS, we conduct experiments on three popular programming languages (C, Python, and Java). For the datasets, we employed six widely studied open-source datasets that cover four important code tasks. Specifically, BigCloneBench (Wang et al., 2020) is one code clone detection dataset named Clone. Devign (Zhou et al., 2019) is a dataset used for vulnerability detection, named Defect. For authorship prediction, we use the dataset provided by (Alsulami et al., 2017)." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 71, + 525, + 191 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 525, + 191 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 525, + 191 + ], + "type": "text", + "content": "Besides, we utilize three problem-solving classification tasks, Java250, Python800, and C1000, provided by ProjectCodeNet (Puri et al., 2021). For all the datasets (except for authorship prediction which does not have enough data samples), we follow the original papers to split the data into the training set, validation set, and test set. Authorship prediction only has two split parts, training data and test data." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 212, + 525, + 307 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 212, + 525, + 307 + ], + "spans": [ + { + "bbox": [ + 302, + 212, + 525, + 307 + ], + "type": "text", + "content": "For the code models, we follow the previous work (Yang et al., 2022) and investigate two pretrained models CodeBERT (Feng et al., 2020), and GraphCodeBERT (Guo et al., 2020). Besides, we add one more powerful model CodeT5 (Wang et al., 2021) in our study. Table 1 summarizes the details of our employed datasets and fine-tuned models." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 327, + 525, + 489 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 327, + 525, + 489 + ], + "spans": [ + { + "bbox": [ + 302, + 327, + 525, + 489 + ], + "type": "text", + "content": "Evaluation Metric. To evaluate the effectiveness of adversarial attack methods, we employ the commonly used attack success rate (ASR) (Yang et al., 2022) as the measurement. To evaluate the efficiency of the attack methods, we use query times (QT) to check the average number of querying the victim model for one input code. 
Finally, we use the change of replaced-variable length and the number of replaced variables to study the quality/perturbation of adversarial examples. A smaller score means the attack method can generate adversarial examples with less perturbation injection." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 509, + 525, + 698 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 509, + 525, + 698 + ], + "spans": [ + { + "bbox": [ + 302, + 509, + 525, + 698 + ], + "type": "text", + "content": "Baseline. We compare RNNS with two black-box attack baselines, MHM (Zhang et al., 2020) and NaturalAttack (ALERT) (Yang et al., 2022). MHM is a sampling search-based black-box attack that generates the substitutes from the vocabulary based on lexical rules for identifiers. MHM employs synthesized tokens as the candidates of substitutes, which could introduce meaningless variable names. ALERT is a recently proposed attack method that combines greedy attack and genetic algorithm to find the substitutes. We also use two textual attack algorithms PSO (Zang et al., 2020) and LSH (Maheshwary et al., 2021) as minor baselines, since they are not designed for code models." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 719, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 719, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 719, + 525, + 772 + ], + "type": "text", + "content": "Implementation. We implement our approach in PyTorch and run all experiments on 32G-v100 GPUs. We reuse the source code from the baselines. We make our implementation publicly available." 
+ } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "text", + "content": "9710" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 160, + 68, + 435, + 271 + ], + "blocks": [ + { + "bbox": [ + 160, + 68, + 435, + 271 + ], + "lines": [ + { + "bbox": [ + 160, + 68, + 435, + 271 + ], + "spans": [ + { + "bbox": [ + 160, + 68, + 435, + 271 + ], + "type": "table", + "html": "
<table><tr><td>Task+Model</td><td colspan=2>ALERT</td><td colspan=2>MHM</td><td colspan=2>RNNS</td></tr>
<tr><td></td><td>ASR</td><td>QT</td><td>ASR</td><td>QT</td><td>ASR</td><td>QT</td></tr>
<tr><td>Clone+CodeBert</td><td>28.67</td><td>2155.39</td><td>39.66</td><td>972.15</td><td>46.50</td><td>666.48</td></tr>
<tr><td>Clone+GraphCodeBert</td><td>10.40</td><td>1466.68</td><td>9.58</td><td>490.99</td><td>41.28</td><td>1122.01</td></tr>
<tr><td>Clone+CodeT5</td><td>29.20</td><td>2359.70</td><td>38.79</td><td>1069.06</td><td>39.61</td><td>895.79</td></tr>
<tr><td>Defect+CodeBert</td><td>52.29</td><td>1079.68</td><td>50.51</td><td>862.18</td><td>69.18</td><td>588.35</td></tr>
<tr><td>Defect+GraphCodeBert</td><td>74.29</td><td>621.77</td><td>75.19</td><td>539.93</td><td>81.63</td><td>404.73</td></tr>
<tr><td>Defect+CodeT5</td><td>76.66</td><td>721.02</td><td>86.51</td><td>344.08</td><td>89.45</td><td>344.29</td></tr>
<tr><td>Authorship+CodeBert</td><td>34.98</td><td>682.57</td><td>64.70</td><td>775.11</td><td>73.39</td><td>1029.59</td></tr>
<tr><td>Authorship+GraphCodeBert</td><td>58.82</td><td>1227.36</td><td>75.49</td><td>632.10</td><td>80.39</td><td>696.64</td></tr>
<tr><td>Authorship+CodeT5</td><td>64.95</td><td>1078.40</td><td>66.97</td><td>715.89</td><td>71.79</td><td>970.44</td></tr>
<tr><td>Java250+CodeBert</td><td>50.50</td><td>958.96</td><td>74.03</td><td>961.60</td><td>75.12</td><td>815.91</td></tr>
<tr><td>Java250+GraphCodeBert</td><td>46.74</td><td>1026.15</td><td>46.05</td><td>946.52</td><td>72.30</td><td>853.74</td></tr>
<tr><td>Java250+CodeT5</td><td>52.04</td><td>1189.42</td><td>30.59</td><td>1107.95</td><td>63.80</td><td>1049.46</td></tr>
<tr><td>Python800+CodeBert</td><td>58.30</td><td>513.63</td><td>56.67</td><td>919.37</td><td>77.88</td><td>514.19</td></tr>
<tr><td>Python800+GraphCodeBert</td><td>51.87</td><td>577.70</td><td>54.15</td><td>917.92</td><td>71.42</td><td>730.14</td></tr>
<tr><td>Python800+CodeT5</td><td>52.84</td><td>777.20</td><td>36.95</td><td>1127.44</td><td>69.07</td><td>662.28</td></tr>
<tr><td>C1000+CodeBert</td><td>53.50</td><td>525.43</td><td>59.75</td><td>340.88</td><td>72.96</td><td>537.76</td></tr>
<tr><td>C1000+GraphCodeBert</td><td>52.68</td><td>566.18</td><td>45.93</td><td>837.09</td><td>72.23</td><td>634.27</td></tr>
<tr><td>C1000+CodeT5</td><td>47.86</td><td>843.33</td><td>36.45</td><td>668.15</td><td>59.00</td><td>697.06</td></tr>
<tr><td>Count</td><td>0/18</td><td>4/18</td><td>0/18</td><td>6/18</td><td>18/18</td><td>8/18</td></tr>
</table>
", + "image_path": "0e919ba263daf8956316645eed9a72f94dc933a90beb1976bd24e049559a6608.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 81, + 279, + 511, + 291 + ], + "lines": [ + { + "bbox": [ + 81, + 279, + 511, + 291 + ], + "spans": [ + { + "bbox": [ + 81, + 279, + 511, + 291 + ], + "type": "text", + "content": "Table 2: Comparison results with MHM, and ALERT, ASR %. Count: the number of best results achieved." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 67, + 301, + 173, + 314 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 301, + 173, + 314 + ], + "spans": [ + { + "bbox": [ + 67, + 301, + 173, + 314 + ], + "type": "text", + "content": "5 Results Analysis" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 324, + 259, + 338 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 324, + 259, + 338 + ], + "spans": [ + { + "bbox": [ + 67, + 324, + 259, + 338 + ], + "type": "text", + "content": "5.1 Attack Effectiveness and Efficiency" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 344, + 290, + 655 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 344, + 290, + 655 + ], + "spans": [ + { + "bbox": [ + 67, + 344, + 290, + 655 + ], + "type": "text", + "content": "We compare RNNS with two methods MHM (Zhang et al., 2020) and NaturalAttack (ALERT) (Yang et al., 2022) on six datasets and 18 victim models that have been fine-tuned for the downstream tasks. Table 2 shows the comparison results where the last row Count indicates how many times this method achieves the best results. We can see that RNNS achieves the best performance for 18/18 times in terms of ASR, and the lowest cost for 8/18 times in terms of QT in Table 2. Both of the indicators are better than the baselines. The two baselines have zero best ASR for all victim models and all datasets. 
ALERT and MHM achieve the lowest QT in only 4 and 6 of the 18 cases, respectively. We conclude that RNNS outperforms ALERT and MHM in effectiveness in every case and in efficiency overall. Notably, MHM and ALERT largely fail to attack GraphCodeBERT on the BigClone dataset, achieving only " + }, + { + "bbox": [ + 67, + 344, + 290, + 655 + ], + "type": "inline_equation", + "content": "9.58\\%" + }, + { + "bbox": [ + 67, + 344, + 290, + 655 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 67, + 344, + 290, + 655 + ], + "type": "inline_equation", + "content": "10.4\\%" + }, + { + "bbox": [ + 67, + 344, + 290, + 655 + ], + "type": "text", + "content": " ASR, respectively, while RNNS achieves more than " + }, + { + "bbox": [ + 67, + 344, + 290, + 655 + ], + "type": "inline_equation", + "content": "40\\%" + }, + { + "bbox": [ + 67, + 344, + 290, + 655 + ], + "type": "text", + "content": " ASR. RNNS achieves almost twice the ASR of MHM on Java250+CodeT5 and Python800+CodeT5." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 656, + 291, + 751 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 656, + 291, + 751 + ], + "spans": [ + { + "bbox": [ + 67, + 656, + 291, + 751 + ], + "type": "text", + "content": "It should be noted that high ASR is not due to large QT. 
As shown in Table 2, the three groups of experiments with the most QTs are Clone+GraphCodeBert, Java250+CodeT5, and Authorship+CodeBert, with ASRs of " + }, + { + "bbox": [ + 67, + 656, + 291, + 751 + ], + "type": "inline_equation", + "content": "41.28\\%" + }, + { + "bbox": [ + 67, + 656, + 291, + 751 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 67, + 656, + 291, + 751 + ], + "type": "inline_equation", + "content": "63.80\\%" + }, + { + "bbox": [ + 67, + 656, + 291, + 751 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 67, + 656, + 291, + 751 + ], + "type": "inline_equation", + "content": "73.39\\%" + }, + { + "bbox": [ + 67, + 656, + 291, + 751 + ], + "type": "text", + "content": ", respectively, which are not the highest. On the contrary, Defect+CodeT5 has the highest" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 301, + 525, + 341 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 301, + 525, + 341 + ], + "spans": [ + { + "bbox": [ + 302, + 301, + 525, + 341 + ], + "type": "text", + "content": "ASR of " + }, + { + "bbox": [ + 302, + 301, + 525, + 341 + ], + "type": "inline_equation", + "content": "89.45\\%" + }, + { + "bbox": [ + 302, + 301, + 525, + 341 + ], + "type": "text", + "content": ", but QT is the smallest. Therefore, there is no absolute causal relationship between QT and ASR." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 358, + 505, + 371 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 358, + 505, + 371 + ], + "spans": [ + { + "bbox": [ + 302, + 358, + 505, + 371 + ], + "type": "text", + "content": "5.2 Perturbation of Adversarial Example" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 301, + 379, + 525, + 527 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 379, + 525, + 527 + ], + "spans": [ + { + "bbox": [ + 301, + 379, + 525, + 527 + ], + "type": "text", + "content": "We conduct a study about the quality of the adversarial examples to check whether RNNS can generate normal-looking code, e.g., by avoiding naively increasing the variable name length. To do so, we first count the average length of the original and adversarial variables, as shown in Table 3. We also compute the mean and variance of their difference. Besides, we compute the average number of replaced variables per successful attack, as shown in Table 4. Lower values mean the inputs are modified less and the adversarial examples are of higher quality." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 529, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 529, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 529, + 525, + 772 + ], + "type": "text", + "content": "In Table 3, the 2nd, 5th, and 8th columns are the average length for original variables (named Var Len) that are replaced. The 3rd, 6th, and 9th columns are the average lengths for adversarial variables (named Adv Var Len). The 4th, 7th, and 10th columns are the average and variance (mean " + }, + { + "bbox": [ + 302, + 529, + 525, + 772 + ], + "type": "inline_equation", + "content": "\\pm" + }, + { + "bbox": [ + 302, + 529, + 525, + 772 + ], + "type": "text", + "content": " variance) of the absolute length difference between original variables and adversarial variables (named Difference). 
Comparing the 2nd and 5th columns, we observe that MHM prefers to replace long variables, while RNNS tends to replace short ones. Meanwhile, the change in variable length introduced by RNNS is smaller than that of MHM: MHM introduces an average length difference of 3.39-6.82, while RNNS introduces only 2.02-2.54. MHM also has much higher variances than RNNS in the length change. ALERT uses shorter adversarial variable names than RNNS" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 80, + 760, + 266, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 760, + 266, + 772 + ], + "spans": [ + { + "bbox": [ + 80, + 760, + 266, + 772 + ], + "type": "text", + "content": "3https://github.com/18682922316/RNNS-for-code-attack" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 286, + 780, + 308, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 308, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 308, + 791 + ], + "type": "text", + "content": "9711" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 88, + 68, + 505, + 248 + ], + "blocks": [ + { + "bbox": [ + 88, + 68, + 505, + 248 + ], + "lines": [ + { + "bbox": [ + 88, + 68, + 505, + 248 + ], + "spans": [ + { + "bbox": [ + 88, + 68, + 505, + 248 + ], + "type": "table", + "html": "
Task+ModelRNNSMHMALERT
Var LenAdv Var LenDifferenceVar LenAdv Var LenDifferenceVar LenAdv Var LenDifference
Clone+CodeBert6.126.792.35 ± 4.506.4710.66.34 ± 10.985.916.211.32 ± 2.02
Clone+GraphCodeBert6.326.972.54 ± 6.436.5810.416.82 ± 21.675.505.931.45 ± 2.49
Clone+CodeT56.456.692.51 ± 8.306.4610.466.17 ± 25.786.256.611.32 ± 2.72
Defect+CodeBert4.645.442.08 ± 2.494.449.596.57 ± 28.784.855.061.36 ± 1.93
Defect+GraphCodeBert4.085.342.13 ± 1.834.379.736.48 ± 26.514.475.221.33 ± 1.83
Defect+CodeT53.955.172.03 ± 1.934.339.816.59 ± 29.984.365.011.27 ± 1.57
Authorship+CodeBert3.815.182.28 ± 1.563.977.945.45 ± 16.724.425.351.40 ± 2.25
Authorship+GraphCodeBert3.695.232.36 ± 1.714.397.645.24 ± 15.383.744.461.22 ± 1.82
Authorship+CodeT53.955.182.03 ± 2.663.957.985.59 ± 20.943.814.501.22 ± 1.62
Java250+CodeBert2.354.222.11 ± 1.023.216.504.34 ± 15.203.223.650.94 ± 1.63
Java250+GraphCodeBert2.484.312.13 ± 1.073.136.594.42 ± 14.843.053.500.98 ± 1.54
Java250+CodeT52.764.472.10 ± 1.173.206.544.33 ± 14.603.167.314.41 ± 18.73
Python800+CodeBert1.503.542.21 ± 1.021.975.113.64 ± 9.061.782.270.64 ± 1.34
Python800+GraphCodeBert1.883.902.18 ± 0.781.996.014.46 ± 16.521.802.330.76 ± 1.30
Python800+CodeT51.653.592.13 ± 0.951.974.953.49 ± 8.181.885.844.10 ± 12.64
C1000+CodeBert1.583.442.08 ± 0.882.415.053.65 ± 12.022.132.520.67 ± 1.17
C1000+GraphCodeBert1.603.592.10 ± 0.852.395.353.90 ± 12.982.182.670.66 ± 1.23
C1000+CodeT51.383.332.02 ± 0.852.364.823.39 ± 10.982.106.564.74 ± 13.24
", + "image_path": "972dbb7978d3bef7c97b734d46e3c3771bbc4db71d27ef228eb675815d0a1b11.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 161, + 256, + 431, + 268 + ], + "lines": [ + { + "bbox": [ + 161, + 256, + 431, + 268 + ], + "spans": [ + { + "bbox": [ + 161, + 256, + 431, + 268 + ], + "type": "text", + "content": "Table 3: Replaced-variable length comparison, mean " + }, + { + "bbox": [ + 161, + 256, + 431, + 268 + ], + "type": "inline_equation", + "content": "\\pm" + }, + { + "bbox": [ + 161, + 256, + 431, + 268 + ], + "type": "text", + "content": " variance." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 67, + 279, + 290, + 318 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 279, + 290, + 318 + ], + "spans": [ + { + "bbox": [ + 67, + 279, + 290, + 318 + ], + "type": "text", + "content": "with less change because it uses the pre-trained model to generate the replacements that are close to the replaced variables." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 320, + 291, + 454 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 320, + 291, + 454 + ], + "spans": [ + { + "bbox": [ + 67, + 320, + 291, + 454 + ], + "type": "text", + "content": "Table 4 statistically shows the number of replaced variables. It can be seen that RNNS replaces around an average of 3.6 variables with a smaller variance of around (3.4-4.6) while MHM needs to modify about an average of 5.4 variables with a larger variance " + }, + { + "bbox": [ + 67, + 320, + 291, + 454 + ], + "type": "inline_equation", + "content": "(\\geq 11.14)" + }, + { + "bbox": [ + 67, + 320, + 291, + 454 + ], + "type": "text", + "content": ". ALERT also replaces more variables to attack models than RNNS and MHM. RNNS introduces less or equal perturbation than the baselines in terms of length change and change number." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 456, + 291, + 550 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 456, + 291, + 550 + ], + "spans": [ + { + "bbox": [ + 67, + 456, + 291, + 550 + ], + "type": "text", + "content": "Figure 2 shows one example of RNNS, MHM, and ALERT attack successfully from the Java250 dataset. The changes are highlighted by shadow markers. RNNS only renames one variable " + }, + { + "bbox": [ + 67, + 456, + 291, + 550 + ], + "type": "inline_equation", + "content": "\\mathbf{b}" + }, + { + "bbox": [ + 67, + 456, + 291, + 550 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 67, + 456, + 291, + 550 + ], + "type": "inline_equation", + "content": "\\mathbf{h}" + }, + { + "bbox": [ + 67, + 456, + 291, + 550 + ], + "type": "text", + "content": ", ALERT renames two variables, while MHM almost renames all variables and also prefers longer names." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 564, + 167, + 577 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 564, + 167, + 577 + ], + "spans": [ + { + "bbox": [ + 67, + 564, + 167, + 577 + ], + "type": "text", + "content": "5.3 Ablation Study" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 584, + 292, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 584, + 292, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 584, + 292, + 772 + ], + "type": "text", + "content": "We remove the two search constraints in Section 3.2.4, denoted this variant of RNNS as RNNS-Unlimited. Table 5 shows the comparing results between RNNS-Unlimited and RNNS. RNNS-Unlimited gets the first place for all the tasks in terms of ASR. 
After removing the constraints, ASR improves by at most " + }, + { + "bbox": [ + 67, + 584, + 292, + 772 + ], + "type": "inline_equation", + "content": "8.35\\%" + }, + { + "bbox": [ + 67, + 584, + 292, + 772 + ], + "type": "text", + "content": " and at least about " + }, + { + "bbox": [ + 67, + 584, + 292, + 772 + ], + "type": "inline_equation", + "content": "2\\%" + }, + { + "bbox": [ + 67, + 584, + 292, + 772 + ], + "type": "text", + "content": ". For QT, RNNS-Unlimited loses only 3 of the 18 evaluations. The improvement of RNNS-Unlimited in ASR and QT is not surprising, because it can search for adversarial examples among non-similar real names and use very long variable names." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 279, + 516, + 292 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 279, + 516, + 292 + ], + "spans": [ + { + "bbox": [ + 302, + 279, + 516, + 292 + ], + "type": "text", + "content": "5.4 Attack Defended Model and Retraining" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 301, + 296, + 527, + 432 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 296, + 527, + 432 + ], + "spans": [ + { + "bbox": [ + 301, + 296, + 527, + 432 + ], + "type": "text", + "content": "Attack Defended Model. We employ RNNS and MHM to attack the three defended models provided by ALERT (Yang et al., 2022). These models are prepared by adversarial fine-tuning. Table 6 presents the results. RNNS outperforms MHM on two tasks, and MHM is better on one task. This experimental setting is actually unfavorable for RNNS, because ALERT (Yang et al., 2022) uses replacements from pre-trained models, which implicitly impose a semantic constraint."
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 433, + 527, + 622 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 433, + 527, + 622 + ], + "spans": [ + { + "bbox": [ + 302, + 433, + 527, + 622 + ], + "type": "text", + "content": "Retraining. We use the adversarial examples from RNNS to retrain the victim models of CodeBERT by contrastive adversarial learning. We use three 3 datasets, Defect, Authorship, and Java250. We generate the adversarial examples on the whole training dataset for them. Table 7 presents the results, all approaches achieve much lower ASR compared with the previous. RNNS adversarial examples can improve the mode robustness through contrastive adversarial retraining. If we compare Defect/Authorship+CodeBERT in Table 7 and Table 6, we can find that both retrained models via RNNS are more robust than the models from ALERT since they have much lower ASRs." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 633, + 489, + 645 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 633, + 489, + 645 + ], + "spans": [ + { + "bbox": [ + 302, + 633, + 489, + 645 + ], + "type": "text", + "content": "5.5 RNNS vs Textual Attack Methods" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 651, + 526, + 758 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 651, + 526, + 758 + ], + "spans": [ + { + "bbox": [ + 302, + 651, + 526, + 758 + ], + "type": "text", + "content": "To compare the effects of RNNS and textual attack methods, We conducted attack experiments on three datasets using the PSO (Zang et al., 2020) and LSH (Maheshwary et al., 2021). The three datasets Defect, Authorship, and Java250, represent three languages respectively, C, Python, and Java. To be fair, the search space of the PSO and LSH is the same as that of RNNS." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 314, + 760, + 524, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 760, + 524, + 772 + ], + "spans": [ + { + "bbox": [ + 314, + 760, + 524, + 772 + ], + "type": "text", + "content": "As shown in Table 8, the QT of PSO algorithm" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "text", + "content": "9712" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 77, + 68, + 516, + 152 + ], + "blocks": [ + { + "bbox": [ + 77, + 68, + 516, + 152 + ], + "lines": [ + { + "bbox": [ + 77, + 68, + 516, + 152 + ], + "spans": [ + { + "bbox": [ + 77, + 68, + 516, + 152 + ], + "type": "table", + "html": "
TaskCodeBERTGraphCodeBERTCodeT5
RNNSMHMALERTRNNSMHMALERTRNNSMHMALERT
Clone3.55 ± 4.606.72 ± 16.576.86 ± 18.854.12 ± 4.946.21 ± 15.136.95 ± 18.993.43 ± 5.005.68 ± 14.017.65 ± 25.57
Defect3.39 ± 4.962.78 ± 7.893.49 ± 3.992.67 ± 1.752.84 ± 9.504.10 ± 11.052.51 ± 1.452.16 ± 3.583.49 ± 3.99
Authorship4.24 ± 7.477.52 ± 25.826.60 ± 22.963.65 ± 3.326.67 ± 22.297.75 ± 33.124.39 ± 9.005.72 ± 13.026.06 ± 18.74
Java2503.87 ± 4.707.11 ± 21.187.82 ± 28.963.87 ± 4.256.41 ± 16.247.83 ± 25.064.71 ± 6.877.04 ± 15.298.92 ± 25.97
Python8003.06 ± 1.875.21 ± 12.284.96 ± 8.474.12 ± 3.685.00 ± 10.834.63 ± 6.763.57 ± 3.045.29 ± 13.516.18 ± 11.45
C10003.00 ± 1.864.42 ± 7.494.13 ± 5.593.37 ± 2.385.14 ± 7.304.88 ± 6.243.39 ± 2.485.20 ± 7.435.43 ± 6.99
mean3.52 ± 4.245.63 ± 15.215.65 ± 14.803.63 ± 3.395.38 ± 13.556.02 ± 16.873.67 ± 4.645.18 ± 11.146.29 ± 15.45
", + "image_path": "cbe72181236b7b03ac2707b60e04bdf414d0b44d39c985b525b4408dbd2f25d7.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 77, + 183, + 512, + 280 + ], + "blocks": [ + { + "bbox": [ + 159, + 160, + 431, + 171 + ], + "lines": [ + { + "bbox": [ + 159, + 160, + 431, + 171 + ], + "spans": [ + { + "bbox": [ + 159, + 160, + 431, + 171 + ], + "type": "text", + "content": "Table 4: Replaced-variable number comparison, mean " + }, + { + "bbox": [ + 159, + 160, + 431, + 171 + ], + "type": "inline_equation", + "content": "\\pm" + }, + { + "bbox": [ + 159, + 160, + 431, + 171 + ], + "type": "text", + "content": " variance" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 77, + 183, + 512, + 280 + ], + "lines": [ + { + "bbox": [ + 77, + 183, + 512, + 280 + ], + "spans": [ + { + "bbox": [ + 77, + 183, + 512, + 280 + ], + "type": "table", + "html": "
public static void main(String[] args) {public static void main(String[] args) {public static void main(String[] args) {public static void main(String[] args) {public static void main(String[] args) {public static void main(String[] args) {public static void main(String[] args) {public static void main(String[] args) {public static void main(String[] args) {public static void main(String[] args) {public static void main(String[] args)
Scanner obj = new Scanner(System.in);
int a = obj.nextInt();
int b = obj.nextInt();
int out = 1;
int ans = 0;
while (out < b) {}while (out < h) {}}while (tempOp < colArr) {}}}}}}}
out--;
out = out + a;
ans++;
}}System.out.println(ans);}}System.out.println(number_array);}}}}}}}
Original CodeAdversarial Code from RNNSAdversarial Code from MHMAdversarial Code from ALERTAdversarial Code from ALERTAdversarial Code from ALERTAdversarial Code from ALERTAdversarial Code from ALERTAdversarial Code from ALERTAdversarial Code from ALERTAdversarial Code from ALERTAdversarial Code from ALERTAdversarial Code from ALERT
", + "image_path": "4e8ab953e457c12148e74c5d4481000d71071bf77ba1781d8277c663e7ff9cea.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 92, + 317, + 501, + 417 + ], + "blocks": [ + { + "bbox": [ + 167, + 290, + 425, + 301 + ], + "lines": [ + { + "bbox": [ + 167, + 290, + 425, + 301 + ], + "spans": [ + { + "bbox": [ + 167, + 290, + 425, + 301 + ], + "type": "text", + "content": "Figure 2: Case study. Original vs. RNNS vs. MHM vs. ALERT" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 92, + 317, + 501, + 417 + ], + "lines": [ + { + "bbox": [ + 92, + 317, + 501, + 417 + ], + "spans": [ + { + "bbox": [ + 92, + 317, + 501, + 417 + ], + "type": "table", + "html": "
TaskCodeBERTGraphCodeBERTCodeT5
RNNS-UnlimitedRNNSRNNS-UnlimitedRNNSRNNS-UnlimitedRNNS
ASRQTASRQTASRQTASRQTASRQTASRQT
Defect72.29590.9869.18588.3587.77381.8281.63404.7391.64338.4189.45344.29
Clone50.66955.9746.50666.4848.161105.1141.281122.0141.38920.6539.61895.79
Authorship91.74447.6873.391029.5991.17438.6980.39696.6488.88620.5671.79970.44
C100074.70502.0272.96537.7676.82498.6472.23634.2761.96704.9559.00697.06
Python80083.90460.9277.88514.1979.00496.3071.42730.1472.69646.5969.07662.28
Java25079.70760.9775.12815.9181.94744.5772.30853.7475.52910.9763.801049.46
Count6/64/60/62/66/66/60/60/66/65/60/61/6
", + "image_path": "771539b6beaf6d7585994be7cb1aeb67e9d57accf249b095eeaef2a5a545fe47.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_body" + } + ], + "index": 4 + }, + { + "type": "table", + "bbox": [ + 82, + 449, + 278, + 500 + ], + "blocks": [ + { + "bbox": [ + 131, + 424, + 459, + 436 + ], + "lines": [ + { + "bbox": [ + 131, + 424, + 459, + 436 + ], + "spans": [ + { + "bbox": [ + 131, + 424, + 459, + 436 + ], + "type": "text", + "content": "Table 5: Results of ablation study, before and after removing constraints, ASR %." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 82, + 449, + 278, + 500 + ], + "lines": [ + { + "bbox": [ + 82, + 449, + 278, + 500 + ], + "spans": [ + { + "bbox": [ + 82, + 449, + 278, + 500 + ], + "type": "table", + "html": "
Defended ModelRNNSMHM
ASRQTASRQT
Clone+CodeBert12.90958.3528.171245.75
Defect+CodeBert95.37282.2092.23283.66
Authorship+CodeBert51.881524.4043.261026.08
", + "image_path": "7860bc92b68fbb77ab00f2beece6397a0ec64ac8885eb875e9adf450ed8b2bc2.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_body" + } + ], + "index": 6 + }, + { + "type": "table", + "bbox": [ + 70, + 530, + 291, + 571 + ], + "blocks": [ + { + "bbox": [ + 92, + 508, + 264, + 519 + ], + "lines": [ + { + "bbox": [ + 92, + 508, + 264, + 519 + ], + "spans": [ + { + "bbox": [ + 92, + 508, + 264, + 519 + ], + "type": "text", + "content": "Table 6: Attack defended models, ASR %." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 70, + 530, + 291, + 571 + ], + "lines": [ + { + "bbox": [ + 70, + 530, + 291, + 571 + ], + "spans": [ + { + "bbox": [ + 70, + 530, + 291, + 571 + ], + "type": "table", + "html": "
ACCASR(RNNS)ASR(MHM)ASR(ALERT)
Authorship90.6219.8123.5814.28
Defect65.1440.4623.6924.53
Java25097.6319.676.6542.91
", + "image_path": "206b7875bdf32f367983f1f679ef90c30f57bb0719d1fb24d992aca8532fb2eb.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_body" + } + ], + "index": 8 + }, + { + "bbox": [ + 67, + 624, + 290, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 624, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 624, + 290, + 772 + ], + "type": "text", + "content": "is 4.22-6.7 times that of RNNS, and the ASR of PSQ algorithm is " + }, + { + "bbox": [ + 67, + 624, + 290, + 772 + ], + "type": "inline_equation", + "content": "5.55\\% - 27.82\\%" + }, + { + "bbox": [ + 67, + 624, + 290, + 772 + ], + "type": "text", + "content": " lower than that of RNNS algorithm. It can be inferred that for code variable attacks, combinatorial optimization is inefficient when the substitute set of variables is relatively large. The main reasons are the following two points. Firstly, code segments are generally longer, and the substitute set of code variables is much larger than the synonym set of natural language words. Secondly, the impact of variable replacement on code semantics is smaller than that" + } + ] + } + ], + "index": 10 + }, + { + "type": "table", + "bbox": [ + 307, + 449, + 523, + 493 + ], + "blocks": [ + { + "bbox": [ + 67, + 580, + 290, + 602 + ], + "lines": [ + { + "bbox": [ + 67, + 580, + 290, + 602 + ], + "spans": [ + { + "bbox": [ + 67, + 580, + 290, + 602 + ], + "type": "text", + "content": "Table 7: Results of contrastive adversarial retraining, model: CodeBERT." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 307, + 449, + 523, + 493 + ], + "lines": [ + { + "bbox": [ + 307, + 449, + 523, + 493 + ], + "spans": [ + { + "bbox": [ + 307, + 449, + 523, + 493 + ], + "type": "table", + "html": "
Task+ModelRNNSPSOLSH
ASRQTASRQTASRQT
Defect+CodeBert69.18588.3563.633945.0426.62321.78
Authorship+CodeBert73.391029.5952.294350.0019.26458.55
Java250+CodeBert75.12815.9147.35076.0231.58397.05
", + "image_path": "1bc38d7614c0281243716bf757f4a186790439153b00aace795c5a254f5daba8.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "table_body" + } + ], + "index": 11 + }, + { + "bbox": [ + 327, + 502, + 499, + 513 + ], + "lines": [ + { + "bbox": [ + 327, + 502, + 499, + 513 + ], + "spans": [ + { + "bbox": [ + 327, + 502, + 499, + 513 + ], + "type": "text", + "content": "Table 8: RNNS vs PSO and LSH, ASR %." + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 302, + 524, + 525, + 538 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 524, + 525, + 538 + ], + "spans": [ + { + "bbox": [ + 302, + 524, + 525, + 538 + ], + "type": "text", + "content": "of word replacement on natural language semantics." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 551, + 525, + 645 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 551, + 525, + 645 + ], + "spans": [ + { + "bbox": [ + 302, + 551, + 525, + 645 + ], + "type": "text", + "content": "RNNS's QT is 1.8-2.2 times that of LSH, and the QT has dropped significantly. However, LSH's ASR is inferior to RNNS by " + }, + { + "bbox": [ + 302, + 551, + 525, + 645 + ], + "type": "inline_equation", + "content": "42.56\\% - 54.13\\%" + }, + { + "bbox": [ + 302, + 551, + 525, + 645 + ], + "type": "text", + "content": ". For code variable attacks, LSH has high efficiency, but its effectiveness is relatively low. One possible reason for LSH causing low ASR is the distribution of adversarial samples in each bucket is uneven." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 656, + 396, + 669 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 656, + 396, + 669 + ], + "spans": [ + { + "bbox": [ + 302, + 656, + 396, + 669 + ], + "type": "text", + "content": "6 Related Work" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 301, + 678, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 678, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 301, + 678, + 525, + 772 + ], + "type": "text", + "content": "Adversarial attacks for code models have been widely studied (Yang et al., 2022; Liu et al., 2023a; Li et al., 2023; Jha and Reddy, 2023). These works can be generally categorized into black-box attacks and white-box attacks. A black-box attack for code models queries the model outputs and selects the substitutes using a score function. For example," + } + ] + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "text", + "content": "9713" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 86, + 68, + 273, + 144 + ], + "blocks": [ + { + "bbox": [ + 86, + 68, + 273, + 144 + ], + "lines": [ + { + "bbox": [ + 86, + 68, + 273, + 144 + ], + "spans": [ + { + "bbox": [ + 86, + 68, + 273, + 144 + ], + "type": "table", + "html": "
AlgorithmSubstitutes SizeSubstitutes SourceReplacement PositionSubstitutes Selection
MHMmediumvocabularyrandomrandom sample
ALERTsmallmodel generationimportance scoretraverse
RNNSlargereal public variablesuncertainty scoreefficient constrained search
", + "image_path": "771b4e0040f235590205eb87e09d7749767d61c59325e1d876bb32b805c7b3f4.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 77, + 153, + 278, + 163 + ], + "lines": [ + { + "bbox": [ + 77, + 153, + 278, + 163 + ], + "spans": [ + { + "bbox": [ + 77, + 153, + 278, + 163 + ], + "type": "text", + "content": "Table 9: Difference between RNNS to the others." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 69, + 174, + 291, + 470 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 174, + 291, + 470 + ], + "spans": [ + { + "bbox": [ + 69, + 174, + 291, + 470 + ], + "type": "text", + "content": "ALERT (Yang et al., 2022) finds the adversarial examples using variable-name substitutes generated by pre-trained masked models. MHM (Zhang et al., 2020) uses Metropolis-Hastings to sample the replacement of code identifiers. STRATA (Springer et al., 2020) generates adversarial examples by replacing the code tokens based on the token distribution. Chen et al. (2022) apply pre-defined semantics-preserving code transformations to attack code models. CodeAttack (Jha and Reddy, 2023) uses code structure to generate adversarial data. White-box attacks require the code model gradient to modify inputs for adversarial example generation. CARROT (Zhang et al., 2022) selects code mutated variants based on the model gradient. Henkel et al. (2022) attack code models by gradient-based optimization of the abstract syntax tree transformation. Srikant et al. (2021) uses optimized program obfuscations to modify the code. DAMP (Yefet et al., 2020) derives the desired wrong prediction by changing inputs guided by the model gradient." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 472, + 289, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 472, + 289, + 714 + ], + "spans": [ + { + "bbox": [ + 69, + 472, + 289, + 714 + ], + "type": "text", + "content": "Table 9 demonstrates the differences among RNNS, MHM (Zhang et al., 2020) and ALERT (Yang et al., 2022). MHM and ALERT represent the two methodologies most closely aligned with our research. Our approach considers identifier replacements like MHM and ALERT, ensuring that the adversarial example keeps the same semantics as the original one. Our substitute size is scalable and can be substantial, and RNNS searches the possible next adversarial example in the substitute space. In our approach, we locate vulnerable variables based on the uncertainty and search " + }, + { + "bbox": [ + 69, + 472, + 289, + 714 + ], + "type": "inline_equation", + "content": "\\text{sub}_{\\text{topk}}" + }, + { + "bbox": [ + 69, + 472, + 289, + 714 + ], + "type": "text", + "content": " without building adversarial samples and actual attacks. Our goal is to obtain high ASRs by searching real variable names. MHM has the same goal as ours but synthesizes variable names. ALERT sacrifices ASR to make the variable name readable." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 725, + 145, + 737 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 725, + 145, + 737 + ], + "spans": [ + { + "bbox": [ + 67, + 725, + 145, + 737 + ], + "type": "text", + "content": "7 Conclusion" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 746, + 290, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 746, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 746, + 290, + 772 + ], + "type": "text", + "content": "We propose a novel black-box adversarial search-based attack for variable replacement. 
RNNS has" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 71, + 525, + 246 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 525, + 246 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 525, + 246 + ], + "type": "text", + "content": "three main contributions: 1) This work proposes a non-generation, search-based black-box attack method that predicts the attack effect of a substitute. This method greatly reduces the verification cost of substitutes, removes the restrictions on the size and diversity of the substitute set, and achieves a significant improvement in ASR without increasing QT. 2) This work proposes a simple and efficient method for constructing a substitute set, which can build a large-scale, diverse, and real substitute set at low cost. 3) The adversarial examples from RNNS can be used to improve model robustness." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 259, + 383, + 270 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 259, + 383, + 270 + ], + "spans": [ + { + "bbox": [ + 302, + 259, + 383, + 270 + ], + "type": "text", + "content": "8 Limitations" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 280, + 525, + 591 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 280, + 525, + 591 + ], + "spans": [ + { + "bbox": [ + 302, + 280, + 525, + 591 + ], + "type": "text", + "content": "There are some limitations of RNNS. Firstly, RNNS does not revert to the preceding step to continue the search when the model's probability of the ground-truth label increases. Incorporating this step might improve the Attack Success Rate (ASR), but it could compromise the Query Time (QT). Secondly, the size and diversity of the substitute set significantly influence RNNS; a small and homogeneous set can lower the attack success rate. 
Thirdly, RNNS involves multiple hyperparameters whose values need to be set manually. One of the most important is the moving parameter " + }, + { + "bbox": [ + 302, + 280, + 525, + 591 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 302, + 280, + 525, + 591 + ], + "type": "text", + "content": ". The number of attacking iterations max itr is also significant. We set " + }, + { + "bbox": [ + 302, + 280, + 525, + 591 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 302, + 280, + 525, + 591 + ], + "type": "text", + "content": " to 0.2 and max itr to 6 based on a few small experimental trials. Fourthly, RNNS currently only handles untargeted attack scenarios; for targeted attacks, ASR will be very low when there are many category labels. For example, when performing targeted attacks on Authorship+CodeBert with 66 labels, the ASR only reaches " + }, + { + "bbox": [ + 302, + 280, + 525, + 591 + ], + "type": "inline_equation", + "content": "6.4\\%" + }, + { + "bbox": [ + 302, + 280, + 525, + 591 + ], + "type": "text", + "content": ". Extending RNNS to targeted attacks is a direction for future work."
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 303, + 602, + 395, + 616 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 602, + 395, + 616 + ], + "spans": [ + { + "bbox": [ + 303, + 602, + 395, + 616 + ], + "type": "text", + "content": "Acknowledgment" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 624, + 525, + 760 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 624, + 525, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 624, + 525, + 760 + ], + "type": "text", + "content": "This work is supported by NRF and the CSA under its National Cybersecurity R&D Programme (NCRP25-P04-TAICeN), NRF and DSO National Laboratories under the AI Singapore Programme (AISG Award No: AISG2-RP-2020-019), and NRF Investigatorship NRF-NRFI06-2020-0001. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of NRF and CSA Singapore." + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "text", + "content": "9714" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 71, + 127, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 71, + 127, + 83 + ], + "spans": [ + { + "bbox": [ + 69, + 71, + 127, + 83 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 89, + 290, + 772 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 69, + 89, + 289, + 145 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 89, + 289, + 145 + ], + "spans": [ + { + "bbox": [ + 69, + 89, + 289, + 145 + ], + 
"type": "text", + "content": "Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2020. A transformer-based approach for source code summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4998-5007." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 152, + 290, + 218 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 152, + 290, + 218 + ], + "spans": [ + { + "bbox": [ + 69, + 152, + 290, + 218 + ], + "type": "text", + "content": "Bander Alsulami, Edwin Dauber, Richard Harang, Spiros Mancoridis, and Rachel Greenstadt. 2017. Source code authorship attribution using long short-term memory based networks. In Computer Security - ESORICS 2017, pages 65-82, Cham. Springer International Publishing." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 225, + 290, + 290 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 225, + 290, + 290 + ], + "spans": [ + { + "bbox": [ + 69, + 225, + 290, + 290 + ], + "type": "text", + "content": "Penglong Chen, Zhen Li, Yu Wen, and Lili Liu. 2022. Generating adversarial source programs using important tokens-based structural transformations. In 2022 26th International Conference on Engineering of Complex Computer Systems (ICECCS), pages 173-182." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 298, + 290, + 364 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 298, + 290, + 364 + ], + "spans": [ + { + "bbox": [ + 69, + 298, + 290, + 364 + ], + "type": "text", + "content": "Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. 2020. Codebert: A pre-trained model for programming and natural languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1536-1547." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 372, + 290, + 416 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 372, + 290, + 416 + ], + "spans": [ + { + "bbox": [ + 69, + 372, + 290, + 416 + ], + "type": "text", + "content": "Xiaodong Gu, Hongyu Zhang, and Sunghun Kim. 2018. Deep code search. In 2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE), pages 933-944. IEEE." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 423, + 290, + 488 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 423, + 290, + 488 + ], + "spans": [ + { + "bbox": [ + 69, + 423, + 290, + 488 + ], + "type": "text", + "content": "Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, LIU Shujie, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, et al. 2020. Graphcodebert: Pre-training code representations with data flow. In International Conference on Learning Representations." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 497, + 290, + 562 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 497, + 290, + 562 + ], + "spans": [ + { + "bbox": [ + 69, + 497, + 290, + 562 + ], + "type": "text", + "content": "Jordan Henkel, Goutham Ramakrishnan, Zi Wang, Aws Albarghouthi, Somesh Jha, and Thomas Reps. 2022. Semantic robustness of models of source code. In 2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER), pages 526-537." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 570, + 290, + 625 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 570, + 290, + 625 + ], + "spans": [ + { + "bbox": [ + 69, + 570, + 290, + 625 + ], + "type": "text", + "content": "Akshita Jha and Chandan K Reddy. 2023. Codeattack: Code-based adversarial attacks for pre-trained programming language models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 14892-14900." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 633, + 290, + 688 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 633, + 290, + 688 + ], + "spans": [ + { + "bbox": [ + 69, + 633, + 290, + 688 + ], + "type": "text", + "content": "Liuqing Li, He Feng, Wenjie Zhuang, Na Meng, and Barbara Ryder. 2017. Cclearner: A deep learning-based clone detection approach. In 2017 IEEE International Conference on Software Maintenance and Evolution (ICSME), pages 249-260. IEEE." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 69, + 694, + 290, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 694, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 694, + 290, + 772 + ], + "type": "text", + "content": "Yanzhou Li, Shangqing Liu, Kangjie Chen, Xiaofei Xie, Tianwei Zhang, and Yang Liu. 2023. Multi-target backdoor attacks for code pre-trained models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7236-7254, Toronto, Canada. Association for Computational Linguistics." + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 524, + 772 + ], + "type": "list", + "angle": 0, + "index": 22, + "blocks": [ + { + "bbox": [ + 305, + 72, + 524, + 127 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 72, + 524, + 127 + ], + "spans": [ + { + "bbox": [ + 305, + 72, + 524, + 127 + ], + "type": "text", + "content": "Yaoxian Li, Shiyi Qi, Cuiyun Gao, Yun Peng, David Lo, Zenglin Xu, and Michael R Lyu. 2022. A closer look into transformer-based code intelligence through code transformation: Challenges and opportunities. arXiv preprint arXiv:2207.04285." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 139, + 524, + 184 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 139, + 524, + 184 + ], + "spans": [ + { + "bbox": [ + 304, + 139, + 524, + 184 + ], + "type": "text", + "content": "Shangqing Liu, Yu Chen, Xiaofei Xie, Jing Kai Siow, and Yang Liu. 2020. Retrieval-augmented generation for code summarization via hybrid gnn. In International Conference on Learning Representations." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 195, + 524, + 250 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 195, + 524, + 250 + ], + "spans": [ + { + "bbox": [ + 304, + 195, + 524, + 250 + ], + "type": "text", + "content": "Shangqing Liu, Bozhi Wu, Xiaofei Xie, Guozhu Meng, and Yang Liu. 2023a. Contrabert: Enhancing code pre-trained models via contrastive learning. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), pages 2476-2487." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 261, + 524, + 317 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 261, + 524, + 317 + ], + "spans": [ + { + "bbox": [ + 304, + 261, + 524, + 317 + ], + "type": "text", + "content": "Shangqing Liu, Xiaofei Xie, Jingkai Siow, Lei Ma, Guozhu Meng, and Yang Liu. 2023b. Graphsearch-net: Enhancing gnns via capturing global dependencies for semantic code search. IEEE Transactions on Software Engineering." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 328, + 524, + 384 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 328, + 524, + 384 + ], + "spans": [ + { + "bbox": [ + 304, + 328, + 524, + 384 + ], + "type": "text", + "content": "Rishabh Maheshwary, Saket Maheshwary, and Vikram Pudi. 2021. A strong baseline for query efficient attacks in a black box setting. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8396-8409." 
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 395, + 524, + 472 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 395, + 524, + 472 + ], + "spans": [ + { + "bbox": [ + 304, + 395, + 524, + 472 + ], + "type": "text", + "content": "Ruchir Puri, David Kung, Geert Janssen, Wei Zhang, Giacomo Domeniconi, Vladimir Zolotov, Julian Dolby, Jie Chen, Mihir Choudhury, Lindsey Decker, Veronika Thost, Luca Buratti, Saurabh Pajar, Shyam Ramji, Ulrich Finkler, Susan Malaika, and Frederick Reiss. 2021. Codenet: A large-scale ai for code dataset for learning a diversity of coding tasks." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 484, + 524, + 527 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 484, + 524, + 527 + ], + "spans": [ + { + "bbox": [ + 304, + 484, + 524, + 527 + ], + "type": "text", + "content": "Jacob M Springer, Bryn Marie Reinstadler, and Una-May O'Reilly. 2020. Strata: Simple, gradient-free attacks for models of code. arXiv preprint arXiv:2009.13562." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 539, + 524, + 595 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 539, + 524, + 595 + ], + "spans": [ + { + "bbox": [ + 304, + 539, + 524, + 595 + ], + "type": "text", + "content": "Shashank Srikant, Sijia Liu, Tamara Mitrovska, Shiyu Chang, Quanfu Fan, Gaoyuan Zhang, and Una-May O'Reilly. 2021. Generating adversarial computer programs using optimized obfuscations. In International Conference on Learning Representations." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 606, + 524, + 672 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 606, + 524, + 672 + ], + "spans": [ + { + "bbox": [ + 304, + 606, + 524, + 672 + ], + "type": "text", + "content": "Wenhan Wang, Ge Li, Bo Ma, Xin Xia, and Zhi Jin. 2020. Detecting code clones with graph neural network and flow-augmented abstract syntax tree. 
In 2020 IEEE 27th International Conference on Software Analysis, Evolution and Reengineering (SANER), pages 261-271. IEEE." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 304, + 684, + 524, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 684, + 524, + 772 + ], + "spans": [ + { + "bbox": [ + 304, + 684, + 524, + 772 + ], + "type": "text", + "content": "Yue Wang, Weishi Wang, Shafiq Joty, and Steven C.H. Hoi. 2021. CodeT5: Identifier-aware unified pretrained encoder-decoder models for code understanding and generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8696-8708, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics." + } + ] + } + ], + "index": 21 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "text", + "content": "9715" + } + ] + } + ], + "index": 23 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 289, + 595 + ], + "type": "list", + "angle": 0, + "index": 8, + "blocks": [ + { + "bbox": [ + 69, + 72, + 289, + 127 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 72, + 289, + 127 + ], + "spans": [ + { + "bbox": [ + 69, + 72, + 289, + 127 + ], + "type": "text", + "content": "Martin White, Michele Tufano, Christopher Vendome, and Denys Poshyvanyk. 2016. Deep learning code fragments for code clone detection. In 2016 31st IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 87-98. IEEE." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 136, + 289, + 202 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 136, + 289, + 202 + ], + "spans": [ + { + "bbox": [ + 69, + 136, + 289, + 202 + ], + "type": "text", + "content": "Zhou Yang, Jieke Shi, Junda He, and David Lo. 2022. Natural attack for pre-trained models of code. In Proceedings of the 44th International Conference on Software Engineering, ICSE '22, page 1482-1493, New York, NY, USA. Association for Computing Machinery." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 211, + 289, + 253 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 211, + 289, + 253 + ], + "spans": [ + { + "bbox": [ + 69, + 211, + 289, + 253 + ], + "type": "text", + "content": "Noam Yefet, Uri Alon, and Eran Yahav. 2020. Adversarial examples for models of code. Proceedings of the ACM on Programming Languages, 4(OOPSLA):1-30." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 263, + 289, + 329 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 263, + 289, + 329 + ], + "spans": [ + { + "bbox": [ + 69, + 263, + 289, + 329 + ], + "type": "text", + "content": "Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020. Word-level textual adversarial attacking as combinatorial optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6066-6080." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 338, + 289, + 393 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 338, + 289, + 393 + ], + "spans": [ + { + "bbox": [ + 69, + 338, + 289, + 393 + ], + "type": "text", + "content": "Huangzhao Zhang, Zhiyi Fu, Ge Li, Lei Ma, Zhehao Zhao, Hua'an Yang, Yizhe Sun, Yang Liu, and Zhi Jin. 2022. Towards robustness of deep program processing models—detection, estimation, and enhancement. ACM Trans. Softw. Eng. 
Methodol., 31(3)." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 401, + 289, + 457 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 401, + 289, + 457 + ], + "spans": [ + { + "bbox": [ + 69, + 401, + 289, + 457 + ], + "type": "text", + "content": "Huangzhao Zhang, Zhuo Li, Ge Li, Lei Ma, Yang Liu, and Zhi Jin. 2020. Generating adversarial examples for holding robustness of source code processing models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 1169-1176." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 465, + 289, + 532 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 465, + 289, + 532 + ], + "spans": [ + { + "bbox": [ + 69, + 465, + 289, + 532 + ], + "type": "text", + "content": "Vitalii Zhelezniak, Aleksandar Savkov, and Nils Hammerla. 2020. Estimating mutual information between dense word embeddings. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8361-8371, Online. Association for Computational Linguistics." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 539, + 289, + 595 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 539, + 289, + 595 + ], + "spans": [ + { + "bbox": [ + 69, + 539, + 289, + 595 + ], + "type": "text", + "content": "Yaqin Zhou, Shangqing Liu, Jingkai Siow, Xiaoning Du, and Yang Liu. 2019. Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks. Advances in neural information processing systems, 32." 
+ } + ] + } + ], + "index": 7 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "text", + "content": "9716" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 10 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2023/A Boundary Offset Prediction Network for Named Entity Recognition/5304d5a7-baa1-46a8-bf30-4a5c29036879_content_list.json b/2023/A Boundary Offset Prediction Network for Named Entity Recognition/5304d5a7-baa1-46a8-bf30-4a5c29036879_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..eac6e176158f6e5621bf67fdc583cbb25d2214f9 --- /dev/null +++ b/2023/A Boundary Offset Prediction Network for Named Entity Recognition/5304d5a7-baa1-46a8-bf30-4a5c29036879_content_list.json @@ -0,0 +1,1827 @@ +[ + { + "type": "text", + "text": "A Boundary Offset Prediction Network for Named Entity Recognition", + "text_level": 1, + "bbox": [ + 136, + 89, + 860, + 111 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Minghao Tang $^{1,2}$ , Yongquan He $^{3}$ , Yongxiu Xu $^{1,2*}$ , Hongbo Xu $^{1}$ , Wenyuan Zhang $^{1,2}$ and Yang Lin $^{3}$", + "bbox": [ + 152, + 123, + 855, + 158 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{1}$ Institute of Information Engineering, CAS, China", + "bbox": [ + 295, + 159, + 705, + 174 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{2}$ School of Cyber Security, UCAS, China", + "bbox": [ + 332, + 175, + 668, + 191 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "3Meituan, China", + "bbox": [ + 433, + 192, + 569, + 206 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "{tangminghao,xuyongxiu,hbxu}@ie.ac.cn, heyongquan@meituan.com", + 
"bbox": [ + 186, + 209, + 815, + 225 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 260, + 252, + 339, + 266 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Named entity recognition (NER) is a fundamental task in natural language processing that aims to identify and classify named entities in text. However, span-based methods for NER typically assign entity types to text spans, resulting in an imbalanced sample space and neglecting the connections between non-entity and entity spans. To address these issues, we propose a novel approach for NER, named the Boundary Offset Prediction Network (BOPN), which predicts the boundary offsets between candidate spans and their nearest entity spans. By leveraging the guiding semantics of boundary offsets, BOPN establishes connections between non-entity and entity spans, enabling non-entity spans to function as additional positive samples for entity detection. Furthermore, our method integrates entity type and span representations to generate type-aware boundary offsets instead of using entity types as detection targets. We conduct experiments on eight widely-used NER datasets, and the results demonstrate that our proposed BOPN outperforms previous state-of-the-art methods.", + "bbox": [ + 141, + 278, + 460, + 619 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 114, + 631, + 258, + 645 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Named entity recognition (NER) is a fundamental task in natural language processing (NLP) that involves identifying and categorizing named entities in text, such as people, locations and organizations. 
It has drawn much attention from the community due to its relevance in various NLP applications, such as entity linking (Le and Titov, 2018; Hou et al., 2020) and relation extraction (Miwa and Bansal, 2016; Li et al., 2021b).", + "bbox": [ + 112, + 656, + 487, + 800 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Various paradigms have been proposed for NER, including the sequence labeling (Huang et al., 2015; Ju et al., 2018), hypergraph-based (Lu and Roth, 2015; Katiyar and Cardie, 2018; Wang and Lu, 2018), sequence-to-sequence (Gillick et al., 2016; Yan et al., 2021) and span-based methods (Sohrab", + "bbox": [ + 112, + 801, + 489, + 898 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/afdc6e2991a1a66df20f5b60549e17670764767519c981aef32ce0850c1a9658.jpg", + "image_caption": [ + "protein protein cell_type HMG box containing transcription factors in lymphocyte differentiation", + "Figure 1: A sentence from GENIA dataset (Ohta et al., 2002), containing 8 words and 3 entities. The candidate spans covers the upper triangular region with a total of 36 samples of each matrix. There are 2 and 1 positive samples for \"protein\" and \"cell type\" entity types, respectively." + ], + "image_footnote": [], + "bbox": [ + 515, + 288, + 877, + 413 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "and Miwa, 2018; Shen et al., 2021; Chen et al., 2021). Among these approaches, the span-based method has become the most popular due to its simplicity and effectiveness. It is straightforward that typically embeds all possible text spans and predicts their entity types, making it suitable for various NER subtasks (Li et al., 2021a, 2022).", + "bbox": [ + 507, + 544, + 882, + 656 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Despite significant progress made by span-based methods in NER, there remain two critical issues that require attention. 
Firstly, these methods often suffer from highly imbalanced sample spaces, as exemplified in Figure 1. Such imbalance can negatively impact the trainability and performance of deep neural networks (Johnson and Khoshgoftaar, 2019). Although some methods (Shen et al., 2021; Wan et al., 2022) mitigate this issue by restricting the maximum span length, such an approach can also constrain the model's predictive power. Secondly, current span-based methods primarily focus on learning the distinction between non-entities and entities, disregarding their relationships. While a model can identify whether \"HMG box\" is an entity, it may fail to recognize the connection be", + "bbox": [ + 507, + 662, + 884, + 917 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "* Yongxiu Xu is the corresponding author", + "bbox": [ + 141, + 904, + 396, + 917 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "14834", + "bbox": [ + 475, + 927, + 524, + 940 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 14834-14846 December 6-10, 2023 ©2023 Association for Computational Linguistics", + "bbox": [ + 208, + 945, + 786, + 972 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/e231c942f392d6606bda85462443b42c741df015dc8fca7920e84c9d9c0c8541.jpg", + "image_caption": [ + "Figure 2: Text spans annotated with boundary offset. \"1S\" or \"1E\" represents a span has 1 offset from its nearest entity at the start or end boundary, and so on." 
+ ], + "image_footnote": [], + "bbox": [ + 122, + 80, + 482, + 190 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "tween \"HMG\" and \"HMG box.\" To enhance the model's ability to recognize entities, it is crucial to explicitly capture both boundary differences and connections between non-entities and entities.", + "bbox": [ + 112, + 271, + 487, + 335 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this paper, we intend to model text spans by utilizing boundary offset information as supervision, rather than predict their probability of belonging to entities. As shown in Figure 2, there could be two advantages for deep models when boundary offsets are learnable: i) The natural quantitative relationships between offset values enable the model to capture boundary differences and connections simultaneously. ii) Non-entity spans can have specific semantics that guide the positioning of entity spans, leading to an improved sample space with fewer negative samples.", + "bbox": [ + 112, + 337, + 489, + 529 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Based on this observation, we propose the Boundary Offset Prediction Network (BOPN) for NER. BOPN focuses on predicting boundary offsets between candidate spans and their nearest entities, providing a new perspective on modeling text spans. Specifically, our method follows the pipeline of first learning span representations and then classifying them for offset prediction. BERT (Devlin et al., 2019) and BiLSTM (Lample et al., 2016) are used to embed texts, followed by a Conditional Layer (Liu et al., 2021) for building span representations. Meanwhile, we also treat entity types as inputs rather than classification targets, which are fused with span representations to generate type-aware boundary offsets in parallel. 
Finally, we incorporate multiple 3D convolution layers to capture the natural quantitative relationships between the offset values.", + "bbox": [ + 112, + 532, + 489, + 819 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "We evaluate our method on eight widely-used NER datasets, including five English NER datasets and three Chinese NER datasets. The experimental results demonstrate that our approach outperforms the existing state-of-the-art methods. Furthermore, a detailed examination reveals a significant im", + "bbox": [ + 112, + 822, + 489, + 917 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "provement in recall scores when aggregating results across offset labels, which is particularly beneficial for recall-sensitive applications.", + "bbox": [ + 507, + 84, + 882, + 131 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2 Problem Definition", + "text_level": 1, + "bbox": [ + 507, + 143, + 709, + 159 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Named Entity Recognition (NER) aims to identify of all entities within an input sentence $\\mathrm{X} = \\{x_{n}\\}_{n = 1}^{N}$ , based on a pre-defined set of entity types $\\mathrm{Y} = \\{y_{m}\\}_{m = 1}^{M}$ . Typically, an entity is specified by token boundaries and a entity types.", + "bbox": [ + 507, + 170, + 884, + 250 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Our proposed method focuses on predicting the boundary offset between each candidate text span and its nearest entity. Hence, we formulate each text span as a quadruple: $\\{x_{i}, x_{j}, f_{s}, y_{m}\\}$ , where $i$ and $j$ denote the start and end boundary indices of the span, $f_{s}$ represents the start or end boundary offset from its nearest entity of type $y_{m}$ . 
Note that an entity span is a special case with $f_{s} = 0$ .", + "bbox": [ + 507, + 250, + 882, + 380 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Annotation Guidelines To facilitate understanding, we present the essential boundary offset labels as follow:", + "bbox": [ + 507, + 388, + 882, + 435 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Center Span: refers to an entity span with an offset label of \"0\".", + "- $\\mathbf{\\nabla}^{*}\\mathbf{S}$ or $\\mathbf{\\nabla}^{*}\\mathbf{E}$ : denotes the annotation of the start or end boundary offsets for non-entity spans. \" $\\ast$ \" represents an offset value in the range of $[-S, \\dots, -1, 1, \\dots, S]$ , where $S$ denotes the maximum offset value.", + "- Out-of-Range: refers to the annotation of a non-entity span with an absolute boundary offset value from its nearest entity exceeding the maximum offset value $S$ ." + ], + "bbox": [ + 531, + 450, + 880, + 646 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The annotation procedure for boundary offsets involves three steps. Initially, a 3-dimensional matrix $\\mathcal{O} \\in \\mathbb{R}^{M \\times N \\times N}$ is constructed according to the input sentence $X$ , where $M$ denotes the number of entity types and $N$ represents the length of the sentence. Next, we annotate the center spans with the offset label \"0\" based on the golden entities present in $X$ . Entities of different types are assigned to their respective sub-matrices. Finally, for non-entity spans, we compute the start and end boundary offset values with respect to all center spans. Their annotation is determined by the absolute minimum offset value. 
If the absolute minimum offset value is less than $S$ , we annotate the corresponding *S or *E; otherwise, we label the span as \"Out-of-Range\".", + "bbox": [ + 507, + 661, + 884, + 917 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "14835", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/c3d5830016737eb1c91b120afd7c7b2898f072f40fbc8081c3a749cb57e2a4a5.jpg", + "image_caption": [ + "(a) Span Encoder", + "Figure 3: An overview architecture of our method, which mainly consists of two components: a Span Encoder and a Boundary Offset Predictor." + ], + "image_footnote": [], + "bbox": [ + 115, + 96, + 880, + 324 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3 Methods", + "text_level": 1, + "bbox": [ + 114, + 387, + 223, + 401 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Figure 3 provides an overview of our method, which encompasses two primary components: a Span Encoder (Section 3.1) and a Boundary Offset Predictor (Section 3.2). The Span Encoder is responsible for encoding entity types and sentences, utilizing word representations to construct span representations. 
Subsequently, the entity type and span representations are inputted into the boundary offset predictor, facilitating type-aware offset classification.", + "bbox": [ + 112, + 419, + 489, + 580 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1 Span Encoder", + "text_level": 1, + "bbox": [ + 112, + 600, + 272, + 615 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Drawing inspiration from the prompt-based methods (Qin and Eisner, 2021; Han et al., 2022), we consider entity types as task-oriented inputs, indicating the specific types of entities that the model needs to predict within a given sentence.", + "bbox": [ + 112, + 626, + 487, + 706 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "To achieve this, we create a set of additional type tokens, denoted as $\\mathrm{P} = \\{p_m\\}_{m=1}^M$ , where $p_m$ represents a learnable special token corresponding to entity type $y_m$ . Next, we concatenate the soft tokens $\\mathrm{P}$ with the sentence $\\mathrm{X}$ to form a single sequence, and employ BERT (Devlin et al., 2019) to encode them simultaneously. The output of BERT is then passed through a BiLSTM (Lample et al., 2016) to generate final embedding features $\\mathrm{H} = \\{h_1, h_2, \\dots, h_{M+N}\\} \\in \\mathbb{R}^{(M+N) \\times d}$ , where $d$ is the hidden size. Finally, we split $\\mathrm{H}$ to obtain entity type representations $\\mathrm{H}^Y \\in \\mathbb{R}^{M \\times d}$ and token representations $\\mathrm{H}^X \\in \\mathbb{R}^{N \\times d}$ , respectively.", + "bbox": [ + 112, + 709, + 489, + 919 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Span Representation Given the token representations $\\mathrm{H}^X = \\{h_1, h_2, \\dots, h_N\\}$ , the span representation $v_{ij}$ can be considered as a fusion of the boundary representations $(h_i, h_j)$ . Following Li et al. 
(2022), we adopt the Conditional Layer Normalization (CLN) (Liu et al., 2021) mechanism to build a high-quality span representation:", + "bbox": [ + 507, + 387, + 884, + 501 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} v _ {i j} = \\operatorname {C L N} \\left(h _ {i}, h _ {j}\\right) \\tag {1} \\\\ = \\gamma_ {j} \\otimes \\operatorname {N o r m} \\left(h _ {i}\\right) + \\lambda_ {j}, \\\\ \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 589, + 513, + 882, + 550 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $\\mathrm{Norm}(\\cdot)$ is the instance normalization function (Ulyanov et al., 2016), $\\gamma_{j}$ and $\\lambda_{j}$ are the condition parameters that are obtained by two different feedforward networks: $\\gamma_{j} = \\mathrm{FFN}(h_{j})$ and $\\lambda_{j} = \\mathrm{FFN}(h_{j})$ .", + "bbox": [ + 507, + 562, + 884, + 644 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "While valid candidate spans are restricted to the upper triangular region of the adjacent text span matrix, a region embedding $\\mathrm{E} = [e_{up}, e_{low}] \\in \\mathbb{R}^{2 \\times d_e}$ are adapted to distinguish the positions of text spans. 
The final representation of each span is obtained as: $\\hat{v}_{ij} = [v_{ij}, e_{up}]$ if $i \\leq j$ ; $\\hat{v}_{ij} = [v_{ij}, e_{low}]$ if $i > j$ .", + "bbox": [ + 507, + 644, + 882, + 756 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.2 Boundary Offset Predictor", + "text_level": 1, + "bbox": [ + 507, + 768, + 766, + 784 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "As previously mentioned, we utilize the entity types as inputs to guide the model in generating type-aware boundary offsets, rather than categorizing each text span into a particular entity type.", + "bbox": [ + 507, + 790, + 884, + 854 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The biaffine classifier (Yu et al., 2020) is employed to fuse entity type representations and span representations. Specifically, given an entity type representation $h_m \\in \\mathbf{H}^Y$ and span representation", + "bbox": [ + 507, + 854, + 882, + 919 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "14836", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "$\\hat{v}_{ij}\\in \\widehat{\\mathbf{V}}$ , a scoring vector $c_{mij}\\in \\mathbb{R}^L$ can be computed as:", + "bbox": [ + 112, + 83, + 489, + 116 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nh _ {y} ^ {\\prime} = \\operatorname {F F N} \\left(h _ {y}\\right), \\quad \\hat {v} _ {i j} ^ {\\prime} = \\operatorname {F F N} \\left(\\hat {v} _ {i j}\\right), \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 169, + 126, + 487, + 147 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nc _ {m i j} = \\left(h _ {m} ^ {\\prime}\\right) ^ {T} U \\hat {v} _ {i j} ^ {\\prime} + W \\left(h _ {m} ^ {\\prime} \\oplus v _ {i j} ^ {\\prime}\\right) + b, \\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 137, + 156, + 487, + 179 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $L$ is the number of 
offset labels $^1$ ; $U \\in \\mathbb{R}^{L \\times d_b \\times d_b}$ , $W \\in \\mathbb{R}^{L \\times 2d_b}$ and $b \\in \\mathbb{R}^L$ are learnable parameters, $d_b$ is the biaffine hidden size.", + "bbox": [ + 112, + 184, + 489, + 233 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3D Convolution Layer Furthermore, we utilize multiple 3-dimensional convolution (3DConv) layers to capture the inherent quantitative relationships between the boundary offsets of adjacent text spans. As depicted in Figure 3(b), the 3D convolution kernels traverse the complete score matrix $C$ in three directions, thereby aggregating offset predictions for adjacent text spans across all entity types. The computation in a single convolution layer can be expressed as:", + "bbox": [ + 112, + 241, + 489, + 401 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathrm {Q} = \\sigma (\\mathrm {3 D C o n v} (\\mathrm {C})), \\tag {4}\n$$\n", + "text_format": "latex", + "bbox": [ + 218, + 414, + 487, + 432 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $\\mathbf{Q} \\in \\mathbb{R}^{M \\times N \\times N \\times L}$ , $\\sigma$ is the GELU activation function (Hendrycks and Gimpel, 2016). 
We assign different dilation rates to each convolution layer, and then concatenate their outputs followed by a linear to calculate final prediction scores:", + "bbox": [ + 112, + 442, + 487, + 525 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\hat {\\mathrm {Q}} = \\operatorname {L i n e a r} \\left(\\mathrm {Q} _ {1} \\oplus \\mathrm {Q} _ {2} \\oplus \\mathrm {Q} _ {3}\\right), \\tag {5}\n$$\n", + "text_format": "latex", + "bbox": [ + 189, + 535, + 487, + 555 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "To obtain the probability distribution of span $(i,j)$ over the offset labels, $\\hat{q}_{mij} \\in \\hat{\\mathbf{Q}}$ is fed into a softmax layer:", + "bbox": [ + 112, + 565, + 487, + 613 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\hat {o} _ {m i j} = \\operatorname {s o f t m a x} \\left(\\hat {q} _ {m i j}\\right), \\tag {6}\n$$\n", + "text_format": "latex", + "bbox": [ + 211, + 627, + 487, + 644 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.3 Training and Inference", + "text_level": 1, + "bbox": [ + 112, + 655, + 341, + 671 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Learning Objective In our method, the learning objective is to accurately assign a boundary offset to each text span, which can be treated as a multiclass classification problem and optimized using cross-entropy loss:", + "bbox": [ + 112, + 676, + 487, + 756 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {L} = - \\frac {1}{M N ^ {2}} \\sum_ {m} ^ {M} \\sum_ {i} ^ {N} \\sum_ {j} ^ {N} o _ {m i j} ^ {T} \\log \\left(\\hat {o} _ {m i j}\\right) \\tag {7}\n$$\n", + "text_format": "latex", + "bbox": [ + 139, + 766, + 487, + 812 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $o_{mij} \\in \\mathbb{R}^D$ represents the ground-truth, which is an one-hot vector encoded from the annotated adjacent text span matrix $\\mathcal{O}$ .", + "bbox": [ + 112, + 824, + 
489, + 872 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Inference with Boundary offsets During the inference process, decoding entities based on predicted boundary offsets is a straightforward procedure. The output of our method is a matrix of size $M \\times N \\times N$ , where each cell represents a potential entity and contains information about its boundaries and type. For example, a cell with coordinates $(m, i, j)$ and the prediction \"-1E\" indicates an entity of type $y_{m}$ with a start boundary at $x_{i}$ and an end boundary at $x_{j+1}$ . Conversely, if the predicted value is \"out-of-range,\" it implies that the cell does not correspond to any entity.", + "bbox": [ + 507, + 84, + 884, + 277 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "However, blindly accepting all predicted boundary offsets may result in sub-optimal outcomes as it disregards the quantitative relationship between boundary offsets. Therefore, we introduce two heuristic rules to identify unreasonable predictions: i) Predicted boundary offsets that do not align with their nearest center span. ii) Predicted boundary offsets that do not adhere to a sequential order with neighboring spans.", + "bbox": [ + 507, + 279, + 884, + 423 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "4 Experimental Settings", + "text_level": 1, + "bbox": [ + 507, + 439, + 734, + 456 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "4.1 Datasets", + "text_level": 1, + "bbox": [ + 507, + 467, + 623, + 481 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "To evaluate our method, we conducted experiments on five English NER datasets, including CoNLL 2003 (Sang and De Meulder, 2003), OntoNotes $5^{2}$ , ACE $2004^{3}$ , ACE $2005^{4}$ and GENIA (Ohta et al., 2002); and three Chinese NER datasets, including MSRA (Levow, 2006), Resume NER (Zhang and Yang, 2018) and Weibo NER (Peng and Dredze, 2015). 
Note that ACE 2004, ACE 2005 and GENIA are nested NER datasets, others are flat datasets.", + "bbox": [ + 507, + 489, + 882, + 633 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "For OntoNotes 5, we take the same train/dev/test as used in CoNLL 2012 shared task (Pradhan et al., 2012). For ACE 2004 and ACE 2005, we use the same data split as Lu and Roth (2015). For GENIA, we follow Katiyar and Cardie (2018) to split train/test as 9:1. For other datasets, we employ the same settings in previous works (Ma et al., 2020; Yan et al., 2021; Zhu and Li, 2022).", + "bbox": [ + 507, + 636, + 882, + 764 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "4.2 Implementation Details", + "text_level": 1, + "bbox": [ + 507, + 778, + 741, + 794 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "We use BioBERT-v1.1 (Lee et al., 2020) as the contextual embedding in GENIA. For other English corpora, we BERT-large-cased (Devlin et al., 2019) as the contextual embedding. For Chinese", + "bbox": [ + 507, + 801, + 882, + 866 + ], + "page_idx": 3 + }, + { + "type": "page_footnote", + "text": "Given a maximum offset $S$ , $L = 4S + 2$ when considering both start and end boundary offset; $L = 2S + 2$ when only considering start or end boundary offset.", + "bbox": [ + 112, + 879, + 487, + 917 + ], + "page_idx": 3 + }, + { + "type": "page_footnote", + "text": "$^{2}$ https://catalog.ldc.upenn.edu/LDC2005T09 \n $^{3}$ https://catalog.ldc.upenn.edu/LDC2005T09", + "bbox": [ + 529, + 878, + 805, + 904 + ], + "page_idx": 3 + }, + { + "type": "page_footnote", + "text": "4https://catalog.ldc.upenn.edu/LDC2006T06", + "bbox": [ + 529, + 904, + 803, + 917 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "14837", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 3 + }, + { + "type": "table", + "img_path": "images/51a8df1d54a01656e26c278e13c8423906cc8511a43b3e4105312ce1cebe8872.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
ModelsCoNLL 2003OntoNotes 5
PRF1PRF1
Sequence Labeling Methods
BiLSTM-CRF (Miwa and Bansal, 2016)--91.0386.0486.5386.28
BERT-Tagger (Devlin et al., 2019)--92.8090.0188.3589.16
Span-based Methods
Biaffine (Yu et al., 2020)*†92.4692.6792.5589.9489.8189.88
W2NER (Li et al., 2022)92.7193.4493.0790.0390.9790.50
Boundary Smooth (Zhu and Li, 2022)*†92.8993.2093.0490.4290.8190.61
DiffusionNER (Shen et al., 2023a)92.9992.5692.7890.3191.0290.66
Others
Seq2Seq (Straková et al., 2019)--92.98---
BartNER (Yan et al., 2021)†92.5793.5393.0589.6590.8790.26
PIQN (Shen et al., 2022)93.2992.4692.8791.4390.7390.96
PromptNER (Shen et al., 2023b)92.4892.3392.41---
BOPN (Ours)93.2293.1593.1990.9391.4091.16
", + "bbox": [ + 201, + 80, + 796, + 318 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/95bc12ee2f9aafbfc8ab866b4cf6f2aface10670f739ee9baa72c4e8aa505dc2.jpg", + "table_caption": [ + "Table 1: Results on English flat NER datasets CoNLL 2003 and OntoNotes 5. † means our re-implementation via their code. * denotes a fair comparison that their BERT encoder is consistent with our model." + ], + "table_footnote": [], + "table_body": "
ModelsMSRAResume NERWeibo NER
PRF1PRF1PRF1
Sequence Labeling Methods
Lattice (Zhang and Yang, 2018)93.5792.7993.1894.8194.1194.4653.0462.2558.79
Flat (Li et al., 2020)--96.09--95.86--68.55
SoftLexicon (Ma et al., 2020)95.7595.1095.4296.0896.1396.1170.9467.0270.50
MECT (Wu et al., 2021)--96.24--95.98--70.43
Span-based Methods
W2NER (Li et al., 2022)96.1296.0896.1096.9696.3596.6570.8473.8772.32
Boundary Smooth (Zhu and Li, 2022)96.3796.1596.2696.6396.6996.6670.1675.3672.66
DiffusionNER (Shen et al., 2023a)95.7194.1194.91------
BOPN (Ours)96.4496.3496.3996.7396.8396.7871.7973.9072.92
", + "bbox": [ + 119, + 369, + 882, + 554 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Table 2: Results on Chinese flat NER datasets MSRA, Resume and Weibo.", + "bbox": [ + 243, + 564, + 751, + 579 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "corpora, we use the BERT pre-trained with whole word masking (Cui et al., 2021).", + "bbox": [ + 112, + 605, + 485, + 636 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The BiLSTM has one layer and 256 hidden size with dropout rate of 0.5. The size of region embedding $d_{e}$ is 20. The maximum offset value $S$ is selected in $\\{1,2,3\\}$ . For all datasets, we train our models by using AdamW Optimizer (Loshchilov and Hutter, 2017) with a linear warmup-decay learning rate schedule. See Appendix A for more details. Our source code can be obtained from https://github.com/mhtang1995/BOPN.", + "bbox": [ + 112, + 638, + 485, + 783 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.3 Evaluation", + "text_level": 1, + "bbox": [ + 112, + 799, + 247, + 813 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We use strict evaluation metrics where a predicted entity is considered correct only when both the boundaries (after adding boundary offset) and type are accurately matched. The precision, recall and $F_{1}$ scores are employed. We run our model for five times and report averaged values.", + "bbox": [ + 112, + 822, + 485, + 917 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "5 Results and Analysis", + "text_level": 1, + "bbox": [ + 507, + 604, + 722, + 620 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "5.1 Main Results", + "text_level": 1, + "bbox": [ + 507, + 634, + 660, + 649 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The performance of our proposed method and the baselines on English flat NER datasets is presented in Table 1. 
The experimental results demonstrate that our approach surpasses the previous state-of-the-art (SOTA) methods by $+0.12\\%$ on the CoNLL 2003 dataset and $+0.20\\%$ on the OntoNotes 5 dataset, achieving superior performance with $F_{1}$ scores of $93.19\\%$ and $91.16\\%$ , respectively. For Chinese flat NER datasets, we provide the results in Table 2. Similarly, our proposed method achieves SOTA performance in terms of $F_{1}$ scores, surpassing the previous best method by $+0.13\\%$ , $+0.12\\%$ , and $+0.26\\%$ in $F_{1}$ scores on the MSRA, Resume NER, and Weibo NER datasets, respectively.", + "bbox": [ + 505, + 659, + 882, + 884 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The performance results on English nested NER datasets are presented in Table 3. Remarkably,", + "bbox": [ + 507, + 887, + 882, + 919 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "14838", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/fa2bba559ca22fd4759b58d7f6b9102e9206b9c08053052018dc26058c40e7ae.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
ModelsACE 2004ACE 2005GENIA
PRF1PRF1PRF1
Sequence Labeling Methods
Layered (Ju et al., 2018)---74.270.372.278.571.374.7
Pyramid (Wang et al., 2020)86.0886.4886.2883.9585.3984.6679.4578.9479.19
Span-based Methods
Biaffine (Yu et al., 2020)87.386.086.785.285.685.478.278.278.2
Locate and Label (Shen et al., 2021)87.4487.3887.4186.0987.2786.6780.1980.8980.54
W2NER (Li et al., 2022)87.3387.7187.5285.0388.6286.7983.1079.7681.39
Triaffine (Yuan et al., 2022)87.1387.6887.6086.7086.9486.8280.4282.0681.23
Boundary Smooth (Zhu and Li, 2022)88.4387.5387.9886.2588.0787.15---
DiffusionNER (Shen et al., 2023a)88.1188.6688.3986.1587.7286.9382.1080.9781.53
Others
Seq2Seq (Straková et al., 2019)--84.33--83.42--78.20
BartNER (Yan et al., 2021)87.2786.4186.8483.1686.3884.7478.5779.3078.93
PIQN (Shen et al., 2022)88.4887.8188.1486.2788.6087.4283.2480.3581.77
PromptNER (Shen et al., 2023b)87.5888.7688.1686.0788.3887.21---
BOPN (Ours)89.1389.4089.2689.5691.2390.3982.1482.1682.14
", + "bbox": [ + 119, + 80, + 884, + 355 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/2f8978854f1e34a1d256ff1d158bd7686e6740c1f55e0a214c8ffcb8517f8bc4.jpg", + "table_caption": [ + "Table 3: Results on English nested NER datasets ACE 2004, ACE 2004 and GENIA." + ], + "table_footnote": [], + "table_body": "
CoNLL 2003Resume NERACE 2004
BOPN (Ours)93.1996.7889.26
- w/o Type Inp.92.8796.4188.83
- w/o Region Emb.92.7196.2288.71
- w/o BO92.7496.2688.62
- w/o 3DConv92.8796.4089.11
- MBO (S=1)93.1196.7589.14
- MBO (S=2)93.1596.7889.26
- MBO (S=3)93.1996.7189.22
- 3DConv (l=1)93.0896.6989.18
- 3DConv (l=2)93.1996.7589.26
- 3DConv (l=3)93.0596.7889.25
", + "bbox": [ + 119, + 401, + 485, + 606 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Table 4: Ablation Studies. MBO means the maximum boundary offset value.", + "bbox": [ + 112, + 615, + 485, + 645 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "our proposed BOPN achieves substantial improvements in performance on these datasets, with $F_{1}$ scores increasing by $+0.87\\%$ , $+2.97\\%$ , and $+0.37\\%$ on ACE 2004, ACE 2005, and GENIA, respectively. These results align with our expectations, as the boundary features of nested entities are more intricate compared to flat entities. We attribute this improvement to two key factors: 1) Our method predicts the boundary information of various entity types in parallel, effectively avoiding nested boundary conflicts between different types of entities. 2) By predicting boundary offsets, our method expands the predictive range for each text span, allowing for more granular and precise identification of entity boundaries.", + "bbox": [ + 112, + 677, + 490, + 917 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "5.2 Ablation Studies", + "text_level": 1, + "bbox": [ + 507, + 404, + 685, + 418 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "In order to assess the impact of each component in our method, we conduct ablation studies on the CoNLL 2003, ACE 2005, and Resume NER datasets. The results of these studies are presented in Table 4.", + "bbox": [ + 507, + 426, + 882, + 505 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Maximum Boundary Offset We investigate the impact of training the model with different maximum offset values $S$ through our ablation studies. The hyperparameter $S$ determines the annotation scope of non-entity spans with boundary offset. Specifically, the extreme scenario of setting $S$ to 0 corresponds to a condition \"w/o BO\" (without Boundary Offset). 
The results indicate a significant decline in performance when employing \"w/o BO,\" confirming the usefulness of utilizing boundary offsets as supervision. However, we also observe that the optimal $S$ value varies across different datasets. This could be attributed to the fact that a larger $S$ value provides more boundary knowledge but also increases the label search space. Consequently, hyperparameter tuning for $S$ becomes necessary to achieve the best performance in practice.", + "bbox": [ + 507, + 516, + 884, + 789 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "In addition, we analyze the learning curves of our model with different maximum offset values. Figure 4 demonstrates that a larger $S$ can accelerate the training process of the model. We think the reason may be that a larger $S$ not only leads to an increase of positive samples but also results in a decrease of negative samples, thereby ultimately enhancing the trainability of the model.", + "bbox": [ + 507, + 790, + 882, + 917 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "14839", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/80743a7388518887de4cddd798286acf3542bc8074bcaf3df54dbf23b325835e.jpg", + "image_caption": [ + "Figure 4: The learning curves on ACE 2004 dataset." + ], + "image_footnote": [], + "bbox": [ + 134, + 82, + 463, + 259 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/9525a18639f669a1f475103602f75af1476495c53c686972228cffcd243de4c9.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
LabelPRF1Support
-2S81.5182.0281.765029
-1S81.6282.9782.295292
1S79.5581.4780.503281
2S76.2779.5577.881438
-2E78.6477.1977.901464
-1E79.7980.5880.183254
1E82.2682.2082.235393
2E82.3780.7581.575113
081.9281.9581.935495
ALL79.2184.2281.645495
- w/ rules81.8582.5682.205495
", + "bbox": [ + 119, + 299, + 485, + 511 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Table 5: Performance of each boundary offset label on GENIA, where the maximum offset value is 2. The reported results is one out of five experiments.", + "bbox": [ + 112, + 521, + 485, + 565 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "3D Convolution Layer \"w/o 3DConv\" indicates the 3D convolution layers are removed. As seen, the results show a decline in performance across all datasets, indicating the importance of 3D convolution layers in capturing the interactions between boundary offsets of adjacent text spans.", + "bbox": [ + 112, + 590, + 487, + 687 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Type Inputs \"w/o Type Inputs\" refers to a setting where the entity types encoded with the sentence are replaced, in which the randomly initialized entity type embeddings are fed into the biaffine classifier. The results obtained in this setting show a slight decline in performance.", + "bbox": [ + 112, + 699, + 487, + 794 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Region Embedding The results demonstrate a slight drop in performance across all datasets without region embeddings. This suggests that integrating sample distribution features can be a reasonable approach for enhancing text span representations.", + "bbox": [ + 112, + 806, + 487, + 885 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "As the CLN layer and biaffine classifier serve as fundamental components in our approach for span", + "bbox": [ + 112, + 887, + 487, + 919 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/8053ec891e88602ef839556b5f26ee561c5811df50fa4443201d11aa38f9185d.jpg", + "image_caption": [ + "Figure 5: A comparison of F1-scores on entities of different lengths in GENIA dataset. Entity supports are in the parentheses." 
+ ], + "image_footnote": [], + "bbox": [ + 529, + 82, + 863, + 236 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "representation and classification, they cannot be evaluated independently. Nonetheless, our ablation studies demonstrate the effectiveness of learning boundary offset information and the usefulness of each composition in our model.", + "bbox": [ + 507, + 319, + 882, + 400 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.3 Detailed Analysis", + "text_level": 1, + "bbox": [ + 507, + 420, + 692, + 435 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Performance on Different Offset Labels We investigate the performance of each boundary offset label, and the results are presented in Table 5. Notably, the offset label \"0\" has complete entity support and achieves an $F_{1}$ score of $82.04\\%$ . Furthermore, we observed a positive correlation between the quantity of entity support and the performance of boundary offset labels.", + "bbox": [ + 507, + 447, + 882, + 575 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "When a text span is not predicted as \"out-of-range\", its assigned label can be utilized to determine the position of its nearest entity. By aggregating all predictions of offset labels, we observe a sharp decrease in precision score, along with a significant increase in recall score, when compared to only considering the center span (with an offset label of \"0\"). This finding suggests that different offset labels provide distinct information that assists the model in recognizing additional entities. Nevertheless, this approach can introduce noisy predictions due to the model's inadequate performance on certain labels. Despite this limitation, it may have practical applicability in recall-sensitive applications.", + "bbox": [ + 507, + 577, + 882, + 819 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "As discussed in Section 3.3, we devise two heuristic rules to remove improbable predictions. 
Our findings reveal that this approach enhances the precision score, with only a minor reduction in the recall score, leading to an overall improvement in the $F_{1}$ score.", + "bbox": [ + 507, + 822, + 882, + 917 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "14840", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/7ecb4838dceef94fe33a44e5fd507039fdc00eaa747ec70b3febb19d1bbb31c2.jpg", + "image_caption": [ + "Figure 6: Effect of varying percentage of training samples on GENIA. We train all models for 50 epochs and report their best performance." + ], + "image_footnote": [], + "bbox": [ + 132, + 82, + 463, + 262 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Performance on Entities with Varying Lengths We explore the model performance on entities of different lengths in GENIA. As shown in Figure 5, we compare the $F_{1}$ scores of models trained with different values of $S$ . The model achieves higher $F_{1}$ scores across all columns when $S = 2$ , with a more pronounced performance improvement for longer entities. The results highlight the usefulness of learning boundary offsets between non-entity and entity spans, which helps the model learn boundary features more effectively.", + "bbox": [ + 112, + 342, + 487, + 519 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Size of Training Data As the boundary offset labels contain more informative knowledge, we hypothesize that our proposed BOPN would perform better with limited training data. As shown in Figure 6, our model achieves impressive results, exhibiting only a $5.46\\%$ decrease in performance when trained with a mere $12.5\\%$ of the available training data. 
In contrast, when boundary information is not utilized during training, the model's performance declines rapidly as the amount of training data decreases, thus creating significant obstacles to effective training.", + "bbox": [ + 112, + 527, + 489, + 721 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6 Related Work", + "text_level": 1, + "bbox": [ + 112, + 732, + 268, + 747 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In recent years, various paradigms for named entity recognition (NER) have been proposed, among which span-based methods have become one of the most mainstream approaches, treating NER as a text span classification problem. With the development of pre-trained language models, some works (Sohrab and Miwa, 2018; Luan et al., 2019; Wadden et al., 2019) obtain span representations by concatenating boundary representations or aggregating token representations and feeding them into", + "bbox": [ + 112, + 758, + 489, + 917 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "a linear classifier for type prediction. Alternatively, Yu et al. (2020) utilize a biaffine classifier to fuse start and end boundary representations directly for span classification. To further enhance span representation, several other methods (Wan et al., 2022; Yuan et al., 2022) propose fusing representations of token, boundary, and related entity spans.", + "bbox": [ + 507, + 84, + 884, + 197 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Meanwhile, some methods try to improve span-based methods by adding boundary supervision. Specifically, Zheng et al. (2019) and Tan et al. (2020) additionally detect entity boundaries with multi-task learning, while Shen et al. (2021) perform boundary regression after span prediction. Li et al. (2022) design two word-word relations for span classification. 
Compared with previous methods, our proposed method utilizes continuous boundary offset values to model text spans, which can capture both the boundary differences and connections between non-entity and entity spans.", + "bbox": [ + 507, + 198, + 884, + 391 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In addition to span-based methods, there are three widely-used NER methods. The traditional sequence labeling methods (Huang et al., 2015; Lample et al., 2016) assign each token a tag with a pre-designed tagging scheme (e.g., $BIO$ ). To address nested entities, some works (Ju et al., 2018; Wang et al., 2020; Rojas et al., 2022) stack multiple decoding layers or design special tagging schemes. Hypergraph-based methods (Lu and Roth, 2015; Katiyar and Cardie, 2018; Wang and Lu, 2018) represent the input sentence as a hypergraph for detecting nested entities, which must be carefully designed to avoid spurious structures. Sequence-to-sequence methods reformulate NER as a sequence generation problem. For example, Gillick et al. (2016) first apply the Seq2Seq model for NER, inputting the sentence and outputting start positions, entity lengths, and types. Straková et al. (2019) use the Seq2Seq model and an enhanced BILOU scheme to address nested NER. Yan et al. (2021) treat NER as an entity span sequence generation problem with a pointer network based on BART (Lewis et al., 2019).", + "bbox": [ + 507, + 393, + 884, + 745 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "7 Conclusion", + "text_level": 1, + "bbox": [ + 509, + 762, + 640, + 778 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this paper, we introduce a novel approach for named entity recognition (NER) called the Boundary Offset Prediction Network (BOPN). BOPN predicts the boundary offsets between candidate spans and their nearest entities, leveraging entity types as inputs. 
By incorporating entity types, BOPN enables parallel prediction of type-aware boundary offsets, enhancing the model's ability to capture", + "bbox": [ + 507, + 790, + 882, + 917 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "14841", + "bbox": [ + 477, + 928, + 522, + 940 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "fine-grained entity boundaries. To capture the interactions between boundary offsets, we employ multiple 3D convolution layers, which refine the offset predictions and capture the inherent quantitative relationships between adjacent text spans.", + "bbox": [ + 112, + 84, + 487, + 164 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "The experimental results demonstrate that our proposed method achieves state-of-the-art performance on eight widely-used datasets, including five English NER datasets and three Chinese NER datasets. Moreover, further analysis reveals a significant improvement in recall scores by utilizing boundary offset as supervision, showcasing the utility of our approach for recall-sensitive applications in NER.", + "bbox": [ + 115, + 166, + 489, + 307 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Limitations", + "text_level": 1, + "bbox": [ + 112, + 321, + 218, + 336 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "The proposed BOPN approach has certain limitations that should be acknowledged. Firstly, while BOPN treats boundary offsets as classification targets, it does not explicitly model the order relationship between offset values. 
Although the 3D convolution layers are employed to implicitly capture interactions between boundary offsets, they do not provide a strong constraint on the ordering of offset labels.", + "bbox": [ + 112, + 346, + 487, + 489 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Additionally, the method uses boundary offsets to convert some non-entity spans into positive samples, which leads to higher recall scores but potentially lower precision scores. To optimize prediction results, heuristic rules are applied to filter out unreasonable samples. However, these rules are based on observations and may not be comprehensive enough to handle all cases effectively.", + "bbox": [ + 112, + 491, + 487, + 619 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Therefore, there is still a need to explore more effective ways to integrate and optimize the offset predictions in order to address these limitations and enhance the overall performance of the BOPN approach.", + "bbox": [ + 112, + 620, + 487, + 700 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Ethics Statement", + "text_level": 1, + "bbox": [ + 112, + 712, + 265, + 726 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "To address ethical concerns, we provide two detailed descriptions: 1) All experiments were conducted on existing datasets derived from public scientific papers. 2) Our work does not contain any personally identifiable information and does not harm anyone.", + "bbox": [ + 112, + 737, + 487, + 833 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Acknowledgements", + "text_level": 1, + "bbox": [ + 112, + 846, + 285, + 860 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "This work was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (No. 
XDC02040400).", + "bbox": [ + 112, + 870, + 487, + 917 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 510, + 83, + 608, + 98 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Pei Chen, Haibo Ding, Jun Araki, and Ruihong Huang. 2021. Explicitly capturing relations between entity mentions via graph neural networks for domain-specific named entity recognition. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 735-742.", + "Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, and Ziqing Yang. 2021. Pre-training with whole word masking for chinese bert. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3504-3514.", + "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.", + "Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2016. Multilingual language processing from bytes. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1296-1306.", + "Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, and Maosong Sun. 2022.Ptr: Prompt tuning with rules for text classification. AI Open, 3:182-192.", + "Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415.", + "Feng Hou, Ruili Wang, Jun He, and Yi Zhou. 2020. Improving entity linking through semantic reinforced entity embeddings. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6843-6848.", + "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991.", + "Justin M Johnson and Taghi M Khoshgoftaar. 2019. Survey on deep learning with class imbalance. Journal of Big Data, 6(1):1-54.", + "Meizhi Ju, Makoto Miwa, and Sophia Ananiadou. 2018. A neural layered model for nested named entity recognition. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1446-1459.", + "Arzoo Katiyar and Claire Cardie. 2018. Nested named entity recognition revisited. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 1." + ], + "bbox": [ + 510, + 105, + 884, + 917 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "14842", + "bbox": [ + 477, + 928, + 524, + 940 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260-270.", + "Phong Le and Ivan Titov. 2018. Improving entity linking by modeling latent relations between mentions. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1595-1604.", + "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.", + "Gina-Anne Levow. 2006. 
The third international Chinese language processing bakeoff: Word segmentation and named entity recognition. In Proceedings of the Fifth SIGHAN workshop on Chinese language processing, pages 108-117.", + "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.", + "Fei Li, ZhiChao Lin, Meishan Zhang, and Donghong Ji. 2021a. A span-based model for joint overlapped and discontinuous named entity recognition. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4814-4828, Online. Association for Computational Linguistics.", + "Jingye Li, Hao Fei, Jiang Liu, Shengqiong Wu, Meishan Zhang, Chong Teng, Donghong Ji, and Fei Li. 2022. Unified named entity recognition as word-word relation classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 10965-10973.", + "Jingye Li, Kang Xu, Fei Li, Hao Fei, Yafeng Ren, and Donghong Ji. 2021b. Mrn: A locally and globally mention-based reasoning network for document-level relation extraction. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, pages 1359-1370.", + "Xiaonan Li, Hang Yan, Xipeng Qiu, and Xuan-Jing Huang. 2020. Flat: Chinese ner using flat-lattice transformer. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6836-6842.", + "Ruibo Liu, Jason Wei, Chenyan Jia, and Soroush Vosoughi. 2021. Modulating language models with" + ], + "bbox": [ + 115, + 85, + 489, + 917 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "emotions. 
In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4332-4339.", + "Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.", + "Wei Lu and Dan Roth. 2015. Joint mention extraction and classification with mention hypergraphs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 857-867.", + "Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3036-3046, Minneapolis, Minnesota. Association for Computational Linguistics.", + "Ruotian Ma, Minlong Peng, Qi Zhang, Zhongyu Wei, and Xuan-Jing Huang. 2020. Simplify the usage of lexicon in chinese ner. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5951-5960.", + "Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1105-1116.", + "Tomoko Ohta, Yuka Tateisi, Jin-Dong Kim, Hideki Mima, and Junichi Tsujii. 2002. The genia corpus: An annotated research abstract corpus in molecular biology domain. In Proceedings of the human language technology conference, pages 73-77. CiteSeer.", + "Nanyun Peng and Mark Dredze. 2015. Named entity recognition for chinese social media with jointly trained embeddings. In Proceedings of the 2015 conference on empirical methods in natural language processing, pages 548-554.", + "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. 
Conll-2012 shared task: Modeling multilingual unrestricted coreference in ontonotes. In Joint conference on EMNLP and CoNLL-shared task, pages 1-40.", + "Guanghui Qin and Jason Eisner. 2021. Learning how to ask: Querying lms with mixtures of soft prompts. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5203-5212.", + "Matías Rojas, Felipe Bravo-Marquez, and Jocelyn Dunstan. 2022. Simple yet powerful: An overlooked architecture for nested named entity recognition. In Proceedings of the 29th International Conference on Computational Linguistics, pages 2108-2117." + ], + "bbox": [ + 510, + 85, + 880, + 917 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "14843", + "bbox": [ + 477, + 928, + 524, + 940 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Erik Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147.", + "Yongliang Shen, Xinyin Ma, Zeqi Tan, Shuai Zhang, Wen Wang, and Weiming Lu. 2021. Locate and label: A two-stage identifier for nested named entity recognition. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2782-2794.", + "Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023a. Diffusionner: Boundary diffusion for named entity recognition. arXiv preprint arXiv:2305.13298.", + "Yongliang Shen, Zeqi Tan, Shuhui Wu, Wenqi Zhang, Rongsheng Zhang, Yadong Xi, Weiming Lu, and Yueting Zhuang. 2023b. Promptner: Prompt locating and typing for named entity recognition. 
arXiv preprint arXiv:2305.17104.", + "Yongliang Shen, Xiaobin Wang, Zeqi Tan, Guangwei Xu, Pengjun Xie, Fei Huang, Weiming Lu, and Yueting Zhuang. 2022. Parallel instance query network for named entity recognition. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 947-961.", + "Mohammad Golam Sohrab and Makoto Miwa. 2018. Deep exhaustive model for nested named entity recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2843-2849.", + "Jana Straková, Milan Straka, and Jan Hajic. 2019. Neural architectures for nested ner through linearization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5326-5331.", + "Chuanqi Tan, Wei Qiu, Mosha Chen, Rui Wang, and Fei Huang. 2020. Boundary enhanced neural span classification for nested named entity recognition. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 9016-9023.", + "Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. 2016. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022.", + "David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5784-5789, Hong Kong, China. Association for Computational Linguistics." + ], + "bbox": [ + 115, + 85, + 489, + 917 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Juncheng Wan, Dongyu Ru, Weinan Zhang, and Yong Yu. 2022. Nested named entity recognition with span-level graphs. 
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 892-903, Dublin, Ireland. Association for Computational Linguistics.", + "Bailin Wang and Wei Lu. 2018. Neural segmental hypergraphs for overlapping mention recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 204-214.", + "Jue Wang, Lidan Shou, Ke Chen, and Gang Chen. 2020. Pyramid: A layered model for nested named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5918-5928.", + "Shuang Wu, Xiaoning Song, and Zhenhua Feng. 2021. Mect: Multi-metadata embedding based cross-transformer for chinese named entity recognition. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1529-1539.", + "Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, and Xipeng Qiu. 2021. A unified generative framework for various ner subtasks. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5808-5822.", + "Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020. Named entity recognition as dependency parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6470-6476.", + "Zheng Yuan, Chuanqi Tan, Songfang Huang, and Fei Huang. 2022. Fusing heterogeneous factors with triaffine mechanism for nested named entity recognition. In *Findings of the Association for Computational Linguistics: ACL* 2022, pages 3174-3186.", + "Yue Zhang and Jie Yang. 2018. Chinese ner using lattice LSTM. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1554-1564.", + "Changmeng Zheng, Yi Cai, Jingyun Xu, Ho-fung Leung, and Guandong Xu. 2019. A boundary-aware neural model for nested named entity recognition. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 357-366, Hong Kong, China. Association for Computational Linguistics.", + "Enwei Zhu and Jinpeng Li. 2022. Boundary smoothing for named entity recognition. In Proceedings of the" + ], + "bbox": [ + 510, + 85, + 884, + 917 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "14844", + "bbox": [ + 477, + 928, + 524, + 940 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7096-7108.", + "bbox": [ + 131, + 85, + 489, + 124 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "A Appendix", + "text_level": 1, + "bbox": [ + 114, + 149, + 236, + 166 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "A.1 Datasets", + "text_level": 1, + "bbox": [ + 114, + 175, + 231, + 190 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "We evaluate our method on eight datasets, including CoNLL 2003, OntoNotes 5, ACE 2004, ACE 2005, and GENIA for English NER datasets; MSRA, Resume NER and Weibo NER for Chinese NER datasets. Table 6 presents the detailed statistics of datasets.", + "bbox": [ + 112, + 196, + 489, + 292 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "A.2 Implementation Details", + "text_level": 1, + "bbox": [ + 114, + 304, + 349, + 319 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "We use BioBERT-v1.1 (Lee et al., 2020) as the contextual embedding in GENIA. 
For other English corpora, we use BERT-large-cased (Devlin et al., 2019) as the contextual embedding. For Chinese corpora, we use the BERT pre-trained with whole word masking (Cui et al., 2021). Our model is implemented with PyTorch and trained with an NVIDIA RTX3090 GPU. We use a grid search to find the best hyperparameters, which are tuned on the development set. The ranges of hyperparameters used for the eight datasets are listed in Table 7.", + "bbox": [ + 112, + 324, + 489, + 500 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "A.3 Baselines", + "text_level": 1, + "bbox": [ + 114, + 512, + 236, + 526 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "We compare BOPN with the following baselines:", + "bbox": [ + 112, + 533, + 480, + 548 + ], + "page_idx": 11 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- BiLSTM-CRF (Miwa and Bansal, 2016) is a model for sequence labeling tasks that combines BiLSTM with CRF layers.", + "- BERT-Tagger (Devlin et al., 2019) utilizes the pre-trained language model BERT as a feature extractor and incorporates a tag classifier for fine-tuning.", + "- Lattice (Zhang and Yang, 2018) proposed a lattice-structured LSTM model for Chinese NER.", + "- Layered (Ju et al., 2018) dynamically stacks flat NER layers to solve the nested NER task.", + "- Flat (Li et al., 2020) proposes a flat-lattice transformer for Chinese NER, which converts the lattice structure into a flat structure consisting of spans.", + "- Pyramid (Wang et al., 2020) designs a pyramid layer and an inverse pyramid layer to decode nested entities." 
+ ], + "bbox": [ + 136, + 558, + 489, + 917 + ], + "page_idx": 11 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- SoftLexicon (Ma et al., 2020) proposes a Chinese NER method in which lexicon information is introduced by simply adjusting the character representation layer.", + "- MECT (Wu et al., 2021) uses multi-metadata embedding in a two-stream transformer to integrate Chinese character features with the radical-level embedding.", + "- Biaffine (Yu et al., 2020) classifies text spans by a biaffine classifier between boundary representations.", + "- Locate and Label (Shen et al., 2021) proposed a two-stage identifier of locating entities with boundary regression first and classifying them later.", + "- W2NER (Li et al., 2022) models NER as word-word relation classification, including the next-neighboring-word and the tail-head-word relations.", + "- Triaffine (Yuan et al., 2022) proposed a tri-affine mechanism to fuse information of inside tokens, boundaries, labels for NER.", + "- Boundary Smooth (Zhu and Li, 2022) proposed boundary smoothing as a regularization technique for span-based neural NER models.", + "- DiffusionNER (Shen et al., 2023a) formulates NER as a boundary-denoising diffusion process, which samples noisy spans from a Gaussian distribution.", + "- Seq2Seq (Straková et al., 2019) converts the labels of nested entities into a sequence and then uses a seq2seq model to decode entities.", + "- BartNER (Yan et al., 2021) formulates NER as an entity span sequence generation problem based on the pre-training Seq2Seq model BART (Lewis et al., 2019).", + "PIQN (Shen et al., 2022) sets up global and learnable instance queries to extract entities from a sentence in a parallel manner.", + "- PromptNER (Shen et al., 2023b) unifies entity locating and entity typing in prompt learning for NER, which predicts all entities by filling position slots and type slots." 
+ ], + "bbox": [ + 531, + 84, + 884, + 892 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "14845", + "bbox": [ + 477, + 928, + 524, + 940 + ], + "page_idx": 11 + }, + { + "type": "table", + "img_path": "images/5dffea8ce0cd1df58b1b2e1979097de0347a1fe50c0ec75bfd57ecae3a4bd1a4.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
 | CoNLL 2003 | OntoNotes 5 | ACE 2004 | ACE 2005 | GENIA | MSRA | Resume | Weibo
Types | 4 | 18 | 7 | 7 | 5 | 3 | 8 | 4
#Train.S | 17291 | 59924 | 6200 | 7194 | 16692 | 46471 | 3819 | 1350
#Dev.S | - | 8528 | 745 | 969 | - | - | 463 | 270
#Test.S | 3453 | 8262 | 812 | 1047 | 1854 | 4376 | 477 | 270
Avg.Len.S | 14.38 | 18.11 | 22.61 | 18.97 | 25.41 | 45.54 | 31.17 | 54.57
#Train.E | 29441 | 128738 | 22204 | 9389 | 50509 | 74703 | 13438 | 1855
#Dev.E | - | 20354 | 2514 | 1112 | - | - | 1497 | 379
#Test.E | 5648 | 12586 | 3035 | 1118 | 5506 | 6181 | 1630 | 409
Avg.Len.E | 1.45 | 1.83 | 2.50 | 2.28 | 1.97 | 3.24 | 5.88 | 2.60
", + "bbox": [ + 119, + 80, + 884, + 247 + ], + "page_idx": 12 + }, + { + "type": "table", + "img_path": "images/a371baf555b1f0288923db625c9c7bd334695def450301addcfc8eab21880179.jpg", + "table_caption": [ + "Table 6: Dataset Statistics. \"#\" denotes the amount. \"S.\" and \"E.\" denote sentence and entity mentions, respectively." + ], + "table_footnote": [], + "table_body": "
Parameter | Value
Epoch | [50, 80]
Batch size | [8, 16]
Learning rate (BERT) | [5e-6, 3e-5]
Learning rate (Other) | 1e-3
LSTM hidden size d | 256
LSTM dropout | 0.5
Region embedding size de | 20
Biaffine hidden size db | 150
Biaffine dropout | 0.2
Maximum offset value S | [1, 3]
Adam epsilon | 1e-8
Warm factor | 0.1
", + "bbox": [ + 139, + 478, + 460, + 705 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Table 7: Hyper-parameter settings.", + "bbox": [ + 181, + 714, + 418, + 730 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "14846", + "bbox": [ + 477, + 928, + 524, + 940 + ], + "page_idx": 12 + } +] \ No newline at end of file diff --git a/2023/A Boundary Offset Prediction Network for Named Entity Recognition/5304d5a7-baa1-46a8-bf30-4a5c29036879_model.json b/2023/A Boundary Offset Prediction Network for Named Entity Recognition/5304d5a7-baa1-46a8-bf30-4a5c29036879_model.json new file mode 100644 index 0000000000000000000000000000000000000000..81c5bfdbd3e807248c60e20dafcc388cd010e4b3 --- /dev/null +++ b/2023/A Boundary Offset Prediction Network for Named Entity Recognition/5304d5a7-baa1-46a8-bf30-4a5c29036879_model.json @@ -0,0 +1,2613 @@ +[ + [ + { + "type": "title", + "bbox": [ + 0.137, + 0.09, + 0.861, + 0.112 + ], + "angle": 0, + "content": "A Boundary Offset Prediction Network for Named Entity Recognition" + }, + { + "type": "text", + "bbox": [ + 0.153, + 0.124, + 0.856, + 0.159 + ], + "angle": 0, + "content": "Minghao Tang\\(^{1,2}\\), Yongquan He\\(^{3}\\), Yongxiu Xu\\(^{1,2*}\\), Hongbo Xu\\(^{1}\\), Wenyuan Zhang\\(^{1,2}\\) and Yang Lin\\(^{3}\\)" + }, + { + "type": "text", + "bbox": [ + 0.297, + 0.16, + 0.707, + 0.175 + ], + "angle": 0, + "content": "\\(^{1}\\)Institute of Information Engineering, CAS, China" + }, + { + "type": "text", + "bbox": [ + 0.334, + 0.177, + 0.669, + 0.192 + ], + "angle": 0, + "content": "\\(^{2}\\)School of Cyber Security, UCAS, China" + }, + { + "type": "text", + "bbox": [ + 0.434, + 0.193, + 0.57, + 0.208 + ], + "angle": 0, + "content": "3Meituan, China" + }, + { + "type": "text", + "bbox": [ + 0.188, + 0.21, + 0.816, + 0.226 + ], + "angle": 0, + "content": "{tangminghao,xuyongxiu,hbxu}@ie.ac.cn, heyongquan@meituan.com" + }, + { + "type": "title", + "bbox": [ + 0.261, + 0.253, + 0.341, + 0.267 + ], + 
"angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.142, + 0.279, + 0.461, + 0.62 + ], + "angle": 0, + "content": "Named entity recognition (NER) is a fundamental task in natural language processing that aims to identify and classify named entities in text. However, span-based methods for NER typically assign entity types to text spans, resulting in an imbalanced sample space and neglecting the connections between non-entity and entity spans. To address these issues, we propose a novel approach for NER, named the Boundary Offset Prediction Network (BOPN), which predicts the boundary offsets between candidate spans and their nearest entity spans. By leveraging the guiding semantics of boundary offsets, BOPN establishes connections between non-entity and entity spans, enabling non-entity spans to function as additional positive samples for entity detection. Furthermore, our method integrates entity type and span representations to generate type-aware boundary offsets instead of using entity types as detection targets. We conduct experiments on eight widely-used NER datasets, and the results demonstrate that our proposed BOPN outperforms previous state-of-the-art methods." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.632, + 0.26, + 0.646 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.657, + 0.489, + 0.801 + ], + "angle": 0, + "content": "Named entity recognition (NER) is a fundamental task in natural language processing (NLP) that involves identifying and categorizing named entities in text, such as people, locations and organizations. It has drawn much attention from the community due to its relevance in various NLP applications, such as entity linking (Le and Titov, 2018; Hou et al., 2020) and relation extraction (Miwa and Bansal, 2016; Li et al., 2021b)." 
+ }, + { + "type": "text", + "bbox": [ + 0.113, + 0.802, + 0.49, + 0.899 + ], + "angle": 0, + "content": "Various paradigms have been proposed for NER, including the sequence labeling (Huang et al., 2015; Ju et al., 2018), hypergraph-based (Lu and Roth, 2015; Katiyar and Cardie, 2018; Wang and Lu, 2018), sequence-to-sequence (Gillick et al., 2016; Yan et al., 2021) and span-based methods (Sohrab" + }, + { + "type": "image_caption", + "bbox": [ + 0.526, + 0.261, + 0.864, + 0.286 + ], + "angle": 0, + "content": "protein protein cell_type HMG box containing transcription factors in lymphocyte differentiation" + }, + { + "type": "image", + "bbox": [ + 0.517, + 0.29, + 0.878, + 0.414 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.424, + 0.886, + 0.511 + ], + "angle": 0, + "content": "Figure 1: A sentence from GENIA dataset (Ohta et al., 2002), containing 8 words and 3 entities. The candidate spans covers the upper triangular region with a total of 36 samples of each matrix. There are 2 and 1 positive samples for \"protein\" and \"cell type\" entity types, respectively." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.545, + 0.884, + 0.657 + ], + "angle": 0, + "content": "and Miwa, 2018; Shen et al., 2021; Chen et al., 2021). Among these approaches, the span-based method has become the most popular due to its simplicity and effectiveness. It is straightforward that typically embeds all possible text spans and predicts their entity types, making it suitable for various NER subtasks (Li et al., 2021a, 2022)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.663, + 0.885, + 0.919 + ], + "angle": 0, + "content": "Despite significant progress made by span-based methods in NER, there remain two critical issues that require attention. Firstly, these methods often suffer from highly imbalanced sample spaces, as exemplified in Figure 1. 
Such imbalance can negatively impact the trainability and performance of deep neural networks (Johnson and Khoshgoftaar, 2019). Although some methods (Shen et al., 2021; Wan et al., 2022) mitigate this issue by restricting the maximum span length, such an approach can also constrain the model's predictive power. Secondly, current span-based methods primarily focus on learning the distinction between non-entities and entities, disregarding their relationships. While a model can identify whether \"HMG box\" is an entity, it may fail to recognize the connection be" + }, + { + "type": "page_footnote", + "bbox": [ + 0.142, + 0.905, + 0.398, + 0.919 + ], + "angle": 0, + "content": "* Yongxiu Xu is the corresponding author" + }, + { + "type": "page_number", + "bbox": [ + 0.477, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14834" + }, + { + "type": "footer", + "bbox": [ + 0.21, + 0.946, + 0.788, + 0.973 + ], + "angle": 0, + "content": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 14834-14846 December 6-10, 2023 ©2023 Association for Computational Linguistics" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.124, + 0.081, + 0.483, + 0.191 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.114, + 0.202, + 0.49, + 0.245 + ], + "angle": 0, + "content": "Figure 2: Text spans annotated with boundary offset. \"1S\" or \"1E\" represents a span has 1 offset from its nearest entity at the start or end boundary, and so on." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.272, + 0.489, + 0.336 + ], + "angle": 0, + "content": "tween \"HMG\" and \"HMG box.\" To enhance the model's ability to recognize entities, it is crucial to explicitly capture both boundary differences and connections between non-entities and entities." 
+ }, + { + "type": "text", + "bbox": [ + 0.113, + 0.338, + 0.49, + 0.53 + ], + "angle": 0, + "content": "In this paper, we intend to model text spans by utilizing boundary offset information as supervision, rather than predict their probability of belonging to entities. As shown in Figure 2, there could be two advantages for deep models when boundary offsets are learnable: i) The natural quantitative relationships between offset values enable the model to capture boundary differences and connections simultaneously. ii) Non-entity spans can have specific semantics that guide the positioning of entity spans, leading to an improved sample space with fewer negative samples." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.533, + 0.49, + 0.82 + ], + "angle": 0, + "content": "Based on this observation, we propose the Boundary Offset Prediction Network (BOPN) for NER. BOPN focuses on predicting boundary offsets between candidate spans and their nearest entities, providing a new perspective on modeling text spans. Specifically, our method follows the pipeline of first learning span representations and then classifying them for offset prediction. BERT (Devlin et al., 2019) and BiLSTM (Lample et al., 2016) are used to embed texts, followed by a Conditional Layer (Liu et al., 2021) for building span representations. Meanwhile, we also treat entity types as inputs rather than classification targets, which are fused with span representations to generate type-aware boundary offsets in parallel. Finally, we incorporate multiple 3D convolution layers to capture the natural quantitative relationships between the offset values." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.823, + 0.49, + 0.919 + ], + "angle": 0, + "content": "We evaluate our method on eight widely-used NER datasets, including five English NER datasets and three Chinese NER datasets. The experimental results demonstrate that our approach outperforms the existing state-of-the-art methods. 
Furthermore, a detailed examination reveals a significant im" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.883, + 0.133 + ], + "angle": 0, + "content": "provement in recall scores when aggregating results across offset labels, which is particularly beneficial for recall-sensitive applications." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.145, + 0.71, + 0.16 + ], + "angle": 0, + "content": "2 Problem Definition" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.171, + 0.885, + 0.251 + ], + "angle": 0, + "content": "Named Entity Recognition (NER) aims to identify all entities within an input sentence \\(\\mathrm{X} = \\{x_{n}\\}_{n = 1}^{N}\\), based on a pre-defined set of entity types \\(\\mathrm{Y} = \\{y_{m}\\}_{m = 1}^{M}\\). Typically, an entity is specified by token boundaries and an entity type." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.252, + 0.884, + 0.381 + ], + "angle": 0, + "content": "Our proposed method focuses on predicting the boundary offset between each candidate text span and its nearest entity. Hence, we formulate each text span as a quadruple: \\(\\{x_{i}, x_{j}, f_{s}, y_{m}\\}\\), where \\(i\\) and \\(j\\) denote the start and end boundary indices of the span, \\(f_{s}\\) represents the start or end boundary offset from its nearest entity of type \\(y_{m}\\). Note that an entity span is a special case with \\(f_{s} = 0\\)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.39, + 0.884, + 0.436 + ], + "angle": 0, + "content": "Annotation Guidelines To facilitate understanding, we present the essential boundary offset labels as follows:" + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.451, + 0.88, + 0.481 + ], + "angle": 0, + "content": "- Center Span: refers to an entity span with an offset label of \"0\"." 
+ }, + { + "type": "text", + "bbox": [ + 0.532, + 0.494, + 0.882, + 0.572 + ], + "angle": 0, + "content": "- \\(\\mathbf{\\nabla}^{*}\\mathbf{S}\\) or \\(\\mathbf{\\nabla}^{*}\\mathbf{E}\\): denotes the annotation of the start or end boundary offsets for non-entity spans. \" \\(\\ast\\) \" represents an offset value in the range of \\([-S, \\dots, -1, 1, \\dots, S]\\), where \\(S\\) denotes the maximum offset value." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.586, + 0.881, + 0.648 + ], + "angle": 0, + "content": "- Out-of-Range: refers to the annotation of a non-entity span with an absolute boundary offset value from its nearest entity exceeding the maximum offset value \\(S\\)." + }, + { + "type": "list", + "bbox": [ + 0.532, + 0.451, + 0.882, + 0.648 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.662, + 0.885, + 0.919 + ], + "angle": 0, + "content": "The annotation procedure for boundary offsets involves three steps. Initially, a 3-dimensional matrix \\(\\mathcal{O} \\in \\mathbb{R}^{M \\times N \\times N}\\) is constructed according to the input sentence \\(X\\), where \\(M\\) denotes the number of entity types and \\(N\\) represents the length of the sentence. Next, we annotate the center spans with the offset label \"0\" based on the golden entities present in \\(X\\). Entities of different types are assigned to their respective sub-matrices. Finally, for non-entity spans, we compute the start and end boundary offset values with respect to all center spans. Their annotation is determined by the absolute minimum offset value. If the absolute minimum offset value is less than \\(S\\), we annotate the corresponding *S or *E; otherwise, we label the span as \"Out-of-Range\"." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14835" + } + ], + [ + { + "type": "image_caption", + "bbox": [ + 0.178, + 0.085, + 0.276, + 0.096 + ], + "angle": 0, + "content": "(a) Span Encoder" + }, + { + "type": "image", + "bbox": [ + 0.117, + 0.097, + 0.881, + 0.325 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.334, + 0.882, + 0.363 + ], + "angle": 0, + "content": "Figure 3: An overview architecture of our method, which mainly consists of two components: a Span Encoder and a Boundary Offset Predictor." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.388, + 0.225, + 0.403 + ], + "angle": 0, + "content": "3 Methods" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.42, + 0.49, + 0.581 + ], + "angle": 0, + "content": "Figure 3 provides an overview of our method, which encompasses two primary components: a Span Encoder (Section 3.1) and a Boundary Offset Predictor (Section 3.2). The Span Encoder is responsible for encoding entity types and sentences, utilizing word representations to construct span representations. Subsequently, the entity type and span representations are inputted into the boundary offset predictor, facilitating type-aware offset classification." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.601, + 0.273, + 0.617 + ], + "angle": 0, + "content": "3.1 Span Encoder" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.627, + 0.489, + 0.707 + ], + "angle": 0, + "content": "Drawing inspiration from the prompt-based methods (Qin and Eisner, 2021; Han et al., 2022), we consider entity types as task-oriented inputs, indicating the specific types of entities that the model needs to predict within a given sentence." 
+ }, + { + "type": "text", + "bbox": [ + 0.113, + 0.71, + 0.49, + 0.92 + ], + "angle": 0, + "content": "To achieve this, we create a set of additional type tokens, denoted as \\(\\mathrm{P} = \\{p_m\\}_{m=1}^M\\), where \\(p_m\\) represents a learnable special token corresponding to entity type \\(y_m\\). Next, we concatenate the soft tokens \\(\\mathrm{P}\\) with the sentence \\(\\mathrm{X}\\) to form a single sequence, and employ BERT (Devlin et al., 2019) to encode them simultaneously. The output of BERT is then passed through a BiLSTM (Lample et al., 2016) to generate final embedding features \\(\\mathrm{H} = \\{h_1, h_2, \\dots, h_{M+N}\\} \\in \\mathbb{R}^{(M+N) \\times d}\\), where \\(d\\) is the hidden size. Finally, we split \\(\\mathrm{H}\\) to obtain entity type representations \\(\\mathrm{H}^Y \\in \\mathbb{R}^{M \\times d}\\) and token representations \\(\\mathrm{H}^X \\in \\mathbb{R}^{N \\times d}\\), respectively." + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.388, + 0.885, + 0.502 + ], + "angle": 0, + "content": "Span Representation Given the token representations \\(\\mathrm{H}^X = \\{h_1, h_2, \\dots, h_N\\}\\), the span representation \\(v_{ij}\\) can be considered as a fusion of the boundary representations \\((h_i, h_j)\\). Following Li et al. 
(2022), we adopt the Conditional Layer Normalization (CLN) (Liu et al., 2021) mechanism to build a high-quality span representation:" + }, + { + "type": "equation", + "bbox": [ + 0.591, + 0.514, + 0.884, + 0.551 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} v _ {i j} = \\operatorname {C L N} \\left(h _ {i}, h _ {j}\\right) \\tag {1} \\\\ = \\gamma_ {j} \\otimes \\operatorname {N o r m} \\left(h _ {i}\\right) + \\lambda_ {j}, \\\\ \\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.563, + 0.885, + 0.645 + ], + "angle": 0, + "content": "where \\(\\mathrm{Norm}(\\cdot)\\) is the instance normalization function (Ulyanov et al., 2016), \\(\\gamma_{j}\\) and \\(\\lambda_{j}\\) are the condition parameters that are obtained by two different feedforward networks: \\(\\gamma_{j} = \\mathrm{FFN}(h_{j})\\) and \\(\\lambda_{j} = \\mathrm{FFN}(h_{j})\\)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.645, + 0.884, + 0.757 + ], + "angle": 0, + "content": "While valid candidate spans are restricted to the upper triangular region of the adjacent text span matrix, a region embedding \\( \\mathrm{E} = [e_{up}, e_{low}] \\in \\mathbb{R}^{2 \\times d_e} \\) are adapted to distinguish the positions of text spans. The final representation of each span is obtained as: \\( \\hat{v}_{ij} = [v_{ij}, e_{up}] \\) if \\( i \\leq j \\); \\( \\hat{v}_{ij} = [v_{ij}, e_{low}] \\) if \\( i > j \\)." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.769, + 0.767, + 0.785 + ], + "angle": 0, + "content": "3.2 Boundary Offset Predictor" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.791, + 0.885, + 0.855 + ], + "angle": 0, + "content": "As previously mentioned, we utilize the entity types as inputs to guide the model in generating type-aware boundary offsets, rather than categorizing each text span into a particular entity type." 
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.856, + 0.884, + 0.92 + ], + "angle": 0, + "content": "The biaffine classifier (Yu et al., 2020) is employed to fuse entity type representations and span representations. Specifically, given an entity type representation \\( h_m \\in \\mathbf{H}^Y \\) and span representation" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14836" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.114, + 0.084, + 0.49, + 0.117 + ], + "angle": 0, + "content": "\\(\\hat{v}_{ij}\\in \\widehat{\\mathbf{V}}\\), a scoring vector \\(c_{mij}\\in \\mathbb{R}^L\\) can be computed as:" + }, + { + "type": "equation", + "bbox": [ + 0.17, + 0.127, + 0.488, + 0.148 + ], + "angle": 0, + "content": "\\[\nh_{m}^{\\prime} = \\operatorname{FFN}\\left(h_{m}\\right), \\quad \\hat{v}_{ij}^{\\prime} = \\operatorname{FFN}\\left(\\hat{v}_{ij}\\right), \\tag{2}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.138, + 0.158, + 0.488, + 0.18 + ], + "angle": 0, + "content": "\\[\nc_{mij} = \\left(h_{m}^{\\prime}\\right)^{T} U \\hat{v}_{ij}^{\\prime} + W \\left(h_{m}^{\\prime} \\oplus \\hat{v}_{ij}^{\\prime}\\right) + b, \\tag{3}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.185, + 0.49, + 0.234 + ], + "angle": 0, + "content": "where \\(L\\) is the number of offset labels\\(^1\\); \\(U \\in \\mathbb{R}^{L \\times d_b \\times d_b}\\), \\(W \\in \\mathbb{R}^{L \\times 2d_b}\\) and \\(b \\in \\mathbb{R}^L\\) are learnable parameters, and \\(d_b\\) is the biaffine hidden size." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.242, + 0.49, + 0.403 + ], + "angle": 0, + "content": "3D Convolution Layer Furthermore, we utilize multiple 3-dimensional convolution (3DConv) layers to capture the inherent quantitative relationships between the boundary offsets of adjacent text spans. 
As depicted in Figure 3(b), the 3D convolution kernels traverse the complete score matrix \\(C\\) in three directions, thereby aggregating offset predictions for adjacent text spans across all entity types. The computation in a single convolution layer can be expressed as:" + }, + { + "type": "equation", + "bbox": [ + 0.22, + 0.416, + 0.488, + 0.433 + ], + "angle": 0, + "content": "\\[\n\\mathrm{Q} = \\sigma(\\mathrm{3DConv}(\\mathrm{C})), \\tag{4}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.443, + 0.489, + 0.526 + ], + "angle": 0, + "content": "where \\( \\mathbf{Q} \\in \\mathbb{R}^{M \\times N \\times N \\times L} \\) and \\( \\sigma \\) is the GELU activation function (Hendrycks and Gimpel, 2016). We assign a different dilation rate to each convolution layer, and then concatenate their outputs followed by a linear layer to calculate the final prediction scores:" + }, + { + "type": "equation", + "bbox": [ + 0.191, + 0.536, + 0.488, + 0.556 + ], + "angle": 0, + "content": "\\[\n\\hat{\\mathrm{Q}} = \\operatorname{Linear}\\left(\\mathrm{Q}_{1} \\oplus \\mathrm{Q}_{2} \\oplus \\mathrm{Q}_{3}\\right), \\tag{5}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.567, + 0.488, + 0.614 + ], + "angle": 0, + "content": "To obtain the probability distribution of span \\((i,j)\\) over the offset labels, \\(\\hat{q}_{mij} \\in \\hat{\\mathbf{Q}}\\) is fed into a softmax layer:" + }, + { + "type": "equation", + "bbox": [ + 0.213, + 0.628, + 0.488, + 0.645 + ], + "angle": 0, + "content": "\\[\n\\hat{o}_{mij} = \\operatorname{softmax}\\left(\\hat{q}_{mij}\\right), \\tag{6}\n\\]" + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.656, + 0.342, + 0.672 + ], + "angle": 0, + "content": "3.3 Training and Inference" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.677, + 0.489, + 0.757 + ], + "angle": 0, + "content": "Learning Objective In our method, the learning objective is to accurately assign a boundary offset to each 
text span, which can be treated as a multiclass classification problem and optimized using the cross-entropy loss:" + }, + { + "type": "equation", + "bbox": [ + 0.14, + 0.768, + 0.488, + 0.813 + ], + "angle": 0, + "content": "\\[\n\\mathcal{L} = - \\frac{1}{M N^{2}} \\sum_{m}^{M} \\sum_{i}^{N} \\sum_{j}^{N} o_{mij}^{T} \\log \\left(\\hat{o}_{mij}\\right) \\tag{7}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.825, + 0.49, + 0.873 + ], + "angle": 0, + "content": "where \\(o_{mij} \\in \\mathbb{R}^L\\) represents the ground truth, which is a one-hot vector encoded from the annotated adjacent text span matrix \\(\\mathcal{O}\\)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.885, + 0.278 + ], + "angle": 0, + "content": "Inference with Boundary Offsets During the inference process, decoding entities based on predicted boundary offsets is a straightforward procedure. The output of our method is a matrix of size \\( M \\times N \\times N \\), where each cell represents a potential entity and contains information about its boundaries and type. For example, a cell with coordinates \\( (m, i, j) \\) and the prediction \"-1E\" indicates an entity of type \\( y_{m} \\) with a start boundary at \\( x_{i} \\) and an end boundary at \\( x_{j+1} \\). Conversely, if the predicted value is \"out-of-range,\" it implies that the cell does not correspond to any entity." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.28, + 0.885, + 0.424 + ], + "angle": 0, + "content": "However, blindly accepting all predicted boundary offsets may result in sub-optimal outcomes, as doing so disregards the quantitative relationship between boundary offsets. Therefore, we introduce two heuristic rules to identify unreasonable predictions: i) predicted boundary offsets that do not align with their nearest center span; ii) predicted boundary offsets that do not adhere to a sequential order with neighboring spans."
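As a toy illustration of this decoding step, the sketch below maps cells of the prediction matrix to entity candidates. The sign convention for the "kS"/"kE" labels is an assumption chosen to match the "-1E" example above, and the two filtering rules are deliberately left out.

```python
def decode_offsets(pred):
    """pred: {(m, i, j): label} where label is 'out-of-range', '0',
    or an offset label such as '-1E' / '2S'.
    Returns a set of (type, start, end) entity candidates."""
    entities = set()
    for (m, i, j), label in pred.items():
        if label == "out-of-range":
            continue  # cell matches no entity
        elif label == "0":
            entities.add((m, i, j))  # span (i, j) is itself an entity
        elif label.endswith("S"):
            # assumed convention: offset k moves the start boundary to i - k
            entities.add((m, i - int(label[:-1]), j))
        else:
            # assumed convention: offset k moves the end boundary to j - k,
            # so '-1E' on cell (m, i, j) yields the entity (m, i, j + 1)
            entities.add((m, i, j - int(label[:-1])))
    return entities

pred = {(0, 1, 3): "0", (0, 1, 2): "-1E", (0, 0, 0): "out-of-range"}
print(decode_offsets(pred))  # {(0, 1, 3)}
```

Note that several cells can decode to the same entity (here both (0, 1, 3) and (0, 1, 2) point at the same span), which is exactly the redundancy the heuristic rules exploit to discard inconsistent predictions.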
+ }, + { + "type": "title", + "bbox": [ + 0.509, + 0.44, + 0.736, + 0.457 + ], + "angle": 0, + "content": "4 Experimental Settings" + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.468, + 0.625, + 0.482 + ], + "angle": 0, + "content": "4.1 Datasets" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.491, + 0.884, + 0.634 + ], + "angle": 0, + "content": "To evaluate our method, we conducted experiments on five English NER datasets, including CoNLL 2003 (Sang and De Meulder, 2003), OntoNotes \\(5^{2}\\), ACE \\(2004^{3}\\), ACE \\(2005^{4}\\) and GENIA (Ohta et al., 2002); and three Chinese NER datasets, including MSRA (Levow, 2006), Resume NER (Zhang and Yang, 2018) and Weibo NER (Peng and Dredze, 2015). Note that ACE 2004, ACE 2005 and GENIA are nested NER datasets, while the others are flat datasets." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.637, + 0.884, + 0.765 + ], + "angle": 0, + "content": "For OntoNotes 5, we take the same train/dev/test split as used in the CoNLL 2012 shared task (Pradhan et al., 2012). For ACE 2004 and ACE 2005, we use the same data split as Lu and Roth (2015). For GENIA, we follow Katiyar and Cardie (2018) to split train/test as 9:1. For other datasets, we employ the same settings as previous works (Ma et al., 2020; Yan et al., 2021; Zhu and Li, 2022)." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.78, + 0.742, + 0.795 + ], + "angle": 0, + "content": "4.2 Implementation Details" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.802, + 0.884, + 0.867 + ], + "angle": 0, + "content": "We use BioBERT-v1.1 (Lee et al., 2020) as the contextual embedding in GENIA. For other English corpora, we use BERT-large-cased (Devlin et al., 2019) as the contextual embedding. 
For Chinese" + }, + { + "type": "page_footnote", + "bbox": [ + 0.113, + 0.88, + 0.488, + 0.919 + ], + "angle": 0, + "content": "\\(^{1}\\)Given a maximum offset \\( S \\), \\( L = 4S + 2 \\) when considering both start and end boundary offsets; \\( L = 2S + 2 \\) when only considering start or end boundary offsets." + }, + { + "type": "page_footnote", + "bbox": [ + 0.53, + 0.879, + 0.806, + 0.905 + ], + "angle": 0, + "content": "\\(^{2}\\)https://catalog.ldc.upenn.edu/LDC2005T09 \n\\(^{3}\\)https://catalog.ldc.upenn.edu/LDC2005T09" + }, + { + "type": "page_footnote", + "bbox": [ + 0.53, + 0.905, + 0.805, + 0.919 + ], + "angle": 0, + "content": "\\(^{4}\\)https://catalog.ldc.upenn.edu/LDC2006T06" + }, + { + "type": "list", + "bbox": [ + 0.53, + 0.879, + 0.806, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14837" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.202, + 0.082, + 0.797, + 0.319 + ], + "angle": 0, + "content": "
ModelsCoNLL 2003OntoNotes 5
PRF1PRF1
Sequence Labeling Methods
BiLSTM-CRF (Miwa and Bansal, 2016)--91.0386.0486.5386.28
BERT-Tagger (Devlin et al., 2019)--92.8090.0188.3589.16
Span-based Methods
Biaffine (Yu et al., 2020)*†92.4692.6792.5589.9489.8189.88
W2NER (Li et al., 2022)92.7193.4493.0790.0390.9790.50
Boundary Smooth (Zhu and Li, 2022)*†92.8993.2093.0490.4290.8190.61
DiffusionNER (Shen et al., 2023a)92.9992.5692.7890.3191.0290.66
Others
Seq2Seq (Straková et al., 2019)--92.98---
BartNER (Yan et al., 2021)†92.5793.5393.0589.6590.8790.26
PIQN (Shen et al., 2022)93.2992.4692.8791.4390.7390.96
PromptNER (Shen et al., 2023b)92.4892.3392.41---
BOPN (Ours)93.2293.1593.1990.9391.4091.16
" + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.328, + 0.884, + 0.357 + ], + "angle": 0, + "content": "Table 1: Results on English flat NER datasets CoNLL 2003 and OntoNotes 5. † means our re-implementation via their code. * denotes a fair comparison that their BERT encoder is consistent with our model." + }, + { + "type": "table", + "bbox": [ + 0.12, + 0.37, + 0.884, + 0.555 + ], + "angle": 0, + "content": "
ModelsMSRAResume NERWeibo NER
PRF1PRF1PRF1
Sequence Labeling Methods
Lattice (Zhang and Yang, 2018)93.5792.7993.1894.8194.1194.4653.0462.2558.79
Flat (Li et al., 2020)--96.09--95.86--68.55
SoftLexicon (Ma et al., 2020)95.7595.1095.4296.0896.1396.1170.9467.0270.50
MECT (Wu et al., 2021)--96.24--95.98--70.43
Span-based Methods
W2NER (Li et al., 2022)96.1296.0896.1096.9696.3596.6570.8473.8772.32
Boundary Smooth (Zhu and Li, 2022)96.3796.1596.2696.6396.6996.6670.1675.3672.66
DiffusionNER (Shen et al., 2023a)95.7194.1194.91------
BOPN (Ours)96.4496.3496.3996.7396.8396.7871.7973.9072.92
" + }, + { + "type": "table_caption", + "bbox": [ + 0.244, + 0.565, + 0.752, + 0.58 + ], + "angle": 0, + "content": "Table 2: Results on Chinese flat NER datasets MSRA, Resume and Weibo." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.606, + 0.486, + 0.637 + ], + "angle": 0, + "content": "corpora, we use the BERT pre-trained with whole word masking (Cui et al., 2021)." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.639, + 0.487, + 0.784 + ], + "angle": 0, + "content": "The BiLSTM has one layer and 256 hidden size with dropout rate of 0.5. The size of region embedding \\( d_{e} \\) is 20. The maximum offset value \\( S \\) is selected in \\( \\{1,2,3\\} \\). For all datasets, we train our models by using AdamW Optimizer (Loshchilov and Hutter, 2017) with a linear warmup-decay learning rate schedule. See Appendix A for more details. Our source code can be obtained from https://github.com/mhtang1995/BOPN." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.8, + 0.248, + 0.814 + ], + "angle": 0, + "content": "4.3 Evaluation" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.823, + 0.487, + 0.919 + ], + "angle": 0, + "content": "We use strict evaluation metrics where a predicted entity is considered correct only when both the boundaries (after adding boundary offset) and type are accurately matched. The precision, recall and \\( F_{1} \\) scores are employed. We run our model for five times and report averaged values." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.605, + 0.724, + 0.621 + ], + "angle": 0, + "content": "5 Results and Analysis" + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.636, + 0.661, + 0.65 + ], + "angle": 0, + "content": "5.1 Main Results" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.66, + 0.884, + 0.885 + ], + "angle": 0, + "content": "The performance of our proposed method and the baselines on English flat NER datasets is presented in Table 1. 
The experimental results demonstrate that our approach surpasses the previous state-of-the-art (SOTA) methods by \\(+0.12\\%\\) on the CoNLL 2003 dataset and \\(+0.20\\%\\) on the OntoNotes 5 dataset, achieving superior performance with \\(F_{1}\\) scores of \\(93.19\\%\\) and \\(91.16\\%\\), respectively. For Chinese flat NER datasets, we provide the results in Table 2. Similarly, our proposed method achieves SOTA performance in terms of \\(F_{1}\\) scores, surpassing the previous best method by \\(+0.13\\%\\), \\(+0.12\\%\\), and \\(+0.26\\%\\) in \\(F_{1}\\) scores on the MSRA, Resume NER, and Weibo NER datasets, respectively." + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.888, + 0.884, + 0.92 + ], + "angle": 0, + "content": "The performance results on English nested NER datasets are presented in Table 3. Remarkably," + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14838" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.12, + 0.081, + 0.885, + 0.356 + ], + "angle": 0, + "content": "
ModelsACE 2004ACE 2005GENIA
PRF1PRF1PRF1
Sequence Labeling Methods
Layered (Ju et al., 2018)---74.270.372.278.571.374.7
Pyramid (Wang et al., 2020)86.0886.4886.2883.9585.3984.6679.4578.9479.19
Span-based Methods
Biaffine (Yu et al., 2020)87.386.086.785.285.685.478.278.278.2
Locate and Label (Shen et al., 2021)87.4487.3887.4186.0987.2786.6780.1980.8980.54
W2NER (Li et al., 2022)87.3387.7187.5285.0388.6286.7983.1079.7681.39
Triaffine (Yuan et al., 2022)87.1387.6887.6086.7086.9486.8280.4282.0681.23
Boundary Smooth (Zhu and Li, 2022)88.4387.5387.9886.2588.0787.15---
DiffusionNER (Shen et al., 2023a)88.1188.6688.3986.1587.7286.9382.1080.9781.53
Others
Seq2Seq (Straková et al., 2019)--84.33--83.42--78.20
BartNER (Yan et al., 2021)87.2786.4186.8483.1686.3884.7478.5779.3078.93
PIQN (Shen et al., 2022)88.4887.8188.1486.2788.6087.4283.2480.3581.77
PromptNER (Shen et al., 2023b)87.5888.7688.1686.0788.3887.21---
BOPN (Ours)89.1389.4089.2689.5691.2390.3982.1482.1682.14
" + }, + { + "type": "table_caption", + "bbox": [ + 0.21, + 0.365, + 0.786, + 0.379 + ], + "angle": 0, + "content": "Table 3: Results on English nested NER datasets ACE 2004, ACE 2004 and GENIA." + }, + { + "type": "table", + "bbox": [ + 0.12, + 0.402, + 0.487, + 0.607 + ], + "angle": 0, + "content": "
CoNLL 2003Resume NERACE 2004
BOPN (Ours)93.1996.7889.26
- w/o Type Inp.92.8796.4188.83
- w/o Region Emb.92.7196.2288.71
- w/o BO92.7496.2688.62
- w/o 3DConv92.8796.4089.11
- MBO (S=1)93.1196.7589.14
- MBO (S=2)93.1596.7889.26
- MBO (S=3)93.1996.7189.22
- 3DConv (l=1)93.0896.6989.18
- 3DConv (l=2)93.1996.7589.26
- 3DConv (l=3)93.0596.7889.25
" + }, + { + "type": "table_caption", + "bbox": [ + 0.114, + 0.617, + 0.487, + 0.646 + ], + "angle": 0, + "content": "Table 4: Ablation Studies. MBO means the maximum boundary offset value." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.678, + 0.491, + 0.919 + ], + "angle": 0, + "content": "our proposed BOPN achieves substantial improvements in performance on these datasets, with \\( F_{1} \\) scores increasing by \\( +0.87\\% \\), \\( +2.97\\% \\), and \\( +0.37\\% \\) on ACE 2004, ACE 2005, and GENIA, respectively. These results align with our expectations, as the boundary features of nested entities are more intricate compared to flat entities. We attribute this improvement to two key factors: 1) Our method predicts the boundary information of various entity types in parallel, effectively avoiding nested boundary conflicts between different types of entities. 2) By predicting boundary offsets, our method expands the predictive range for each text span, allowing for more granular and precise identification of entity boundaries." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.405, + 0.687, + 0.419 + ], + "angle": 0, + "content": "5.2 Ablation Studies" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.427, + 0.884, + 0.506 + ], + "angle": 0, + "content": "In order to assess the impact of each component in our method, we conduct ablation studies on the CoNLL 2003, ACE 2005, and Resume NER datasets. The results of these studies are presented in Table 4." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.517, + 0.885, + 0.79 + ], + "angle": 0, + "content": "Maximum Boundary Offset We investigate the impact of training the model with different maximum offset values \\( S \\) through our ablation studies. The hyperparameter \\( S \\) determines the annotation scope of non-entity spans with boundary offset. Specifically, the extreme scenario of setting \\( S \\) to 0 corresponds to a condition \"w/o BO\" (without Boundary Offset). 
The results indicate a significant decline in performance when employing \"w/o BO,\" confirming the usefulness of utilizing boundary offsets as supervision. However, we also observe that the optimal \\( S \\) value varies across different datasets. This could be attributed to the fact that a larger \\( S \\) value provides more boundary knowledge but also increases the label search space. Consequently, hyperparameter tuning for \\( S \\) becomes necessary to achieve the best performance in practice." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.791, + 0.884, + 0.919 + ], + "angle": 0, + "content": "In addition, we analyze the learning curves of our model with different maximum offset values. Figure 4 demonstrates that a larger \\( S \\) can accelerate the training process of the model. We think the reason may be that a larger \\( S \\) not only leads to an increase of positive samples but also results in a decrease of negative samples, thereby ultimately enhancing the trainability of the model." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14839" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.135, + 0.083, + 0.465, + 0.26 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.124, + 0.271, + 0.477, + 0.286 + ], + "angle": 0, + "content": "Figure 4: The learning curves on ACE 2004 dataset." + }, + { + "type": "table", + "bbox": [ + 0.12, + 0.3, + 0.487, + 0.512 + ], + "angle": 0, + "content": "
LabelPRF1Support
-2S81.5182.0281.765029
-1S81.6282.9782.295292
1S79.5581.4780.503281
2S76.2779.5577.881438
-2E78.6477.1977.901464
-1E79.7980.5880.183254
1E82.2682.2082.235393
2E82.3780.7581.575113
081.9281.9581.935495
ALL79.2184.2281.645495
- w/ rules81.8582.5682.205495
" + }, + { + "type": "table_caption", + "bbox": [ + 0.114, + 0.522, + 0.487, + 0.566 + ], + "angle": 0, + "content": "Table 5: Performance of each boundary offset label on GENIA, where the maximum offset value is 2. The reported results is one out of five experiments." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.592, + 0.489, + 0.688 + ], + "angle": 0, + "content": "3D Convolution Layer \"w/o 3DConv\" indicates the 3D convolution layers are removed. As seen, the results show a decline in performance across all datasets, indicating the importance of 3D convolution layers in capturing the interactions between boundary offsets of adjacent text spans." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.7, + 0.489, + 0.795 + ], + "angle": 0, + "content": "Type Inputs \"w/o Type Inputs\" refers to a setting where the entity types encoded with the sentence are replaced, in which the randomly initialized entity type embeddings are fed into the biaffine classifier. The results obtained in this setting show a slight decline in performance." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.807, + 0.489, + 0.887 + ], + "angle": 0, + "content": "Region Embedding The results demonstrate a slight drop in performance across all datasets without region embeddings. This suggests that integrating sample distribution features can be a reasonable approach for enhancing text span representations." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.888, + 0.489, + 0.92 + ], + "angle": 0, + "content": "As the CLN layer and biaffine classifier serve as fundamental components in our approach for span" + }, + { + "type": "image", + "bbox": [ + 0.53, + 0.083, + 0.865, + 0.237 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.246, + 0.883, + 0.29 + ], + "angle": 0, + "content": "Figure 5: A comparison of F1-scores on entities of different lengths in GENIA dataset. Entity supports are in the parentheses." 
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.321, + 0.884, + 0.401 + ], + "angle": 0, + "content": "representation and classification, they cannot be evaluated independently. Nonetheless, our ablation studies demonstrate the effectiveness of learning boundary offset information and the usefulness of each composition in our model." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.421, + 0.693, + 0.436 + ], + "angle": 0, + "content": "5.3 Detailed Analysis" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.448, + 0.884, + 0.576 + ], + "angle": 0, + "content": "Performance on Different Offset Labels We investigate the performance of each boundary offset label, and the results are presented in Table 5. Notably, the offset label \"0\" has complete entity support and achieves an \\(F_{1}\\) score of \\(82.04\\%\\). Furthermore, we observed a positive correlation between the quantity of entity support and the performance of boundary offset labels." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.579, + 0.884, + 0.82 + ], + "angle": 0, + "content": "When a text span is not predicted as \"out-of-range\", its assigned label can be utilized to determine the position of its nearest entity. By aggregating all predictions of offset labels, we observe a sharp decrease in precision score, along with a significant increase in recall score, when compared to only considering the center span (with an offset label of \"0\"). This finding suggests that different offset labels provide distinct information that assists the model in recognizing additional entities. Nevertheless, this approach can introduce noisy predictions due to the model's inadequate performance on certain labels. Despite this limitation, it may have practical applicability in recall-sensitive applications." 
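The precision/recall trade-off described above can be made concrete with a tiny scoring helper; the spans and labels below are made-up examples, not results from the paper.

```python
def prf(pred, gold):
    """Strict precision/recall/F1 over sets of (type, start, end) spans."""
    tp = len(pred & gold)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

gold = {("PER", 0, 1), ("LOC", 4, 6)}
center_only = {("PER", 0, 1)}                                # offset label "0" only
aggregated = {("PER", 0, 1), ("LOC", 4, 6), ("LOC", 2, 3)}   # all offset labels

print(prf(center_only, gold))  # perfect precision, but one entity is missed
print(prf(aggregated, gold))   # full recall, at the cost of one noisy prediction
```

Aggregating every offset label recovers the second gold entity (raising recall) while also admitting a spurious span (lowering precision), mirroring the behavior reported for Table 5.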
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.823, + 0.883, + 0.919 + ], + "angle": 0, + "content": "As discussed in Section 3.3, we devise two heuristic rules to remove improbable predictions. Our findings reveal that this approach enhances the precision score, with only a minor reduction in the recall score, leading to an overall improvement in the \\( F_{1} \\) score." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14840" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.134, + 0.083, + 0.465, + 0.263 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.114, + 0.275, + 0.49, + 0.318 + ], + "angle": 0, + "content": "Figure 6: Effect of varying percentage of training samples on GENIA. We train all models for 50 epochs and report their best performance." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.343, + 0.488, + 0.52 + ], + "angle": 0, + "content": "Performance on Entities with Varying Lengths We explore the model performance on entities of different lengths in GENIA. As shown in Figure 5, we compare the \\( F_{1} \\) scores of models which are training with different \\( S \\). The model achieves higher \\( F_{1} \\) scores across all columns when \\( S = 2 \\), with a more pronounced performance improvement for longer entities. The results highlight the usefulness of learning boundary offsets between nonentity and entity spans, which helps the model learn boundary features more effectively." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.529, + 0.49, + 0.722 + ], + "angle": 0, + "content": "Size of Training Data As the boundary offset labels contain more informative knowledge, we hypothesize that our proposed BOPN would perform better with limited training data. 
As shown in Figure 6, our model achieves impressive results, exhibiting only a \\(5.46\\%\\) decrease in performance when trained with a mere \\(12.5\\%\\) of the available training data. In contrast, when boundary information is not utilized during training, the model's performance declines rapidly as the amount of training data decreases, thus creating significant obstacles to effective training." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.733, + 0.27, + 0.749 + ], + "angle": 0, + "content": "6 Related Work" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.759, + 0.49, + 0.919 + ], + "angle": 0, + "content": "In recent years, various paradigms for named entity recognition (NER) have been proposed, among which span-based methods have become one of the most mainstream approaches, treating NER as a text span classification problem. With the development of pre-trained language models, some works (Sohrab and Miwa, 2018; Luan et al., 2019; Wadden et al., 2019) obtain span representations by connecting boundary representations or aggregating token representations and feeding them into" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.885, + 0.198 + ], + "angle": 0, + "content": "a linear classifier for type prediction. Alternatively, Yu et al. (2020) utilizes a biaffine classifier to fuse start and end boundary representations directly for span classification. To further enhance span representation, several other methods (Wan et al., 2022; Yuan et al., 2022) propose fusing representations of token, boundary, and related entity spans." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.199, + 0.885, + 0.392 + ], + "angle": 0, + "content": "Meanwhile, some methods try to improve span-based methods by adding boundary supervision. Specifically, Zheng et al. (2019) and Tan et al. (2020) additionally detect entity boundaries with multi-task learning, while Shen et al. (2021) perform boundary regression after span prediction. Li et al. 
(2022) design two word-word relations for span classification. Compared with previous methods, our proposed method utilizes continuous boundary offset values to model text spans, which can capture both the boundary differences and connections between non-entity and entity spans." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.394, + 0.885, + 0.746 + ], + "angle": 0, + "content": "In addition to span-based methods, there are three widely-used NER methods. The traditional sequence labeling methods (Huang et al., 2015; Lample et al., 2016) assign each token a tag with a pre-designed tagging scheme (e.g., \\( BIO \\)). To address nested entities, some works (Ju et al., 2018; Wang et al., 2020; Rojas et al., 2022) stack tagging layers or design special tagging schemes. Hypergraph-based methods (Lu and Roth, 2015; Katiyar and Cardie, 2018; Wang and Lu, 2018) represent the input sentence as a hypergraph for detecting nested entities, which must be carefully designed to avoid spurious structures. Sequence-to-sequence methods reformulate NER as a sequence generation problem. For example, Gillick et al. (2016) first apply the Seq2Seq model for NER, inputting the sentence and outputting start positions, entity lengths, and types. Straková et al. (2019) use a Seq2Seq model with an enhanced BILOU scheme to address nested NER. Yan et al. (2021) treat NER as an entity span sequence generation problem with a pointer network based on BART (Lewis et al., 2019)." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.763, + 0.642, + 0.779 + ], + "angle": 0, + "content": "7 Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.791, + 0.884, + 0.919 + ], + "angle": 0, + "content": "In this paper, we introduce a novel approach for named entity recognition (NER) called the Boundary Offset Prediction Network (BOPN). BOPN predicts the boundary offsets between candidate spans and their nearest entities, leveraging entity types as inputs. 
By incorporating entity types, BOPN enables parallel prediction of type-aware boundary offsets, enhancing the model's ability to capture" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.524, + 0.941 + ], + "angle": 0, + "content": "14841" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.489, + 0.165 + ], + "angle": 0, + "content": "fine-grained entity boundaries. To capture the interactions between boundary offsets, we employ multiple 3D convolution layers, which refine the offset predictions and capture the inherent quantitative relationships between adjacent text spans." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.167, + 0.49, + 0.309 + ], + "angle": 0, + "content": "The experimental results demonstrate that our proposed method achieves state-of-the-art performance on eight widely-used datasets, including five English NER datasets and three Chinese NER datasets. Moreover, further analysis reveals a significant improvement in recall scores by utilizing boundary offset as supervision, showcasing the utility of our approach for recall-sensitive applications in NER." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.322, + 0.22, + 0.337 + ], + "angle": 0, + "content": "Limitations" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.347, + 0.489, + 0.49 + ], + "angle": 0, + "content": "The proposed BOPN approach has certain limitations that should be acknowledged. Firstly, while BOPN treats boundary offsets as classification targets, it does not explicitly model the order relationship between offset values. Although the 3D convolution layers are employed to implicitly capture interactions between boundary offsets, they do not provide a strong constraint on the ordering of offset labels." 
+ }, + { + "type": "text", + "bbox": [ + 0.113, + 0.492, + 0.489, + 0.62 + ], + "angle": 0, + "content": "Additionally, the method uses boundary offsets to convert some non-entity spans into positive samples, which leads to higher recall scores but potentially lower precision scores. To optimize prediction results, heuristic rules are applied to filter out unreasonable samples. However, these rules are based on observations and may not be comprehensive enough to handle all cases effectively." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.621, + 0.489, + 0.701 + ], + "angle": 0, + "content": "Therefore, there is still a need to explore more effective ways to integrate and optimize the offset predictions in order to address these limitations and enhance the overall performance of the BOPN approach." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.713, + 0.266, + 0.727 + ], + "angle": 0, + "content": "Ethics Statement" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.738, + 0.489, + 0.834 + ], + "angle": 0, + "content": "To address ethical concerns, we provide two detailed descriptions: 1) All experiments were conducted on existing datasets derived from public scientific papers. 2) Our work does not contain any personally identifiable information and does not harm anyone." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.847, + 0.287, + 0.862 + ], + "angle": 0, + "content": "Acknowledgements" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.871, + 0.489, + 0.919 + ], + "angle": 0, + "content": "This work was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDC02040400)." + }, + { + "type": "title", + "bbox": [ + 0.511, + 0.084, + 0.61, + 0.099 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.106, + 0.884, + 0.212 + ], + "angle": 0, + "content": "Pei Chen, Haibo Ding, Jun Araki, and Ruihong Huang. 2021. 
Explicitly capturing relations between entity mentions via graph neural networks for domain-specific named entity recognition. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 735-742." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.22, + 0.884, + 0.286 + ], + "angle": 0, + "content": "Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, and Ziqing Yang. 2021. Pre-training with whole word masking for chinese bert. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3504-3514." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.295, + 0.885, + 0.4 + ], + "angle": 0, + "content": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.408, + 0.884, + 0.489 + ], + "angle": 0, + "content": "Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2016. Multilingual language processing from bytes. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1296-1306." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.496, + 0.883, + 0.537 + ], + "angle": 0, + "content": "Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, and Maosong Sun. 2022.Ptr: Prompt tuning with rules for text classification. AI Open, 3:182-192." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.545, + 0.883, + 0.584 + ], + "angle": 0, + "content": "Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.593, + 0.884, + 0.66 + ], + "angle": 0, + "content": "Feng Hou, Ruili Wang, Jun He, and Yi Zhou. 2020. Improving entity linking through semantic reinforced entity embeddings. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6843-6848." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.667, + 0.883, + 0.709 + ], + "angle": 0, + "content": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.716, + 0.883, + 0.757 + ], + "angle": 0, + "content": "Justin M Johnson and Taghi M Khoshgoftaar. 2019. Survey on deep learning with class imbalance. Journal of Big Data, 6(1):1-54." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.765, + 0.884, + 0.845 + ], + "angle": 0, + "content": "Meizhi Ju, Makoto Miwa, and Sophia Ananiadou. 2018. A neural layered model for nested named entity recognition. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1446-1459." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.853, + 0.884, + 0.919 + ], + "angle": 0, + "content": "Arzoo Katiyar and Claire Cardie. 2018. Nested named entity recognition revisited. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 1." 
+ }, + { + "type": "list", + "bbox": [ + 0.511, + 0.106, + 0.885, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14842" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.086, + 0.49, + 0.179 + ], + "angle": 0, + "content": "Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260-270." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.19, + 0.488, + 0.256 + ], + "angle": 0, + "content": "Phong Le and Ivan Titov. 2018. Improving entity linking by modeling latent relations between mentions. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1595-1604." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.268, + 0.488, + 0.333 + ], + "angle": 0, + "content": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.346, + 0.488, + 0.412 + ], + "angle": 0, + "content": "Gina-Anne Levow. 2006. The third international Chinese language processing bakeoff: Word segmentation and named entity recognition. In Proceedings of the Fifth SIGHAN workshop on Chinese language processing, pages 108-117." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.424, + 0.488, + 0.502 + ], + "angle": 0, + "content": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. 
Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.515, + 0.488, + 0.62 + ], + "angle": 0, + "content": "Fei Li, ZhiChao Lin, Meishan Zhang, and Donghong Ji. 2021a. A span-based model for joint overlapped and discontinuous named entity recognition. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4814-4828, Online. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.632, + 0.488, + 0.71 + ], + "angle": 0, + "content": "Jingye Li, Hao Fei, Jiang Liu, Shengqiong Wu, Meishan Zhang, Chong Teng, Donghong Ji, and Fei Li. 2022. Unified named entity recognition as word-word relation classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 10965-10973." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.723, + 0.488, + 0.801 + ], + "angle": 0, + "content": "Jingye Li, Kang Xu, Fei Li, Hao Fei, Yafeng Ren, and Donghong Ji. 2021b. Mrn: A locally and globally mention-based reasoning network for document-level relation extraction. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, pages 1359-1370." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.814, + 0.488, + 0.88 + ], + "angle": 0, + "content": "Xiaonan Li, Hang Yan, Xipeng Qiu, and Xuan-Jing Huang. 2020. Flat: Chinese ner using flat-lattice transformer. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6836-6842." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.892, + 0.488, + 0.919 + ], + "angle": 0, + "content": "Ruibo Liu, Jason Wei, Chenyan Jia, and Soroush Vosoughi. 2021. 
Modulating language models with" + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.49, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.529, + 0.086, + 0.882, + 0.125 + ], + "angle": 0, + "content": "emotions. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4332-4339." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.136, + 0.882, + 0.174 + ], + "angle": 0, + "content": "Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.185, + 0.882, + 0.249 + ], + "angle": 0, + "content": "Wei Lu and Dan Roth. 2015. Joint mention extraction and classification with mention hypergraphs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 857-867." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.26, + 0.882, + 0.378 + ], + "angle": 0, + "content": "Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3036-3046, Minneapolis, Minnesota. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.387, + 0.882, + 0.453 + ], + "angle": 0, + "content": "Ruotian Ma, Minlong Peng, Qi Zhang, Zhongyu Wei, and Xuan-Jing Huang. 2020. Simplify the usage of lexicon in chinese ner. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5951-5960." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.463, + 0.882, + 0.529 + ], + "angle": 0, + "content": "Makoto Miwa and Mohit Bansal. 2016. 
End-to-end relation extraction using lstms on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1105-1116." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.538, + 0.882, + 0.605 + ], + "angle": 0, + "content": "Tomoko Ohta, Yuka Tateisi, Jin-Dong Kim, Hideki Mima, and Junichi Tsujii. 2002. The genia corpus: An annotated research abstract corpus in molecular biology domain. In Proceedings of the human language technology conference, pages 73-77. CiteSeer." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.613, + 0.882, + 0.68 + ], + "angle": 0, + "content": "Nanyun Peng and Mark Dredze. 2015. Named entity recognition for chinese social media with jointly trained embeddings. In Proceedings of the 2015 conference on empirical methods in natural language processing, pages 548-554." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.688, + 0.882, + 0.755 + ], + "angle": 0, + "content": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. Conll-2012 shared task: Modeling multilingual unrestricted coreference in onthonotes. In Joint conference on EMNLP and CoNLL-shared task, pages 1-40." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.765, + 0.882, + 0.843 + ], + "angle": 0, + "content": "Guanghui Qin and Jason Eisner. 2021. Learning how to ask: Querying lms with mixtures of soft prompts. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5203-5212." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.853, + 0.882, + 0.919 + ], + "angle": 0, + "content": "Matías Rojas, Felipe Bravo-Marquez, and Jocelyn Dunstan. 2022. Simple yet powerful: An overlooked architecture for nested named entity recognition. In Proceedings of the 29th International Conference on Computational Linguistics, pages 2108-2117." 
+ }, + { + "type": "list", + "bbox": [ + 0.512, + 0.086, + 0.882, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.525, + 0.941 + ], + "angle": 0, + "content": "14843" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.086, + 0.49, + 0.153 + ], + "angle": 0, + "content": "Erik Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.161, + 0.49, + 0.267 + ], + "angle": 0, + "content": "Yongliang Shen, Xinyin Ma, Zeqi Tan, Shuai Zhang, Wen Wang, and Weiming Lu. 2021. Locate and label: A two-stage identifier for nested named entity recognition. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2782-2794." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.275, + 0.49, + 0.328 + ], + "angle": 0, + "content": "Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023a. Diffusion: Boundary diffusion for named entity recognition. arXiv preprint arXiv:2305.13298." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.337, + 0.49, + 0.404 + ], + "angle": 0, + "content": "Yongliang Shen, Zeqi Tan, Shuhui Wu, Wenqi Zhang, Rongsheng Zhang, Yadong Xi, Weiming Lu, and Yueting Zhuang. 2023b. Prompter: Prompt locating and typing for named entity recognition. arXiv preprint arXiv:2305.17104." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.412, + 0.49, + 0.504 + ], + "angle": 0, + "content": "Yongliang Shen, Xiaobin Wang, Zeqi Tan, Guangwei Xu, Pengjun Xie, Fei Huang, Weiming Lu, and Yueting Zhuang. 2022. Parallel instance query network for named entity recognition. 
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 947-961." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.513, + 0.49, + 0.58 + ], + "angle": 0, + "content": "Mohammad Golam Sohrab and Makoto Miwa. 2018. Deep exhaustive model for nested named entity recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2843-2849." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.588, + 0.49, + 0.654 + ], + "angle": 0, + "content": "Jana Straková, Milan Straka, and Jan Hajic. 2019. Neural architectures for nested ner through linearization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5326-5331." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.663, + 0.49, + 0.73 + ], + "angle": 0, + "content": "Chuanqi Tan, Wei Qiu, Mosha Chen, Rui Wang, and Fei Huang. 2020. Boundary enhanced neural span classification for nested named entity recognition. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 9016-9023." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.738, + 0.49, + 0.791 + ], + "angle": 0, + "content": "Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. 2016. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.8, + 0.49, + 0.919 + ], + "angle": 0, + "content": "David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5784-5789, Hong Kong, China. Association for Computational Linguistics." 
+ }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.49, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.086, + 0.885, + 0.166 + ], + "angle": 0, + "content": "Juncheng Wan, Dongyu Ru, Weinan Zhang, and Yong Yu. 2022. Nested named entity recognition with span-level graphs. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 892-903, Dublin, Ireland. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.175, + 0.885, + 0.241 + ], + "angle": 0, + "content": "Bailin Wang and Wei Lu. 2018. Neural segmental hypergraphs for overlapping mention recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 204-214." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.252, + 0.885, + 0.318 + ], + "angle": 0, + "content": "Jue Wang, Lidan Shou, Ke Chen, and Gang Chen. 2020. Pyramid: A layered model for nested named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5918-5928." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.329, + 0.885, + 0.433 + ], + "angle": 0, + "content": "Shuang Wu, Xiaoning Song, and Zhenhua Feng. 2021. Mect: Multi-metadata embedding based cross-transformer for chinese named entity recognition. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1529-1539." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.444, + 0.885, + 0.537 + ], + "angle": 0, + "content": "Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, and Xipeng Qiu. 2021. A unified generative framework for various ner subtasks. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5808-5822." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.547, + 0.885, + 0.612 + ], + "angle": 0, + "content": "Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020. Named entity recognition as dependency parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6470-6476." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.623, + 0.885, + 0.689 + ], + "angle": 0, + "content": "Zheng Yuan, Chuanqi Tan, Songfang Huang, and Fei Huang. 2022. Fusing heterogeneous factors with triaffine mechanism for nested named entity recognition. In *Findings of the Association for Computational Linguistics: ACL* 2022, pages 3174-3186." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.699, + 0.885, + 0.754 + ], + "angle": 0, + "content": "Yue Zhang and Jie Yang. 2018. Chinese ner using lattice LSTM. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1554-1564." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.763, + 0.885, + 0.881 + ], + "angle": 0, + "content": "Changmeng Zheng, Yi Cai, Jingyun Xu, Ho-fung Leung, and Guandong Xu. 2019. A boundary-aware neural model for nested named entity recognition. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 357-366, Hong Kong, China. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.891, + 0.885, + 0.919 + ], + "angle": 0, + "content": "Enwei Zhu and Jinpeng Li. 2022. Boundary smoothing for named entity recognition. 
In Proceedings of the" + }, + { + "type": "list", + "bbox": [ + 0.512, + 0.086, + 0.885, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.525, + 0.941 + ], + "angle": 0, + "content": "14844" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.132, + 0.086, + 0.49, + 0.126 + ], + "angle": 0, + "content": "60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7096-7108." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.151, + 0.238, + 0.167 + ], + "angle": 0, + "content": "A Appendix" + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.176, + 0.232, + 0.191 + ], + "angle": 0, + "content": "A.1 Datasets" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.197, + 0.49, + 0.293 + ], + "angle": 0, + "content": "We evaluate our method on eight datasets, including CoNLL 2003, OntoNotes 5, ACE 2004, ACE 2005, and GENIA for English NER datasets; MSRA, Resume NER and Weibo NER for Chinese NER datasets. Table 6 presents the detailed statistics of datasets." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.305, + 0.35, + 0.32 + ], + "angle": 0, + "content": "A.2 Implementation Details" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.325, + 0.49, + 0.501 + ], + "angle": 0, + "content": "We use BioBERT-v1.1 (Lee et al., 2020) as the contextual embedding in GENIA. For other English corpora, we BERT-large-cased (Devlin et al., 2019) as the contextual embedding. For Chinese corpora, we use the BERT pre-trained with whole word masking (Cui et al., 2021). Our model is implemented with PyTorch and trained with a NVIDIA RTX3090 GPU. We use a grid search to find the best hyperparameters which are tuned on the development set. The range of hyperparameters we used for eight datasets are listed in Table 7." 
+ }, + { + "type": "title", + "bbox": [ + 0.115, + 0.513, + 0.238, + 0.527 + ], + "angle": 0, + "content": "A.3Baselines" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.534, + 0.481, + 0.549 + ], + "angle": 0, + "content": "We compare BOPN with the following baselines:" + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.56, + 0.489, + 0.609 + ], + "angle": 0, + "content": "- BiLSTM-CRF (Miwa and Bansal, 2016) is a model for sequence labeling tasks that combines BiLSTM with CRF layers." + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.619, + 0.49, + 0.683 + ], + "angle": 0, + "content": "- BERT-Tagger (Devlin et al., 2019) that utilizes the pre-trained language model BERT as a feature extractor and incorporates a tag classifier for fine-tuning." + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.695, + 0.487, + 0.74 + ], + "angle": 0, + "content": "- Lattice (Zhang and Yang, 2018) proposed a lattice-structured LSTM model for Chinese NER." + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.754, + 0.486, + 0.784 + ], + "angle": 0, + "content": "- Layered (Ju et al., 2018) dynamically stacks flat NER layers to solve nested NER task." + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.796, + 0.489, + 0.86 + ], + "angle": 0, + "content": "- Flat (Li et al., 2020) proposes a flat-lattice transformer for Chinese NER, which converts the lattice structure into a flat structure consisting of spans." + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.872, + 0.49, + 0.918 + ], + "angle": 0, + "content": "- Pyramid (Wang et al., 2020) designs pyramid layer and inverse pyramid layer to decode nested entities." 
+ }, + { + "type": "list", + "bbox": [ + 0.137, + 0.56, + 0.49, + 0.918 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.085, + 0.885, + 0.149 + ], + "angle": 0, + "content": "- SoftLexicon (Ma et al., 2020) proposes a Chinese NER method in which lexicon information is introduced by simply adjusting the character representation layer." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.16, + 0.885, + 0.224 + ], + "angle": 0, + "content": "- MECT (Wu et al., 2021) uses multi-metadata embedding in a two-stream transformer to integrate Chinese character features with the radical-level embedding." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.235, + 0.884, + 0.282 + ], + "angle": 0, + "content": "- Biaffine (Yu et al., 2020) classifies text spans by a biaffine classifier between boundary representations." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.294, + 0.884, + 0.357 + ], + "angle": 0, + "content": "- Locate and Label (Shen et al., 2021) proposed a two-stage identifier of locating entities with boundary regression first and classifying them later." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.369, + 0.884, + 0.432 + ], + "angle": 0, + "content": "- W2NER (Li et al., 2022) models NER as word-word relation classification, including the next-neighboring-word and the tail-head-word relations." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.444, + 0.884, + 0.491 + ], + "angle": 0, + "content": "- Triaffine (Yuan et al., 2022) proposed a tri-affine mechanism to fuse information of inside tokens, boundaries, labels for NER." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.503, + 0.884, + 0.55 + ], + "angle": 0, + "content": "- Boundary Smooth (Zhu and Li, 2022) proposed boundary smoothing as a regularization technique for span-based neural NER models." 
+ }, + { + "type": "text", + "bbox": [ + 0.532, + 0.562, + 0.884, + 0.624 + ], + "angle": 0, + "content": "- DiffusionNER (Shen et al., 2023a) formulates NER as a boundary-denoising diffusion process, which samples noisy spans from a Gaussian distribution." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.637, + 0.883, + 0.685 + ], + "angle": 0, + "content": "- Seq2Seq (Straková et al., 2019) converts the labels of nested entities into a sequence and then uses a seq2seq model to decode entities." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.696, + 0.884, + 0.759 + ], + "angle": 0, + "content": "- BartNER (Yan et al., 2021) formulates NER as an entity span sequence generation problem based on the pre-training Seq2Seq model BART (Lewis et al., 2019)." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.771, + 0.881, + 0.819 + ], + "angle": 0, + "content": "PIQN (Shen et al., 2022) sets up global and learnable instance queries to extract entities from a sentence in a parallel manner." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.83, + 0.884, + 0.894 + ], + "angle": 0, + "content": "- PromptNER (Shen et al., 2023b) unifies entity locating and entity typing in prompt learning for NER, which predicts all entities by filling position slots and type slots." + }, + { + "type": "list", + "bbox": [ + 0.532, + 0.085, + 0.885, + 0.894 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.525, + 0.941 + ], + "angle": 0, + "content": "14845" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.12, + 0.082, + 0.885, + 0.248 + ], + "angle": 0, + "content": "
CoNLL 2003OntoNotes 5ACE 2004ACE 2005GENIAMSRAResumeWeibo
Types418775384
#Train.S172915992462007194166924647138191350
#Dev.S-8528745969--463270
#Test.S34538262812104718544376477270
Avg.Len.S14.3818.1122.6118.9725.4145.5431.1754.57
#Train.E294411287382220493895050974703134381855
#Dev.E-2035425141112--1497379
#Test.E56481258630351118550661811630409
Avg.Len.E1.451.832.502.281.973.245.882.60
" + }, + { + "type": "table_caption", + "bbox": [ + 0.115, + 0.257, + 0.882, + 0.273 + ], + "angle": 0, + "content": "Table 6: Dataset Statistics. \"#\" denotes the amount. \"S.\" and \"E.\" denote sentence and entity mentions, respectively." + }, + { + "type": "table", + "bbox": [ + 0.14, + 0.479, + 0.462, + 0.706 + ], + "angle": 0, + "content": "
ParameterValue
Epoch[50, 80]
Batch size[8, 16]
Learning rate (BERT)[5e-6, 3e-5]
Learning rate (Other)1e-3
LSTM hidden size d256
LSTM dropout0.5
Region embedding size de20
Biaffine hidden size db150
Biaffine dropout0.2
Maximum offset value S[1, 3]
Adam epsilon1e-8
Warm factor0.1
" + }, + { + "type": "table_caption", + "bbox": [ + 0.182, + 0.715, + 0.42, + 0.731 + ], + "angle": 0, + "content": "Table 7: Hyper-parameter settings." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14846" + } + ] +] \ No newline at end of file diff --git a/2023/A Boundary Offset Prediction Network for Named Entity Recognition/5304d5a7-baa1-46a8-bf30-4a5c29036879_origin.pdf b/2023/A Boundary Offset Prediction Network for Named Entity Recognition/5304d5a7-baa1-46a8-bf30-4a5c29036879_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b81355dd4ee8b54e731c3968898c9a0253bfb389 --- /dev/null +++ b/2023/A Boundary Offset Prediction Network for Named Entity Recognition/5304d5a7-baa1-46a8-bf30-4a5c29036879_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a53d15a37dc056ff66a5ecd0c9f6c1f9663644d405039bca4d579907f8772ac5 +size 811013 diff --git a/2023/A Boundary Offset Prediction Network for Named Entity Recognition/full.md b/2023/A Boundary Offset Prediction Network for Named Entity Recognition/full.md new file mode 100644 index 0000000000000000000000000000000000000000..b8fd670328c185cfd01840ada801bdcf683fc6f6 --- /dev/null +++ b/2023/A Boundary Offset Prediction Network for Named Entity Recognition/full.md @@ -0,0 +1,356 @@ +# A Boundary Offset Prediction Network for Named Entity Recognition + +Minghao Tang $^{1,2}$ , Yongquan He $^{3}$ , Yongxiu Xu $^{1,2*}$ , Hongbo Xu $^{1}$ , Wenyuan Zhang $^{1,2}$ and Yang Lin $^{3}$ + +$^{1}$ Institute of Information Engineering, CAS, China + +$^{2}$ School of Cyber Security, UCAS, China + +3Meituan, China + +{tangminghao,xuyongxiu,hbxu}@ie.ac.cn, heyongquan@meituan.com + +# Abstract + +Named entity recognition (NER) is a fundamental task in natural language processing that aims to identify and classify named entities in text. 
However, span-based methods for NER typically assign entity types to text spans, resulting in an imbalanced sample space and neglecting the connections between non-entity and entity spans. To address these issues, we propose a novel approach for NER, named the Boundary Offset Prediction Network (BOPN), which predicts the boundary offsets between candidate spans and their nearest entity spans. By leveraging the guiding semantics of boundary offsets, BOPN establishes connections between non-entity and entity spans, enabling non-entity spans to function as additional positive samples for entity detection. Furthermore, our method integrates entity type and span representations to generate type-aware boundary offsets, instead of using entity types as detection targets. We conduct experiments on eight widely-used NER datasets, and the results demonstrate that our proposed BOPN outperforms previous state-of-the-art methods.

# 1 Introduction

Named entity recognition (NER) is a fundamental task in natural language processing (NLP) that involves identifying and categorizing named entities in text, such as people, locations and organizations. It has drawn much attention from the community due to its relevance in various NLP applications, such as entity linking (Le and Titov, 2018; Hou et al., 2020) and relation extraction (Miwa and Bansal, 2016; Li et al., 2021b).

Various paradigms have been proposed for NER, including sequence labeling (Huang et al., 2015; Ju et al., 2018), hypergraph-based (Lu and Roth, 2015; Katiyar and Cardie, 2018; Wang and Lu, 2018), sequence-to-sequence (Gillick et al., 2016; Yan et al., 2021) and span-based methods (Sohrab and Miwa, 2018; Shen et al., 2021; Chen et al., 2021). Among these approaches, the span-based method has become the most popular due to its simplicity and effectiveness: it typically embeds all possible text spans and predicts their entity types, making it suitable for various NER subtasks (Li et al., 2021a, 2022).

![](images/afdc6e2991a1a66df20f5b60549e17670764767519c981aef32ce0850c1a9658.jpg)
protein protein cell_type HMG box containing transcription factors in lymphocyte differentiation
Figure 1: A sentence from the GENIA dataset (Ohta et al., 2002), containing 8 words and 3 entities. The candidate spans cover the upper triangular region, with a total of 36 samples in each matrix. There are 2 and 1 positive samples for the "protein" and "cell_type" entity types, respectively.

Despite significant progress made by span-based methods in NER, there remain two critical issues that require attention. Firstly, these methods often suffer from highly imbalanced sample spaces, as exemplified in Figure 1. Such imbalance can negatively impact the trainability and performance of deep neural networks (Johnson and Khoshgoftaar, 2019). Although some methods (Shen et al., 2021; Wan et al., 2022) mitigate this issue by restricting the maximum span length, such an approach can also constrain the model's predictive power. Secondly, current span-based methods primarily focus on learning the distinction between non-entities and entities, disregarding their relationships. While a model can identify whether "HMG box" is an entity, it may fail to recognize the connection between "HMG" and "HMG box."

![](images/e231c942f392d6606bda85462443b42c741df015dc8fca7920e84c9d9c0c8541.jpg)
Figure 2: Text spans annotated with boundary offsets. "1S" or "1E" indicates that a span has an offset of 1 from its nearest entity at the start or end boundary, and so on.

To enhance the model's ability to recognize entities, it is crucial to explicitly capture both the boundary differences and the connections between non-entities and entities.

In this paper, we intend to model text spans by utilizing boundary offset information as supervision, rather than predicting their probability of belonging to entities.
As shown in Figure 2, there are two potential advantages for deep models when boundary offsets are learnable: i) The natural quantitative relationships between offset values enable the model to capture boundary differences and connections simultaneously. ii) Non-entity spans can carry specific semantics that guide the positioning of entity spans, leading to an improved sample space with fewer negative samples.

Based on this observation, we propose the Boundary Offset Prediction Network (BOPN) for NER. BOPN focuses on predicting boundary offsets between candidate spans and their nearest entities, providing a new perspective on modeling text spans. Specifically, our method follows the pipeline of first learning span representations and then classifying them for offset prediction. BERT (Devlin et al., 2019) and BiLSTM (Lample et al., 2016) are used to embed texts, followed by a Conditional Layer (Liu et al., 2021) for building span representations. Meanwhile, we treat entity types as inputs rather than classification targets; they are fused with span representations to generate type-aware boundary offsets in parallel. Finally, we incorporate multiple 3D convolution layers to capture the natural quantitative relationships between the offset values.

We evaluate our method on eight widely-used NER datasets, including five English and three Chinese NER datasets. The experimental results demonstrate that our approach outperforms the existing state-of-the-art methods. Furthermore, a detailed examination reveals a significant improvement in recall scores when aggregating results across offset labels, which is particularly beneficial for recall-sensitive applications.

# 2 Problem Definition

Named Entity Recognition (NER) aims to identify all entities within an input sentence $\mathrm{X} = \{x_{n}\}_{n = 1}^{N}$, based on a pre-defined set of entity types $\mathrm{Y} = \{y_{m}\}_{m = 1}^{M}$.
Typically, an entity is specified by its token boundaries and an entity type.

Our proposed method focuses on predicting the boundary offset between each candidate text span and its nearest entity. Hence, we formulate each text span as a quadruple $\{x_{i}, x_{j}, f_{s}, y_{m}\}$, where $i$ and $j$ denote the start and end boundary indices of the span, and $f_{s}$ represents the start or end boundary offset from its nearest entity of type $y_{m}$. Note that an entity span is a special case with $f_{s} = 0$.

Annotation Guidelines To facilitate understanding, we present the essential boundary offset labels as follows:

- Center Span: refers to an entity span, which carries the offset label "0".
- *S or *E: denotes the annotation of the start or end boundary offsets for non-entity spans. "*" represents an offset value in the range $[-S, \dots, -1, 1, \dots, S]$, where $S$ denotes the maximum offset value.
- Out-of-Range: refers to the annotation of a non-entity span whose absolute boundary offset from its nearest entity exceeds the maximum offset value $S$.

The annotation procedure for boundary offsets involves three steps. First, a 3-dimensional matrix $\mathcal{O} \in \mathbb{R}^{M \times N \times N}$ is constructed for the input sentence $X$, where $M$ denotes the number of entity types and $N$ the length of the sentence. Next, we annotate the center spans with the offset label "0" based on the gold entities present in $X$; entities of different types are assigned to their respective sub-matrices. Finally, for non-entity spans, we compute the start and end boundary offset values with respect to all center spans, and their annotation is determined by the offset with the minimum absolute value. If this absolute value does not exceed $S$, we annotate the corresponding *S or *E label; otherwise, we label the span as "Out-of-Range".
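To make the three-step procedure concrete, here is a minimal sketch of the annotation (not the authors' code; the sign convention offset = span boundary minus entity boundary is our assumption, chosen to match the "-1E" decoding example in Section 3.3, and a zero offset at a single boundary of a non-entity span is skipped):

```python
def annotate_offsets(n_tokens, entities, max_offset):
    """entities: {type_index: [(start, end), ...]} with inclusive token indices.
    Returns an M x N x N grid of offset labels (Section 2)."""
    M = len(entities)
    grid = [[["Out-of-Range"] * n_tokens for _ in range(n_tokens)] for _ in range(M)]
    for m, spans in entities.items():
        for i in range(n_tokens):
            for j in range(i, n_tokens):          # upper-triangular candidates only
                if (i, j) in spans:
                    grid[m][i][j] = "0"           # center span
                    continue
                # signed offsets to every center span, at both boundaries
                cands = [(i - s, "S") for s, _ in spans] + [(j - e, "E") for _, e in spans]
                cands = [c for c in cands if c[0] != 0]
                if not cands:
                    continue
                off, side = min(cands, key=lambda c: abs(c[0]))
                if abs(off) <= max_offset:
                    grid[m][i][j] = f"{off}{side}"
    return grid

# Toy check: one entity type, a single entity over tokens 1..2, S = 1
g = annotate_offsets(4, {0: [(1, 2)]}, max_offset=1)
print(g[0][1][2])  # 0   (center span)
print(g[0][1][3])  # 1E  (end boundary is one past the entity end)
print(g[0][0][2])  # -1S (start boundary is one before the entity start)
```

With a larger `max_offset`, fewer cells remain "Out-of-Range", which mirrors the paper's point that increasing $S$ turns more non-entity spans into informative positive samples.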
![](images/c3d5830016737eb1c91b120afd7c7b2898f072f40fbc8081c3a749cb57e2a4a5.jpg)
Figure 3: The overall architecture of our method, which mainly consists of two components: a Span Encoder and a Boundary Offset Predictor.

# 3 Methods

Figure 3 provides an overview of our method, which encompasses two primary components: a Span Encoder (Section 3.1) and a Boundary Offset Predictor (Section 3.2). The Span Encoder encodes entity types and sentences, using word representations to construct span representations. The entity type and span representations are then fed into the Boundary Offset Predictor for type-aware offset classification.

# 3.1 Span Encoder

Drawing inspiration from prompt-based methods (Qin and Eisner, 2021; Han et al., 2022), we treat entity types as task-oriented inputs, indicating the specific types of entities that the model needs to predict within a given sentence.

To achieve this, we create a set of additional type tokens, denoted as $\mathrm{P} = \{p_m\}_{m=1}^M$, where $p_m$ is a learnable special token corresponding to entity type $y_m$. Next, we concatenate the soft tokens $\mathrm{P}$ with the sentence $\mathrm{X}$ into a single sequence and employ BERT (Devlin et al., 2019) to encode them simultaneously. The output of BERT is then passed through a BiLSTM (Lample et al., 2016) to generate the final embedding features $\mathrm{H} = \{h_1, h_2, \dots, h_{M+N}\} \in \mathbb{R}^{(M+N) \times d}$, where $d$ is the hidden size. Finally, we split $\mathrm{H}$ into entity type representations $\mathrm{H}^Y \in \mathbb{R}^{M \times d}$ and token representations $\mathrm{H}^X \in \mathbb{R}^{N \times d}$.

Span Representation Given the token representations $\mathrm{H}^X = \{h_1, h_2, \dots, h_N\}$, the span representation $v_{ij}$ can be considered a fusion of the boundary representations $(h_i, h_j)$. Following Li et al.
(2022), we adopt the Conditional Layer Normalization (CLN) (Liu et al., 2021) mechanism to build a high-quality span representation:

$$
v_{ij} = \mathrm{CLN}(h_i, h_j) = \gamma_j \otimes \mathrm{Norm}(h_i) + \lambda_j, \tag{1}
$$

where $\mathrm{Norm}(\cdot)$ is the instance normalization function (Ulyanov et al., 2016), and $\gamma_{j}$ and $\lambda_{j}$ are condition parameters obtained by two different feedforward networks: $\gamma_{j} = \mathrm{FFN}(h_{j})$ and $\lambda_{j} = \mathrm{FFN}(h_{j})$.

Since valid candidate spans are restricted to the upper triangular region of the text span matrix, a region embedding $\mathrm{E} = [e_{up}, e_{low}] \in \mathbb{R}^{2 \times d_e}$ is adopted to distinguish the positions of text spans. The final representation of each span is obtained as $\hat{v}_{ij} = [v_{ij}, e_{up}]$ if $i \leq j$, and $\hat{v}_{ij} = [v_{ij}, e_{low}]$ if $i > j$.

# 3.2 Boundary Offset Predictor

As mentioned above, we use entity types as inputs to guide the model in generating type-aware boundary offsets, rather than categorizing each text span into a particular entity type.

A biaffine classifier (Yu et al., 2020) is employed to fuse entity type representations and span representations.
Specifically, given an entity type representation $h_m \in \mathbf{H}^Y$ and a span representation $\hat{v}_{ij}\in \widehat{\mathbf{V}}$, a scoring vector $c_{mij}\in \mathbb{R}^L$ is computed as:

$$
h_m' = \mathrm{FFN}(h_m), \quad \hat{v}_{ij}' = \mathrm{FFN}(\hat{v}_{ij}), \tag{2}
$$

$$
c_{mij} = (h_m')^{T} U \hat{v}_{ij}' + W (h_m' \oplus \hat{v}_{ij}') + b, \tag{3}
$$

where $L$ is the number of offset labels$^1$; $U \in \mathbb{R}^{L \times d_b \times d_b}$, $W \in \mathbb{R}^{L \times 2d_b}$ and $b \in \mathbb{R}^L$ are learnable parameters, and $d_b$ is the biaffine hidden size.

3D Convolution Layer Furthermore, we utilize multiple 3-dimensional convolution (3DConv) layers to capture the inherent quantitative relationships between the boundary offsets of adjacent text spans. As depicted in Figure 3(b), the 3D convolution kernels traverse the complete score matrix $C$ in three directions, aggregating offset predictions for adjacent text spans across all entity types. The computation in a single convolution layer can be expressed as:

$$
\mathrm{Q} = \sigma(\mathrm{3DConv}(\mathrm{C})), \tag{4}
$$

where $\mathbf{Q} \in \mathbb{R}^{M \times N \times N \times L}$ and $\sigma$ is the GELU activation function (Hendrycks and Gimpel, 2016).
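As a concrete reference point for Eqs. (2)-(3), the bilinear-plus-linear scoring of one (entity type, span) pair can be sketched in NumPy. The dimensions and random weights below are illustrative assumptions, not the paper's configuration, and the FFN projections are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
L, d_b = 5, 8                        # number of offset labels, biaffine hidden size

U = rng.normal(size=(L, d_b, d_b))   # bilinear tensor of Eq. (3)
W = rng.normal(size=(L, 2 * d_b))    # weight for the concatenated term
b = rng.normal(size=L)               # bias

def biaffine(h_m, v_ij):
    """Score one (entity type, span) pair over the L offset labels (Eq. 3)."""
    bilinear = np.einsum("i,lij,j->l", h_m, U, v_ij)   # (h'_m)^T U v'_ij
    linear = W @ np.concatenate([h_m, v_ij])           # W (h'_m (+) v'_ij)
    return bilinear + linear + b

c_mij = biaffine(rng.normal(size=d_b), rng.normal(size=d_b))
print(c_mij.shape)  # (5,): one score per offset label
```

Running this scorer over all $M \times N \times N$ cells yields the score matrix $C$ that the 3D convolution layers then smooth across adjacent spans and entity types.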
We assign a different dilation rate to each convolution layer and concatenate their outputs, followed by a linear layer that calculates the final prediction scores:

$$
\hat{\mathrm{Q}} = \mathrm{Linear}(\mathrm{Q}_1 \oplus \mathrm{Q}_2 \oplus \mathrm{Q}_3). \tag{5}
$$

To obtain the probability distribution of span $(i,j)$ over the offset labels, $\hat{q}_{mij} \in \hat{\mathbf{Q}}$ is fed into a softmax layer:

$$
\hat{o}_{mij} = \mathrm{softmax}(\hat{q}_{mij}). \tag{6}
$$

# 3.3 Training and Inference

Learning Objective In our method, the learning objective is to accurately assign a boundary offset to each text span, which can be treated as a multi-class classification problem and optimized with the cross-entropy loss:

$$
\mathcal{L} = -\frac{1}{MN^2} \sum_{m}^{M} \sum_{i}^{N} \sum_{j}^{N} o_{mij}^{T} \log\left(\hat{o}_{mij}\right), \tag{7}
$$

where $o_{mij} \in \mathbb{R}^L$ is the ground truth, a one-hot vector encoded from the annotated text span matrix $\mathcal{O}$.

Inference with Boundary Offsets During inference, decoding entities from predicted boundary offsets is straightforward. The output of our method is a matrix of size $M \times N \times N$, where each cell represents a potential entity and contains information about its boundaries and type. For example, a cell with coordinates $(m, i, j)$ and the prediction "-1E" indicates an entity of type $y_{m}$ with a start boundary at $x_{i}$ and an end boundary at $x_{j+1}$. Conversely, if the predicted value is "out-of-range," the cell does not correspond to any entity.

However, blindly accepting all predicted boundary offsets may yield sub-optimal results, as it disregards the quantitative relationships between boundary offsets.
Therefore, we introduce two heuristic rules to identify unreasonable predictions: i) predicted boundary offsets that do not align with their nearest center span; ii) predicted boundary offsets that do not follow a sequential order with neighboring spans.

# 4 Experimental Settings

# 4.1 Datasets

To evaluate our method, we conduct experiments on five English NER datasets, namely CoNLL 2003 (Sang and De Meulder, 2003), OntoNotes $5^{2}$, ACE $2004^{3}$, ACE $2005^{4}$ and GENIA (Ohta et al., 2002), and three Chinese NER datasets, namely MSRA (Levow, 2006), Resume NER (Zhang and Yang, 2018) and Weibo NER (Peng and Dredze, 2015). Note that ACE 2004, ACE 2005 and GENIA are nested NER datasets, while the others are flat.

For OntoNotes 5, we use the same train/dev/test split as the CoNLL 2012 shared task (Pradhan et al., 2012). For ACE 2004 and ACE 2005, we use the same data split as Lu and Roth (2015). For GENIA, we follow Katiyar and Cardie (2018) and split train/test 9:1. For the other datasets, we follow the settings of previous works (Ma et al., 2020; Yan et al., 2021; Zhu and Li, 2022).

# 4.2 Implementation Details

We use BioBERT-v1.1 (Lee et al., 2020) as the contextual embedding for GENIA. For the other English corpora, we use BERT-large-cased (Devlin et al., 2019) as the contextual embedding. For Chinese
| Models | CoNLL 2003 P | R | F1 | OntoNotes 5 P | R | F1 |
| --- | --- | --- | --- | --- | --- | --- |
| *Sequence Labeling Methods* |  |  |  |  |  |  |
| BiLSTM-CRF (Miwa and Bansal, 2016) | – | – | 91.03 | 86.04 | 86.53 | 86.28 |
| BERT-Tagger (Devlin et al., 2019) | – | – | 92.80 | 90.01 | 88.35 | 89.16 |
| *Span-based Methods* |  |  |  |  |  |  |
| Biaffine (Yu et al., 2020)\*† | 92.46 | 92.67 | 92.55 | 89.94 | 89.81 | 89.88 |
| W2NER (Li et al., 2022) | 92.71 | 93.44 | 93.07 | 90.03 | 90.97 | 90.50 |
| Boundary Smooth (Zhu and Li, 2022)\*† | 92.89 | 93.20 | 93.04 | 90.42 | 90.81 | 90.61 |
| DiffusionNER (Shen et al., 2023a) | 92.99 | 92.56 | 92.78 | 90.31 | 91.02 | 90.66 |
| *Others* |  |  |  |  |  |  |
| Seq2Seq (Straková et al., 2019) | – | – | 92.98 | – | – | – |
| BartNER (Yan et al., 2021)† | 92.57 | 93.53 | 93.05 | 89.65 | 90.87 | 90.26 |
| PIQN (Shen et al., 2022) | 93.29 | 92.46 | 92.87 | 91.43 | 90.73 | 90.96 |
| PromptNER (Shen et al., 2023b) | 92.48 | 92.33 | 92.41 | – | – | – |
| BOPN (Ours) | 93.22 | 93.15 | 93.19 | 90.93 | 91.40 | 91.16 |
Table 1: Results on English flat NER datasets CoNLL 2003 and OntoNotes 5. † indicates our re-implementation using their code. * denotes a fair comparison in which the BERT encoder is consistent with our model.
| Models | MSRA P | R | F1 | Resume NER P | R | F1 | Weibo NER P | R | F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| *Sequence Labeling Methods* |  |  |  |  |  |  |  |  |  |
| Lattice (Zhang and Yang, 2018) | 93.57 | 92.79 | 93.18 | 94.81 | 94.11 | 94.46 | 53.04 | 62.25 | 58.79 |
| Flat (Li et al., 2020) | – | – | 96.09 | – | – | 95.86 | – | – | 68.55 |
| SoftLexicon (Ma et al., 2020) | 95.75 | 95.10 | 95.42 | 96.08 | 96.13 | 96.11 | 70.94 | 67.02 | 70.50 |
| MECT (Wu et al., 2021) | – | – | 96.24 | – | – | 95.98 | – | – | 70.43 |
| *Span-based Methods* |  |  |  |  |  |  |  |  |  |
| W2NER (Li et al., 2022) | 96.12 | 96.08 | 96.10 | 96.96 | 96.35 | 96.65 | 70.84 | 73.87 | 72.32 |
| Boundary Smooth (Zhu and Li, 2022) | 96.37 | 96.15 | 96.26 | 96.63 | 96.69 | 96.66 | 70.16 | 75.36 | 72.66 |
| DiffusionNER (Shen et al., 2023a) | 95.71 | 94.11 | 94.91 | – | – | – | – | – | – |
| BOPN (Ours) | 96.44 | 96.34 | 96.39 | 96.73 | 96.83 | 96.78 | 71.79 | 73.90 | 72.92 |
Table 2: Results on Chinese flat NER datasets MSRA, Resume NER and Weibo NER.

corpora, we use BERT pre-trained with whole word masking (Cui et al., 2021).

The BiLSTM has one layer with a hidden size of 256 and a dropout rate of 0.5. The size of the region embedding $d_{e}$ is 20. The maximum offset value $S$ is selected from $\{1,2,3\}$. For all datasets, we train our models using the AdamW optimizer (Loshchilov and Hutter, 2017) with a linear warmup-decay learning rate schedule. See Appendix A for more details. Our source code is available at https://github.com/mhtang1995/BOPN.

# 4.3 Evaluation

We use strict evaluation metrics: a predicted entity is considered correct only when both its boundaries (after adding the boundary offset) and its type are accurately matched. Precision, recall and $F_{1}$ scores are reported. We run our model five times and report averaged values.

# 5 Results and Analysis

# 5.1 Main Results

The performance of our proposed method and the baselines on English flat NER datasets is presented in Table 1. The experimental results demonstrate that our approach surpasses the previous state-of-the-art (SOTA) methods by $+0.12\%$ on CoNLL 2003 and $+0.20\%$ on OntoNotes 5, achieving $F_{1}$ scores of $93.19\%$ and $91.16\%$, respectively. For the Chinese flat NER datasets, we provide the results in Table 2. Similarly, our proposed method achieves SOTA performance, surpassing the previous best method by $+0.13\%$, $+0.12\%$, and $+0.26\%$ in $F_{1}$ score on MSRA, Resume NER, and Weibo NER, respectively.

The performance on English nested NER datasets is presented in Table 3. Remarkably,
| Models | ACE 2004 P | R | F1 | ACE 2005 P | R | F1 | GENIA P | R | F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| *Sequence Labeling Methods* |  |  |  |  |  |  |  |  |  |
| Layered (Ju et al., 2018) | – | – | – | 74.2 | 70.3 | 72.2 | 78.5 | 71.3 | 74.7 |
| Pyramid (Wang et al., 2020) | 86.08 | 86.48 | 86.28 | 83.95 | 85.39 | 84.66 | 79.45 | 78.94 | 79.19 |
| *Span-based Methods* |  |  |  |  |  |  |  |  |  |
| Biaffine (Yu et al., 2020) | 87.3 | 86.0 | 86.7 | 85.2 | 85.6 | 85.4 | 78.2 | 78.2 | 78.2 |
| Locate and Label (Shen et al., 2021) | 87.44 | 87.38 | 87.41 | 86.09 | 87.27 | 86.67 | 80.19 | 80.89 | 80.54 |
| W2NER (Li et al., 2022) | 87.33 | 87.71 | 87.52 | 85.03 | 88.62 | 86.79 | 83.10 | 79.76 | 81.39 |
| Triaffine (Yuan et al., 2022) | 87.13 | 87.68 | 87.60 | 86.70 | 86.94 | 86.82 | 80.42 | 82.06 | 81.23 |
| Boundary Smooth (Zhu and Li, 2022) | 88.43 | 87.53 | 87.98 | 86.25 | 88.07 | 87.15 | – | – | – |
| DiffusionNER (Shen et al., 2023a) | 88.11 | 88.66 | 88.39 | 86.15 | 87.72 | 86.93 | 82.10 | 80.97 | 81.53 |
| *Others* |  |  |  |  |  |  |  |  |  |
| Seq2Seq (Straková et al., 2019) | – | – | 84.33 | – | – | 83.42 | – | – | 78.20 |
| BartNER (Yan et al., 2021) | 87.27 | 86.41 | 86.84 | 83.16 | 86.38 | 84.74 | 78.57 | 79.30 | 78.93 |
| PIQN (Shen et al., 2022) | 88.48 | 87.81 | 88.14 | 86.27 | 88.60 | 87.42 | 83.24 | 80.35 | 81.77 |
| PromptNER (Shen et al., 2023b) | 87.58 | 88.76 | 88.16 | 86.07 | 88.38 | 87.21 | – | – | – |
| BOPN (Ours) | 89.13 | 89.40 | 89.26 | 89.56 | 91.23 | 90.39 | 82.14 | 82.16 | 82.14 |
Table 3: Results on English nested NER datasets ACE 2004, ACE 2005 and GENIA.
| | CoNLL 2003 | Resume NER | ACE 2004 |
| --- | --- | --- | --- |
| BOPN (Ours) | 93.19 | 96.78 | 89.26 |
| - w/o Type Inp. | 92.87 | 96.41 | 88.83 |
| - w/o Region Emb. | 92.71 | 96.22 | 88.71 |
| - w/o BO | 92.74 | 96.26 | 88.62 |
| - w/o 3DConv | 92.87 | 96.40 | 89.11 |
| - MBO (S=1) | 93.11 | 96.75 | 89.14 |
| - MBO (S=2) | 93.15 | 96.78 | 89.26 |
| - MBO (S=3) | 93.19 | 96.71 | 89.22 |
| - 3DConv (l=1) | 93.08 | 96.69 | 89.18 |
| - 3DConv (l=2) | 93.19 | 96.75 | 89.26 |
| - 3DConv (l=3) | 93.05 | 96.78 | 89.25 |
Table 4: Ablation studies. MBO denotes the maximum boundary offset value.

our proposed BOPN achieves substantial improvements on these datasets, with $F_{1}$ scores increasing by $+0.87\%$, $+2.97\%$, and $+0.37\%$ on ACE 2004, ACE 2005, and GENIA, respectively. These results align with our expectations, as the boundary features of nested entities are more intricate than those of flat entities. We attribute the improvement to two key factors: 1) our method predicts the boundary information of different entity types in parallel, effectively avoiding nested boundary conflicts between entity types; 2) by predicting boundary offsets, our method expands the predictive range of each text span, allowing for more granular and precise identification of entity boundaries.

# 5.2 Ablation Studies

To assess the impact of each component of our method, we conduct ablation studies on the CoNLL 2003, ACE 2004, and Resume NER datasets. The results are presented in Table 4.

Maximum Boundary Offset We investigate the impact of training the model with different maximum offset values $S$. The hyperparameter $S$ determines the annotation scope of non-entity spans with boundary offsets. The extreme setting $S = 0$ corresponds to the condition "w/o BO" (without Boundary Offset). The results indicate a significant decline in performance under "w/o BO", confirming the usefulness of boundary offsets as supervision. However, we also observe that the optimal $S$ value varies across datasets. This may be because a larger $S$ provides more boundary knowledge but also enlarges the label search space. Consequently, tuning $S$ is necessary to achieve the best performance in practice.

In addition, we analyze the learning curves of our model with different maximum offset values.
Figure 4 demonstrates that a larger $S$ can accelerate the training process. We believe the reason may be that a larger $S$ not only increases the number of positive samples but also decreases the number of negative samples, ultimately enhancing the trainability of the model.

![](images/80743a7388518887de4cddd798286acf3542bc8074bcaf3df54dbf23b325835e.jpg)
Figure 4: The learning curves on the ACE 2004 dataset.
| Label | P | R | F1 | Support |
| --- | --- | --- | --- | --- |
| -2S | 81.51 | 82.02 | 81.76 | 5029 |
| -1S | 81.62 | 82.97 | 82.29 | 5292 |
| 1S | 79.55 | 81.47 | 80.50 | 3281 |
| 2S | 76.27 | 79.55 | 77.88 | 1438 |
| -2E | 78.64 | 77.19 | 77.90 | 1464 |
| -1E | 79.79 | 80.58 | 80.18 | 3254 |
| 1E | 82.26 | 82.20 | 82.23 | 5393 |
| 2E | 82.37 | 80.75 | 81.57 | 5113 |
| 0 | 81.92 | 81.95 | 81.93 | 5495 |
| ALL | 79.21 | 84.22 | 81.64 | 5495 |
| - w/ rules | 81.85 | 82.56 | 82.20 | 5495 |
Table 5: Performance of each boundary offset label on GENIA, where the maximum offset value is 2. The reported results are from one of the five runs.

3D Convolution Layer "w/o 3DConv" indicates that the 3D convolution layers are removed. The results show a decline in performance across all datasets, indicating the importance of the 3D convolution layers in capturing the interactions between boundary offsets of adjacent text spans.

Type Inputs "w/o Type Inp." refers to a setting in which, instead of encoding the entity types together with the sentence, randomly initialized entity type embeddings are fed into the biaffine classifier. This setting shows a slight decline in performance.

Region Embedding The results show a slight drop in performance across all datasets without region embeddings. This suggests that integrating sample distribution features is a reasonable way to enhance text span representations.

As the CLN layer and the biaffine classifier serve as fundamental components of our approach for span representation and classification, they cannot be evaluated independently. Nonetheless, our ablation studies demonstrate the effectiveness of learning boundary offset information and the usefulness of each component of our model.

![](images/8053ec891e88602ef839556b5f26ee561c5811df50fa4443201d11aa38f9185d.jpg)
Figure 5: A comparison of F1 scores on entities of different lengths in the GENIA dataset. Entity supports are in parentheses.

# 5.3 Detailed Analysis

Performance on Different Offset Labels We investigate the performance of each boundary offset label; the results are presented in Table 5. Notably, the offset label "0" has complete entity support and achieves an $F_{1}$ score of $82.04\%$. Furthermore, we observe a positive correlation between the amount of entity support and the performance of a boundary offset label.
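The mapping from a predicted offset label back to an entity position, which underlies the aggregated results in Table 5, can be sketched as follows (helper names are ours, and the sign convention is an assumption taken from the "-1E" example in Section 3.3):

```python
def decode_cell(m, i, j, label):
    """Map the predicted offset label at cell (m, i, j) of the M x N x N output
    grid to an entity triple (type, start, end), or None for no entity."""
    if label == "Out-of-Range":
        return None                      # the cell corresponds to no entity
    if label == "0":
        return (m, i, j)                 # center span: the cell itself is the entity
    off, side = int(label[:-1]), label[-1]
    if side == "S":
        return (m, i - off, j)           # shift the start boundary by the offset
    return (m, i, j - off)               # side == "E": shift the end boundary

print(decode_cell(0, 2, 4, "-1E"))  # (0, 2, 5): the entity ends one token after j
print(decode_cell(0, 3, 4, "1S"))   # (0, 2, 4): the entity starts one token before i
```

In the full pipeline, the two heuristic rules of Section 3.3 would then filter out decoded candidates that disagree with their nearest center span or with neighboring cells.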
When a text span is not predicted as "out-of-range", its assigned label can be used to determine the position of its nearest entity. By aggregating all offset-label predictions, we observe a sharp decrease in the precision score along with a significant increase in the recall score, compared to considering only the center spans (offset label "0"). This finding suggests that different offset labels provide distinct information that helps the model recognize additional entities. Nevertheless, this approach can introduce noisy predictions due to the model's weaker performance on certain labels. Despite this limitation, it may be practically useful in recall-sensitive applications.

As discussed in Section 3.3, we devise two heuristic rules to remove improbable predictions. Our findings reveal that this approach enhances the precision score with only a minor reduction in the recall score, leading to an overall improvement in the $F_{1}$ score.

![](images/7ecb4838dceef94fe33a44e5fd507039fdc00eaa747ec70b3febb19d1bbb31c2.jpg)
Figure 6: Effect of varying the percentage of training samples on GENIA. We train all models for 50 epochs and report their best performance.

Performance on Entities with Varying Lengths We explore model performance on entities of different lengths in GENIA. As shown in Figure 5, we compare the $F_{1}$ scores of models trained with different $S$. The model achieves higher $F_{1}$ scores across all columns when $S = 2$, with a more pronounced improvement on longer entities. These results highlight the usefulness of learning boundary offsets between non-entity and entity spans, which helps the model learn boundary features more effectively.

Size of Training Data As the boundary offset labels carry more informative supervision, we hypothesize that our proposed BOPN performs better with limited training data.
As shown in Figure 6, our model achieves impressive results, exhibiting only a $5.46\%$ decrease in performance when trained with a mere $12.5\%$ of the available training data. In contrast, when boundary information is not utilized during training, the model's performance declines rapidly as the amount of training data decreases, creating significant obstacles to effective training.

# 6 Related Work

In recent years, various paradigms for named entity recognition (NER) have been proposed, among which span-based methods have become one of the most mainstream approaches, treating NER as a text span classification problem. With the development of pre-trained language models, some works (Sohrab and Miwa, 2018; Luan et al., 2019; Wadden et al., 2019) obtain span representations by concatenating boundary representations or aggregating token representations, and feed them into a linear classifier for type prediction. Alternatively, Yu et al. (2020) utilize a biaffine classifier to fuse start and end boundary representations directly for span classification. To further enhance span representations, several other methods (Wan et al., 2022; Yuan et al., 2022) propose fusing representations of tokens, boundaries, and related entity spans.

Meanwhile, some methods try to improve span-based methods by adding boundary supervision. Specifically, Zheng et al. (2019) and Tan et al. (2020) additionally detect entity boundaries with multi-task learning, while Shen et al. (2021) perform boundary regression after span prediction. Li et al. (2022) design two word-word relations for span classification. Compared with previous methods, our proposed method utilizes continuous boundary offset values to model text spans, which captures both the boundary differences and the connections between non-entity and entity spans.

In addition to span-based methods, there are three other widely-used families of NER methods.
The traditional sequence labeling methods (Huang et al., 2015; Lample et al., 2016) assign each token a tag under a pre-designed tagging scheme (e.g., BIO). To handle nested entities, some works (Ju et al., 2018; Wang et al., 2020; Rojas et al., 2022) stack multiple recognition layers or design special tagging schemes. Hypergraph-based methods (Lu and Roth, 2015; Katiyar and Cardie, 2018; Wang and Lu, 2018) represent the input sentence as a hypergraph for detecting nested entities, which must be carefully designed to avoid spurious structures. Sequence-to-sequence methods reformulate NER as a sequence generation problem. For example, Gillick et al. (2016) first apply the Seq2Seq model to NER, taking the sentence as input and outputting start positions, entity lengths, and types. Straková et al. (2019) use the Seq2Seq model with an enhanced BILOU scheme to address nested NER. Yan et al. (2021) treat NER as an entity span sequence generation problem, using a pointer network based on BART (Lewis et al., 2019).

# 7 Conclusion

In this paper, we introduce a novel approach for named entity recognition (NER) called the Boundary Offset Prediction Network (BOPN). BOPN predicts the boundary offsets between candidate spans and their nearest entities, leveraging entity types as inputs. By incorporating entity types, BOPN enables parallel prediction of type-aware boundary offsets, enhancing the model's ability to capture fine-grained entity boundaries. To capture the interactions between boundary offsets, we employ multiple 3D convolution layers, which refine the offset predictions and model the inherent quantitative relationships between adjacent text spans.

The experimental results demonstrate that our proposed method achieves state-of-the-art performance on eight widely-used datasets, including five English and three Chinese NER datasets.
Moreover, further analysis reveals a significant improvement in recall scores when boundary offsets are used as supervision, showcasing the utility of our approach for recall-sensitive NER applications.

# Limitations

The proposed BOPN approach has certain limitations that should be acknowledged. Firstly, while BOPN treats boundary offsets as classification targets, it does not explicitly model the ordering relationship between offset values. Although the 3D convolution layers implicitly capture interactions between boundary offsets, they do not impose a strong constraint on the ordering of offset labels.

Additionally, the method uses boundary offsets to convert some non-entity spans into positive samples, which leads to higher recall but potentially lower precision. To optimize the prediction results, heuristic rules are applied to filter out unreasonable samples. However, these rules are based on observations and may not be comprehensive enough to handle all cases effectively.

Therefore, more effective ways of integrating and optimizing the offset predictions remain to be explored in order to address these limitations and further enhance the overall performance of BOPN.

# Ethics Statement

To address ethical concerns, we provide two clarifications: 1) all experiments were conducted on existing datasets derived from public scientific papers; 2) our work does not involve any personally identifiable information and does not harm anyone.

# Acknowledgements

This work was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDC02040400).

# References

Pei Chen, Haibo Ding, Jun Araki, and Ruihong Huang. 2021. Explicitly capturing relations between entity mentions via graph neural networks for domain-specific named entity recognition.
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 735-742.

Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, and Ziqing Yang. 2021. Pre-training with whole word masking for Chinese BERT. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3504-3514.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.

Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2016. Multilingual language processing from bytes. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1296-1306.

Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, and Maosong Sun. 2022. PTR: Prompt tuning with rules for text classification. AI Open, 3:182-192.

Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (GELUs). arXiv preprint arXiv:1606.08415.

Feng Hou, Ruili Wang, Jun He, and Yi Zhou. 2020. Improving entity linking through semantic reinforced entity embeddings. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6843-6848.

Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991.

Justin M Johnson and Taghi M Khoshgoftaar. 2019. Survey on deep learning with class imbalance. Journal of Big Data, 6(1):1-54.

Meizhi Ju, Makoto Miwa, and Sophia Ananiadou. 2018. A neural layered model for nested named entity recognition.
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1446-1459.

Arzoo Katiyar and Claire Cardie. 2018. Nested named entity recognition revisited. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 1.

Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260-270.

Phong Le and Ivan Titov. 2018. Improving entity linking by modeling latent relations between mentions. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1595-1604.

Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.

Gina-Anne Levow. 2006. The third international Chinese language processing bakeoff: Word segmentation and named entity recognition. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing, pages 108-117.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.

Fei Li, ZhiChao Lin, Meishan Zhang, and Donghong Ji. 2021a. A span-based model for joint overlapped and discontinuous named entity recognition.
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4814-4828, Online. Association for Computational Linguistics.

Jingye Li, Hao Fei, Jiang Liu, Shengqiong Wu, Meishan Zhang, Chong Teng, Donghong Ji, and Fei Li. 2022. Unified named entity recognition as word-word relation classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 10965-10973.

Jingye Li, Kang Xu, Fei Li, Hao Fei, Yafeng Ren, and Donghong Ji. 2021b. MRN: A locally and globally mention-based reasoning network for document-level relation extraction. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1359-1370.

Xiaonan Li, Hang Yan, Xipeng Qiu, and Xuan-Jing Huang. 2020. FLAT: Chinese NER using flat-lattice transformer. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6836-6842.

Ruibo Liu, Jason Wei, Chenyan Jia, and Soroush Vosoughi. 2021. Modulating language models with emotions. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4332-4339.

Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.

Wei Lu and Dan Roth. 2015. Joint mention extraction and classification with mention hypergraphs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 857-867.

Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3036-3046, Minneapolis, Minnesota. Association for Computational Linguistics.
+Ruotian Ma, Minlong Peng, Qi Zhang, Zhongyu Wei, and Xuan-Jing Huang. 2020. Simplify the usage of lexicon in chinese ner. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5951-5960. +Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1105-1116. +Tomoko Ohta, Yuka Tateisi, Jin-Dong Kim, Hideki Mima, and Junichi Tsujii. 2002. The genia corpus: An annotated research abstract corpus in molecular biology domain. In Proceedings of the human language technology conference, pages 73-77. CiteSeer. +Nanyun Peng and Mark Dredze. 2015. Named entity recognition for chinese social media with jointly trained embeddings. In Proceedings of the 2015 conference on empirical methods in natural language processing, pages 548-554. +Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In Joint conference on EMNLP and CoNLL-shared task, pages 1-40. +Guanghui Qin and Jason Eisner. 2021. Learning how to ask: Querying lms with mixtures of soft prompts. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5203-5212. +Matías Rojas, Felipe Bravo-Marquez, and Jocelyn Dunstan. 2022. Simple yet powerful: An overlooked architecture for nested named entity recognition. In Proceedings of the 29th International Conference on Computational Linguistics, pages 2108-2117. + +Erik Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147.
+Yongliang Shen, Xinyin Ma, Zeqi Tan, Shuai Zhang, Wen Wang, and Weiming Lu. 2021. Locate and label: A two-stage identifier for nested named entity recognition. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2782-2794. +Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023a. DiffusionNER: Boundary diffusion for named entity recognition. arXiv preprint arXiv:2305.13298. +Yongliang Shen, Zeqi Tan, Shuhui Wu, Wenqi Zhang, Rongsheng Zhang, Yadong Xi, Weiming Lu, and Yueting Zhuang. 2023b. PromptNER: Prompt locating and typing for named entity recognition. arXiv preprint arXiv:2305.17104. +Yongliang Shen, Xiaobin Wang, Zeqi Tan, Guangwei Xu, Pengjun Xie, Fei Huang, Weiming Lu, and Yueting Zhuang. 2022. Parallel instance query network for named entity recognition. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 947-961. +Mohammad Golam Sohrab and Makoto Miwa. 2018. Deep exhaustive model for nested named entity recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2843-2849. +Jana Straková, Milan Straka, and Jan Hajic. 2019. Neural architectures for nested ner through linearization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5326-5331. +Chuanqi Tan, Wei Qiu, Mosha Chen, Rui Wang, and Fei Huang. 2020. Boundary enhanced neural span classification for nested named entity recognition. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 9016-9023. +Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. 2016. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022. +David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019.
Entity, relation, and event extraction with contextualized span representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5784-5789, Hong Kong, China. Association for Computational Linguistics. + +Juncheng Wan, Dongyu Ru, Weinan Zhang, and Yong Yu. 2022. Nested named entity recognition with span-level graphs. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 892-903, Dublin, Ireland. Association for Computational Linguistics. +Bailin Wang and Wei Lu. 2018. Neural segmental hypergraphs for overlapping mention recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 204-214. +Jue Wang, Lidan Shou, Ke Chen, and Gang Chen. 2020. Pyramid: A layered model for nested named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5918-5928. +Shuang Wu, Xiaoning Song, and Zhenhua Feng. 2021. Mect: Multi-metadata embedding based cross-transformer for chinese named entity recognition. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1529-1539. +Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, and Xipeng Qiu. 2021. A unified generative framework for various ner subtasks. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5808-5822. +Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020. Named entity recognition as dependency parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6470-6476. 
+Zheng Yuan, Chuanqi Tan, Songfang Huang, and Fei Huang. 2022. Fusing heterogeneous factors with triaffine mechanism for nested named entity recognition. In *Findings of the Association for Computational Linguistics: ACL* 2022, pages 3174-3186. +Yue Zhang and Jie Yang. 2018. Chinese ner using lattice LSTM. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1554-1564. +Changmeng Zheng, Yi Cai, Jingyun Xu, Ho-fung Leung, and Guandong Xu. 2019. A boundary-aware neural model for nested named entity recognition. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 357-366, Hong Kong, China. Association for Computational Linguistics. +Enwei Zhu and Jinpeng Li. 2022. Boundary smoothing for named entity recognition. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7096-7108. + +# A Appendix + +# A.1 Datasets + +We evaluate our method on eight datasets: CoNLL 2003, OntoNotes 5, ACE 2004, ACE 2005, and GENIA for English NER, and MSRA, Resume NER, and Weibo NER for Chinese NER. Table 6 presents the detailed statistics of these datasets. + +# A.2 Implementation Details + +We use BioBERT-v1.1 (Lee et al., 2020) as the contextual embedding for GENIA. For the other English corpora, we use BERT-large-cased (Devlin et al., 2019) as the contextual embedding. For the Chinese corpora, we use the BERT pre-trained with whole word masking (Cui et al., 2021). Our model is implemented in PyTorch and trained on an NVIDIA RTX 3090 GPU. We use a grid search, tuned on the development set, to find the best hyperparameters. The hyperparameter ranges used for the eight datasets are listed in Table 7.
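The grid search described above can be sketched as follows. This is a minimal illustration, not the authors' code: `train_and_eval` is a hypothetical stand-in that would train BOPN with a given configuration and return development-set F1 (here it returns a deterministic dummy score so the loop runs), and the candidate values are illustrative picks from the ranges in Table 7.

```python
from itertools import product

# Candidate values drawn from the ranges in Table 7 (illustrative picks).
grid = {
    "epoch": [50, 80],
    "batch_size": [8, 16],
    "lr_bert": [5e-6, 3e-5],
    "max_offset_S": [1, 2, 3],
}

def train_and_eval(config):
    """Hypothetical stand-in: train BOPN with `config`, return dev F1.

    A real run would build the model, train it, and evaluate on the
    development set; here a deterministic dummy score demonstrates
    the selection loop.
    """
    return config["max_offset_S"] - abs(config["lr_bert"] - 1e-5) * 1e4

best_config, best_f1 = None, float("-inf")
for values in product(*grid.values()):
    config = dict(zip(grid, values))
    f1 = train_and_eval(config)  # scored on the development set
    if f1 > best_f1:
        best_config, best_f1 = config, f1
```

Each configuration is trained and scored once; the configuration with the highest development F1 is kept for the final run.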
+ +# A.3 Baselines + +We compare BOPN with the following baselines: + +- BiLSTM-CRF (Miwa and Bansal, 2016) is a model for sequence labeling tasks that combines a BiLSTM with CRF layers. +- BERT-Tagger (Devlin et al., 2019) utilizes the pre-trained language model BERT as a feature extractor and adds a tag classifier for fine-tuning. +- Lattice (Zhang and Yang, 2018) proposes a lattice-structured LSTM model for Chinese NER. +- Layered (Ju et al., 2018) dynamically stacks flat NER layers to solve the nested NER task. +- Flat (Li et al., 2020) proposes a flat-lattice transformer for Chinese NER, which converts the lattice structure into a flat structure consisting of spans. +- Pyramid (Wang et al., 2020) designs a pyramid layer and an inverse pyramid layer to decode nested entities. +- SoftLexicon (Ma et al., 2020) proposes a Chinese NER method that introduces lexicon information by simply adjusting the character representation layer. +- MECT (Wu et al., 2021) uses multi-metadata embedding in a two-stream transformer to integrate Chinese character features with radical-level embeddings. +- Biaffine (Yu et al., 2020) classifies text spans with a biaffine classifier over boundary representations. +- Locate and Label (Shen et al., 2021) proposes a two-stage identifier that first locates entities via boundary regression and then classifies them. +- W2NER (Li et al., 2022) models NER as word-word relation classification, including the next-neighboring-word and the tail-head-word relations. +- Triaffine (Yuan et al., 2022) proposes a triaffine mechanism to fuse information from inside tokens, boundaries, and labels for NER. +- Boundary Smooth (Zhu and Li, 2022) proposes boundary smoothing as a regularization technique for span-based neural NER models. +- DiffusionNER (Shen et al., 2023a) formulates NER as a boundary-denoising diffusion process, which samples noisy spans from a Gaussian distribution.
+- Seq2Seq (Straková et al., 2019) converts the labels of nested entities into a sequence and then uses a seq2seq model to decode entities. +- BartNER (Yan et al., 2021) formulates NER as an entity span sequence generation problem based on the pre-trained Seq2Seq model BART (Lewis et al., 2019). +- PIQN (Shen et al., 2022) sets up global and learnable instance queries to extract entities from a sentence in a parallel manner. +- PromptNER (Shen et al., 2023b) unifies entity locating and entity typing in prompt learning for NER, which predicts all entities by filling position slots and type slots. +
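The span scorer used by the Biaffine baseline above can be sketched generically as follows. This is a minimal NumPy illustration under assumed shapes, with random `h_start`/`h_end` standing in for learned boundary representations; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, C = 6, 8, 3  # sentence length, hidden size, number of entity types

# Boundary representations for each token (start view and end view).
h_start = rng.normal(size=(N, d))
h_end = rng.normal(size=(N, d))

# Biaffine parameters: one bilinear form per entity type, plus a linear
# term over the concatenated boundary pair and a per-type bias.
U = rng.normal(size=(C, d, d))
W = rng.normal(size=(C, 2 * d))
b = rng.normal(size=(C,))

# Score every candidate span (i, j) for every type c:
#   scores[i, j, c] = h_start[i]^T U[c] h_end[j] + W[c] [h_start[i]; h_end[j]] + b[c]
bilinear = np.einsum("id,cde,je->ijc", h_start, U, h_end)
pair = np.concatenate(
    [np.repeat(h_start[:, None, :], N, axis=1),
     np.repeat(h_end[None, :, :], N, axis=0)], axis=-1)
linear = pair @ W.T
scores = bilinear + linear + b  # shape (N, N, C)
```

Each cell `scores[i, j, c]` scores the span from token i to token j for entity type c; in practice only the upper-triangular cells (j >= i) correspond to valid candidate spans.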
| | CoNLL 2003 | OntoNotes 5 | ACE 2004 | ACE 2005 | GENIA | MSRA | Resume | Weibo |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Types | 4 | 18 | 7 | 7 | 5 | 3 | 8 | 4 |
| #Train.S | 17291 | 59924 | 6200 | 7194 | 16692 | 46471 | 3819 | 1350 |
| #Dev.S | - | 8528 | 745 | 969 | - | - | 463 | 270 |
| #Test.S | 3453 | 8262 | 812 | 1047 | 1854 | 4376 | 477 | 270 |
| Avg.Len.S | 14.38 | 18.11 | 22.61 | 18.97 | 25.41 | 45.54 | 31.17 | 54.57 |
| #Train.E | 29441 | 128738 | 22204 | 9389 | 50509 | 74703 | 13438 | 1855 |
| #Dev.E | - | 20354 | 2514 | 1112 | - | - | 1497 | 379 |
| #Test.E | 5648 | 12586 | 3035 | 1118 | 5506 | 6181 | 1630 | 409 |
| Avg.Len.E | 1.45 | 1.83 | 2.50 | 2.28 | 1.97 | 3.24 | 5.88 | 2.60 |
+ +Table 6: Dataset Statistics. "#" denotes the amount. "S." and "E." denote sentence and entity mentions, respectively. + +
| Parameter | Value |
| --- | --- |
| Epoch | [50, 80] |
| Batch size | [8, 16] |
| Learning rate (BERT) | [5e-6, 3e-5] |
| Learning rate (Other) | 1e-3 |
| LSTM hidden size d | 256 |
| LSTM dropout | 0.5 |
| Region embedding size d_e | 20 |
| Biaffine hidden size d_b | 150 |
| Biaffine dropout | 0.2 |
| Maximum offset value S | [1, 3] |
| Adam epsilon | 1e-8 |
| Warm factor | 0.1 |
+ +Table 7: Hyper-parameter settings. \ No newline at end of file diff --git a/2023/A Boundary Offset Prediction Network for Named Entity Recognition/images.zip b/2023/A Boundary Offset Prediction Network for Named Entity Recognition/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..a03fa056dd1f5fdba897dab0bb34afc8aa3e2066 --- /dev/null +++ b/2023/A Boundary Offset Prediction Network for Named Entity Recognition/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f88265836a3cad064790b75cd75c5b24415ec1dd2a86f8f3b4c25058afd6aec0 +size 740495 diff --git a/2023/A Boundary Offset Prediction Network for Named Entity Recognition/layout.json b/2023/A Boundary Offset Prediction Network for Named Entity Recognition/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..b302612f3815936c7f3e173e319b597ef5592f2d --- /dev/null +++ b/2023/A Boundary Offset Prediction Network for Named Entity Recognition/layout.json @@ -0,0 +1,10159 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 81, + 75, + 512, + 94 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 75, + 512, + 94 + ], + "spans": [ + { + "bbox": [ + 81, + 75, + 512, + 94 + ], + "type": "text", + "content": "A Boundary Offset Prediction Network for Named Entity Recognition" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 91, + 104, + 509, + 133 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 91, + 104, + 509, + 133 + ], + "spans": [ + { + "bbox": [ + 91, + 104, + 509, + 133 + ], + "type": "text", + "content": "Minghao Tang" + }, + { + "bbox": [ + 91, + 104, + 509, + 133 + ], + "type": "inline_equation", + "content": "^{1,2}" + }, + { + "bbox": [ + 91, + 104, + 509, + 133 + ], + "type": "text", + "content": ", Yongquan He" + }, + { + "bbox": [ + 91, + 104, + 509, + 133 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 91, + 104, + 509, + 133 + ], + 
"type": "text", + "content": ", Yongxiu Xu" + }, + { + "bbox": [ + 91, + 104, + 509, + 133 + ], + "type": "inline_equation", + "content": "^{1,2*}" + }, + { + "bbox": [ + 91, + 104, + 509, + 133 + ], + "type": "text", + "content": ", Hongbo Xu" + }, + { + "bbox": [ + 91, + 104, + 509, + 133 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 91, + 104, + 509, + 133 + ], + "type": "text", + "content": ", Wenyuan Zhang" + }, + { + "bbox": [ + 91, + 104, + 509, + 133 + ], + "type": "inline_equation", + "content": "^{1,2}" + }, + { + "bbox": [ + 91, + 104, + 509, + 133 + ], + "type": "text", + "content": " and Yang Lin" + }, + { + "bbox": [ + 91, + 104, + 509, + 133 + ], + "type": "inline_equation", + "content": "^{3}" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 176, + 134, + 420, + 147 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 176, + 134, + 420, + 147 + ], + "spans": [ + { + "bbox": [ + 176, + 134, + 420, + 147 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 176, + 134, + 420, + 147 + ], + "type": "text", + "content": "Institute of Information Engineering, CAS, China" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 198, + 148, + 398, + 161 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 198, + 148, + 398, + 161 + ], + "spans": [ + { + "bbox": [ + 198, + 148, + 398, + 161 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 198, + 148, + 398, + 161 + ], + "type": "text", + "content": "School of Cyber Security, UCAS, China" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 258, + 162, + 339, + 174 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 258, + 162, + 339, + 174 + ], + "spans": [ + { + "bbox": [ + 258, + 162, + 339, + 174 + ], + "type": "text", + "content": "3Meituan, China" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 111, + 176, + 485, + 190 + ], + "type": "text", + "angle": 0, + "lines": [ + { 
+ "bbox": [ + 111, + 176, + 485, + 190 + ], + "spans": [ + { + "bbox": [ + 111, + 176, + 485, + 190 + ], + "type": "text", + "content": "{tangminghao,xuyongxiu,hbxu}@ie.ac.cn, heyongquan@meituan.com" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 155, + 212, + 202, + 224 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 155, + 212, + 202, + 224 + ], + "spans": [ + { + "bbox": [ + 155, + 212, + 202, + 224 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 84, + 234, + 274, + 521 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 234, + 274, + 521 + ], + "spans": [ + { + "bbox": [ + 84, + 234, + 274, + 521 + ], + "type": "text", + "content": "Named entity recognition (NER) is a fundamental task in natural language processing that aims to identify and classify named entities in text. However, span-based methods for NER typically assign entity types to text spans, resulting in an imbalanced sample space and neglecting the connections between non-entity and entity spans. To address these issues, we propose a novel approach for NER, named the Boundary Offset Prediction Network (BOPN), which predicts the boundary offsets between candidate spans and their nearest entity spans. By leveraging the guiding semantics of boundary offsets, BOPN establishes connections between non-entity and entity spans, enabling non-entity spans to function as additional positive samples for entity detection. Furthermore, our method integrates entity type and span representations to generate type-aware boundary offsets instead of using entity types as detection targets. We conduct experiments on eight widely-used NER datasets, and the results demonstrate that our proposed BOPN outperforms previous state-of-the-art methods." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 68, + 531, + 154, + 543 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 531, + 154, + 543 + ], + "spans": [ + { + "bbox": [ + 68, + 531, + 154, + 543 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 67, + 552, + 290, + 673 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 552, + 290, + 673 + ], + "spans": [ + { + "bbox": [ + 67, + 552, + 290, + 673 + ], + "type": "text", + "content": "Named entity recognition (NER) is a fundamental task in natural language processing (NLP) that involves identifying and categorizing named entities in text, such as people, locations and organizations. It has drawn much attention from the community due to its relevance in various NLP applications, such as entity linking (Le and Titov, 2018; Hou et al., 2020) and relation extraction (Miwa and Bansal, 2016; Li et al., 2021b)." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 674, + 291, + 756 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 674, + 291, + 756 + ], + "spans": [ + { + "bbox": [ + 67, + 674, + 291, + 756 + ], + "type": "text", + "content": "Various paradigms have been proposed for NER, including the sequence labeling (Huang et al., 2015; Ju et al., 2018), hypergraph-based (Lu and Roth, 2015; Katiyar and Cardie, 2018; Wang and Lu, 2018), sequence-to-sequence (Gillick et al., 2016; Yan et al., 2021) and span-based methods (Sohrab" + } + ] + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 307, + 243, + 522, + 348 + ], + "blocks": [ + { + "bbox": [ + 312, + 219, + 514, + 240 + ], + "lines": [ + { + "bbox": [ + 312, + 219, + 514, + 240 + ], + "spans": [ + { + "bbox": [ + 312, + 219, + 514, + 240 + ], + "type": "text", + "content": "protein protein cell_type HMG box containing transcription factors in lymphocyte differentiation" + } + ] + } + ], + "index": 11, + "angle": 
0, + "type": "image_caption" + }, + { + "bbox": [ + 307, + 243, + 522, + 348 + ], + "lines": [ + { + "bbox": [ + 307, + 243, + 522, + 348 + ], + "spans": [ + { + "bbox": [ + 307, + 243, + 522, + 348 + ], + "type": "image", + "image_path": "afdc6e2991a1a66df20f5b60549e17670764767519c981aef32ce0850c1a9658.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 356, + 527, + 429 + ], + "lines": [ + { + "bbox": [ + 302, + 356, + 527, + 429 + ], + "spans": [ + { + "bbox": [ + 302, + 356, + 527, + 429 + ], + "type": "text", + "content": "Figure 1: A sentence from GENIA dataset (Ohta et al., 2002), containing 8 words and 3 entities. The candidate spans covers the upper triangular region with a total of 36 samples of each matrix. There are 2 and 1 positive samples for \"protein\" and \"cell type\" entity types, respectively." + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 458, + 525, + 552 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 458, + 525, + 552 + ], + "spans": [ + { + "bbox": [ + 302, + 458, + 525, + 552 + ], + "type": "text", + "content": "and Miwa, 2018; Shen et al., 2021; Chen et al., 2021). Among these approaches, the span-based method has become the most popular due to its simplicity and effectiveness. It is straightforward that typically embeds all possible text spans and predicts their entity types, making it suitable for various NER subtasks (Li et al., 2021a, 2022)." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 557, + 526, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 557, + 526, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 557, + 526, + 772 + ], + "type": "text", + "content": "Despite significant progress made by span-based methods in NER, there remain two critical issues that require attention. 
Firstly, these methods often suffer from highly imbalanced sample spaces, as exemplified in Figure 1. Such imbalance can negatively impact the trainability and performance of deep neural networks (Johnson and Khoshgoftaar, 2019). Although some methods (Shen et al., 2021; Wan et al., 2022) mitigate this issue by restricting the maximum span length, such an approach can also constrain the model's predictive power. Secondly, current span-based methods primarily focus on learning the distinction between non-entities and entities, disregarding their relationships. While a model can identify whether \"HMG box\" is an entity, it may fail to recognize the connection be" + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 84, + 761, + 236, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 761, + 236, + 772 + ], + "spans": [ + { + "bbox": [ + 84, + 761, + 236, + 772 + ], + "type": "text", + "content": "* Yongxiu Xu is the corresponding author" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 283, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 283, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 283, + 780, + 312, + 791 + ], + "type": "text", + "content": "14834" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 124, + 795, + 468, + 818 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 124, + 795, + 468, + 818 + ], + "spans": [ + { + "bbox": [ + 124, + 795, + 468, + 818 + ], + "type": "text", + "content": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 14834-14846 December 6-10, 2023 ©2023 Association for Computational Linguistics" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 73, + 68, + 287, + 160 + ], + "blocks": [ + { + "bbox": [ + 73, + 68, + 287, + 160 + ], + "lines": [ + { + 
"bbox": [ + 73, + 68, + 287, + 160 + ], + "spans": [ + { + "bbox": [ + 73, + 68, + 287, + 160 + ], + "type": "image", + "image_path": "e231c942f392d6606bda85462443b42c741df015dc8fca7920e84c9d9c0c8541.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 169, + 291, + 206 + ], + "lines": [ + { + "bbox": [ + 67, + 169, + 291, + 206 + ], + "spans": [ + { + "bbox": [ + 67, + 169, + 291, + 206 + ], + "type": "text", + "content": "Figure 2: Text spans annotated with boundary offset. \"1S\" or \"1E\" represents a span has 1 offset from its nearest entity at the start or end boundary, and so on." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 228, + 290, + 282 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 228, + 290, + 282 + ], + "spans": [ + { + "bbox": [ + 67, + 228, + 290, + 282 + ], + "type": "text", + "content": "tween \"HMG\" and \"HMG box.\" To enhance the model's ability to recognize entities, it is crucial to explicitly capture both boundary differences and connections between non-entities and entities." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 284, + 291, + 445 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 284, + 291, + 445 + ], + "spans": [ + { + "bbox": [ + 67, + 284, + 291, + 445 + ], + "type": "text", + "content": "In this paper, we intend to model text spans by utilizing boundary offset information as supervision, rather than predict their probability of belonging to entities. As shown in Figure 2, there could be two advantages for deep models when boundary offsets are learnable: i) The natural quantitative relationships between offset values enable the model to capture boundary differences and connections simultaneously. 
ii) Non-entity spans can have specific semantics that guide the positioning of entity spans, leading to an improved sample space with fewer negative samples." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 448, + 291, + 689 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 448, + 291, + 689 + ], + "spans": [ + { + "bbox": [ + 67, + 448, + 291, + 689 + ], + "type": "text", + "content": "Based on this observation, we propose the Boundary Offset Prediction Network (BOPN) for NER. BOPN focuses on predicting boundary offsets between candidate spans and their nearest entities, providing a new perspective on modeling text spans. Specifically, our method follows the pipeline of first learning span representations and then classifying them for offset prediction. BERT (Devlin et al., 2019) and BiLSTM (Lample et al., 2016) are used to embed texts, followed by a Conditional Layer (Liu et al., 2021) for building span representations. Meanwhile, we also treat entity types as inputs rather than classification targets, which are fused with span representations to generate type-aware boundary offsets in parallel. Finally, we incorporate multiple 3D convolution layers to capture the natural quantitative relationships between the offset values." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 692, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 692, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 692, + 291, + 772 + ], + "type": "text", + "content": "We evaluate our method on eight widely-used NER datasets, including five English NER datasets and three Chinese NER datasets. The experimental results demonstrate that our approach outperforms the existing state-of-the-art methods. 
Furthermore, a detailed examination reveals a significant im" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 71, + 525, + 111 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 525, + 111 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 525, + 111 + ], + "type": "text", + "content": "provement in recall scores when aggregating results across offset labels, which is particularly beneficial for recall-sensitive applications." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 121, + 422, + 134 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 121, + 422, + 134 + ], + "spans": [ + { + "bbox": [ + 302, + 121, + 422, + 134 + ], + "type": "text", + "content": "2 Problem Definition" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 143, + 526, + 211 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 143, + 526, + 211 + ], + "spans": [ + { + "bbox": [ + 302, + 143, + 526, + 211 + ], + "type": "text", + "content": "Named Entity Recognition (NER) aims to identify of all entities within an input sentence " + }, + { + "bbox": [ + 302, + 143, + 526, + 211 + ], + "type": "inline_equation", + "content": "\\mathrm{X} = \\{x_{n}\\}_{n = 1}^{N}" + }, + { + "bbox": [ + 302, + 143, + 526, + 211 + ], + "type": "text", + "content": ", based on a pre-defined set of entity types " + }, + { + "bbox": [ + 302, + 143, + 526, + 211 + ], + "type": "inline_equation", + "content": "\\mathrm{Y} = \\{y_{m}\\}_{m = 1}^{M}" + }, + { + "bbox": [ + 302, + 143, + 526, + 211 + ], + "type": "text", + "content": ". Typically, an entity is specified by token boundaries and a entity types." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 211, + 525, + 320 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 211, + 525, + 320 + ], + "spans": [ + { + "bbox": [ + 302, + 211, + 525, + 320 + ], + "type": "text", + "content": "Our proposed method focuses on predicting the boundary offset between each candidate text span and its nearest entity. Hence, we formulate each text span as a quadruple: " + }, + { + "bbox": [ + 302, + 211, + 525, + 320 + ], + "type": "inline_equation", + "content": "\\{x_{i}, x_{j}, f_{s}, y_{m}\\}" + }, + { + "bbox": [ + 302, + 211, + 525, + 320 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 302, + 211, + 525, + 320 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 302, + 211, + 525, + 320 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 302, + 211, + 525, + 320 + ], + "type": "inline_equation", + "content": "j" + }, + { + "bbox": [ + 302, + 211, + 525, + 320 + ], + "type": "text", + "content": " denote the start and end boundary indices of the span, " + }, + { + "bbox": [ + 302, + 211, + 525, + 320 + ], + "type": "inline_equation", + "content": "f_{s}" + }, + { + "bbox": [ + 302, + 211, + 525, + 320 + ], + "type": "text", + "content": " represents the start or end boundary offset from its nearest entity of type " + }, + { + "bbox": [ + 302, + 211, + 525, + 320 + ], + "type": "inline_equation", + "content": "y_{m}" + }, + { + "bbox": [ + 302, + 211, + 525, + 320 + ], + "type": "text", + "content": ". Note that an entity span is a special case with " + }, + { + "bbox": [ + 302, + 211, + 525, + 320 + ], + "type": "inline_equation", + "content": "f_{s} = 0" + }, + { + "bbox": [ + 302, + 211, + 525, + 320 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 327, + 525, + 366 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 327, + 525, + 366 + ], + "spans": [ + { + "bbox": [ + 302, + 327, + 525, + 366 + ], + "type": "text", + "content": "Annotation Guidelines To facilitate understanding, we present the essential boundary offset labels as follow:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 316, + 379, + 524, + 544 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 316, + 379, + 523, + 404 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 379, + 523, + 404 + ], + "spans": [ + { + "bbox": [ + 316, + 379, + 523, + 404 + ], + "type": "text", + "content": "- Center Span: refers to an entity span with an offset label of \"0\"." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 316, + 415, + 524, + 481 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 415, + 524, + 481 + ], + "spans": [ + { + "bbox": [ + 316, + 415, + 524, + 481 + ], + "type": "text", + "content": "- " + }, + { + "bbox": [ + 316, + 415, + 524, + 481 + ], + "type": "inline_equation", + "content": "\\mathbf{\\nabla}^{*}\\mathbf{S}" + }, + { + "bbox": [ + 316, + 415, + 524, + 481 + ], + "type": "text", + "content": " or " + }, + { + "bbox": [ + 316, + 415, + 524, + 481 + ], + "type": "inline_equation", + "content": "\\mathbf{\\nabla}^{*}\\mathbf{E}" + }, + { + "bbox": [ + 316, + 415, + 524, + 481 + ], + "type": "text", + "content": ": denotes the annotation of the start or end boundary offsets for non-entity spans. 
\" " + }, + { + "bbox": [ + 316, + 415, + 524, + 481 + ], + "type": "inline_equation", + "content": "\\ast" + }, + { + "bbox": [ + 316, + 415, + 524, + 481 + ], + "type": "text", + "content": " \" represents an offset value in the range of " + }, + { + "bbox": [ + 316, + 415, + 524, + 481 + ], + "type": "inline_equation", + "content": "[-S, \\dots, -1, 1, \\dots, S]" + }, + { + "bbox": [ + 316, + 415, + 524, + 481 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 316, + 415, + 524, + 481 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 316, + 415, + 524, + 481 + ], + "type": "text", + "content": " denotes the maximum offset value." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 316, + 492, + 524, + 544 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 492, + 524, + 544 + ], + "spans": [ + { + "bbox": [ + 316, + 492, + 524, + 544 + ], + "type": "text", + "content": "- Out-of-Range: refers to the annotation of a non-entity span with an absolute boundary offset value from its nearest entity exceeding the maximum offset value " + }, + { + "bbox": [ + 316, + 492, + 524, + 544 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 316, + 492, + 524, + 544 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 302, + 556, + 526, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 556, + 526, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 556, + 526, + 772 + ], + "type": "text", + "content": "The annotation procedure for boundary offsets involves three steps. 
Initially, a 3-dimensional matrix " + }, + { + "bbox": [ + 302, + 556, + 526, + 772 + ], + "type": "inline_equation", + "content": "\\mathcal{O} \\in \\mathbb{R}^{M \\times N \\times N}" + }, + { + "bbox": [ + 302, + 556, + 526, + 772 + ], + "type": "text", + "content": " is constructed according to the input sentence " + }, + { + "bbox": [ + 302, + 556, + 526, + 772 + ], + "type": "inline_equation", + "content": "X" + }, + { + "bbox": [ + 302, + 556, + 526, + 772 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 302, + 556, + 526, + 772 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 302, + 556, + 526, + 772 + ], + "type": "text", + "content": " denotes the number of entity types and " + }, + { + "bbox": [ + 302, + 556, + 526, + 772 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 302, + 556, + 526, + 772 + ], + "type": "text", + "content": " represents the length of the sentence. Next, we annotate the center spans with the offset label \"0\" based on the golden entities present in " + }, + { + "bbox": [ + 302, + 556, + 526, + 772 + ], + "type": "inline_equation", + "content": "X" + }, + { + "bbox": [ + 302, + 556, + 526, + 772 + ], + "type": "text", + "content": ". Entities of different types are assigned to their respective sub-matrices. Finally, for non-entity spans, we compute the start and end boundary offset values with respect to all center spans. Their annotation is determined by the absolute minimum offset value. If the absolute minimum offset value does not exceed " + }, + { + "bbox": [ + 302, + 556, + 526, + 772 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 302, + 556, + 526, + 772 + ], + "type": "text", + "content": ", we annotate the corresponding *S or *E; otherwise, we label the span as \"Out-of-Range\"." 
+ } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "14835" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 69, + 81, + 524, + 273 + ], + "blocks": [ + { + "bbox": [ + 105, + 71, + 164, + 80 + ], + "lines": [ + { + "bbox": [ + 105, + 71, + 164, + 80 + ], + "spans": [ + { + "bbox": [ + 105, + 71, + 164, + 80 + ], + "type": "text", + "content": "(a) Span Encoder" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 69, + 81, + 524, + 273 + ], + "lines": [ + { + "bbox": [ + 69, + 81, + 524, + 273 + ], + "spans": [ + { + "bbox": [ + 69, + 81, + 524, + 273 + ], + "type": "image", + "image_path": "c3d5830016737eb1c91b120afd7c7b2898f072f40fbc8081c3a749cb57e2a4a5.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 280, + 524, + 305 + ], + "lines": [ + { + "bbox": [ + 67, + 280, + 524, + 305 + ], + "spans": [ + { + "bbox": [ + 67, + 280, + 524, + 305 + ], + "type": "text", + "content": "Figure 3: An overview architecture of our method, which mainly consists of two components: a Span Encoder and a Boundary Offset Predictor." 
+ } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 68, + 326, + 133, + 338 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 326, + 133, + 338 + ], + "spans": [ + { + "bbox": [ + 68, + 326, + 133, + 338 + ], + "type": "text", + "content": "3 Methods" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 353, + 291, + 488 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 353, + 291, + 488 + ], + "spans": [ + { + "bbox": [ + 67, + 353, + 291, + 488 + ], + "type": "text", + "content": "Figure 3 provides an overview of our method, which encompasses two primary components: a Span Encoder (Section 3.1) and a Boundary Offset Predictor (Section 3.2). The Span Encoder is responsible for encoding entity types and sentences, utilizing word representations to construct span representations. Subsequently, the entity type and span representations are inputted into the boundary offset predictor, facilitating type-aware offset classification." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 505, + 162, + 518 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 505, + 162, + 518 + ], + "spans": [ + { + "bbox": [ + 67, + 505, + 162, + 518 + ], + "type": "text", + "content": "3.1 Span Encoder" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 527, + 290, + 594 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 527, + 290, + 594 + ], + "spans": [ + { + "bbox": [ + 67, + 527, + 290, + 594 + ], + "type": "text", + "content": "Drawing inspiration from the prompt-based methods (Qin and Eisner, 2021; Han et al., 2022), we consider entity types as task-oriented inputs, indicating the specific types of entities that the model needs to predict within a given sentence." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "text", + "content": "To achieve this, we create a set of additional type tokens, denoted as " + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "inline_equation", + "content": "\\mathrm{P} = \\{p_m\\}_{m=1}^M" + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "inline_equation", + "content": "p_m" + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "text", + "content": " represents a learnable special token corresponding to entity type " + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "inline_equation", + "content": "y_m" + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "text", + "content": ". Next, we concatenate the soft tokens " + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "inline_equation", + "content": "\\mathrm{P}" + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "text", + "content": " with the sentence " + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "inline_equation", + "content": "\\mathrm{X}" + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "text", + "content": " to form a single sequence, and employ BERT (Devlin et al., 2019) to encode them simultaneously. 
The output of BERT is then passed through a BiLSTM (Lample et al., 2016) to generate final embedding features " + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "inline_equation", + "content": "\\mathrm{H} = \\{h_1, h_2, \\dots, h_{M+N}\\} \\in \\mathbb{R}^{(M+N) \\times d}" + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "inline_equation", + "content": "d" + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "text", + "content": " is the hidden size. Finally, we split " + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "inline_equation", + "content": "\\mathrm{H}" + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "text", + "content": " to obtain entity type representations " + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "inline_equation", + "content": "\\mathrm{H}^Y \\in \\mathbb{R}^{M \\times d}" + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "text", + "content": " and token representations " + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "inline_equation", + "content": "\\mathrm{H}^X \\in \\mathbb{R}^{N \\times d}" + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "text", + "content": ", respectively." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 326, + 526, + 422 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 326, + 526, + 422 + ], + "spans": [ + { + "bbox": [ + 302, + 326, + 526, + 422 + ], + "type": "text", + "content": "Span Representation Given the token representations " + }, + { + "bbox": [ + 302, + 326, + 526, + 422 + ], + "type": "inline_equation", + "content": "\\mathrm{H}^X = \\{h_1, h_2, \\dots, h_N\\}" + }, + { + "bbox": [ + 302, + 326, + 526, + 422 + ], + "type": "text", + "content": ", the span representation " + }, + { + "bbox": [ + 302, + 326, + 526, + 422 + ], + "type": "inline_equation", + "content": "v_{ij}" + }, + { + "bbox": [ + 302, + 326, + 526, + 422 + ], + "type": "text", + "content": " can be considered as a fusion of the boundary representations " + }, + { + "bbox": [ + 302, + 326, + 526, + 422 + ], + "type": "inline_equation", + "content": "(h_i, h_j)" + }, + { + "bbox": [ + 302, + 326, + 526, + 422 + ], + "type": "text", + "content": ". Following Li et al. 
(2022), we adopt the Conditional Layer Normalization (CLN) (Liu et al., 2021) mechanism to build a high-quality span representation:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 351, + 432, + 525, + 463 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 351, + 432, + 525, + 463 + ], + "spans": [ + { + "bbox": [ + 351, + 432, + 525, + 463 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} v _ {i j} = \\operatorname {C L N} \\left(h _ {i}, h _ {j}\\right) \\tag {1} \\\\ = \\gamma_ {j} \\otimes \\operatorname {N o r m} \\left(h _ {i}\\right) + \\lambda_ {j}, \\\\ \\end{array}", + "image_path": "83a33c749c36cbc1d85f5b20d7fd857029de0a4a7f3a72391bd0022ff6bd14d7.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 473, + 526, + 542 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 473, + 526, + 542 + ], + "spans": [ + { + "bbox": [ + 302, + 473, + 526, + 542 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 302, + 473, + 526, + 542 + ], + "type": "inline_equation", + "content": "\\mathrm{Norm}(\\cdot)" + }, + { + "bbox": [ + 302, + 473, + 526, + 542 + ], + "type": "text", + "content": " is the instance normalization function (Ulyanov et al., 2016), " + }, + { + "bbox": [ + 302, + 473, + 526, + 542 + ], + "type": "inline_equation", + "content": "\\gamma_{j}" + }, + { + "bbox": [ + 302, + 473, + 526, + 542 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 302, + 473, + 526, + 542 + ], + "type": "inline_equation", + "content": "\\lambda_{j}" + }, + { + "bbox": [ + 302, + 473, + 526, + 542 + ], + "type": "text", + "content": " are the condition parameters that are obtained by two different feedforward networks: " + }, + { + "bbox": [ + 302, + 473, + 526, + 542 + ], + "type": "inline_equation", + "content": "\\gamma_{j} = \\mathrm{FFN}(h_{j})" + }, + { + "bbox": [ + 302, + 473, + 526, + 542 + ], + "type": "text", + "content": " and " + }, + { + 
"bbox": [ + 302, + 473, + 526, + 542 + ], + "type": "inline_equation", + "content": "\\lambda_{j} = \\mathrm{FFN}(h_{j})" + }, + { + "bbox": [ + 302, + 473, + 526, + 542 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 542, + 525, + 636 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 542, + 525, + 636 + ], + "spans": [ + { + "bbox": [ + 302, + 542, + 525, + 636 + ], + "type": "text", + "content": "While valid candidate spans are restricted to the upper triangular region of the adjacent text span matrix, a region embedding " + }, + { + "bbox": [ + 302, + 542, + 525, + 636 + ], + "type": "inline_equation", + "content": "\\mathrm{E} = [e_{up}, e_{low}] \\in \\mathbb{R}^{2 \\times d_e}" + }, + { + "bbox": [ + 302, + 542, + 525, + 636 + ], + "type": "text", + "content": " is adopted to distinguish the positions of text spans. The final representation of each span is obtained as: " + }, + { + "bbox": [ + 302, + 542, + 525, + 636 + ], + "type": "inline_equation", + "content": "\\hat{v}_{ij} = [v_{ij}, e_{up}]" + }, + { + "bbox": [ + 302, + 542, + 525, + 636 + ], + "type": "text", + "content": " if " + }, + { + "bbox": [ + 302, + 542, + 525, + 636 + ], + "type": "inline_equation", + "content": "i \\leq j" + }, + { + "bbox": [ + 302, + 542, + 525, + 636 + ], + "type": "text", + "content": "; " + }, + { + "bbox": [ + 302, + 542, + 525, + 636 + ], + "type": "inline_equation", + "content": "\\hat{v}_{ij} = [v_{ij}, e_{low}]" + }, + { + "bbox": [ + 302, + 542, + 525, + 636 + ], + "type": "text", + "content": " if " + }, + { + "bbox": [ + 302, + 542, + 525, + 636 + ], + "type": "inline_equation", + "content": "i > j" + }, + { + "bbox": [ + 302, + 542, + 525, + 636 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 646, + 456, + 660 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 646, + 456, + 660 + ], + "spans": [ + { + "bbox": [ + 302, + 646, + 456, + 660 + ], + "type": "text", + "content": "3.2 Boundary Offset Predictor" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 665, + 526, + 719 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 665, + 526, + 719 + ], + "spans": [ + { + "bbox": [ + 302, + 665, + 526, + 719 + ], + "type": "text", + "content": "As previously mentioned, we utilize the entity types as inputs to guide the model in generating type-aware boundary offsets, rather than categorizing each text span into a particular entity type." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 719, + 525, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 719, + 525, + 773 + ], + "spans": [ + { + "bbox": [ + 302, + 719, + 525, + 773 + ], + "type": "text", + "content": "The biaffine classifier (Yu et al., 2020) is employed to fuse entity type representations and span representations. 
Specifically, given an entity type representation " + }, + { + "bbox": [ + 302, + 719, + 525, + 773 + ], + "type": "inline_equation", + "content": "h_m \\in \\mathbf{H}^Y" + }, + { + "bbox": [ + 302, + 719, + 525, + 773 + ], + "type": "text", + "content": " and span representation" + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "14836" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 70, + 291, + 98 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 70, + 291, + 98 + ], + "spans": [ + { + "bbox": [ + 67, + 70, + 291, + 98 + ], + "type": "inline_equation", + "content": "\\hat{v}_{ij}\\in \\widehat{\\mathbf{V}}" + }, + { + "bbox": [ + 67, + 70, + 291, + 98 + ], + "type": "text", + "content": " , a scoring vector " + }, + { + "bbox": [ + 67, + 70, + 291, + 98 + ], + "type": "inline_equation", + "content": "c_{mij}\\in \\mathbb{R}^L" + }, + { + "bbox": [ + 67, + 70, + 291, + 98 + ], + "type": "text", + "content": " can be computed as:" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 101, + 106, + 290, + 124 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 101, + 106, + 290, + 124 + ], + "spans": [ + { + "bbox": [ + 101, + 106, + 290, + 124 + ], + "type": "interline_equation", + "content": "h _ {y} ^ {\\prime} = \\operatorname {F F N} \\left(h _ {y}\\right), \\quad \\hat {v} _ {i j} ^ {\\prime} = \\operatorname {F F N} \\left(\\hat {v} _ {i j}\\right), \\tag {2}", + "image_path": "45d346993d8f3b0d447ede7b5842463c3f4fb247b876f0cae50ebced19ad4597.jpg" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 82, + 132, + 290, + 151 + ], + "type": "interline_equation", + 
"angle": 0, + "lines": [ + { + "bbox": [ + 82, + 132, + 290, + 151 + ], + "spans": [ + { + "bbox": [ + 82, + 132, + 290, + 151 + ], + "type": "interline_equation", + "content": "c _ {m i j} = \\left(h _ {m} ^ {\\prime}\\right) ^ {T} U \\hat {v} _ {i j} ^ {\\prime} + W \\left(h _ {m} ^ {\\prime} \\oplus \\hat {v} _ {i j} ^ {\\prime}\\right) + b, \\tag {3}", + "image_path": "e6ba82cc0c253e8dc2be13169895658e84ea103899a215e452f7b1a965ae3393.jpg" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 155, + 291, + 196 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 155, + 291, + 196 + ], + "spans": [ + { + "bbox": [ + 67, + 155, + 291, + 196 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 67, + 155, + 291, + 196 + ], + "type": "inline_equation", + "content": "L" + }, + { + "bbox": [ + 67, + 155, + 291, + 196 + ], + "type": "text", + "content": " is the number of offset labels" + }, + { + "bbox": [ + 67, + 155, + 291, + 196 + ], + "type": "inline_equation", + "content": "^1" + }, + { + "bbox": [ + 67, + 155, + 291, + 196 + ], + "type": "text", + "content": "; " + }, + { + "bbox": [ + 67, + 155, + 291, + 196 + ], + "type": "inline_equation", + "content": "U \\in \\mathbb{R}^{L \\times d_b \\times d_b}" + }, + { + "bbox": [ + 67, + 155, + 291, + 196 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 67, + 155, + 291, + 196 + ], + "type": "inline_equation", + "content": "W \\in \\mathbb{R}^{L \\times 2d_b}" + }, + { + "bbox": [ + 67, + 155, + 291, + 196 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 67, + 155, + 291, + 196 + ], + "type": "inline_equation", + "content": "b \\in \\mathbb{R}^L" + }, + { + "bbox": [ + 67, + 155, + 291, + 196 + ], + "type": "text", + "content": " are learnable parameters, and " + }, + { + "bbox": [ + 67, + 155, + 291, + 196 + ], + "type": "inline_equation", + "content": "d_b" + }, + { + "bbox": [ + 67, + 155, + 291, + 196 + ], + "type": "text", + "content": " is the biaffine 
hidden size." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 203, + 291, + 338 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 203, + 291, + 338 + ], + "spans": [ + { + "bbox": [ + 67, + 203, + 291, + 338 + ], + "type": "text", + "content": "3D Convolution Layer Furthermore, we utilize multiple 3-dimensional convolution (3DConv) layers to capture the inherent quantitative relationships between the boundary offsets of adjacent text spans. As depicted in Figure 3(b), the 3D convolution kernels traverse the complete score matrix " + }, + { + "bbox": [ + 67, + 203, + 291, + 338 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 67, + 203, + 291, + 338 + ], + "type": "text", + "content": " in three directions, thereby aggregating offset predictions for adjacent text spans across all entity types. The computation in a single convolution layer can be expressed as:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 349, + 290, + 364 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 349, + 290, + 364 + ], + "spans": [ + { + "bbox": [ + 130, + 349, + 290, + 364 + ], + "type": "interline_equation", + "content": "\\mathrm {Q} = \\sigma (\\mathrm {3 D C o n v} (\\mathrm {C})), \\tag {4}", + "image_path": "16489be7753f8369eb78307652d3f26ff206e828ca07a22815174a6d67717a60.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 372, + 290, + 442 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 372, + 290, + 442 + ], + "spans": [ + { + "bbox": [ + 67, + 372, + 290, + 442 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 67, + 372, + 290, + 442 + ], + "type": "inline_equation", + "content": "\\mathbf{Q} \\in \\mathbb{R}^{M \\times N \\times N \\times L}" + }, + { + "bbox": [ + 67, + 372, + 290, + 442 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 67, + 372, + 290, + 442 + ], + "type": "inline_equation", + 
"content": "\\sigma" + }, + { + "bbox": [ + 67, + 372, + 290, + 442 + ], + "type": "text", + "content": " is the GELU activation function (Hendrycks and Gimpel, 2016). We assign different dilation rates to each convolution layer, and then concatenate their outputs followed by a linear layer to calculate the final prediction scores:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 113, + 450, + 290, + 467 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 113, + 450, + 290, + 467 + ], + "spans": [ + { + "bbox": [ + 113, + 450, + 290, + 467 + ], + "type": "interline_equation", + "content": "\\hat {\\mathrm {Q}} = \\operatorname {L i n e a r} \\left(\\mathrm {Q} _ {1} \\oplus \\mathrm {Q} _ {2} \\oplus \\mathrm {Q} _ {3}\\right), \\tag {5}", + "image_path": "e22416ec14ba85061c71c86d8773eb01cbe291bc21f59a4f4bc4f4a067852a37.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 476, + 290, + 516 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 476, + 290, + 516 + ], + "spans": [ + { + "bbox": [ + 67, + 476, + 290, + 516 + ], + "type": "text", + "content": "To obtain the probability distribution of span " + }, + { + "bbox": [ + 67, + 476, + 290, + 516 + ], + "type": "inline_equation", + "content": "(i,j)" + }, + { + "bbox": [ + 67, + 476, + 290, + 516 + ], + "type": "text", + "content": " over the offset labels, " + }, + { + "bbox": [ + 67, + 476, + 290, + 516 + ], + "type": "inline_equation", + "content": "\\hat{q}_{mij} \\in \\hat{\\mathbf{Q}}" + }, + { + "bbox": [ + 67, + 476, + 290, + 516 + ], + "type": "text", + "content": " is fed into a softmax layer:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 126, + 528, + 290, + 542 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 126, + 528, + 290, + 542 + ], + "spans": [ + { + "bbox": [ + 126, + 528, + 290, + 542 + ], + "type": "interline_equation", + "content": "\\hat {o} _ {m i j} = \\operatorname {s o f t m a x} 
\\left(\\hat {q} _ {m i j}\\right), \\tag {6}", + "image_path": "2965f15ed2fc3c3480b6e1b51c77061b4ea6b15f1bee22071dbff64f629d2413.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 551, + 203, + 565 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 551, + 203, + 565 + ], + "spans": [ + { + "bbox": [ + 67, + 551, + 203, + 565 + ], + "type": "text", + "content": "3.3 Training and Inference" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 67, + 569, + 290, + 636 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 569, + 290, + 636 + ], + "spans": [ + { + "bbox": [ + 67, + 569, + 290, + 636 + ], + "type": "text", + "content": "Learning Objective In our method, the learning objective is to accurately assign a boundary offset to each text span, which can be treated as a multiclass classification problem and optimized using cross-entropy loss:" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 83, + 645, + 290, + 683 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 645, + 290, + 683 + ], + "spans": [ + { + "bbox": [ + 83, + 645, + 290, + 683 + ], + "type": "interline_equation", + "content": "\\mathcal {L} = - \\frac {1}{M N ^ {2}} \\sum_ {m} ^ {M} \\sum_ {i} ^ {N} \\sum_ {j} ^ {N} o _ {m i j} ^ {T} \\log \\left(\\hat {o} _ {m i j}\\right) \\tag {7}", + "image_path": "fa7555b4527de553fa3db0858644e7dc456ec1b137e81b2278c0052d1ec224c1.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 67, + 693, + 291, + 734 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 693, + 291, + 734 + ], + "spans": [ + { + "bbox": [ + 67, + 693, + 291, + 734 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 67, + 693, + 291, + 734 + ], + "type": "inline_equation", + "content": "o_{mij} \\in \\mathbb{R}^L" + }, + { + "bbox": [ + 67, + 693, + 291, + 734 + ], + "type": "text", + "content": " represents the ground truth, which is a one-hot vector 
encoded from the annotated adjacent text span matrix " + }, + { + "bbox": [ + 67, + 693, + 291, + 734 + ], + "type": "inline_equation", + "content": "\\mathcal{O}" + }, + { + "bbox": [ + 67, + 693, + 291, + 734 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 71, + 526, + 233 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 233 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 233 + ], + "type": "text", + "content": "Inference with Boundary offsets During the inference process, decoding entities based on predicted boundary offsets is a straightforward procedure. The output of our method is a matrix of size " + }, + { + "bbox": [ + 302, + 71, + 526, + 233 + ], + "type": "inline_equation", + "content": "M \\times N \\times N" + }, + { + "bbox": [ + 302, + 71, + 526, + 233 + ], + "type": "text", + "content": ", where each cell represents a potential entity and contains information about its boundaries and type. For example, a cell with coordinates " + }, + { + "bbox": [ + 302, + 71, + 526, + 233 + ], + "type": "inline_equation", + "content": "(m, i, j)" + }, + { + "bbox": [ + 302, + 71, + 526, + 233 + ], + "type": "text", + "content": " and the prediction \"-1E\" indicates an entity of type " + }, + { + "bbox": [ + 302, + 71, + 526, + 233 + ], + "type": "inline_equation", + "content": "y_{m}" + }, + { + "bbox": [ + 302, + 71, + 526, + 233 + ], + "type": "text", + "content": " with a start boundary at " + }, + { + "bbox": [ + 302, + 71, + 526, + 233 + ], + "type": "inline_equation", + "content": "x_{i}" + }, + { + "bbox": [ + 302, + 71, + 526, + 233 + ], + "type": "text", + "content": " and an end boundary at " + }, + { + "bbox": [ + 302, + 71, + 526, + 233 + ], + "type": "inline_equation", + "content": "x_{j+1}" + }, + { + "bbox": [ + 302, + 71, + 526, + 233 + ], + "type": "text", + "content": ". 
Conversely, if the predicted value is \"out-of-range,\" it implies that the cell does not correspond to any entity." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 235, + 526, + 356 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 235, + 526, + 356 + ], + "spans": [ + { + "bbox": [ + 302, + 235, + 526, + 356 + ], + "type": "text", + "content": "However, blindly accepting all predicted boundary offsets may result in sub-optimal outcomes as it disregards the quantitative relationship between boundary offsets. Therefore, we introduce two heuristic rules to identify unreasonable predictions: i) Predicted boundary offsets that do not align with their nearest center span. ii) Predicted boundary offsets that do not adhere to a sequential order with neighboring spans." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 370, + 437, + 384 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 370, + 437, + 384 + ], + "spans": [ + { + "bbox": [ + 302, + 370, + 437, + 384 + ], + "type": "text", + "content": "4 Experimental Settings" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 302, + 393, + 371, + 405 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 393, + 371, + 405 + ], + "spans": [ + { + "bbox": [ + 302, + 393, + 371, + 405 + ], + "type": "text", + "content": "4.1 Datasets" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 302, + 412, + 525, + 533 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 412, + 525, + 533 + ], + "spans": [ + { + "bbox": [ + 302, + 412, + 525, + 533 + ], + "type": "text", + "content": "To evaluate our method, we conducted experiments on five English NER datasets, including CoNLL 2003 (Sang and De Meulder, 2003), OntoNotes " + }, + { + "bbox": [ + 302, + 412, + 525, + 533 + ], + "type": "inline_equation", + "content": "5^{2}" + }, + { + "bbox": [ + 302, + 412, + 525, + 533 + ], + "type": "text", + "content": ", ACE " 
+ }, + { + "bbox": [ + 302, + 412, + 525, + 533 + ], + "type": "inline_equation", + "content": "2004^{3}" + }, + { + "bbox": [ + 302, + 412, + 525, + 533 + ], + "type": "text", + "content": ", ACE " + }, + { + "bbox": [ + 302, + 412, + 525, + 533 + ], + "type": "inline_equation", + "content": "2005^{4}" + }, + { + "bbox": [ + 302, + 412, + 525, + 533 + ], + "type": "text", + "content": " and GENIA (Ohta et al., 2002); and three Chinese NER datasets, including MSRA (Levow, 2006), Resume NER (Zhang and Yang, 2018) and Weibo NER (Peng and Dredze, 2015). Note that ACE 2004, ACE 2005 and GENIA are nested NER datasets, others are flat datasets." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 302, + 535, + 525, + 643 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 535, + 525, + 643 + ], + "spans": [ + { + "bbox": [ + 302, + 535, + 525, + 643 + ], + "type": "text", + "content": "For OntoNotes 5, we take the same train/dev/test as used in CoNLL 2012 shared task (Pradhan et al., 2012). For ACE 2004 and ACE 2005, we use the same data split as Lu and Roth (2015). For GENIA, we follow Katiyar and Cardie (2018) to split train/test as 9:1. For other datasets, we employ the same settings in previous works (Ma et al., 2020; Yan et al., 2021; Zhu and Li, 2022)." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 302, + 655, + 441, + 668 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 655, + 441, + 668 + ], + "spans": [ + { + "bbox": [ + 302, + 655, + 441, + 668 + ], + "type": "text", + "content": "4.2 Implementation Details" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 302, + 674, + 525, + 729 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 674, + 525, + 729 + ], + "spans": [ + { + "bbox": [ + 302, + 674, + 525, + 729 + ], + "type": "text", + "content": "We use BioBERT-v1.1 (Lee et al., 2020) as the contextual embedding in GENIA. 
For other English corpora, we use BERT-large-cased (Devlin et al., 2019) as the contextual embedding. For Chinese" + } + ] + } + ], + "index": 21 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 67, + 740, + 290, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 740, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 740, + 290, + 772 + ], + "type": "text", + "content": "Given a maximum offset " + }, + { + "bbox": [ + 67, + 740, + 290, + 772 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 67, + 740, + 290, + 772 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 67, + 740, + 290, + 772 + ], + "type": "inline_equation", + "content": "L = 4S + 2" + }, + { + "bbox": [ + 67, + 740, + 290, + 772 + ], + "type": "text", + "content": " when considering both start and end boundary offsets; " + }, + { + "bbox": [ + 67, + 740, + 290, + 772 + ], + "type": "inline_equation", + "content": "L = 2S + 2" + }, + { + "bbox": [ + 67, + 740, + 290, + 772 + ], + "type": "text", + "content": " when only considering the start or end boundary offset." 
+ } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 315, + 739, + 479, + 761 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 739, + 479, + 761 + ], + "spans": [ + { + "bbox": [ + 315, + 739, + 479, + 761 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 315, + 739, + 479, + 761 + ], + "type": "text", + "content": "https://catalog.ldc.upenn.edu/LDC2005T09 \n" + }, + { + "bbox": [ + 315, + 739, + 479, + 761 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 315, + 739, + 479, + 761 + ], + "type": "text", + "content": "https://catalog.ldc.upenn.edu/LDC2005T09" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 315, + 761, + 478, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 761, + 478, + 772 + ], + "spans": [ + { + "bbox": [ + 315, + 761, + 478, + 772 + ], + "type": "text", + "content": "4https://catalog.ldc.upenn.edu/LDC2006T06" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "14837" + } + ] + } + ], + "index": 26 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 120, + 68, + 474, + 268 + ], + "blocks": [ + { + "bbox": [ + 120, + 68, + 474, + 268 + ], + "lines": [ + { + "bbox": [ + 120, + 68, + 474, + 268 + ], + "spans": [ + { + "bbox": [ + 120, + 68, + 474, + 268 + ], + "type": "table", + "html": "
Models | CoNLL 2003 | OntoNotes 5
 | P | R | F1 | P | R | F1
Sequence Labeling Methods
BiLSTM-CRF (Miwa and Bansal, 2016) | - | - | 91.03 | 86.04 | 86.53 | 86.28
BERT-Tagger (Devlin et al., 2019) | - | - | 92.80 | 90.01 | 88.35 | 89.16
Span-based Methods
Biaffine (Yu et al., 2020)*† | 92.46 | 92.67 | 92.55 | 89.94 | 89.81 | 89.88
W2NER (Li et al., 2022) | 92.71 | 93.44 | 93.07 | 90.03 | 90.97 | 90.50
Boundary Smooth (Zhu and Li, 2022)*† | 92.89 | 93.20 | 93.04 | 90.42 | 90.81 | 90.61
DiffusionNER (Shen et al., 2023a) | 92.99 | 92.56 | 92.78 | 90.31 | 91.02 | 90.66
Others
Seq2Seq (Straková et al., 2019) | - | - | 92.98 | - | - | -
BartNER (Yan et al., 2021)† | 92.57 | 93.53 | 93.05 | 89.65 | 90.87 | 90.26
PIQN (Shen et al., 2022) | 93.29 | 92.46 | 92.87 | 91.43 | 90.73 | 90.96
PromptNER (Shen et al., 2023b) | 92.48 | 92.33 | 92.41 | - | - | -
BOPN (Ours) | 93.22 | 93.15 | 93.19 | 90.93 | 91.40 | 91.16
", + "image_path": "51a8df1d54a01656e26c278e13c8423906cc8511a43b3e4105312ce1cebe8872.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 71, + 311, + 525, + 466 + ], + "blocks": [ + { + "bbox": [ + 67, + 275, + 525, + 300 + ], + "lines": [ + { + "bbox": [ + 67, + 275, + 525, + 300 + ], + "spans": [ + { + "bbox": [ + 67, + 275, + 525, + 300 + ], + "type": "text", + "content": "Table 1: Results on English flat NER datasets CoNLL 2003 and OntoNotes 5. † means our re-implementation via their code. * denotes a fair comparison that their BERT encoder is consistent with our model." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 71, + 311, + 525, + 466 + ], + "lines": [ + { + "bbox": [ + 71, + 311, + 525, + 466 + ], + "spans": [ + { + "bbox": [ + 71, + 311, + 525, + 466 + ], + "type": "table", + "html": "
Models | MSRA | Resume NER | Weibo NER
 | P | R | F1 | P | R | F1 | P | R | F1
Sequence Labeling Methods
Lattice (Zhang and Yang, 2018) | 93.57 | 92.79 | 93.18 | 94.81 | 94.11 | 94.46 | 53.04 | 62.25 | 58.79
Flat (Li et al., 2020) | - | - | 96.09 | - | - | 95.86 | - | - | 68.55
SoftLexicon (Ma et al., 2020) | 95.75 | 95.10 | 95.42 | 96.08 | 96.13 | 96.11 | 70.94 | 67.02 | 70.50
MECT (Wu et al., 2021) | - | - | 96.24 | - | - | 95.98 | - | - | 70.43
Span-based Methods
W2NER (Li et al., 2022) | 96.12 | 96.08 | 96.10 | 96.96 | 96.35 | 96.65 | 70.84 | 73.87 | 72.32
Boundary Smooth (Zhu and Li, 2022) | 96.37 | 96.15 | 96.26 | 96.63 | 96.69 | 96.66 | 70.16 | 75.36 | 72.66
DiffusionNER (Shen et al., 2023a) | 95.71 | 94.11 | 94.91 | - | - | - | - | - | -
BOPN (Ours) | 96.44 | 96.34 | 96.39 | 96.73 | 96.83 | 96.78 | 71.79 | 73.90 | 72.92
", + "image_path": "95bc12ee2f9aafbfc8ab866b4cf6f2aface10670f739ee9baa72c4e8aa505dc2.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 145, + 475, + 447, + 487 + ], + "lines": [ + { + "bbox": [ + 145, + 475, + 447, + 487 + ], + "spans": [ + { + "bbox": [ + 145, + 475, + 447, + 487 + ], + "type": "text", + "content": "Table 2: Results on Chinese flat NER datasets MSRA, Resume and Weibo." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 67, + 509, + 289, + 535 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 509, + 289, + 535 + ], + "spans": [ + { + "bbox": [ + 67, + 509, + 289, + 535 + ], + "type": "text", + "content": "corpora, we use the BERT pre-trained with whole word masking (Cui et al., 2021)." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 537, + 289, + 659 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 537, + 289, + 659 + ], + "spans": [ + { + "bbox": [ + 67, + 537, + 289, + 659 + ], + "type": "text", + "content": "The BiLSTM has one layer and 256 hidden size with dropout rate of 0.5. The size of region embedding " + }, + { + "bbox": [ + 67, + 537, + 289, + 659 + ], + "type": "inline_equation", + "content": "d_{e}" + }, + { + "bbox": [ + 67, + 537, + 289, + 659 + ], + "type": "text", + "content": " is 20. The maximum offset value " + }, + { + "bbox": [ + 67, + 537, + 289, + 659 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 67, + 537, + 289, + 659 + ], + "type": "text", + "content": " is selected in " + }, + { + "bbox": [ + 67, + 537, + 289, + 659 + ], + "type": "inline_equation", + "content": "\\{1,2,3\\}" + }, + { + "bbox": [ + 67, + 537, + 289, + 659 + ], + "type": "text", + "content": ". For all datasets, we train our models by using AdamW Optimizer (Loshchilov and Hutter, 2017) with a linear warmup-decay learning rate schedule. See Appendix A for more details. 
Our source code can be obtained from https://github.com/mhtang1995/BOPN." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 672, + 147, + 684 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 672, + 147, + 684 + ], + "spans": [ + { + "bbox": [ + 67, + 672, + 147, + 684 + ], + "type": "text", + "content": "4.3 Evaluation" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 692, + 289, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 692, + 289, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 692, + 289, + 772 + ], + "type": "text", + "content": "We use strict evaluation metrics where a predicted entity is considered correct only when both the boundaries (after adding boundary offset) and type are accurately matched. The precision, recall and " + }, + { + "bbox": [ + 67, + 692, + 289, + 772 + ], + "type": "inline_equation", + "content": "F_{1}" + }, + { + "bbox": [ + 67, + 692, + 289, + 772 + ], + "type": "text", + "content": " scores are employed. We run our model five times and report averaged values."
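Strict-match evaluation reduces to set intersection over (start, end, type) triples. A minimal sketch of these metrics, not the authors' evaluation script:

```python
def strict_prf(gold, pred):
    """Micro precision/recall/F1 under strict matching: a predicted
    (start, end, type) triple is correct only if its (offset-adjusted)
    boundaries and its type exactly match a gold entity."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)                      # exact-match true positives
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```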
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 508, + 430, + 522 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 508, + 430, + 522 + ], + "spans": [ + { + "bbox": [ + 302, + 508, + 430, + 522 + ], + "type": "text", + "content": "5 Results and Analysis" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 534, + 393, + 546 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 534, + 393, + 546 + ], + "spans": [ + { + "bbox": [ + 302, + 534, + 393, + 546 + ], + "type": "text", + "content": "5.1 Main Results" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 301, + 555, + 525, + 744 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 555, + 525, + 744 + ], + "spans": [ + { + "bbox": [ + 301, + 555, + 525, + 744 + ], + "type": "text", + "content": "The performance of our proposed method and the baselines on English flat NER datasets is presented in Table 1. The experimental results demonstrate that our approach surpasses the previous state-of-the-art (SOTA) methods by " + }, + { + "bbox": [ + 301, + 555, + 525, + 744 + ], + "type": "inline_equation", + "content": "+0.12\\%" + }, + { + "bbox": [ + 301, + 555, + 525, + 744 + ], + "type": "text", + "content": " on the CoNLL 2003 dataset and " + }, + { + "bbox": [ + 301, + 555, + 525, + 744 + ], + "type": "inline_equation", + "content": "+0.20\\%" + }, + { + "bbox": [ + 301, + 555, + 525, + 744 + ], + "type": "text", + "content": " on the OntoNotes 5 dataset, achieving superior performance with " + }, + { + "bbox": [ + 301, + 555, + 525, + 744 + ], + "type": "inline_equation", + "content": "F_{1}" + }, + { + "bbox": [ + 301, + 555, + 525, + 744 + ], + "type": "text", + "content": " scores of " + }, + { + "bbox": [ + 301, + 555, + 525, + 744 + ], + "type": "inline_equation", + "content": "93.19\\%" + }, + { + "bbox": [ + 301, + 555, + 525, + 744 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 301, + 555, + 525, 
+ 744 + ], + "type": "inline_equation", + "content": "91.16\\%" + }, + { + "bbox": [ + 301, + 555, + 525, + 744 + ], + "type": "text", + "content": ", respectively. For Chinese flat NER datasets, we provide the results in Table 2. Similarly, our proposed method achieves SOTA performance in terms of " + }, + { + "bbox": [ + 301, + 555, + 525, + 744 + ], + "type": "inline_equation", + "content": "F_{1}" + }, + { + "bbox": [ + 301, + 555, + 525, + 744 + ], + "type": "text", + "content": " scores, surpassing the previous best method by " + }, + { + "bbox": [ + 301, + 555, + 525, + 744 + ], + "type": "inline_equation", + "content": "+0.13\\%" + }, + { + "bbox": [ + 301, + 555, + 525, + 744 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 301, + 555, + 525, + 744 + ], + "type": "inline_equation", + "content": "+0.12\\%" + }, + { + "bbox": [ + 301, + 555, + 525, + 744 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 301, + 555, + 525, + 744 + ], + "type": "inline_equation", + "content": "+0.26\\%" + }, + { + "bbox": [ + 301, + 555, + 525, + 744 + ], + "type": "text", + "content": " in " + }, + { + "bbox": [ + 301, + 555, + 525, + 744 + ], + "type": "inline_equation", + "content": "F_{1}" + }, + { + "bbox": [ + 301, + 555, + 525, + 744 + ], + "type": "text", + "content": " scores on the MSRA, Resume NER, and Weibo NER datasets, respectively." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 746, + 525, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 746, + 525, + 773 + ], + "spans": [ + { + "bbox": [ + 302, + 746, + 525, + 773 + ], + "type": "text", + "content": "The performance results on English nested NER datasets are presented in Table 3. 
Remarkably," + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "14838" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 71, + 68, + 526, + 299 + ], + "blocks": [ + { + "bbox": [ + 71, + 68, + 526, + 299 + ], + "lines": [ + { + "bbox": [ + 71, + 68, + 526, + 299 + ], + "spans": [ + { + "bbox": [ + 71, + 68, + 526, + 299 + ], + "type": "table", + "html": "
Models | ACE 2004 | ACE 2005 | GENIA
 | P | R | F1 | P | R | F1 | P | R | F1
Sequence Labeling Methods
Layered (Ju et al., 2018) | - | - | - | 74.2 | 70.3 | 72.2 | 78.5 | 71.3 | 74.7
Pyramid (Wang et al., 2020) | 86.08 | 86.48 | 86.28 | 83.95 | 85.39 | 84.66 | 79.45 | 78.94 | 79.19
Span-based Methods
Biaffine (Yu et al., 2020) | 87.3 | 86.0 | 86.7 | 85.2 | 85.6 | 85.4 | 78.2 | 78.2 | 78.2
Locate and Label (Shen et al., 2021) | 87.44 | 87.38 | 87.41 | 86.09 | 87.27 | 86.67 | 80.19 | 80.89 | 80.54
W2NER (Li et al., 2022) | 87.33 | 87.71 | 87.52 | 85.03 | 88.62 | 86.79 | 83.10 | 79.76 | 81.39
Triaffine (Yuan et al., 2022) | 87.13 | 87.68 | 87.60 | 86.70 | 86.94 | 86.82 | 80.42 | 82.06 | 81.23
Boundary Smooth (Zhu and Li, 2022) | 88.43 | 87.53 | 87.98 | 86.25 | 88.07 | 87.15 | - | - | -
DiffusionNER (Shen et al., 2023a) | 88.11 | 88.66 | 88.39 | 86.15 | 87.72 | 86.93 | 82.10 | 80.97 | 81.53
Others
Seq2Seq (Straková et al., 2019) | - | - | 84.33 | - | - | 83.42 | - | - | 78.20
BartNER (Yan et al., 2021) | 87.27 | 86.41 | 86.84 | 83.16 | 86.38 | 84.74 | 78.57 | 79.30 | 78.93
PIQN (Shen et al., 2022) | 88.48 | 87.81 | 88.14 | 86.27 | 88.60 | 87.42 | 83.24 | 80.35 | 81.77
PromptNER (Shen et al., 2023b) | 87.58 | 88.76 | 88.16 | 86.07 | 88.38 | 87.21 | - | - | -
BOPN (Ours) | 89.13 | 89.40 | 89.26 | 89.56 | 91.23 | 90.39 | 82.14 | 82.16 | 82.14
", + "image_path": "fa2bba559ca22fd4759b58d7f6b9102e9206b9c08053052018dc26058c40e7ae.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 71, + 338, + 289, + 510 + ], + "blocks": [ + { + "bbox": [ + 124, + 306, + 467, + 318 + ], + "lines": [ + { + "bbox": [ + 124, + 306, + 467, + 318 + ], + "spans": [ + { + "bbox": [ + 124, + 306, + 467, + 318 + ], + "type": "text", + "content": "Table 3: Results on English nested NER datasets ACE 2004, ACE 2004 and GENIA." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 71, + 338, + 289, + 510 + ], + "lines": [ + { + "bbox": [ + 71, + 338, + 289, + 510 + ], + "spans": [ + { + "bbox": [ + 71, + 338, + 289, + 510 + ], + "type": "table", + "html": "
 | CoNLL 2003 | Resume NER | ACE 2004
BOPN (Ours) | 93.19 | 96.78 | 89.26
- w/o Type Inp. | 92.87 | 96.41 | 88.83
- w/o Region Emb. | 92.71 | 96.22 | 88.71
- w/o BO | 92.74 | 96.26 | 88.62
- w/o 3DConv | 92.87 | 96.40 | 89.11
- MBO (S=1) | 93.11 | 96.75 | 89.14
- MBO (S=2) | 93.15 | 96.78 | 89.26
- MBO (S=3) | 93.19 | 96.71 | 89.22
- 3DConv (l=1) | 93.08 | 96.69 | 89.18
- 3DConv (l=2) | 93.19 | 96.75 | 89.26
- 3DConv (l=3) | 93.05 | 96.78 | 89.25
", + "image_path": "2f8978854f1e34a1d256ff1d158bd7686e6740c1f55e0a214c8ffcb8517f8bc4.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 518, + 289, + 543 + ], + "lines": [ + { + "bbox": [ + 67, + 518, + 289, + 543 + ], + "spans": [ + { + "bbox": [ + 67, + 518, + 289, + 543 + ], + "type": "text", + "content": "Table 4: Ablation Studies. MBO means the maximum boundary offset value." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 67, + 570, + 292, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 570, + 292, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 570, + 292, + 772 + ], + "type": "text", + "content": "our proposed BOPN achieves substantial improvements in performance on these datasets, with " + }, + { + "bbox": [ + 67, + 570, + 292, + 772 + ], + "type": "inline_equation", + "content": "F_{1}" + }, + { + "bbox": [ + 67, + 570, + 292, + 772 + ], + "type": "text", + "content": " scores increasing by " + }, + { + "bbox": [ + 67, + 570, + 292, + 772 + ], + "type": "inline_equation", + "content": "+0.87\\%" + }, + { + "bbox": [ + 67, + 570, + 292, + 772 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 67, + 570, + 292, + 772 + ], + "type": "inline_equation", + "content": "+2.97\\%" + }, + { + "bbox": [ + 67, + 570, + 292, + 772 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 67, + 570, + 292, + 772 + ], + "type": "inline_equation", + "content": "+0.37\\%" + }, + { + "bbox": [ + 67, + 570, + 292, + 772 + ], + "type": "text", + "content": " on ACE 2004, ACE 2005, and GENIA, respectively. These results align with our expectations, as the boundary features of nested entities are more intricate compared to flat entities. 
We attribute this improvement to two key factors: 1) Our method predicts the boundary information of various entity types in parallel, effectively avoiding nested boundary conflicts between different types of entities. 2) By predicting boundary offsets, our method expands the predictive range for each text span, allowing for more granular and precise identification of entity boundaries." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 302, + 340, + 408, + 352 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 340, + 408, + 352 + ], + "spans": [ + { + "bbox": [ + 302, + 340, + 408, + 352 + ], + "type": "text", + "content": "5.2 Ablation Studies" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 359, + 525, + 425 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 359, + 525, + 425 + ], + "spans": [ + { + "bbox": [ + 302, + 359, + 525, + 425 + ], + "type": "text", + "content": "In order to assess the impact of each component in our method, we conduct ablation studies on the CoNLL 2003, ACE 2004, and Resume NER datasets. The results of these studies are presented in Table 4." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 434, + 526, + 664 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 434, + 526, + 664 + ], + "spans": [ + { + "bbox": [ + 302, + 434, + 526, + 664 + ], + "type": "text", + "content": "Maximum Boundary Offset We investigate the impact of training the model with different maximum offset values " + }, + { + "bbox": [ + 302, + 434, + 526, + 664 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 302, + 434, + 526, + 664 + ], + "type": "text", + "content": " through our ablation studies. 
The hyperparameter " + }, + { + "bbox": [ + 302, + 434, + 526, + 664 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 302, + 434, + 526, + 664 + ], + "type": "text", + "content": " determines the annotation scope of non-entity spans with boundary offset. Specifically, the extreme scenario of setting " + }, + { + "bbox": [ + 302, + 434, + 526, + 664 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 302, + 434, + 526, + 664 + ], + "type": "text", + "content": " to 0 corresponds to a condition \"w/o BO\" (without Boundary Offset). The results indicate a significant decline in performance when employing \"w/o BO,\" confirming the usefulness of utilizing boundary offsets as supervision. However, we also observe that the optimal " + }, + { + "bbox": [ + 302, + 434, + 526, + 664 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 302, + 434, + 526, + 664 + ], + "type": "text", + "content": " value varies across different datasets. This could be attributed to the fact that a larger " + }, + { + "bbox": [ + 302, + 434, + 526, + 664 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 302, + 434, + 526, + 664 + ], + "type": "text", + "content": " value provides more boundary knowledge but also increases the label search space. Consequently, hyperparameter tuning for " + }, + { + "bbox": [ + 302, + 434, + 526, + 664 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 302, + 434, + 526, + 664 + ], + "type": "text", + "content": " becomes necessary to achieve the best performance in practice." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 665, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 665, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 665, + 525, + 772 + ], + "type": "text", + "content": "In addition, we analyze the learning curves of our model with different maximum offset values. 
Figure 4 demonstrates that a larger " + }, + { + "bbox": [ + 302, + 665, + 525, + 772 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 302, + 665, + 525, + 772 + ], + "type": "text", + "content": " can accelerate the training process of the model. We think the reason may be that a larger " + }, + { + "bbox": [ + 302, + 665, + 525, + 772 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 302, + 665, + 525, + 772 + ], + "type": "text", + "content": " not only leads to an increase of positive samples but also results in a decrease of negative samples, thereby ultimately enhancing the trainability of the model." + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "14839" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 80, + 69, + 276, + 218 + ], + "blocks": [ + { + "bbox": [ + 80, + 69, + 276, + 218 + ], + "lines": [ + { + "bbox": [ + 80, + 69, + 276, + 218 + ], + "spans": [ + { + "bbox": [ + 80, + 69, + 276, + 218 + ], + "type": "image", + "image_path": "80743a7388518887de4cddd798286acf3542bc8074bcaf3df54dbf23b325835e.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 73, + 227, + 283, + 240 + ], + "lines": [ + { + "bbox": [ + 73, + 227, + 283, + 240 + ], + "spans": [ + { + "bbox": [ + 73, + 227, + 283, + 240 + ], + "type": "text", + "content": "Figure 4: The learning curves on ACE 2004 dataset." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 71, + 252, + 289, + 430 + ], + "blocks": [ + { + "bbox": [ + 71, + 252, + 289, + 430 + ], + "lines": [ + { + "bbox": [ + 71, + 252, + 289, + 430 + ], + "spans": [ + { + "bbox": [ + 71, + 252, + 289, + 430 + ], + "type": "table", + "html": "
Label | P | R | F1 | Support
-2S | 81.51 | 82.02 | 81.76 | 5029
-1S | 81.62 | 82.97 | 82.29 | 5292
1S | 79.55 | 81.47 | 80.50 | 3281
2S | 76.27 | 79.55 | 77.88 | 1438
-2E | 78.64 | 77.19 | 77.90 | 1464
-1E | 79.79 | 80.58 | 80.18 | 3254
1E | 82.26 | 82.20 | 82.23 | 5393
2E | 82.37 | 80.75 | 81.57 | 5113
0 | 81.92 | 81.95 | 81.93 | 5495
ALL | 79.21 | 84.22 | 81.64 | 5495
- w/ rules | 81.85 | 82.56 | 82.20 | 5495
", + "image_path": "9525a18639f669a1f475103602f75af1476495c53c686972228cffcd243de4c9.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 439, + 289, + 476 + ], + "lines": [ + { + "bbox": [ + 67, + 439, + 289, + 476 + ], + "spans": [ + { + "bbox": [ + 67, + 439, + 289, + 476 + ], + "type": "text", + "content": "Table 5: Performance of each boundary offset label on GENIA, where the maximum offset value is 2. The reported results is one out of five experiments." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 67, + 497, + 290, + 578 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 497, + 290, + 578 + ], + "spans": [ + { + "bbox": [ + 67, + 497, + 290, + 578 + ], + "type": "text", + "content": "3D Convolution Layer \"w/o 3DConv\" indicates the 3D convolution layers are removed. As seen, the results show a decline in performance across all datasets, indicating the importance of 3D convolution layers in capturing the interactions between boundary offsets of adjacent text spans." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 588, + 290, + 668 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 588, + 290, + 668 + ], + "spans": [ + { + "bbox": [ + 67, + 588, + 290, + 668 + ], + "type": "text", + "content": "Type Inputs \"w/o Type Inputs\" refers to a setting where the entity types encoded with the sentence are replaced, in which the randomly initialized entity type embeddings are fed into the biaffine classifier. The results obtained in this setting show a slight decline in performance." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 678, + 290, + 745 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 678, + 290, + 745 + ], + "spans": [ + { + "bbox": [ + 67, + 678, + 290, + 745 + ], + "type": "text", + "content": "Region Embedding The results demonstrate a slight drop in performance across all datasets without region embeddings. This suggests that integrating sample distribution features can be a reasonable approach for enhancing text span representations." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 746, + 290, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 746, + 290, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 746, + 290, + 773 + ], + "type": "text", + "content": "As the CLN layer and biaffine classifier serve as fundamental components in our approach for span" + } + ] + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 315, + 69, + 514, + 199 + ], + "blocks": [ + { + "bbox": [ + 315, + 69, + 514, + 199 + ], + "lines": [ + { + "bbox": [ + 315, + 69, + 514, + 199 + ], + "spans": [ + { + "bbox": [ + 315, + 69, + 514, + 199 + ], + "type": "image", + "image_path": "8053ec891e88602ef839556b5f26ee561c5811df50fa4443201d11aa38f9185d.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 206, + 525, + 243 + ], + "lines": [ + { + "bbox": [ + 302, + 206, + 525, + 243 + ], + "spans": [ + { + "bbox": [ + 302, + 206, + 525, + 243 + ], + "type": "text", + "content": "Figure 5: A comparison of F1-scores on entities of different lengths in GENIA dataset. Entity supports are in the parentheses." 
+ } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 269, + 525, + 337 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 269, + 525, + 337 + ], + "spans": [ + { + "bbox": [ + 302, + 269, + 525, + 337 + ], + "type": "text", + "content": "representation and classification, they cannot be evaluated independently. Nonetheless, our ablation studies demonstrate the effectiveness of learning boundary offset information and the usefulness of each composition in our model." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 354, + 412, + 366 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 354, + 412, + 366 + ], + "spans": [ + { + "bbox": [ + 302, + 354, + 412, + 366 + ], + "type": "text", + "content": "5.3 Detailed Analysis" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 376, + 525, + 484 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 376, + 525, + 484 + ], + "spans": [ + { + "bbox": [ + 302, + 376, + 525, + 484 + ], + "type": "text", + "content": "Performance on Different Offset Labels We investigate the performance of each boundary offset label, and the results are presented in Table 5. Notably, the offset label \"0\" has complete entity support and achieves an " + }, + { + "bbox": [ + 302, + 376, + 525, + 484 + ], + "type": "inline_equation", + "content": "F_{1}" + }, + { + "bbox": [ + 302, + 376, + 525, + 484 + ], + "type": "text", + "content": " score of " + }, + { + "bbox": [ + 302, + 376, + 525, + 484 + ], + "type": "inline_equation", + "content": "82.04\\%" + }, + { + "bbox": [ + 302, + 376, + 525, + 484 + ], + "type": "text", + "content": ". Furthermore, we observed a positive correlation between the quantity of entity support and the performance of boundary offset labels." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 486, + 525, + 689 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 486, + 525, + 689 + ], + "spans": [ + { + "bbox": [ + 302, + 486, + 525, + 689 + ], + "type": "text", + "content": "When a text span is not predicted as \"out-of-range\", its assigned label can be utilized to determine the position of its nearest entity. By aggregating all predictions of offset labels, we observe a sharp decrease in precision score, along with a significant increase in recall score, when compared to only considering the center span (with an offset label of \"0\"). This finding suggests that different offset labels provide distinct information that assists the model in recognizing additional entities. Nevertheless, this approach can introduce noisy predictions due to the model's inadequate performance on certain labels. Despite this limitation, it may have practical applicability in recall-sensitive applications." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 692, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 692, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 692, + 525, + 772 + ], + "type": "text", + "content": "As discussed in Section 3.3, we devise two heuristic rules to remove improbable predictions. Our findings reveal that this approach enhances the precision score, with only a minor reduction in the recall score, leading to an overall improvement in the " + }, + { + "bbox": [ + 302, + 692, + 525, + 772 + ], + "type": "inline_equation", + "content": "F_{1}" + }, + { + "bbox": [ + 302, + 692, + 525, + 772 + ], + "type": "text", + "content": " score." 
+ } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "14840" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 79, + 69, + 276, + 221 + ], + "blocks": [ + { + "bbox": [ + 79, + 69, + 276, + 221 + ], + "lines": [ + { + "bbox": [ + 79, + 69, + 276, + 221 + ], + "spans": [ + { + "bbox": [ + 79, + 69, + 276, + 221 + ], + "type": "image", + "image_path": "7ecb4838dceef94fe33a44e5fd507039fdc00eaa747ec70b3febb19d1bbb31c2.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 231, + 291, + 267 + ], + "lines": [ + { + "bbox": [ + 67, + 231, + 291, + 267 + ], + "spans": [ + { + "bbox": [ + 67, + 231, + 291, + 267 + ], + "type": "text", + "content": "Figure 6: Effect of varying percentage of training samples on GENIA. We train all models for 50 epochs and report their best performance." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 288, + 290, + 437 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 288, + 290, + 437 + ], + "spans": [ + { + "bbox": [ + 67, + 288, + 290, + 437 + ], + "type": "text", + "content": "Performance on Entities with Varying Lengths We explore the model performance on entities of different lengths in GENIA. 
As shown in Figure 5, we compare the " + }, + { + "bbox": [ + 67, + 288, + 290, + 437 + ], + "type": "inline_equation", + "content": "F_{1}" + }, + { + "bbox": [ + 67, + 288, + 290, + 437 + ], + "type": "text", + "content": " scores of models trained with different " + }, + { + "bbox": [ + 67, + 288, + 290, + 437 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 67, + 288, + 290, + 437 + ], + "type": "text", + "content": ". The model achieves higher " + }, + { + "bbox": [ + 67, + 288, + 290, + 437 + ], + "type": "inline_equation", + "content": "F_{1}" + }, + { + "bbox": [ + 67, + 288, + 290, + 437 + ], + "type": "text", + "content": " scores across all columns when " + }, + { + "bbox": [ + 67, + 288, + 290, + 437 + ], + "type": "inline_equation", + "content": "S = 2" + }, + { + "bbox": [ + 67, + 288, + 290, + 437 + ], + "type": "text", + "content": ", with a more pronounced performance improvement for longer entities. The results highlight the usefulness of learning boundary offsets between non-entity and entity spans, which helps the model learn boundary features more effectively." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 444, + 291, + 607 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 444, + 291, + 607 + ], + "spans": [ + { + "bbox": [ + 67, + 444, + 291, + 607 + ], + "type": "text", + "content": "Size of Training Data As the boundary offset labels contain more informative knowledge, we hypothesize that our proposed BOPN would perform better with limited training data. 
As shown in Figure 6, our model achieves impressive results, exhibiting only a " + }, + { + "bbox": [ + 67, + 444, + 291, + 607 + ], + "type": "inline_equation", + "content": "5.46\\%" + }, + { + "bbox": [ + 67, + 444, + 291, + 607 + ], + "type": "text", + "content": " decrease in performance when trained with a mere " + }, + { + "bbox": [ + 67, + 444, + 291, + 607 + ], + "type": "inline_equation", + "content": "12.5\\%" + }, + { + "bbox": [ + 67, + 444, + 291, + 607 + ], + "type": "text", + "content": " of the available training data. In contrast, when boundary information is not utilized during training, the model's performance declines rapidly as the amount of training data decreases, thus creating significant obstacles to effective training." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 616, + 160, + 629 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 616, + 160, + 629 + ], + "spans": [ + { + "bbox": [ + 67, + 616, + 160, + 629 + ], + "type": "text", + "content": "6 Related Work" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 638, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 638, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 638, + 291, + 772 + ], + "type": "text", + "content": "In recent years, various paradigms for named entity recognition (NER) have been proposed, among which span-based methods have become one of the most mainstream approaches, treating NER as a text span classification problem. 
With the development of pre-trained language models, some works (Sohrab and Miwa, 2018; Luan et al., 2019; Wadden et al., 2019) obtain span representations by connecting boundary representations or aggregating token representations and feeding them into" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 71, + 526, + 166 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 166 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 166 + ], + "type": "text", + "content": "a linear classifier for type prediction. Alternatively, Yu et al. (2020) utilizes a biaffine classifier to fuse start and end boundary representations directly for span classification. To further enhance span representation, several other methods (Wan et al., 2022; Yuan et al., 2022) propose fusing representations of token, boundary, and related entity spans." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 167, + 526, + 329 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 167, + 526, + 329 + ], + "spans": [ + { + "bbox": [ + 302, + 167, + 526, + 329 + ], + "type": "text", + "content": "Meanwhile, some methods try to improve span-based methods by adding boundary supervision. Specifically, Zheng et al. (2019) and Tan et al. (2020) additionally detect entity boundaries with multi-task learning, while Shen et al. (2021) perform boundary regression after span prediction. Li et al. (2022) design two word-word relations for span classification. Compared with previous methods, our proposed method utilizes continuous boundary offset values to model text spans, which can capture both the boundary differences and connections between non-entity and entity spans." 
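The biaffine scorer mentioned above (Yu et al., 2020) has the general form hs^T U he + W·[hs; he] + b, fusing the start (hs) and end (he) boundary representations into a single span score. A minimal sketch in plain Python, with illustrative shapes rather than any released implementation:

```python
def biaffine_score(hs, he, U, W, b):
    """Biaffine span scorer: hs^T U he + W . [hs; he] + b.

    hs, he: start/end boundary representations (lists of floats)
    U: bilinear weight matrix, W: linear weights over [hs; he], b: bias
    """
    # bilinear interaction between the two boundary representations
    bilinear = sum(hs[i] * U[i][j] * he[j]
                   for i in range(len(hs)) for j in range(len(he)))
    # linear term over the concatenated boundaries
    concat = list(hs) + list(he)
    linear = sum(w * x for w, x in zip(W, concat))
    return bilinear + linear + b
```

In a real model this score would be computed per (start, end, label) triple with learned tensors; here U, W, and b are passed explicitly for clarity.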
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 331, + 526, + 627 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 331, + 526, + 627 + ], + "spans": [ + { + "bbox": [ + 302, + 331, + 526, + 627 + ], + "type": "text", + "content": "In addition to span-based methods, there are three widely-used NER methods. The traditional sequence labeling methods (Huang et al., 2015; Lample et al., 2016) assign each token a tag with a pre-designed tagging scheme (e.g., " + }, + { + "bbox": [ + 302, + 331, + 526, + 627 + ], + "type": "inline_equation", + "content": "BIO" + }, + { + "bbox": [ + 302, + 331, + 526, + 627 + ], + "type": "text", + "content": "). To address nested entities, some works (Ju et al., 2018; Wang et al., 2020; Rojas et al., 2022) stack flat NER layers or design special tagging schemes. Hypergraph-based methods (Lu and Roth, 2015; Katiyar and Cardie, 2018; Wang and Lu, 2018) represent the input sentence as a hypergraph for detecting nested entities, which must be carefully designed to avoid spurious structures. Sequence-to-sequence methods reformulate NER as a sequence generation problem. For example, Gillick et al. (2016) first apply the Seq2Seq model for NER, inputting the sentence and outputting start positions, entity lengths, and types. Straková et al. (2019) use the Seq2Seq model and an enhanced BILOU scheme to address nested NER. Yan et al. (2021) treat NER as an entity span sequence generation problem with a pointer network based on BART (Lewis et al., 2019)." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 303, + 641, + 381, + 655 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 641, + 381, + 655 + ], + "spans": [ + { + "bbox": [ + 303, + 641, + 381, + 655 + ], + "type": "text", + "content": "7 Conclusion" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 665, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 665, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 665, + 525, + 772 + ], + "type": "text", + "content": "In this paper, we introduce a novel approach for named entity recognition (NER) called the Boundary Offset Prediction Network (BOPN). BOPN predicts the boundary offsets between candidate spans and their nearest entities, leveraging entity types as inputs. By incorporating entity types, BOPN enables parallel prediction of type-aware boundary offsets, enhancing the model's ability to capture" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 311, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 311, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 311, + 791 + ], + "type": "text", + "content": "14841" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 290, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 290, + 138 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 290, + 138 + ], + "type": "text", + "content": "fine-grained entity boundaries. To capture the interactions between boundary offsets, we employ multiple 3D convolution layers, which refine the offset predictions and capture the inherent quantitative relationships between adjacent text spans." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 140, + 291, + 259 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 140, + 291, + 259 + ], + "spans": [ + { + "bbox": [ + 69, + 140, + 291, + 259 + ], + "type": "text", + "content": "The experimental results demonstrate that our proposed method achieves state-of-the-art performance on eight widely-used datasets, including five English NER datasets and three Chinese NER datasets. Moreover, further analysis reveals a significant improvement in recall scores by utilizing boundary offset as supervision, showcasing the utility of our approach for recall-sensitive applications in NER." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 270, + 130, + 283 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 270, + 130, + 283 + ], + "spans": [ + { + "bbox": [ + 67, + 270, + 130, + 283 + ], + "type": "text", + "content": "Limitations" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 291, + 290, + 412 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 291, + 290, + 412 + ], + "spans": [ + { + "bbox": [ + 67, + 291, + 290, + 412 + ], + "type": "text", + "content": "The proposed BOPN approach has certain limitations that should be acknowledged. Firstly, while BOPN treats boundary offsets as classification targets, it does not explicitly model the order relationship between offset values. Although the 3D convolution layers are employed to implicitly capture interactions between boundary offsets, they do not provide a strong constraint on the ordering of offset labels." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 413, + 290, + 521 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 413, + 290, + 521 + ], + "spans": [ + { + "bbox": [ + 67, + 413, + 290, + 521 + ], + "type": "text", + "content": "Additionally, the method uses boundary offsets to convert some non-entity spans into positive samples, which leads to higher recall scores but potentially lower precision scores. To optimize prediction results, heuristic rules are applied to filter out unreasonable samples. However, these rules are based on observations and may not be comprehensive enough to handle all cases effectively." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 522, + 290, + 589 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 522, + 290, + 589 + ], + "spans": [ + { + "bbox": [ + 67, + 522, + 290, + 589 + ], + "type": "text", + "content": "Therefore, there is still a need to explore more effective ways to integrate and optimize the offset predictions in order to address these limitations and enhance the overall performance of the BOPN approach." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 599, + 158, + 611 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 599, + 158, + 611 + ], + "spans": [ + { + "bbox": [ + 67, + 599, + 158, + 611 + ], + "type": "text", + "content": "Ethics Statement" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 620, + 290, + 701 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 620, + 290, + 701 + ], + "spans": [ + { + "bbox": [ + 67, + 620, + 290, + 701 + ], + "type": "text", + "content": "To address ethical concerns, we provide the following two detailed descriptions: 1) All experiments were conducted on existing datasets derived from public scientific papers. 2) Our work does not contain any personally identifiable information and does not harm anyone." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 712, + 170, + 724 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 712, + 170, + 724 + ], + "spans": [ + { + "bbox": [ + 67, + 712, + 170, + 724 + ], + "type": "text", + "content": "Acknowledgements" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 67, + 732, + 290, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 732, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 732, + 290, + 772 + ], + "type": "text", + "content": "This work was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDC02040400)." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 304, + 70, + 362, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 70, + 362, + 83 + ], + "spans": [ + { + "bbox": [ + 304, + 70, + 362, + 83 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 304, + 89, + 526, + 772 + ], + "type": "list", + "angle": 0, + "index": 22, + "blocks": [ + { + "bbox": [ + 304, + 89, + 525, + 178 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 89, + 525, + 178 + ], + "spans": [ + { + "bbox": [ + 304, + 89, + 525, + 178 + ], + "type": "text", + "content": "Pei Chen, Haibo Ding, Jun Araki, and Ruihong Huang. 2021. Explicitly capturing relations between entity mentions via graph neural networks for domain-specific named entity recognition. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 735-742." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 304, + 185, + 525, + 240 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 185, + 525, + 240 + ], + "spans": [ + { + "bbox": [ + 304, + 185, + 525, + 240 + ], + "type": "text", + "content": "Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, and Ziqing Yang. 2021. Pre-training with whole word masking for chinese bert. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3504-3514." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 248, + 526, + 336 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 248, + 526, + 336 + ], + "spans": [ + { + "bbox": [ + 304, + 248, + 526, + 336 + ], + "type": "text", + "content": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 343, + 525, + 411 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 343, + 525, + 411 + ], + "spans": [ + { + "bbox": [ + 304, + 343, + 525, + 411 + ], + "type": "text", + "content": "Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2016. Multilingual language processing from bytes. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1296-1306." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 417, + 525, + 451 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 417, + 525, + 451 + ], + "spans": [ + { + "bbox": [ + 304, + 417, + 525, + 451 + ], + "type": "text", + "content": "Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, and Maosong Sun. 
2022. PTR: Prompt tuning with rules for text classification. AI Open, 3:182-192." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 458, + 525, + 491 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 458, + 525, + 491 + ], + "spans": [ + { + "bbox": [ + 304, + 458, + 525, + 491 + ], + "type": "text", + "content": "Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 498, + 525, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 498, + 525, + 555 + ], + "spans": [ + { + "bbox": [ + 304, + 498, + 525, + 555 + ], + "type": "text", + "content": "Feng Hou, Ruili Wang, Jun He, and Yi Zhou. 2020. Improving entity linking through semantic reinforced entity embeddings. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6843-6848." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 560, + 525, + 596 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 560, + 525, + 596 + ], + "spans": [ + { + "bbox": [ + 304, + 560, + 525, + 596 + ], + "type": "text", + "content": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 602, + 525, + 636 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 602, + 525, + 636 + ], + "spans": [ + { + "bbox": [ + 304, + 602, + 525, + 636 + ], + "type": "text", + "content": "Justin M Johnson and Taghi M Khoshgoftaar. 2019. Survey on deep learning with class imbalance. Journal of Big Data, 6(1):1-54." 
+ } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 643, + 525, + 710 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 643, + 525, + 710 + ], + "spans": [ + { + "bbox": [ + 304, + 643, + 525, + 710 + ], + "type": "text", + "content": "Meizhi Ju, Makoto Miwa, and Sophia Ananiadou. 2018. A neural layered model for nested named entity recognition. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1446-1459." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 304, + 717, + 525, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 717, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 304, + 717, + 525, + 772 + ], + "type": "text", + "content": "Arzoo Katiyar and Claire Cardie. 2018. Nested named entity recognition revisited. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 1." + } + ] + } + ], + "index": 21 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "text", + "content": "14842" + } + ] + } + ], + "index": 23 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 291, + 772 + ], + "type": "list", + "angle": 0, + "index": 10, + "blocks": [ + { + "bbox": [ + 69, + 72, + 291, + 150 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 72, + 291, + 150 + ], + "spans": [ + { + "bbox": [ + 69, + 72, + 291, + 150 + ], + "type": "text", + "content": "Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. 
Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260-270." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 159, + 290, + 215 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 159, + 290, + 215 + ], + "spans": [ + { + "bbox": [ + 69, + 159, + 290, + 215 + ], + "type": "text", + "content": "Phong Le and Ivan Titov. 2018. Improving entity linking by modeling latent relations between mentions. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1595-1604." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 225, + 290, + 280 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 225, + 290, + 280 + ], + "spans": [ + { + "bbox": [ + 69, + 225, + 290, + 280 + ], + "type": "text", + "content": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 290, + 290, + 346 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 290, + 290, + 346 + ], + "spans": [ + { + "bbox": [ + 69, + 290, + 290, + 346 + ], + "type": "text", + "content": "Gina-Anne Levow. 2006. The third international Chinese language processing bakeoff: Word segmentation and named entity recognition. In Proceedings of the Fifth SIGHAN workshop on Chinese language processing, pages 108-117." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 356, + 290, + 422 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 356, + 290, + 422 + ], + "spans": [ + { + "bbox": [ + 69, + 356, + 290, + 422 + ], + "type": "text", + "content": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 433, + 290, + 521 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 433, + 290, + 521 + ], + "spans": [ + { + "bbox": [ + 69, + 433, + 290, + 521 + ], + "type": "text", + "content": "Fei Li, ZhiChao Lin, Meishan Zhang, and Donghong Ji. 2021a. A span-based model for joint overlapped and discontinuous named entity recognition. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4814-4828, Online. Association for Computational Linguistics." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 531, + 290, + 597 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 531, + 290, + 597 + ], + "spans": [ + { + "bbox": [ + 69, + 531, + 290, + 597 + ], + "type": "text", + "content": "Jingye Li, Hao Fei, Jiang Liu, Shengqiong Wu, Meishan Zhang, Chong Teng, Donghong Ji, and Fei Li. 2022. Unified named entity recognition as word-word relation classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 10965-10973." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 608, + 290, + 673 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 608, + 290, + 673 + ], + "spans": [ + { + "bbox": [ + 69, + 608, + 290, + 673 + ], + "type": "text", + "content": "Jingye Li, Kang Xu, Fei Li, Hao Fei, Yafeng Ren, and Donghong Ji. 2021b. Mrn: A locally and globally mention-based reasoning network for document-level relation extraction. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, pages 1359-1370." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 684, + 290, + 740 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 684, + 290, + 740 + ], + "spans": [ + { + "bbox": [ + 69, + 684, + 290, + 740 + ], + "type": "text", + "content": "Xiaonan Li, Hang Yan, Xipeng Qiu, and Xuan-Jing Huang. 2020. Flat: Chinese ner using flat-lattice transformer. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6836-6842." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 750, + 290, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 750, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 750, + 290, + 772 + ], + "type": "text", + "content": "Ruibo Liu, Jason Wei, Chenyan Jia, and Soroush Vosoughi. 2021. Modulating language models with" + } + ] + } + ], + "index": 9 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 524, + 772 + ], + "type": "list", + "angle": 0, + "index": 22, + "blocks": [ + { + "bbox": [ + 314, + 72, + 524, + 105 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 72, + 524, + 105 + ], + "spans": [ + { + "bbox": [ + 314, + 72, + 524, + 105 + ], + "type": "text", + "content": "emotions. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4332-4339." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 304, + 114, + 524, + 146 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 114, + 524, + 146 + ], + "spans": [ + { + "bbox": [ + 304, + 114, + 524, + 146 + ], + "type": "text", + "content": "Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 155, + 524, + 209 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 155, + 524, + 209 + ], + "spans": [ + { + "bbox": [ + 304, + 155, + 524, + 209 + ], + "type": "text", + "content": "Wei Lu and Dan Roth. 2015. Joint mention extraction and classification with mention hypergraphs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 857-867." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 218, + 524, + 317 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 218, + 524, + 317 + ], + "spans": [ + { + "bbox": [ + 304, + 218, + 524, + 317 + ], + "type": "text", + "content": "Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3036-3046, Minneapolis, Minnesota. Association for Computational Linguistics." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 325, + 524, + 380 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 325, + 524, + 380 + ], + "spans": [ + { + "bbox": [ + 304, + 325, + 524, + 380 + ], + "type": "text", + "content": "Ruotian Ma, Minlong Peng, Qi Zhang, Zhongyu Wei, and Xuan-Jing Huang. 2020. Simplify the usage of lexicon in chinese ner. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5951-5960." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 389, + 524, + 444 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 389, + 524, + 444 + ], + "spans": [ + { + "bbox": [ + 304, + 389, + 524, + 444 + ], + "type": "text", + "content": "Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1105-1116." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 452, + 524, + 508 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 452, + 524, + 508 + ], + "spans": [ + { + "bbox": [ + 304, + 452, + 524, + 508 + ], + "type": "text", + "content": "Tomoko Ohta, Yuka Tateisi, Jin-Dong Kim, Hideki Mima, and Junichi Tsujii. 2002. The genia corpus: An annotated research abstract corpus in molecular biology domain. In Proceedings of the human language technology conference, pages 73-77. CiteSeer." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 515, + 524, + 571 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 515, + 524, + 571 + ], + "spans": [ + { + "bbox": [ + 304, + 515, + 524, + 571 + ], + "type": "text", + "content": "Nanyun Peng and Mark Dredze. 2015. Named entity recognition for chinese social media with jointly trained embeddings. In Proceedings of the 2015 conference on empirical methods in natural language processing, pages 548-554." 
+ } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 578, + 524, + 634 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 578, + 524, + 634 + ], + "spans": [ + { + "bbox": [ + 304, + 578, + 524, + 634 + ], + "type": "text", + "content": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In Joint conference on EMNLP and CoNLL-shared task, pages 1-40." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 643, + 524, + 708 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 643, + 524, + 708 + ], + "spans": [ + { + "bbox": [ + 304, + 643, + 524, + 708 + ], + "type": "text", + "content": "Guanghui Qin and Jason Eisner. 2021. Learning how to ask: Querying lms with mixtures of soft prompts. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5203-5212." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 304, + 717, + 524, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 717, + 524, + 772 + ], + "spans": [ + { + "bbox": [ + 304, + 717, + 524, + 772 + ], + "type": "text", + "content": "Matías Rojas, Felipe Bravo-Marquez, and Jocelyn Dunstan. 2022. Simple yet powerful: An overlooked architecture for nested named entity recognition. In Proceedings of the 29th International Conference on Computational Linguistics, pages 2108-2117." 
+ } + ] + } + ], + "index": 21 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "text", + "content": "14843" + } + ] + } + ], + "index": 23 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 291, + 772 + ], + "type": "list", + "angle": 0, + "index": 10, + "blocks": [ + { + "bbox": [ + 69, + 72, + 291, + 128 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 72, + 291, + 128 + ], + "spans": [ + { + "bbox": [ + 69, + 72, + 291, + 128 + ], + "type": "text", + "content": "Erik Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 135, + 291, + 224 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 135, + 291, + 224 + ], + "spans": [ + { + "bbox": [ + 69, + 135, + 291, + 224 + ], + "type": "text", + "content": "Yongliang Shen, Xinyin Ma, Zeqi Tan, Shuai Zhang, Wen Wang, and Weiming Lu. 2021. Locate and label: A two-stage identifier for nested named entity recognition. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2782-2794." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 231, + 291, + 275 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 231, + 291, + 275 + ], + "spans": [ + { + "bbox": [ + 69, + 231, + 291, + 275 + ], + "type": "text", + "content": "Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023a. DiffusionNER: Boundary diffusion for named entity recognition. arXiv preprint arXiv:2305.13298." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 283, + 291, + 339 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 283, + 291, + 339 + ], + "spans": [ + { + "bbox": [ + 69, + 283, + 291, + 339 + ], + "type": "text", + "content": "Yongliang Shen, Zeqi Tan, Shuhui Wu, Wenqi Zhang, Rongsheng Zhang, Yadong Xi, Weiming Lu, and Yueting Zhuang. 2023b. PromptNER: Prompt locating and typing for named entity recognition. arXiv preprint arXiv:2305.17104." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 346, + 291, + 423 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 346, + 291, + 423 + ], + "spans": [ + { + "bbox": [ + 69, + 346, + 291, + 423 + ], + "type": "text", + "content": "Yongliang Shen, Xiaobin Wang, Zeqi Tan, Guangwei Xu, Pengjun Xie, Fei Huang, Weiming Lu, and Yueting Zhuang. 2022. Parallel instance query network for named entity recognition. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 947-961." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 431, + 291, + 487 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 431, + 291, + 487 + ], + "spans": [ + { + "bbox": [ + 69, + 431, + 291, + 487 + ], + "type": "text", + "content": "Mohammad Golam Sohrab and Makoto Miwa. 2018. Deep exhaustive model for nested named entity recognition. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2843-2849." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 494, + 291, + 550 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 494, + 291, + 550 + ], + "spans": [ + { + "bbox": [ + 69, + 494, + 291, + 550 + ], + "type": "text", + "content": "Jana Straková, Milan Straka, and Jan Hajic. 2019. Neural architectures for nested ner through linearization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5326-5331." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 557, + 291, + 613 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 557, + 291, + 613 + ], + "spans": [ + { + "bbox": [ + 69, + 557, + 291, + 613 + ], + "type": "text", + "content": "Chuanqi Tan, Wei Qiu, Mosha Chen, Rui Wang, and Fei Huang. 2020. Boundary enhanced neural span classification for nested named entity recognition. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 9016-9023." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 620, + 291, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 620, + 291, + 665 + ], + "spans": [ + { + "bbox": [ + 69, + 620, + 291, + 665 + ], + "type": "text", + "content": "Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. 2016. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 672, + 291, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 672, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 672, + 291, + 772 + ], + "type": "text", + "content": "David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5784-5789, Hong Kong, China. Association for Computational Linguistics." + } + ] + } + ], + "index": 9 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 526, + 772 + ], + "type": "list", + "angle": 0, + "index": 21, + "blocks": [ + { + "bbox": [ + 304, + 72, + 526, + 139 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 72, + 526, + 139 + ], + "spans": [ + { + "bbox": [ + 304, + 72, + 526, + 139 + ], + "type": "text", + "content": "Juncheng Wan, Dongyu Ru, Weinan Zhang, and Yong Yu. 2022. Nested named entity recognition with span-level graphs. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 892-903, Dublin, Ireland. Association for Computational Linguistics." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 304, + 147, + 526, + 202 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 147, + 526, + 202 + ], + "spans": [ + { + "bbox": [ + 304, + 147, + 526, + 202 + ], + "type": "text", + "content": "Bailin Wang and Wei Lu. 2018. Neural segmental hypergraphs for overlapping mention recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 204-214." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 211, + 526, + 267 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 211, + 526, + 267 + ], + "spans": [ + { + "bbox": [ + 304, + 211, + 526, + 267 + ], + "type": "text", + "content": "Jue Wang, Lidan Shou, Ke Chen, and Gang Chen. 2020. Pyramid: A layered model for nested named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5918-5928." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 276, + 526, + 364 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 276, + 526, + 364 + ], + "spans": [ + { + "bbox": [ + 304, + 276, + 526, + 364 + ], + "type": "text", + "content": "Shuang Wu, Xiaoning Song, and Zhenhua Feng. 2021. Mect: Multi-metadata embedding based cross-transformer for chinese named entity recognition. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1529-1539." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 373, + 526, + 451 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 373, + 526, + 451 + ], + "spans": [ + { + "bbox": [ + 304, + 373, + 526, + 451 + ], + "type": "text", + "content": "Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, and Xipeng Qiu. 2021. A unified generative framework for various ner subtasks. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5808-5822." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 460, + 526, + 514 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 460, + 526, + 514 + ], + "spans": [ + { + "bbox": [ + 304, + 460, + 526, + 514 + ], + "type": "text", + "content": "Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020. Named entity recognition as dependency parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6470-6476." 
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 523, + 526, + 579 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 523, + 526, + 579 + ], + "spans": [ + { + "bbox": [ + 304, + 523, + 526, + 579 + ], + "type": "text", + "content": "Zheng Yuan, Chuanqi Tan, Songfang Huang, and Fei Huang. 2022. Fusing heterogeneous factors with triaffine mechanism for nested named entity recognition. In *Findings of the Association for Computational Linguistics: ACL* 2022, pages 3174-3186." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 587, + 526, + 634 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 587, + 526, + 634 + ], + "spans": [ + { + "bbox": [ + 304, + 587, + 526, + 634 + ], + "type": "text", + "content": "Yue Zhang and Jie Yang. 2018. Chinese ner using lattice LSTM. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1554-1564." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 641, + 526, + 740 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 641, + 526, + 740 + ], + "spans": [ + { + "bbox": [ + 304, + 641, + 526, + 740 + ], + "type": "text", + "content": "Changmeng Zheng, Yi Cai, Jingyun Xu, Ho-fung Leung, and Guandong Xu. 2019. A boundary-aware neural model for nested named entity recognition. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 357-366, Hong Kong, China. Association for Computational Linguistics." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 749, + 526, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 749, + 526, + 772 + ], + "spans": [ + { + "bbox": [ + 304, + 749, + 526, + 772 + ], + "type": "text", + "content": "Enwei Zhu and Jinpeng Li. 2022. 
Boundary smoothing for named entity recognition. In Proceedings of the" + } + ] + } + ], + "index": 20 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "text", + "content": "14844" + } + ] + } + ], + "index": 22 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 78, + 72, + 291, + 105 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 78, + 72, + 291, + 105 + ], + "spans": [ + { + "bbox": [ + 78, + 72, + 291, + 105 + ], + "type": "text", + "content": "60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7096-7108." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 68, + 126, + 141, + 140 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 126, + 141, + 140 + ], + "spans": [ + { + "bbox": [ + 68, + 126, + 141, + 140 + ], + "type": "text", + "content": "A Appendix" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 68, + 148, + 138, + 160 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 148, + 138, + 160 + ], + "spans": [ + { + "bbox": [ + 68, + 148, + 138, + 160 + ], + "type": "text", + "content": "A.1 Datasets" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 165, + 291, + 246 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 165, + 291, + 246 + ], + "spans": [ + { + "bbox": [ + 67, + 165, + 291, + 246 + ], + "type": "text", + "content": "We evaluate our method on eight datasets, including CoNLL 2003, OntoNotes 5, ACE 2004, ACE 2005, and GENIA for English NER datasets; MSRA, Resume NER and Weibo NER for Chinese NER datasets. Table 6 presents the detailed statistics of datasets." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 68, + 256, + 208, + 269 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 256, + 208, + 269 + ], + "spans": [ + { + "bbox": [ + 68, + 256, + 208, + 269 + ], + "type": "text", + "content": "A.2 Implementation Details" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 273, + 291, + 421 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 273, + 291, + 421 + ], + "spans": [ + { + "bbox": [ + 67, + 273, + 291, + 421 + ], + "type": "text", + "content": "We use BioBERT-v1.1 (Lee et al., 2020) as the contextual embedding in GENIA. For other English corpora, we BERT-large-cased (Devlin et al., 2019) as the contextual embedding. For Chinese corpora, we use the BERT pre-trained with whole word masking (Cui et al., 2021). Our model is implemented with PyTorch and trained with a NVIDIA RTX3090 GPU. We use a grid search to find the best hyperparameters which are tuned on the development set. The range of hyperparameters we used for eight datasets are listed in Table 7." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 68, + 431, + 141, + 443 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 431, + 141, + 443 + ], + "spans": [ + { + "bbox": [ + 68, + 431, + 141, + 443 + ], + "type": "text", + "content": "A.3Baselines" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 449, + 286, + 461 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 449, + 286, + 461 + ], + "spans": [ + { + "bbox": [ + 67, + 449, + 286, + 461 + ], + "type": "text", + "content": "We compare BOPN with the following baselines:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 81, + 470, + 291, + 772 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 81, + 470, + 290, + 512 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 470, + 290, + 512 + ], + "spans": [ + { + "bbox": [ + 81, + 470, + 290, + 512 + ], + "type": "text", + "content": "- BiLSTM-CRF (Miwa and Bansal, 2016) is a model for sequence labeling tasks that combines BiLSTM with CRF layers." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 81, + 520, + 291, + 574 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 520, + 291, + 574 + ], + "spans": [ + { + "bbox": [ + 81, + 520, + 291, + 574 + ], + "type": "text", + "content": "- BERT-Tagger (Devlin et al., 2019) that utilizes the pre-trained language model BERT as a feature extractor and incorporates a tag classifier for fine-tuning." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 81, + 584, + 289, + 622 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 584, + 289, + 622 + ], + "spans": [ + { + "bbox": [ + 81, + 584, + 289, + 622 + ], + "type": "text", + "content": "- Lattice (Zhang and Yang, 2018) proposed a lattice-structured LSTM model for Chinese NER." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 81, + 634, + 289, + 659 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 634, + 289, + 659 + ], + "spans": [ + { + "bbox": [ + 81, + 634, + 289, + 659 + ], + "type": "text", + "content": "- Layered (Ju et al., 2018) dynamically stacks flat NER layers to solve nested NER task." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 81, + 669, + 290, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 669, + 290, + 723 + ], + "spans": [ + { + "bbox": [ + 81, + 669, + 290, + 723 + ], + "type": "text", + "content": "- Flat (Li et al., 2020) proposes a flat-lattice transformer for Chinese NER, which converts the lattice structure into a flat structure consisting of spans." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 81, + 733, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 733, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 81, + 733, + 291, + 772 + ], + "type": "text", + "content": "- Pyramid (Wang et al., 2020) designs pyramid layer and inverse pyramid layer to decode nested entities." + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 316, + 71, + 526, + 751 + ], + "type": "list", + "angle": 0, + "index": 27, + "blocks": [ + { + "bbox": [ + 316, + 71, + 526, + 125 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 71, + 526, + 125 + ], + "spans": [ + { + "bbox": [ + 316, + 71, + 526, + 125 + ], + "type": "text", + "content": "- SoftLexicon (Ma et al., 2020) proposes a Chinese NER method in which lexicon information is introduced by simply adjusting the character representation layer." 
+ } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 134, + 526, + 188 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 134, + 526, + 188 + ], + "spans": [ + { + "bbox": [ + 316, + 134, + 526, + 188 + ], + "type": "text", + "content": "- MECT (Wu et al., 2021) uses multi-metadata embedding in a two-stream transformer to integrate Chinese character features with the radical-level embedding." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 197, + 525, + 237 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 197, + 525, + 237 + ], + "spans": [ + { + "bbox": [ + 316, + 197, + 525, + 237 + ], + "type": "text", + "content": "- Biaffine (Yu et al., 2020) classifies text spans by a biaffine classifier between boundary representations." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 247, + 525, + 300 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 247, + 525, + 300 + ], + "spans": [ + { + "bbox": [ + 316, + 247, + 525, + 300 + ], + "type": "text", + "content": "- Locate and Label (Shen et al., 2021) proposed a two-stage identifier of locating entities with boundary regression first and classifying them later." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 310, + 525, + 363 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 310, + 525, + 363 + ], + "spans": [ + { + "bbox": [ + 316, + 310, + 525, + 363 + ], + "type": "text", + "content": "- W2NER (Li et al., 2022) models NER as word-word relation classification, including the next-neighboring-word and the tail-head-word relations." 
+ } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 373, + 525, + 412 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 373, + 525, + 412 + ], + "spans": [ + { + "bbox": [ + 316, + 373, + 525, + 412 + ], + "type": "text", + "content": "- Triaffine (Yuan et al., 2022) proposed a tri-affine mechanism to fuse information of inside tokens, boundaries, labels for NER." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 423, + 525, + 462 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 423, + 525, + 462 + ], + "spans": [ + { + "bbox": [ + 316, + 423, + 525, + 462 + ], + "type": "text", + "content": "- Boundary Smooth (Zhu and Li, 2022) proposed boundary smoothing as a regularization technique for span-based neural NER models." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 472, + 525, + 524 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 472, + 525, + 524 + ], + "spans": [ + { + "bbox": [ + 316, + 472, + 525, + 524 + ], + "type": "text", + "content": "- DiffusionNER (Shen et al., 2023a) formulates NER as a boundary-denoising diffusion process, which samples noisy spans from a Gaussian distribution." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 535, + 525, + 576 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 535, + 525, + 576 + ], + "spans": [ + { + "bbox": [ + 316, + 535, + 525, + 576 + ], + "type": "text", + "content": "- Seq2Seq (Straková et al., 2019) converts the labels of nested entities into a sequence and then uses a seq2seq model to decode entities." 
+ } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 585, + 525, + 638 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 585, + 525, + 638 + ], + "spans": [ + { + "bbox": [ + 316, + 585, + 525, + 638 + ], + "type": "text", + "content": "- BartNER (Yan et al., 2021) formulates NER as an entity span sequence generation problem based on the pre-training Seq2Seq model BART (Lewis et al., 2019)." + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 648, + 524, + 688 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 648, + 524, + 688 + ], + "spans": [ + { + "bbox": [ + 316, + 648, + 524, + 688 + ], + "type": "text", + "content": "PIQN (Shen et al., 2022) sets up global and learnable instance queries to extract entities from a sentence in a parallel manner." + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 698, + 525, + 751 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 698, + 525, + 751 + ], + "spans": [ + { + "bbox": [ + 316, + 698, + 525, + 751 + ], + "type": "text", + "content": "- PromptNER (Shen et al., 2023b) unifies entity locating and entity typing in prompt learning for NER, which predicts all entities by filling position slots and type slots." 
+ } + ] + } + ], + "index": 26 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "text", + "content": "14845" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 71, + 68, + 526, + 208 + ], + "blocks": [ + { + "bbox": [ + 71, + 68, + 526, + 208 + ], + "lines": [ + { + "bbox": [ + 71, + 68, + 526, + 208 + ], + "spans": [ + { + "bbox": [ + 71, + 68, + 526, + 208 + ], + "type": "table", + "html": "
CoNLL 2003OntoNotes 5ACE 2004ACE 2005GENIAMSRAResumeWeibo
Types418775384
#Train.S172915992462007194166924647138191350
#Dev.S-8528745969--463270
#Test.S34538262812104718544376477270
Avg.Len.S14.3818.1122.6118.9725.4145.5431.1754.57
#Train.E294411287382220493895050974703134381855
#Dev.E-2035425141112--1497379
#Test.E56481258630351118550661811630409
Avg.Len.E1.451.832.502.281.973.245.882.60
", + "image_path": "5dffea8ce0cd1df58b1b2e1979097de0347a1fe50c0ec75bfd57ecae3a4bd1a4.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 83, + 402, + 274, + 593 + ], + "blocks": [ + { + "bbox": [ + 68, + 216, + 524, + 229 + ], + "lines": [ + { + "bbox": [ + 68, + 216, + 524, + 229 + ], + "spans": [ + { + "bbox": [ + 68, + 216, + 524, + 229 + ], + "type": "text", + "content": "Table 6: Dataset Statistics. \"#\" denotes the amount. \"S.\" and \"E.\" denote sentence and entity mentions, respectively." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 83, + 402, + 274, + 593 + ], + "lines": [ + { + "bbox": [ + 83, + 402, + 274, + 593 + ], + "spans": [ + { + "bbox": [ + 83, + 402, + 274, + 593 + ], + "type": "table", + "html": "
ParameterValue
Epoch[50, 80]
Batch size[8, 16]
Learning rate (BERT)[5e-6, 3e-5]
Learning rate (Other)1e-3
LSTM hidden size d256
LSTM dropout0.5
Region embedding size de20
Biaffine hidden size db150
Biaffine dropout0.2
Maximum offset value S[1, 3]
Adam epsilon1e-8
Warm factor0.1
", + "image_path": "a371baf555b1f0288923db625c9c7bd334695def450301addcfc8eab21880179.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 108, + 601, + 249, + 614 + ], + "lines": [ + { + "bbox": [ + 108, + 601, + 249, + 614 + ], + "spans": [ + { + "bbox": [ + 108, + 601, + 249, + 614 + ], + "type": "text", + "content": "Table 7: Hyper-parameter settings." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "text", + "content": "14846" + } + ] + } + ], + "index": 4 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 12 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2023/A Causal View of Entity Bias in (Large) Language Models/729caa97-c496-4007-b8f4-5ff70bb6b7ae_content_list.json b/2023/A Causal View of Entity Bias in (Large) Language Models/729caa97-c496-4007-b8f4-5ff70bb6b7ae_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..ba7f0ad2b642d7f087e34297cc3293d2f2f1e68a --- /dev/null +++ b/2023/A Causal View of Entity Bias in (Large) Language Models/729caa97-c496-4007-b8f4-5ff70bb6b7ae_content_list.json @@ -0,0 +1,1906 @@ +[ + { + "type": "text", + "text": "A Causal View of Entity Bias in (Large) Language Models", + "text_level": 1, + "bbox": [ + 196, + 90, + 800, + 111 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Fei Wang† Wenjie Mo† Yiwei Wang‡ Wenxuan Zhou† Muhao Chen†#", + "bbox": [ + 186, + 129, + 818, + 147 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "†University of Southern California; ‡University of California, Los Angeles;", + "bbox": [ + 198, + 148, + 803, + 164 + ], + "page_idx": 0 + }, + { + "type": "text", + 
"text": "$^{\\#}$ University of California, Davis", + "bbox": [ + 371, + 165, + 630, + 180 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "{fwang598, jackymo, zhouwenx}@usc.edu; wangyw.evan@gmail.com;", + "bbox": [ + 203, + 181, + 796, + 198 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "muhchen@ucdavis.edu", + "bbox": [ + 403, + 199, + 598, + 212 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 260, + 252, + 339, + 266 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Entity bias widely affects pretrained (large) language models, causing them to rely on (biased) parametric knowledge to make unfaithful predictions. Although causality-inspired methods have shown great potential to mitigate entity bias, it is hard to precisely estimate the parameters of underlying causal models in practice. The rise of black-box LLMs also makes the situation even worse, because of their inaccessible parameters and uncalibrated logits. To address these problems, we propose a specific structured causal model (SCM) whose parameters are comparatively easier to estimate. Building upon this SCM, we propose causal intervention techniques to mitigate entity bias for both white-box and black-box settings. The proposed causal intervention perturbs the original entity with neighboring entities. This intervention reduces specific biasing information pertaining to the original entity while still preserving sufficient semantic information from similar entities. Under the white-box setting, our training-time intervention improves OOD performance of PLMs on relation extraction (RE) and machine reading comprehension (MRC) by 5.7 points and by 9.1 points, respectively. 
Under the black-box setting, our in-context intervention effectively reduces the entity-based knowledge conflicts of GPT-3.5, achieving up to 20.5 points of improvement of exact match accuracy on MRC and up to 17.6 points of reduction in memorization ratio on RE.1", + "bbox": [ + 141, + 279, + 460, + 734 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 114, + 746, + 258, + 762 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Entity bias (Longpre et al., 2021; Wang et al., 2022; Xu et al., 2022; Peng et al., 2020; Qian et al., 2021b; Hermann et al., 2015) refers to an undesirable phenomenon where models overly rely on prediction shortcuts triggered by specific entities to make spurious predictions. For example, given the sentence \"Bill Gates went to Microsoft Building 99,\" models", + "bbox": [ + 110, + 771, + 487, + 883 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Context: Bill Gates went to Microsoft Building 99.", + "bbox": [ + 522, + 263, + 806, + 275 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Question: What's the relation between Bill Gates and", + "bbox": [ + 524, + 277, + 823, + 287 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Microsoft in the given context?", + "bbox": [ + 526, + 288, + 702, + 300 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Option: founder, visitor.", + "bbox": [ + 526, + 300, + 658, + 312 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Answer with one word: founder (GPT-3.5) X", + "bbox": [ + 526, + 313, + 781, + 326 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Assume subject_entity can be any of Bill Gates, Jeff Bezos, and Steve Jobs, while object-entity can be any of Google, Microsoft, and Meta.", + "bbox": [ + 524, + 348, + 857, + 385 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Context: subject entity went to object entity Building 99.", + "bbox": [ + 527, + 387, + 850, 
+ 398 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Question: What's the relation between subject_entity and object-entity in the given context?", + "bbox": [ + 527, + 399, + 853, + 423 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Option: founder, visitor.", + "bbox": [ + 527, + 424, + 660, + 435 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Answer with one word: visitor (GPT-3.5)", + "bbox": [ + 527, + 437, + 774, + 449 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Figure 1: An example of entity bias in GPT-3.5. Our in-context intervention mitigates the conflicts between parametric knowledge and contextual knowledge.", + "bbox": [ + 507, + 468, + 882, + 512 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "may be misled by their memory of the entities Bill Gates and Microsoft, saying the relation between them in this context is founder rather than visitor, as shown in Fig. 1. Recent studies show that entity bias widely affects pretrained (large) language models (LLMs; Longpre et al. 2021; Yan et al. 2022; Zhou et al. 2023). These models have a tendency to disregard contextual information that contradicts or is infrequently reported in the pretrained corpus, while excessively relying on (biased) parametric knowledge (Longpre et al., 2021) to make unfaithful predictions and perpetuate bias.", + "bbox": [ + 507, + 541, + 884, + 734 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Prior studies have proposed multiple causality-inspired methods to mitigate entity bias (Zhang et al., 2017; Nan et al., 2021; Wang et al., 2022; Zhu et al., 2022). Despite their potential, the causal models underlying these methods are flawed in practice, primarily because of imprecise parameter estimation. 
For example, some causal models necessitate estimating the probability distribution", + "bbox": [ + 507, + 736, + 882, + 865 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "1Our code is available at https://github.com/ luka-group/Causal-View-of-Entity-Bias", + "bbox": [ + 112, + 891, + 487, + 917 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "2Although Zhang et al. (2017) do not mention causal theory, the proposed entity masking does follow a relevant principle to cut off causal links between specific entities and labels.", + "bbox": [ + 507, + 879, + 882, + 917 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "15173", + "bbox": [ + 475, + 927, + 524, + 940 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 15173-15184", + "bbox": [ + 208, + 945, + 786, + 958 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "December 6-10, 2023 ©2023 Association for Computational Linguistics", + "bbox": [ + 277, + 959, + 719, + 971 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "over labels when given a sentence that is devoid of entities or contextual information (Zhang et al., 2017; Wang et al., 2022). These methods either lose predictive information about entities, or are prone to erroneous representation without contextualization. The other critical problem is the difficulty of applying these methods to black-box LLMs, of which parameters are inaccessible and logits are uncalibrated.", + "bbox": [ + 110, + 84, + 487, + 227 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To address the aforementioned problems, the first contribution of this paper is a causal analysis of entity bias mitigation methods (§3.1). We examine and compare the structured causal models (SCMs) behind existing methods. 
We find that, among the theoretically equivalent causal models (Verma and Pearl, 1990), there exists a specific SCM whose parameters are comparatively easier to estimate. As shown in Fig. 2, the proposed SCM only requires to intervene input entities to mitigate the presence of spurious features before passing them to the subsequent neural layers. Moreover, it retains the entity type information3 at an appropriate level of granularity without requiring explicit entity typing.", + "bbox": [ + 114, + 230, + 489, + 470 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The second contribution of this paper is a training-time causal intervention technique for mitigating entity bias based on the proposed SCM (§3.2). Specifically, we identify entities that are likely to share similar predictive information with the given entity. During training, we perturb embedding of the given entity within a convex hull constructed by embeddings of similar entities. During inference, we represent the entity with the center of the convex hull. Taking advantage of the continuous nature of the embedding space, this intervention does not rely on models specifically trained on natural language to estimate the label distribution of unnatural text, nor does it sacrifice predictive entity or contextual information.", + "bbox": [ + 110, + 472, + 489, + 712 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The third contribution of this paper is to transform the training-time intervention into in-context intervention for black-box LLMs whose parameters are inaccessible, and logits are uncalibrated (§3.3). A significant advantage of the proposed SCM is that the causal intervention is carried out at the input layer, enabling its implementation within an in-context setting. 
Specifically, we replace entities with placeholders and define each placeholder", + "bbox": [ + 112, + 713, + 489, + 859 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/bf9d06eb92efa725a6034a50a4f065110b9aba5f3115575234f82e257a19ac56.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 514, + 82, + 678, + 173 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/822404bcab44be672f773ca48fbde5c0e883b30f7145a30b4351ca44fa7346de.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 695, + 80, + 863, + 173 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/fa721ff35c2731ef19e22b55cd75093b559fef91bc441146d20f4632827f9845.jpg", + "image_caption": [ + "Figure 2: Structured causal models revealing entity bias." + ], + "image_footnote": [], + "bbox": [ + 512, + 174, + 884, + 252 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "by examples - a set of similar entities. For example, we can replace Bill Gates in Fig. 1 with subject_entity and presuppend the prompt, \"Assume that subject-entity can be any of Steve Jobs, Bill Gates, and Jeff Bezos\", to the input. This in-context intervention can be applied to any black-box LLM without additional cost.", + "bbox": [ + 507, + 303, + 884, + 413 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Experiments on relation extraction (RE) and machine reading comprehension (MRC) show that the proposed causal intervention techniques are effective for both white-box and black-box LLMs. Under the white-box setting ( $\\S 4$ ), our training-time intervention significantly improves out-of-distribution performance of RoBERTa (Liu et al., 2019) on RE by 5.7 points and SpanBERT (Joshi et al., 2020) on MRC by 9.1 points, comparing with the vanilla version. 
Under the black-box setting ( $\\S 5$ ), our in-context intervention effectively reduces the entity-based knowledge conflicts (Long-pre et al., 2021) and improves the task performance of GPT-3.5. Specifically, our method outperforms the best baseline by up to 20.5 points of exact match accuracy on MRC and reduces the memorization ratio by up to 17.6 points on RE. Further analyses reveal the crucial role of the number of neighboring entities $k$ in balancing the predictive information and biasing information from entities, and the necessity of entity placeholder definition for in-context intervention.", + "bbox": [ + 507, + 414, + 884, + 769 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2 Related Work", + "text_level": 1, + "bbox": [ + 509, + 782, + 665, + 797 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Entity Bias in LLMs. LLMs memorize factual knowledge in their parameters during pretraining (Roberts et al., 2020; Jiang et al., 2020) and show promising results in answering factual questions (Petroni et al., 2019; Brown et al., 2020; Wei", + "bbox": [ + 507, + 814, + 884, + 894 + ], + "page_idx": 1 + }, + { + "type": "page_footnote", + "text": "3Entity type information plays a crucial role in entity-driven tasks. For example, without knowing a more specific location type, it is impossible to differentiate between relations born_in_city and born_in_country.", + "bbox": [ + 112, + 868, + 487, + 917 + ], + "page_idx": 1 + }, + { + "type": "page_footnote", + "text": "4https://platform.openai.com/docs/models/gpt-3-5", + "bbox": [ + 529, + 903, + 836, + 917 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "15174", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "et al., 2022). However, the parametric knowledge may be inaccurate due to the misinformation in the training corpus (Lin et al., 2022) or outdated as the world evolves (Liska et al., 2022; Kasai et al., 2022). 
In such scenarios, it is critical for LLMs to update their predictions when provided with contextual evidence. However, previous studies (Longpre et al., 2021; Qian et al., 2021b; Yan et al., 2022) observe that language models may take entities as shortcuts, leading to spurious predictions based solely on parametric knowledge. This bias becomes more prominent when the evidence contains infrequent or conflicting knowledge compared to the training corpus.", + "bbox": [ + 110, + 84, + 492, + 311 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "To mitigate this bias, previous work (Longpre et al., 2021; Chen et al., 2022; Li et al., 2022; Zhou et al., 2023) introduces the entity substitution technique, which involves constructing counterfactual data by randomly replacing the entities, and updating the language models either by finetuning or in-context learning. Although showing improved results, these techniques are empirical and lack theoretical grounding. In this paper, we theoretically analyze the entity bias problem from a causal view. Furthermore, we propose a causal intervention method that surpasses the performance of entity substitution.", + "bbox": [ + 110, + 312, + 490, + 521 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Debiasing with Causal Intervention. LLMs have been shown to exhibit bias problems, and the literature has paid much attention to mitigating their adverse effects (Sweeney and Najafian, 2019; Zhang et al., 2020b; Venkit and Wilson, 2021; Lalor et al., 2022). Recent debiasing techniques incorporate the concept of counterfactual inference, and have been applied in various tasks for bias mitigation (Niu and Zhang, 2021; Qian et al., 2021a; Wang et al., 2022). One dominant technique is based on causal mediation analysis (Udomcharoenchaikit et al., 2022), which involves decomposing the total effect into pure direct effect and total indirect effect. In this context, Wang et al. 
(2022) utilize total direct effect and total effect to debias relation extraction. Apart from debiasing, causal mediation analysis can be used to analyze biases in LLMs (Vig et al., 2020; Finlayson et al., 2021).", + "bbox": [ + 110, + 530, + 490, + 819 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In addition to intervening on the causal mediator, previous studies have also explored confounder analysis (Keith et al., 2020; Qian et al., 2021a; Feder et al., 2022; Weld et al., 2022). A confounder is a variable that influences both the input and the output, causing a spurious correlation between them.", + "bbox": [ + 110, + 822, + 490, + 920 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Typically, the de-confounder process applies the do-calculus (Pearl, 2012) to compute the prediction assuming that the value of the confounder variable is not the observed one but follows its natural distribution (Zhang et al., 2020a; Tian et al., 2022). Our approach is also based on confounder analysis. While nearly all the aforementioned approaches require white-box access to the model, or at least the prediction logits, this work represents a pilot study of a deconfounder method that applies to purely black-box LLMs.", + "bbox": [ + 507, + 84, + 885, + 262 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3 Method", + "text_level": 1, + "bbox": [ + 507, + 275, + 611, + 290 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In this section, we first analyze methods for mitigating entity bias in a causal view and propose an easy-to-estimate SCM as a theoretical basis (§3.1). 
Based on the proposed SCM, we design a training-time intervention technique for white-box LLMs (§3.2) and an in-context intervention technique for black-box LLMs (§3.3).", + "bbox": [ + 507, + 302, + 885, + 414 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1 Causal Analysis of Entity Bias", + "text_level": 1, + "bbox": [ + 507, + 429, + 794, + 444 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "To compare existing methods in the same context, we analyze the structured causal models (SCMs) behind them. Fig. 2 shows two typical SCMs for entity bias mitigation methods, where $X$ refers to the raw input, $E$ refers to entities, and $Y$ refers to the label. The links $X \rightarrow Y \leftarrow E$ show that LLMs rely on both predictive information from the whole input and the biasing information from specific entities to make the prediction. The links $E \rightarrow X$ and $X \rightarrow E$ assume that the context is written down with the entity in mind or vice versa. As discussed by Verma and Pearl (1990), we cannot differentiate between these two directions merely based on statistical observations. Indeed, the two SCMs with opposite links between $X$ and $E$ are equivalent according to Bayes' theorem:", + "bbox": [ + 507, + 451, + 885, + 708 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\begin{array}{l} P(X) P(E \mid X) P(Y \mid X, E) \\ = P(Y, X, E) \\ = P(E) P(X \mid E) P(Y \mid X, E) \end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 589, + 724, + 801, + 778 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "As revealed by these SCMs, entity bias exists in LLMs because entities serve as either confounders or mediators. 
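To make the role of the confounder concrete, a toy numerical sketch (all probability values are assumed for illustration, not taken from the paper) shows how averaging entity-conditioned predictions over the marginal $P(E)$, rather than $P(E \mid X)$, removes the entity-specific influence on the label:

```python
import numpy as np

# Toy setup (assumed numbers): two entities e0, e1 and a binary label Y.
p_e = np.array([0.5, 0.5])                 # marginal P(E)
p_y_given_x_e = np.array([[0.9, 0.1],      # P(Y | X, E=e0)
                          [0.2, 0.8]])     # P(Y | X, E=e1)

# Weighting by the marginal P(E) instead of the observed P(E | X)
# blocks the entity-specific path to the label.
p_y_do_x = p_e @ p_y_given_x_e
print(p_y_do_x)  # [0.55 0.45]
```

With this weighting, the prediction for a fixed context no longer shifts toward whichever entity the model happens to associate with the input.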
Thus, the bias can be mitigated through causal intervention, such as backdoor adjustment", + "bbox": [ + 507, + 795, + 885, + 875 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\nP(Y \mid do(X)) = \sum_{E} P(Y \mid X, E) P(E),\n$$\n", + "text_format": "latex", + "bbox": [ + 549, + 889, + 842, + 921 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "15175", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/8e8cd6f645fb1ebe10dd5f451789ba5103994691b8b9e54976f1e508ad3d7713.jpg", + "image_caption": [ + "Figure 3: Left: Training-time intervention with $k = 4$ . Right: Example of predictive and biasing information." + ], + "image_footnote": [], + "bbox": [ + 117, + 80, + 527, + 244 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/5559d0c93cb9298a3db4eb6479d03fe4fabaa1a8fd6a0b0f1c71dd44810e15ea.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 559, + 80, + 885, + 244 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "which eliminates the influence of a specific variable (in this context, $E$ ) by assigning values to this variable. However, previous SCM-based debiasing methods exhibit divergent performances, since they estimate different (conditional) probabilities using different surrogates when performing the causal intervention. For example, counterfactual analysis by Wang et al. (2022) estimates and deducts the biasing effect of entities on labels by masking the context, while Zhang et al. (2017) and Longpre et al. (2021) directly remove the effect of entities by entity masking or substitution. None of them estimates the causal effects of entity names precisely, due to the highly complex architectures of LLMs, which account for their unsatisfactory performance on mitigating entity bias.", + "bbox": [ + 112, + 293, + 489, + 551 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In this work, we consider the SCM in Fig. 
2, whose parameters are much easier to estimate in practice. Since most LLMs follow a sequential structure by stacking neural layers, mitigating the entity bias in one layer will also mitigate the entity bias in subsequent layers. The underlying logic is simple - if we block the spurious features in the input, there will be no spurious correlations to capture. Therefore, we propose to mitigate the entity bias in the input layer $M$ , which could be an embedding layer or a prompt layer. Obviously, $P(M|X,E)$ can be estimated more accurately and efficiently than $P(Y|X,E)$ , because there is no need to run the whole model, reducing error propagation and computational cost. To further improve the estimation by retaining as much predictive information as possible, we propose to estimate $P(M|do(X))$ by perturbing the entity with similar entities rather than masking it. In the following sections, we will show how to realize the proposed causal intervention on both white-box and black-box LLMs.", + "bbox": [ + 112, + 565, + 489, + 917 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2 Training-time Intervention", + "text_level": 1, + "bbox": [ + 507, + 293, + 766, + 307 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "For white-box models whose parameters are accessible, we can effectively address their internal bias through training-time intervention. In the case of entity bias identified by the proposed SCM, we realize the causal intervention by perturbing the input entities or entity tokens using their neighboring counterparts in the embedding space, as shown in Fig. 3 (Left). For each entity present in the input text, we first find its top $k$ nearest neighbors according to embedding distance. Then we construct the smallest convex hull$^{5}$ to cover the original entity and neighboring entities. 
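This construction can be sketched with NumPy as follows (a minimal sketch; the function name and the dense embedding-matrix interface are our assumptions, not the paper's implementation):

```python
import numpy as np

def perturb_entity_embedding(emb_table, entity_id, k=3, training=True, rng=None):
    """Convex hull-bounded perturbation of one entity embedding (sketch)."""
    rng = rng or np.random.default_rng(0)
    e = emb_table[entity_id]
    # top-k nearest neighbors by Euclidean distance, excluding the entity itself
    dist = np.linalg.norm(emb_table - e, axis=1)
    dist[entity_id] = np.inf
    neighbors = np.argsort(dist)[:k]
    vertices = np.vstack([e[None, :], emb_table[neighbors]])  # hull vertices
    if training:
        # training: a random convex combination of the vertices lies
        # inside the hull (Dirichlet weights are nonnegative and sum to 1)
        weights = rng.dirichlet(np.ones(len(vertices)))
    else:
        # inference: the hull center balances predictive vs. biasing info
        weights = np.full(len(vertices), 1.0 / len(vertices))
    return weights @ vertices
```

At inference time the uniform weights return the centroid of the original entity and its neighbors, matching the "center of the convex hull" replacement described below.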
Due to the continuous nature of the embedding space, the embeddings within the convex hull approximately represent the same predictive information as a whole. The entity-specific biasing information, which has the potential to trigger spurious shortcuts, gradually diminishes from the original entity towards the border of the convex hull.", + "bbox": [ + 505, + 313, + 884, + 618 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "During training, we introduce perturbations to the entity embedding by replacing it with a random embedding selected from within the convex hull. In this way, the convex hull bounds the predictive information, while random sampling further introduces noise and increases the diversity of data for robust training. During inference, we replace the original entity embedding with the center of the convex hull, in order to balance the trade-off between predictive and biasing information. Fig. 3 (Right) provides an example of the information preserved through such intervention. By replacing the entity Bill Gates with the center of the convex hull, encompassed by its neighboring entities, such as Steve Jobs and Jeff Bezos, we effectively retain the", + "bbox": [ + 507, + 620, + 884, + 860 + ], + "page_idx": 3 + }, + { + "type": "page_footnote", + "text": "5This convex hull-bounded perturbation is inspired by Dong et al. (2021), where perturbation within a convex hull formed by synonyms is used to improve model robustness against word substitutions.", + "bbox": [ + 507, + 869, + 882, + 917 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "15176", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/7c8d3d8e5550de1e1a4c33f69110be8de990706e2138b587b755297749d3f07d.jpg", + "image_caption": [ + "1. 
Replace entities with placeholders" + ], + "image_footnote": [], + "bbox": [ + 124, + 99, + 547, + 280 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/403da6bc914c98f8dd45d3130dbb3a8385c8be8e9d55c263417ed0fedccb6b0f.jpg", + "image_caption": [ + "3. Define placeholders with examples", + "Figure 4: In-context intervention for black-box LLMs. We take relation extraction as an example." + ], + "image_footnote": [], + "bbox": [ + 571, + 99, + 880, + 280 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "shared predictive information (e.g., person), while mitigating the biasing information (e.g., founder of Microsoft). That is to say, the convex hull-bounded perturbation serves as an effective estimation of $P(M|do(X))$ .", + "bbox": [ + 112, + 331, + 487, + 412 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.3 In-context Intervention", + "text_level": 1, + "bbox": [ + 114, + 453, + 342, + 467 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The rise of Web services powered by black-box LLMs, such as GPT-3.5, introduces new challenges for mitigating entity bias, demanding debiasing methods that do not require accessible model weights and prediction logits. As discussed in §3.1, a key advantage of our SCM is that the deconfounder operation is merely on the input layer. In the context of black-box LLMs, the input is the user-provided prompt. Thus, we perform the causal intervention solely through modifying prompts to resolve entity bias. We propose a four-step (test-time) in-context intervention technique for black-box LLMs. Fig. 4 shows the whole process.", + "bbox": [ + 112, + 491, + 489, + 701 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "First, we replace the original entity mention in the input with abstract placeholders (e.g., [ENTITY]). This step effectively mitigates any biasing information from the original entity names, because the placeholders are semantically neutral. 
However, this step also eliminates predictive information from entities. We show in §5.3 that, without a proper definition for the placeholder, models can easily fail to answer questions. In the next two steps, we construct definitions to provide predictive information for each placeholder while introducing minimal additional biasing information. Second, we query the LLM to name $k$ entities similar to the", + "bbox": [ + 112, + 709, + 489, + 917 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "original one (e.g., $E_{o}$ ). These generated entities (e.g., $E_{a}$ and $E_{b}$ ) present similar predictive information to the original entity, and are able to fulfill the same function as neighboring entities in §3.2. Third, we define the placeholder with the original entity and generated entities. For example, we can verbalize the definition as \"Assume [ENTITY] can be any of $E_{o}$ , $E_{a}$ and $E_{b}$ \". This definition encourages the LLM to find common properties of given entities rather than relying on biasing information of one specific entity. The resulting placeholder along with its definition serves as an effective estimation of $P(M|do(X))$ . Finally, we prepend the placeholder definition to the modified context and question, and query the LLM with the new prompt. 
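The steps above amount to plain prompt assembly, which can be sketched as follows (a simplified sketch; the placeholder token, wording, and helper name are assumptions for illustration, not the paper's exact templates):

```python
import random

def build_intervention_prompt(context, question, entity, similar_entities, seed=0):
    """Assemble an in-context-intervention prompt (simplified sketch)."""
    placeholder = "[ENTITY]"
    # Step 1: neutralize the entity mention in context and question
    ctx = context.replace(entity, placeholder)
    q = question.replace(entity, placeholder)
    # Steps 2-3: define the placeholder by the original entity plus the
    # similar entities, shuffled so no fixed position maps to the original
    examples = [entity] + list(similar_entities)
    random.Random(seed).shuffle(examples)
    definition = f"Assume {placeholder} can be any of {', '.join(examples)}."
    # Step 4: prepend the definition to the modified context and question
    return f"{definition}\n{ctx}\n{q}"
```

In practice the `similar_entities` list would come from step 2, i.e., querying the LLM itself for $k$ entities similar to the original one.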
This four-step adjustment ensures that the resulting prompt is free of specific biasing information pertaining to the original entity while still preserving sufficient predictive information by considering given entity examples as a whole.", + "bbox": [ + 507, + 330, + 884, + 653 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4 White-Box Experiments", + "text_level": 1, + "bbox": [ + 507, + 664, + 752, + 682 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In this section, we evaluate our training-time intervention under the white-box setting.", + "bbox": [ + 507, + 690, + 882, + 722 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.1 Experimental Setup", + "text_level": 1, + "bbox": [ + 507, + 733, + 714, + 749 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Datasets and Metrics. We evaluate our methods on relation extraction (RE) and machine reading comprehension (MRC). For both tasks, we fine-tune models on an in-distribution (ID) training set and evaluate models on both ID and out-of-distribution (OOD) test sets. For RE, we adopt TACRED (Zhang et al., 2017) as the ID dataset and", + "bbox": [ + 507, + 759, + 882, + 872 + ], + "page_idx": 4 + }, + { + "type": "page_footnote", + "text": "Here, we rely on the entity knowledge possessed by LLMs. However, it is possible to replace the LLM with external databases or tools in this step.", + "bbox": [ + 507, + 879, + 882, + 917 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "15177", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/bfed6b16267ab5669a18bb0a93289a68731bd6628eaeb168d33cc6461d8768dd.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table><tr><th></th><th colspan="3">RE (F1)</th><th colspan="3">MRC (EM)</th></tr>
<tr><th></th><th>ID</th><th>OOD</th><th>Δ</th><th>ID</th><th>OOD</th><th>Δ</th></tr>
<tr><td>Vanilla Model</td><td>71.1±0.9</td><td>62.3±0.6</td><td>-12.4%</td><td>79.1†±0.1</td><td>63.1†±0.8</td><td>-20.2%</td></tr>
<tr><td>+ Continual Pretraining (Yan et al., 2022)*</td><td>-</td><td>-</td><td>-</td><td>79.6†±0.6</td><td>65.9†±1.1</td><td>-17.2%</td></tr>
<tr><td>+ CoRE (Wang et al., 2022)</td><td>71.3±0.3</td><td>61.2±0.6</td><td>-14.2%</td><td>-</td><td>-</td><td>-</td></tr>
<tr><td>+ Entity Mask (Zhang et al., 2017)</td><td>61.4±0.5</td><td>61.9±0.5</td><td>+0.9%</td><td>75.7±0.6</td><td>62.9±0.4</td><td>-16.9%</td></tr>
<tr><td>+ Entity Substitution (Longpre et al., 2021)</td><td>66.6±0.6</td><td>65.8±0.3</td><td>-1.2%</td><td>76.4±0.8</td><td>70.8±1.5</td><td>-7.3%</td></tr>
<tr><td>+ Ours</td><td>70.8±0.3</td><td>68.0±0.3</td><td>-3.9%</td><td>77.0±0.7</td><td>72.2±0.5</td><td>-6.2%</td></tr></table>
", + "bbox": [ + 134, + 80, + 860, + 225 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Table 1: Results under white-box setting. We report the average F1/EM score and standard deviation of three runs. $\\Delta$ shows the relative performance change between ID and OOD. The best number of each column is in bold. * Continual pretraining is not directly comparable to finetuning methods. † Numbers copied from Yan et al. (2022).", + "bbox": [ + 112, + 228, + 882, + 272 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "EntRED (Wang et al., 2023) as the OOD dataset, and report micro-F1 score. In both datasets, entities in each sentence are given. For MRC, we adopt TriviaQA (Joshi et al., 2017) as the ID dataset and its answer-substituted version (Yan et al., 2022) as the OOD dataset, and report exact match (EM) score. Following Yan et al. (2022), we hold out $10\\%$ of the training data for development and evaluate models on the original development set. We use the DBName version of their OOD dataset. For all metrics, we report the average score with standard deviation of three runs.", + "bbox": [ + 112, + 297, + 489, + 491 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Baselines. We compare our methods with the following baselines. Entity Mask (Zhang et al., 2017) masks the subject and object entities in the sentence with special tokens. Entity Substitution (Longpre et al., 2021) randomly selects an entity of the same type to substitute the original entity. CoRE (Wang et al., 2022) applies counterfactual inference by computing the difference between the prediction made with the entire sentence and the prediction made with only the entities observed. Continual Pretraining (Yan et al., 2022) introduces an intermediate pretraining stage to the backbone model with the objective of recovering masked entities.", + "bbox": [ + 112, + 504, + 489, + 713 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Implementation Details. 
For RE, we apply RoBERTa (Liu et al., 2019) as the backbone model following previous works (Zhou and Chen, 2022; Wang et al., 2022). We use the entity Marker_punct input format from Zhou and Chen (2022) in main experiments, in order to mitigate the impact of explicit entity type information on our analysis of entity bias. For MRC, we apply SpanBERT (Joshi et al., 2020) as the backbone model following Yan et al. (2022). Since entities are not given in MRC datasets, we use the same named entity recognition tool used by Yan et al. to", + "bbox": [ + 112, + 726, + 489, + 917 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "extract entities. Since the detected entities could be noisy and incomplete, we perform our method upon the answer-substituted training set, ensuring all answer entities are perturbed as strongly as in Entity Substitution. Since RoBERTa and SpanBERT lack entity-level embeddings, we apply our causal intervention to each token embedding within the entity mention instead. To construct the convex hull, we select neighboring tokens based on their Euclidean distance to the original token in the embedding space. For both tasks, we perform training-time intervention on each entity token with $k = 3$ . While further data augmentation is always possible, for a fair comparison, we finetune all the models with the same amount of data. More implementation details are in Appx. §A.1.", + "bbox": [ + 507, + 297, + 884, + 554 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.2 Results", + "text_level": 1, + "bbox": [ + 507, + 567, + 613, + 581 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "As shown in Tab. 1, the vanilla RoBERTa and SpanBERT experience significant declines in performance on RE $(-12.4\%)$ and MRC $(-20.2\%)$ when evaluated on OOD test sets. 
For both tasks, the OOD test set exhibits lower entity bias; achieving better performance on it suggests that the model relies less on entity bias as a predictive factor.", + "bbox": [ + 507, + 589, + 882, + 700 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "CoRE and Continual Pretraining are the only baselines that improve the ID performance. CoRE leads to a slight performance decrease on the OOD test set of RE in exchange, while Continual Pretraining further increases the OOD performance on MRC. Entity Mask successfully narrows down or even reverses the relative performance drop under OOD setting on the two tasks. However, its absolute performance decreases significantly due", + "bbox": [ + 507, + 702, + 884, + 847 + ], + "page_idx": 5 + }, + { + "type": "page_footnote", + "text": "This is because CoRE is designed for a class-balanced setting, but this experiment emphasizes the performance on the raw class distribution. Moreover, we search its bias mitigation weight on the ID development set, which has a notably different entity distribution compared with the OOD test set.", + "bbox": [ + 507, + 857, + 882, + 917 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "15178", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/1e5e73f04a91dfac533284c7fb68b82174d8773d9d033b143674b9b8251048c4.jpg", + "image_caption": [ + "Figure 5: F1 score of training-time intervention with different $k$ on RE." + ], + "image_footnote": [], + "bbox": [ + 117, + 84, + 482, + 237 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "to the loss of predictive information from entities. Moreover, its effectiveness is dependent on the task property. Unlike MRC, entities are given and are not answers in RE, so the gap between ID and OOD performance of Entity Mask is much smaller. 
Entity Substitution stands out among all the baselines in terms of the OOD performance, with an absolute improvement of 3.5 points on RE and 7.7 points on MRC. However, its ID performance suffers a lot from the distribution shift of entities during training.", + "bbox": [ + 112, + 307, + 489, + 483 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Our training-time intervention achieves the best OOD performance, with an absolute improvement of 2.2 points on RE and 1.4 points on MRC compared with Entity Substitution. At the same time, its ID performance is also better. These results show that our method mitigates entity bias more effectively without losing much predictive information. In other words, the proposed method represents a better way to estimate the parameters of the proposed SCM accurately.", + "bbox": [ + 112, + 486, + 489, + 646 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.3 Analysis", + "text_level": 1, + "bbox": [ + 112, + 663, + 228, + 678 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "To provide a comprehensive understanding of our training-time intervention, we further conduct analyses on RE.", + "bbox": [ + 112, + 686, + 489, + 734 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Effect of $k$ . The number of neighbors, $k$ , plays a crucial role in balancing the predictive information and biasing information from entities. To find the sweet spot of $k$ , we examine its influence on model performance as shown in Fig. 5. In general, the ID performance decreases when $k$ increases. As the value of $k$ increases, the resulting convex hull becomes larger, causing the center of the hull to move further away from the original entity. Consequently, both the predictive information and biasing information that contribute to ID performance grad", + "bbox": [ + 112, + 741, + 489, + 919 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "ually diminish. 
In contrast, the OOD performance is lower when $k$ is too big or too small. When $k$ is too big, the same problem under ID setting also happens to the OOD setting. When $k$ is too small, the biasing information is not effectively mitigated, because the perturbed entity is too close to the original entity.", + "bbox": [ + 507, + 84, + 884, + 197 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Entity Type as Input. Previous experiments in this section do not explicitly input entity information as it may disturb the causal analysis. Here, we analyze the effect of entity type information as input. We use the typed-entity Marker_punct input format from Zhou and Chen (2022). The ID and OOD F1 scores of vanilla RoBERTa model are 74.6 and 68.9 points, respectively. Our training-time intervention further improves the ID performance by 0.7 points and the OOD performance by 2.9 points. These results indicate that information from neighboring entities is complementary to coarse-grained entity type information for precise RE.", + "bbox": [ + 507, + 203, + 885, + 413 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5 Black-Box Experiments", + "text_level": 1, + "bbox": [ + 507, + 426, + 749, + 443 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "In this section, we evaluate our in-context intervention for mitigating entity bias from LLMs under black-box setting.", + "bbox": [ + 507, + 453, + 882, + 502 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.1 Experimental Setup", + "text_level": 1, + "bbox": [ + 507, + 514, + 714, + 531 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Datasets. Following Zhou et al. (2023), we adopt GPT-3.5 text-davinci-003 as the backbone LLM and evaluate the model performance under a zero-shot setting. We use the RE and MRC datasets provided by Zhou et al. (2023). The RE dataset is based on Re-TACRED (Stoica et al., 2021). Zhou et al. 
pair each instance's entities with a randomly sampled context that shares the same entity types but possesses different relations. To mitigate the influence of the label no relation, which can also serve as a signal of abstention, we further filter out all instances whose original or updated labels are no relation. The MRC dataset is based on Natural Questions (Kwiatkowski et al., 2019). Zhou et al. replace the original answer in each instance with a randomly sampled entity of the same type. They only collect instances where the LLM can give the correct answer based on the raw context. Intuitively, LLMs that faithfully capture contextual information should update their answers based on the new context.", + "bbox": [ + 507, + 542, + 884, + 879 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Metrics. We report the F1 score for RE, and EM score for MRC. To align with previous works, we", + "bbox": [ + 507, + 887, + 882, + 917 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "15179", + "bbox": [ + 477, + 927, + 524, + 941 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/be6c9003fb40edac702a485a798962fd538118535c39d89b507e4846ed2de6dc.jpg", + "image_caption": [ + "MRC (EM↑)" + ], + "image_footnote": [], + "bbox": [ + 117, + 96, + 302, + 223 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/bc72c29f86b2b87f5661457131bdf034bd1761395f3cb70403b534e8f09f5462.jpg", + "image_caption": [ + "MRC (MR↓)" + ], + "image_footnote": [], + "bbox": [ + 309, + 96, + 495, + 222 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/cd431268605c6800f35fa1004b6cbc4ecbaddfcdcd993a6c0e1d14738e944b64.jpg", + "image_caption": [ + "RE(F1↑)" + ], + "image_footnote": [], + "bbox": [ + 502, + 96, + 685, + 222 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/e81114962483800c37f0c8a36288ff6f38777800b7221cf11cb9b2953493d90d.jpg", + "image_caption": [ + "RE (MR↓)" + ], + "image_footnote": [], + 
"bbox": [ + 694, + 96, + 877, + 222 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/4d847bd6ecc6921a78d6e8692d8fa0ca19bbd734699d2d8f8b462ba5f6d13c98.jpg", + "image_caption": [ + "Figure 6: GPT-3.5 results on MRC and RE under black-box setting. We report the EM score on MRC and the F1 score on RE, for which higher scores are better. We also report the MR score on both tasks, for which lower scores are better. Our in-context intervention performs consistently better than baselines under all settings.", + "Figure 7: Ablation study of in-context intervention for GPT-3.5 on RE." + ], + "image_footnote": [], + "bbox": [ + 119, + 318, + 484, + 422 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "also report the memorization ratio (MR; Longpre et al. 2021) to measure the model's ability to update answers based on given contexts. $^{8}$", + "bbox": [ + 112, + 488, + 485, + 538 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Baselines. We compare our in-context intervention with the methods introduced by Zhou et al. (2023). Base prompts directly concatenate the context and the question of each instance as the query. Attribute-based prompts append \"in the given context\" to the question. Opinion-based prompts modified the context to a narrator's statement by prepending \"Bob said\" to the context, and then query the LLM about the narrator's opinion by preponding \"What's Bob's opinion on\" to the question. We evaluate all methods with and without specifically designed task instructions following Zhou et al. (2023).", + "bbox": [ + 110, + 543, + 490, + 736 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Implementation Details. We apply our in-context intervention to attribute-based prompts. We adopt the backbone LLM to propose two similar entities along with the original entity to define each placeholder. To further eliminate the spurious entity mapping, we shuffle the entities for each placeholder before verbalization. 
Details of all prompt templates used can be found in Appx. §A.2. Since", + "bbox": [ + 112, + 741, + 487, + 871 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "entities are not given in MRC, we detect named entities and replace them with placeholders using gpt-3.5-turbo as an external tool. Given the potential abundance of entities in long contexts, we do not replace entities that exclusively appear in the context.", + "bbox": [ + 507, + 319, + 884, + 416 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5.2 Results", + "text_level": 1, + "bbox": [ + 507, + 429, + 613, + 442 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "As shown in Fig. 6, all methods benefit from carefully designed task instructions in terms of task performance. The Opinion-based prompt performs the best among all baselines in most cases. Compared with the Base prompt, it significantly improves the EM score by 18.7-21.5 points on MRC and the F1 score by 0.6-4.7 points on RE. Our in-context intervention achieves the highest EM/F1 score and the lowest MR score under all settings. Specifically, without task instruction, our in-context intervention outperforms the best baseline by 20.5 EM points on MRC and reduces the MR score by 17.6 points on RE. These results demonstrate the effectiveness of our causal intervention for addressing entity-based knowledge conflicts in black-box LLMs.", + "bbox": [ + 507, + 450, + 882, + 690 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5.3 Ablation Study", + "text_level": 1, + "bbox": [ + 507, + 703, + 673, + 719 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We additionally conduct an ablation study on RE to provide a comprehensive understanding of our method, as shown in Fig. 7. When the placeholder definition is not provided (i.e., w/o definition), no entity information, including both biasing and predictive information, appears in the input. 
As a result, it successfully blocks any spurious shortcuts, with MR dropping to 0. However, the F1 score also drops sharply from 71.8 points to 37.9 points, indicating that some entity information is essential to accurate RE and the LLM cannot understand the placeholders well without their definition.", + "bbox": [ + 507, + 726, + 882, + 917 + ], + "page_idx": 7 + }, + { + "type": "page_footnote", + "text": "${}^{8}MR = \frac{P_{o}}{P_{o} + P_{s}}$ , where $P_{o}$ is the probability that the model generates the original answer and $P_{s}$ is the probability that the model updates the answer correctly.", + "bbox": [ + 112, + 877, + 487, + 919 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "15180", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We further examine the role of original entities in the placeholder definition. On the one hand, we remove the original entities from the definition (i.e., w/o original entity). Results show that our method can still improve F1 while reducing MR. This verifies the effectiveness of using a set of similar entities to represent the predictive information from the original entity. On the other hand, we put the original subject and object entities at the same position (i.e., w/o entity shuffle) in the definition so that the LLM can easily map them. As a result, the MR increases significantly, showing that the LLM can find spurious shortcuts even by mapping the subject entity and the object entity from two entity sets.", + "bbox": [ + 112, + 84, + 492, + 326 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "6 Conclusion", + "text_level": 1, + "bbox": [ + 112, + 343, + 247, + 357 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "In this paper, we analyze the entity bias in LLMs from a causal view. 
Building upon an SCM whose parameters are easier to estimate, we propose training-time causal intervention for white-box LLMs and in-context causal intervention for black-box LLMs. Both intervention techniques perturb the original entity with neighboring entities to mitigate spurious correlations between specific entities and predictions. Experiments on relation extraction and machine reading comprehension show that the proposed intervention can effectively reduce the conflicts between parametric knowledge and contextual knowledge and significantly improve the performance of LLMs. Future work can apply our causal intervention to more LLMs and tasks to achieve context-faithful answers.", + "bbox": [ + 112, + 372, + 489, + 630 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Acknowledgement", + "text_level": 1, + "bbox": [ + 114, + 648, + 278, + 665 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "We appreciate the reviewers for their insightful comments and suggestions. Fei Wang is supported by the Annenberg Fellowship and the Amazon ML Fellowship. Wenjie Mo is supported by the USC CURVE Fellowship and the Provost's Research Fellowship. Wenxuan Zhou and Muhao Chen are supported by the NSF Grant IIS 2105329, the NSF Grant ITE 2333736, the DARPA MCS program under Contract No. N660011924033 with the United States Office Of Naval Research. This work is also supported in part by a Cisco Research Award, two Amazon Research Awards, and a Keston Research Award. 
Computing of this work has been partly supported by a subaward of NSF Cloudbank 1925001 through UCSD.", + "bbox": [ + 112, + 677, + 489, + 919 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Limitation", + "text_level": 1, + "bbox": [ + 509, + 83, + 606, + 98 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Although we have tried to verify the effectiveness of our method under diverse settings, including different LLMs, different accessibility of model parameters, and different tasks, there are always more options for further investigation, especially nowadays when more and more LLMs are being produced. Considering that the properties of the entity bias issue may vary across different LLMs and datasets from different domains, future work can build better benchmarks for more comprehensive evaluation. In this paper, we only consider zero-shot prompting for black-box LLMs, because this helps us to control variables during causal analysis. However, it is possible to combine the proposed causal intervention with cutting-edge LLM inference methods, such as in-context learning (Brown et al., 2020), although the underlying SCM may become more complex.", + "bbox": [ + 507, + 108, + 885, + 399 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 510, + 425, + 608, + 439 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901.", + "Hung-Ting Chen, Michael Zhang, and Eunsol Choi. 2022. Rich knowledge sources bring complex knowledge conflicts: Recalibrating models to reflect conflicting evidence.
In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2292-2307, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.", + "Xinshuai Dong, Anh Tuan Luu, Rongrong Ji, and Hong Liu. 2021. Towards robustness against natural language word substitutions. In International Conference on Learning Representations.", + "Amir Feder, Katherine A Keith, Emaad Manzoor, Reid Pryzant, Dhanya Sridhar, Zach Wood-Doughty, Jacob Eisenstein, Justin Grimmer, Roi Reichart, Margaret E Roberts, et al. 2022. Causal inference in natural language processing: Estimation, prediction, interpretation and beyond. Transactions of the Association for Computational Linguistics, 10:1138-1158.", + "Matthew Finlayson, Aaron Mueller, Sebastian Gehrmann, Stuart Shieber, Tal Linzen, and Yonatan Belinkov. 2021. Causal analysis of syntactic agreement mechanisms in neural language models. arXiv preprint arXiv:2106.06087.", + "Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman," + ], + "bbox": [ + 509, + 447, + 885, + 919 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "15181", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "and Phil Blunsom. 2015. Teaching machines to read and comprehend. Advances in neural information processing systems, 28.", + "Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know? Transactions of the Association for Computational Linguistics, 8:423-438.", + "Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77.", + "Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. 
Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601-1611.", + "Jungo Kasai, Keisuke Sakaguchi, Yoichi Takahashi, Ronan Le Bras, Akari Asai, Xinyan Yu, Dragomir Radev, Noah A Smith, Yejin Choi, and Kentaro Inui. 2022. Realtime qa: What's the answer right now? arXiv preprint arXiv:2207.13332.", + "Katherine Keith, David Jensen, and Brendan O'Connor. 2020. Text and causal inference: A review of using text to remove confounding from causal estimates. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5332-5344.", + "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453-466.", + "John P Lalor, Yi Yang, Kendall Smith, Nicole Forsgren, and Ahmed Abbasi. 2022. Benchmarking intersectional biases in nlp. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3598-3609.", + "Daliang Li, Ankit Singh Rawat, Manzil Zaheer, Xin Wang, Michal Lukasik, Andreas Veit, Felix Yu, and Sanjiv Kumar. 2022. Large language models with controllable working memory. arXiv preprint arXiv:2211.05110.", + "Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. TruthfulQA: Measuring how models mimic human falsehoods. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3214-3252, Dublin, Ireland. 
Association for Computational Linguistics.", + "Adam Liska, Tomas Kocisky, Elena Gribovskaya, Tayfun Terzi, Eren Sezener, Devang Agrawal, Cyprien de Masson d'Autume, Tim Scholtes, Manzil Zaheer," + ], + "bbox": [ + 115, + 85, + 487, + 917 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Susannah Young, et al. 2022. Streamingqa: A benchmark for adaptation to new knowledge over time in question answering models. In International Conference on Machine Learning, pages 13604-13622. PMLR.", + "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.", + "Shayne Longpre, Kartik Perisetla, Anthony Chen, Nikhil Ramesh, Chris DuBois, and Sameer Singh. 2021. Entity-based knowledge conflicts in question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7052-7063.", + "Guoshun Nan, Jiaqi Zeng, Rui Qiao, Zhijiang Guo, and Wei Lu. 2021. Uncovering main causalities for long-tailed information extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9683-9695.", + "Yulei Niu and Hanwang Zhang. 2021. Introspective distillation for robust question answering. Advances in Neural Information Processing Systems, 34:16292-16304.", + "Judea Pearl. 2012. The do-calculus revisited. In Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence, pages 3-11.", + "Hao Peng, Tianyu Gao, Xu Han, Yankai Lin, Peng Li, Zhiyuan Liu, Maosong Sun, and Jie Zhou. 2020. Learning from context or names? an empirical study on neural relation extraction.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3661-3672.", + "Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463-2473, Hong Kong, China. Association for Computational Linguistics.", + "Chen Qian, Fuli Feng, Lijie Wen, Chunping Ma, and Pengjun Xie. 2021a. Counterfactual inference for text classification debiasing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5434-5445.", + "Kun Qian, Ahmad Beirami, Zhouhan Lin, Ankita De, Alborz Geramifard, Zhou Yu, and Chinnadhurai Sankar. 2021b. Annotation inconsistency and entity bias in MultiWOZ. In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 326-337, Singapore and Online. Association for Computational Linguistics." + ], + "bbox": [ + 510, + 85, + 882, + 917 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "15182", + "bbox": [ + 477, + 928, + 524, + 940 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5418-5426, Online. Association for Computational Linguistics.", + "George Stoica, Emmanouil Antonios Platanios, and Barnabás Póczos. 2021. Re-tacred: Addressing shortcomings of the tacred dataset.
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 13843-13850.", + "Chris Sweeney and Maryam Najafian. 2019. A transparent framework for evaluating unintended demographic bias in word embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1662-1667.", + "Bing Tian, Yixin Cao, Yong Zhang, and Chunxiao Xing. 2022. Debiasing nlu models via causal intervention and counterfactual reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 11376-11384.", + "Can Udomcharoenchaikit, Wuttikorn Ponwitayarat, Patomporn Payoungkhamdee, Kanruethai Masuk, Weerayut Buaphet, Ekapol Chuangsuwanich, and Sarana Nutanong. 2022. Mitigating spurious correlation in natural language understanding with counterfactual inference. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11308-11321.", + "Pranav Narayanan Venkit and Shomir Wilson. 2021. Identification of bias against people with disabilities in sentiment analysis and toxicity detection models. arXiv preprint arXiv:2111.13259.", + "Thomas Verma and Judea Pearl. 1990. Equivalence and synthesis of causal models. In Proceedings of the Sixth Annual Conference on Uncertainty in Artificial Intelligence, pages 255-270.", + "Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Simas Sakenis, Jason Huang, Yaron Singer, and Stuart Shieber. 2020. Causal mediation analysis for interpreting neural nlp: The case of gender bias. arXiv preprint arXiv:2004.12265.", + "Yiwei Wang, Muhao Chen, Wenxuan Zhou, Yujun Cai, Yuxuan Liang, Dayiheng Liu, Baosong Yang, Juncheng Liu, and Bryan Hooi. 2022. Should we rely on entity mentions for relation extraction? debiasing relation extraction with counterfactual analysis. arXiv preprint arXiv:2205.03784.", + "Yiwei Wang, Bryan Hooi, Fei Wang, Yujun Cai, Yuxuan Liang, Wenxuan Zhou, Jing Tang, Manjuan Duan, and Muhao Chen.
2023. How fragile is relation extraction under entity replacements? In Proceedings of the 27th SIGNLL Conference on Computational Natural Language Learning (CoNLL)." + ], + "bbox": [ + 115, + 85, + 489, + 917 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2022. Finetuned language models are zero-shot learners. In International Conference on Learning Representations.", + "Galen Weld, Peter West, Maria Glenski, David Arbour, Ryan A Rossi, and Tim Althoff. 2022. Adjusting for confounders with text: Challenges and an empirical evaluation framework for causal inference. In Proceedings of the International AAAI Conference on Web and Social Media, volume 16, pages 1109-1120.", + "Nan Xu, Fei Wang, Bangzheng Li, Mingtao Dong, and Muhao Chen. 2022. Does your model classify entities reasonably? diagnosing and mitigating spurious correlations in entity typing. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing.", + "Jun Yan, Yang Xiao, Sagnik Mukherjee, Bill Yuchen Lin, Robin Jia, and Xiang Ren. 2022. On the robustness of reading comprehension models to entity renaming. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 508-520.", + "Dong Zhang, Hanwang Zhang, Jinhui Tang, Xian-Sheng Hua, and Qianru Sun. 2020a. Causal intervention for weakly-supervised semantic segmentation. Advances in Neural Information Processing Systems, 33:655-666.", + "Guanhua Zhang, Bing Bai, Junqi Zhang, Kun Bai, Conghui Zhu, and Tiejun Zhao. 2020b. Demographics should not be the reason of toxicity: Mitigating discrimination in text classifications with instance weighting. arXiv preprint arXiv:2004.14088.", + "Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D Manning. 2017. 
Position-aware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 35-45.", + "Wenxuan Zhou and Muhao Chen. 2022. An improved baseline for sentence-level relation extraction. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing, pages 161-168.", + "Wenxuan Zhou, Sheng Zhang, Hoifung Poon, and Muhao Chen. 2023. Context-faithful prompting for large language models. In Findings of the 2023 Conference on Empirical Methods in Natural Language Processing.", + "Yongchun Zhu, Qiang Sheng, Juan Cao, Shuokai Li, Danding Wang, and Fuzhen Zhuang. 2022. Generalizing to the future: Mitigating entity bias in fake news detection. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2120-2125." + ], + "bbox": [ + 510, + 85, + 882, + 910 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "15183", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "A Implementation Details", + "text_level": 1, + "bbox": [ + 114, + 83, + 356, + 99 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "A.1 White-Box Experiments", + "text_level": 1, + "bbox": [ + 114, + 112, + 354, + 129 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "For RE, we use RoBERTa-Large as our backbone model, which has 354 million parameters. Our implementation is based on the codebase by Zhou and Chen (2022) with their default hyper-parameters. More specifically, we employ a learning rate of 3e-5, a batch size of 32, and conduct training for a total of 5 epochs. Other method-specific hyperparameters are selected on the development set of TACRED.
Finetuning typically takes 1.5 hours on an NVIDIA RTX A5000 GPU.", + "bbox": [ + 112, + 137, + 487, + 296 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "For MRC, we use SpanBERT-base-cased as our backbone model, which has 110 million parameters. Our implementation is based on the codebase by Yan et al. (2022) with their default hyperparameters. More specifically, we employ a learning rate of 2e-5, a batch size of 16, and conduct training for a total of 4 epochs. Other method-specific hyper-parameters are selected on the hold-out development set of TriviaQA. Finetuning typically takes 3 hours on an NVIDIA RTX A5000 GPU.", + "bbox": [ + 112, + 300, + 489, + 475 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "A.2 Black-Box Experiments", + "text_level": 1, + "bbox": [ + 114, + 494, + 351, + 508 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Our implementation is based on the codebase by Zhou et al. (2023).", + "bbox": [ + 112, + 518, + 485, + 549 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "The instruction for MRC is", + "bbox": [ + 132, + 552, + 337, + 565 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Instruction: read the given information and answer the corresponding question.", + "bbox": [ + 131, + 586, + 468, + 619 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "The prompt without instruction for MRC is", + "bbox": [ + 132, + 638, + 455, + 653 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Assume that {ENTITY0} can be any of {entity0Candidates}. [Assume that {ENTITY1} can be any of {entity1Candidates} ...] {context}", + "bbox": [ + 132, + 671, + 470, + 734 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Q:{question} based on the given text? Extract the answer from the given text. 
Do not add other words.", + "bbox": [ + 132, + 737, + 468, + 782 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "A:", + "bbox": [ + 132, + 785, + 154, + 799 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "The instruction for RE is", + "bbox": [ + 132, + 816, + 319, + 831 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Identify the relationship between two entities from a list of options.", + "bbox": [ + 132, + 852, + 467, + 883 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "The prompt without instruction for RE is", + "bbox": [ + 132, + 903, + 438, + 917 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Assume that subject_entity is one of {subjCandidates}, while object-entity is one of {objCandidates} in the following text. {context}", + "bbox": [ + 526, + 89, + 863, + 153 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Q: Which option indicates the relationship between subject_entity and object-entity in the given text?", + "bbox": [ + 527, + 154, + 863, + 200 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Options:{options}", + "bbox": [ + 529, + 202, + 665, + 218 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "A:", + "bbox": [ + 529, + 219, + 549, + 231 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "The prompt template for detecting entities in MRC is", + "bbox": [ + 509, + 247, + 880, + 277 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "List named entities in the following sentence. Separate the entities with $\\# \\# \\# \\#$ , if you find multiple entities. 
Do not add additional words before or after your answers.", + "bbox": [ + 526, + 294, + 865, + 357 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "{sentence}", + "bbox": [ + 527, + 360, + 608, + 374 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "The prompt template for replacing entities with placeholders in MRC is", + "bbox": [ + 507, + 390, + 880, + 420 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Replace the entity {entity_list} in the following paragraph. \n{paragraph}", + "bbox": [ + 526, + 437, + 865, + 485 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "The prompt template for finding similar entities is", + "bbox": [ + 509, + 501, + 880, + 531 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Name two [{entity_type}] entities similar to \"{entity}\". Separate the entities with \\#\\#\\#, and do not add additional words before or after your answers. Provide random answers if you are not sure.", + "bbox": [ + 526, + 546, + 863, + 627 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "In all the above prompts, variables are surrounded with curly brackets and optional variables are surrounded with square brackets.", + "bbox": [ + 507, + 640, + 882, + 689 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "15184", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 11 + } +] \ No newline at end of file diff --git a/2023/A Causal View of Entity Bias in (Large) Language Models/729caa97-c496-4007-b8f4-5ff70bb6b7ae_model.json b/2023/A Causal View of Entity Bias in (Large) Language Models/729caa97-c496-4007-b8f4-5ff70bb6b7ae_model.json new file mode 100644 index 0000000000000000000000000000000000000000..984fb9586e4d2cb5174c76193e107d2a5e92ac45 --- /dev/null +++ b/2023/A Causal View of Entity Bias in (Large) Language Models/729caa97-c496-4007-b8f4-5ff70bb6b7ae_model.json @@ -0,0 +1,2446 @@ +[ + [ + { + "type": "title", + "bbox": [ + 0.197, + 0.091, + 0.8, + 0.112 +
], + "angle": 0, + "content": "A Causal View of Entity Bias in (Large) Language Models" + }, + { + "type": "text", + "bbox": [ + 0.188, + 0.13, + 0.82, + 0.148 + ], + "angle": 0, + "content": "Fei Wang† Wenjie Mo† Yiwei Wang‡ Wenxuan Zhou† Muhao Chen†#" + }, + { + "type": "text", + "bbox": [ + 0.199, + 0.149, + 0.805, + 0.165 + ], + "angle": 0, + "content": "†University of Southern California; ‡University of California, Los Angeles;" + }, + { + "type": "text", + "bbox": [ + 0.373, + 0.166, + 0.631, + 0.181 + ], + "angle": 0, + "content": "\\(^{\\#}\\)University of California, Davis" + }, + { + "type": "text", + "bbox": [ + 0.204, + 0.182, + 0.797, + 0.199 + ], + "angle": 0, + "content": "{fwang598, jackymo, zhouwenx}@usc.edu; wangyw.evan@gmail.com;" + }, + { + "type": "text", + "bbox": [ + 0.405, + 0.2, + 0.599, + 0.214 + ], + "angle": 0, + "content": "muhchen@ucdavis.edu" + }, + { + "type": "title", + "bbox": [ + 0.261, + 0.253, + 0.341, + 0.267 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.142, + 0.28, + 0.461, + 0.735 + ], + "angle": 0, + "content": "Entity bias widely affects pretrained (large) language models, causing them to rely on (biased) parametric knowledge to make unfaithful predictions. Although causality-inspired methods have shown great potential to mitigate entity bias, it is hard to precisely estimate the parameters of underlying causal models in practice. The rise of black-box LLMs also makes the situation even worse, because of their inaccessible parameters and uncalibrated logits. To address these problems, we propose a specific structured causal model (SCM) whose parameters are comparatively easier to estimate. Building upon this SCM, we propose causal intervention techniques to mitigate entity bias for both white-box and black-box settings. The proposed causal intervention perturbs the original entity with neighboring entities. 
This intervention reduces specific biasing information pertaining to the original entity while still preserving sufficient semantic information from similar entities. Under the white-box setting, our training-time intervention improves OOD performance of PLMs on relation extraction (RE) and machine reading comprehension (MRC) by 5.7 points and by 9.1 points, respectively. Under the black-box setting, our in-context intervention effectively reduces the entity-based knowledge conflicts of GPT-3.5, achieving up to 20.5 points of improvement of exact match accuracy on MRC and up to 17.6 points of reduction in memorization ratio on RE.1" + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.747, + 0.26, + 0.763 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.111, + 0.772, + 0.489, + 0.884 + ], + "angle": 0, + "content": "Entity bias (Longpre et al., 2021; Wang et al., 2022; Xu et al., 2022; Peng et al., 2020; Qian et al., 2021b; Hermann et al., 2015) refers to an undesirable phenomenon where models overly rely on prediction shortcuts triggered by specific entities to make spurious predictions. For example, given the sentence \"Bill Gates went to Microsoft Building 99,\" models" + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.264, + 0.808, + 0.277 + ], + "angle": 0, + "content": "Context: Bill Gates went to Microsoft Building 99." + }, + { + "type": "text", + "bbox": [ + 0.526, + 0.278, + 0.825, + 0.288 + ], + "angle": 0, + "content": "Question: What's the relation between Bill Gates and" + }, + { + "type": "text", + "bbox": [ + 0.527, + 0.29, + 0.704, + 0.301 + ], + "angle": 0, + "content": "Microsoft in the given context?" + }, + { + "type": "text", + "bbox": [ + 0.527, + 0.302, + 0.66, + 0.313 + ], + "angle": 0, + "content": "Option: founder, visitor." 
+ }, + { + "type": "text", + "bbox": [ + 0.527, + 0.314, + 0.783, + 0.327 + ], + "angle": 0, + "content": "Answer with one word: founder (GPT-3.5) X" + }, + { + "type": "text", + "bbox": [ + 0.526, + 0.349, + 0.858, + 0.386 + ], + "angle": 0, + "content": "Assume subject_entity can be any of Bill Gates, Jeff Bezos, and Steve Jobs, while object-entity can be any of Google, Microsoft, and Meta." + }, + { + "type": "text", + "bbox": [ + 0.528, + 0.388, + 0.851, + 0.399 + ], + "angle": 0, + "content": "Context: subject entity went to object entity Building 99." + }, + { + "type": "text", + "bbox": [ + 0.528, + 0.4, + 0.854, + 0.424 + ], + "angle": 0, + "content": "Question: What's the relation between subject_entity and object-entity in the given context?" + }, + { + "type": "text", + "bbox": [ + 0.528, + 0.425, + 0.661, + 0.436 + ], + "angle": 0, + "content": "Option: founder, visitor." + }, + { + "type": "text", + "bbox": [ + 0.528, + 0.438, + 0.776, + 0.45 + ], + "angle": 0, + "content": "Answer with one word: visitor (GPT-3.5)" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.469, + 0.883, + 0.513 + ], + "angle": 0, + "content": "Figure 1: An example of entity bias in GPT-3.5. Our in-context intervention mitigates the conflicts between parametric knowledge and contextual knowledge." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.542, + 0.885, + 0.735 + ], + "angle": 0, + "content": "may be misled by their memory of the entities Bill Gates and Microsoft, saying the relation between them in this context is founder rather than visitor, as shown in Fig. 1. Recent studies show that entity bias widely affects pretrained (large) language models (LLMs; Longpre et al. 2021; Yan et al. 2022; Zhou et al. 2023). 
These models have a tendency to disregard contextual information that contradicts or is infrequently reported in the pretrained corpus, while excessively relying on (biased) parametric knowledge (Longpre et al., 2021) to make unfaithful predictions and perpetuate bias." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.737, + 0.884, + 0.866 + ], + "angle": 0, + "content": "Prior studies have proposed multiple causality-inspired methods to mitigate entity bias (Zhang et al., 2017; Nan et al., 2021; Wang et al., 2022; Zhu et al., 2022). Despite their potential, the causal models underlying these methods are flawed in practice, primarily because of imprecise parameter estimation. For example, some causal models necessitate estimating the probability distribution" + }, + { + "type": "page_footnote", + "bbox": [ + 0.114, + 0.892, + 0.488, + 0.919 + ], + "angle": 0, + "content": "1Our code is available at https://github.com/ luka-group/Causal-View-of-Entity-Bias" + }, + { + "type": "page_footnote", + "bbox": [ + 0.509, + 0.881, + 0.884, + 0.919 + ], + "angle": 0, + "content": "2Although Zhang et al. (2017) do not mention causal theory, the proposed entity masking does follow a relevant principle to cut off causal links between specific entities and labels." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.477, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "15173" + }, + { + "type": "footer", + "bbox": [ + 0.21, + 0.946, + 0.788, + 0.959 + ], + "angle": 0, + "content": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 15173-15184" + }, + { + "type": "footer", + "bbox": [ + 0.278, + 0.96, + 0.72, + 0.972 + ], + "angle": 0, + "content": "December 6-10, 2023 ©2023 Association for Computational Linguistics" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.112, + 0.085, + 0.489, + 0.228 + ], + "angle": 0, + "content": "over labels when given a sentence that is devoid of entities or contextual information (Zhang et al., 2017; Wang et al., 2022). These methods either lose predictive information about entities, or are prone to erroneous representation without contextualization. The other critical problem is the difficulty of applying these methods to black-box LLMs, of which parameters are inaccessible and logits are uncalibrated." + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.231, + 0.49, + 0.472 + ], + "angle": 0, + "content": "To address the aforementioned problems, the first contribution of this paper is a causal analysis of entity bias mitigation methods (§3.1). We examine and compare the structured causal models (SCMs) behind existing methods. We find that, among the theoretically equivalent causal models (Verma and Pearl, 1990), there exists a specific SCM whose parameters are comparatively easier to estimate. As shown in Fig. 2, the proposed SCM only requires to intervene input entities to mitigate the presence of spurious features before passing them to the subsequent neural layers. Moreover, it retains the entity type information3 at an appropriate level of granularity without requiring explicit entity typing." 
+ }, + { + "type": "text", + "bbox": [ + 0.112, + 0.473, + 0.49, + 0.713 + ], + "angle": 0, + "content": "The second contribution of this paper is a training-time causal intervention technique for mitigating entity bias based on the proposed SCM (§3.2). Specifically, we identify entities that are likely to share similar predictive information with the given entity. During training, we perturb embedding of the given entity within a convex hull constructed by embeddings of similar entities. During inference, we represent the entity with the center of the convex hull. Taking advantage of the continuous nature of the embedding space, this intervention does not rely on models specifically trained on natural language to estimate the label distribution of unnatural text, nor does it sacrifice predictive entity or contextual information." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.714, + 0.49, + 0.86 + ], + "angle": 0, + "content": "The third contribution of this paper is to transform the training-time intervention into in-context intervention for black-box LLMs whose parameters are inaccessible, and logits are uncalibrated (§3.3). A significant advantage of the proposed SCM is that the causal intervention is carried out at the input layer, enabling its implementation within an in-context setting. Specifically, we replace entities with placeholders and define each placeholder" + }, + { + "type": "image", + "bbox": [ + 0.515, + 0.083, + 0.68, + 0.174 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.697, + 0.081, + 0.864, + 0.174 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.513, + 0.175, + 0.885, + 0.253 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.509, + 0.262, + 0.882, + 0.278 + ], + "angle": 0, + "content": "Figure 2: Structured causal models revealing entity bias." 
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.304, + 0.885, + 0.414 + ], + "angle": 0, + "content": "by examples: a set of similar entities. For example, we can replace Bill Gates in Fig. 1 with subject_entity and prepend the prompt, \"Assume that subject_entity can be any of Steve Jobs, Bill Gates, and Jeff Bezos\", to the input. This in-context intervention can be applied to any black-box LLM without additional cost." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.416, + 0.885, + 0.77 + ], + "angle": 0, + "content": "Experiments on relation extraction (RE) and machine reading comprehension (MRC) show that the proposed causal intervention techniques are effective for both white-box and black-box LLMs. Under the white-box setting (\\(\\S 4\\)), our training-time intervention significantly improves the out-of-distribution performance of RoBERTa (Liu et al., 2019) on RE by 5.7 points and SpanBERT (Joshi et al., 2020) on MRC by 9.1 points, compared with the vanilla version. Under the black-box setting (\\(\\S 5\\)), our in-context intervention effectively reduces entity-based knowledge conflicts (Longpre et al., 2021) and improves the task performance of GPT-3.5. Specifically, our method outperforms the best baseline by up to 20.5 points of exact match accuracy on MRC and reduces the memorization ratio by up to 17.6 points on RE. Further analyses reveal the crucial role of the number of neighboring entities \\(k\\) in balancing the predictive information and biasing information from entities, and the necessity of entity placeholder definition for in-context intervention." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.783, + 0.666, + 0.799 + ], + "angle": 0, + "content": "2 Related Work" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.815, + 0.885, + 0.895 + ], + "angle": 0, + "content": "Entity Bias in LLMs.
LLMs memorize factual knowledge in their parameters during pretraining (Roberts et al., 2020; Jiang et al., 2020) and show promising results in answering factual questions (Petroni et al., 2019; Brown et al., 2020; Wei" + }, + { + "type": "page_footnote", + "bbox": [ + 0.113, + 0.869, + 0.489, + 0.919 + ], + "angle": 0, + "content": "3Entity type information plays a crucial role in entity-driven tasks. For example, without knowing a more specific location type, it is impossible to differentiate between relations born_in_city and born_in_country." + }, + { + "type": "page_footnote", + "bbox": [ + 0.53, + 0.904, + 0.838, + 0.919 + ], + "angle": 0, + "content": "4https://platform.openai.com/docs/models/gpt-3-5" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "15174" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.112, + 0.085, + 0.493, + 0.312 + ], + "angle": 0, + "content": "et al., 2022). However, the parametric knowledge may be inaccurate due to the misinformation in the training corpus (Lin et al., 2022) or outdated as the world evolves (Liska et al., 2022; Kasai et al., 2022). In such scenarios, it is critical for LLMs to update their predictions when provided with contextual evidence. However, previous studies (Longpre et al., 2021; Qian et al., 2021b; Yan et al., 2022) observe that language models may take entities as shortcuts, leading to spurious predictions based solely on parametric knowledge. This bias becomes more prominent when the evidence contains infrequent or conflicting knowledge compared to the training corpus." 
+ }, + { + "type": "text", + "bbox": [ + 0.112, + 0.313, + 0.492, + 0.522 + ], + "angle": 0, + "content": "To mitigate this bias, previous work (Longpre et al., 2021; Chen et al., 2022; Li et al., 2022; Zhou et al., 2023) introduces the entity substitution technique, which constructs counterfactual data by randomly replacing entities and updates the language models either by finetuning or in-context learning. Although they show improved results, these techniques are empirical and lack theoretical grounding. In this paper, we theoretically analyze the entity bias problem from a causal view. Furthermore, we propose a causal intervention method that surpasses the performance of entity substitution." + }, + { + "type": "text", + "bbox": [ + 0.112, + 0.531, + 0.491, + 0.82 + ], + "angle": 0, + "content": "Debiasing with Causal Intervention. LLMs have been shown to exhibit bias problems, and much literature has focused on mitigating their adverse effects (Sweeney and Najafian, 2019; Zhang et al., 2020b; Venkit and Wilson, 2021; Lalor et al., 2022). Recent debiasing techniques incorporate the concept of counterfactual inference and have been applied in various tasks for bias mitigation (Niu and Zhang, 2021; Qian et al., 2021a; Wang et al., 2022). One dominant technique is based on causal mediation analysis (Udomcharoenchaikit et al., 2022), which involves decomposing the total effect into the pure direct effect and the total indirect effect. In this context, Wang et al. (2022) utilize the total direct effect and total effect to debias relation extraction. Apart from debiasing, causal mediation analysis can be used to analyze biases in LLMs (Vig et al., 2020; Finlayson et al., 2021)."
+ }, + { + "type": "text", + "bbox": [ + 0.112, + 0.823, + 0.492, + 0.921 + ], + "angle": 0, + "content": "In addition to intervening on the causal mediator, previous studies have also explored confounder analysis (Keith et al., 2020; Qian et al., 2021a; Feder et al., 2022; Weld et al., 2022). A confounder is a variable that influences both the input and the output, causing a spurious correlation between them." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.887, + 0.263 + ], + "angle": 0, + "content": "Typically, the de-confounder process applies the do-calculus (Pearl, 2012) to compute the prediction assuming that the value of the confounder variable is not the observed one but follows its natural distribution (Zhang et al., 2020a; Tian et al., 2022). Our approach is also based on confounder analysis. While nearly all the aforementioned approaches require white-box access to the model, or at least the logits of its predictions, this work represents a pilot study of a deconfounder method that applies to purely black-box LLMs." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.276, + 0.613, + 0.291 + ], + "angle": 0, + "content": "3 Method" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.303, + 0.886, + 0.415 + ], + "angle": 0, + "content": "In this section, we first analyze methods for mitigating entity bias from a causal view and propose an easy-to-estimate SCM as a theoretical basis (§3.1). Based on the proposed SCM, we design a training-time intervention technique for white-box LLMs (§3.2) and an in-context intervention technique for black-box LLMs (§3.3)." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.43, + 0.796, + 0.445 + ], + "angle": 0, + "content": "3.1 Causal Analysis of Entity Bias" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.452, + 0.886, + 0.709 + ], + "angle": 0, + "content": "To compare existing methods in the same context, we analyze the structured causal models (SCMs) behind them. Fig.
2 shows two typical SCMs for entity bias mitigation methods, where \\( X \\) refers to the raw input, \\( E \\) refers to entities, and \\( Y \\) refers to the label. The links \\( X \\rightarrow Y \\leftarrow E \\) show that LLMs rely on both predictive information from the whole input and biasing information from specific entities to make the prediction. The links \\( E \\rightarrow X \\) and \\( X \\rightarrow E \\) assume that the context is written down with the entity in mind, or vice versa. As discussed by Verma and Pearl (1990), we cannot differentiate between these two directions merely based on statistical observations. Indeed, the two SCMs with opposite links between \\( X \\) and \\( E \\) are equivalent according to Bayes' theorem:" + }, + { + "type": "equation", + "bbox": [ + 0.591, + 0.725, + 0.803, + 0.78 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} P(X) P(E \\mid X) P(Y \\mid X, E) \\\\ = P(Y, X, E) \\\\ = P(E) P(X \\mid E) P(Y \\mid X, E) \\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.796, + 0.886, + 0.876 + ], + "angle": 0, + "content": "As revealed by these SCMs, entity bias exists in LLMs because entities serve as either confounders or mediators.
Thus, the bias can be mitigated through causal intervention, such as the backdoor adjustment" + }, + { + "type": "equation", + "bbox": [ + 0.55, + 0.89, + 0.843, + 0.922 + ], + "angle": 0, + "content": "\\[\nP(Y \\mid do(X)) = \\sum_{E} P(Y \\mid X, E) P(E),\n\\]" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "15175" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.118, + 0.082, + 0.528, + 0.245 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.561, + 0.081, + 0.887, + 0.246 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.128, + 0.254, + 0.867, + 0.27 + ], + "angle": 0, + "content": "Figure 3: Left: Training-time intervention with \\( k = 4 \\). Right: Example of predictive and biasing information." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.294, + 0.49, + 0.552 + ], + "angle": 0, + "content": "which eliminates the influence of a specific variable (in this context, \\( E \\)) by assigning values to this variable. However, previous SCM-based debiasing methods exhibit divergent performance, since they estimate different (conditional) probabilities using different surrogates when performing the causal intervention. For example, the counterfactual analysis by Wang et al. (2022) estimates and deducts the biasing effect of entities on labels by masking the context, while Zhang et al. (2017) and Longpre et al. (2021) directly remove the effect of entities by entity masking or substitution. None of them estimates the causal effects of entity names precisely, due to the highly complex architectures of LLMs, which accounts for their unsatisfactory performance in mitigating entity bias." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.566, + 0.49, + 0.919 + ], + "angle": 0, + "content": "In this work, we consider the SCM in Fig. 2, whose parameters are much easier to estimate in practice.
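The backdoor adjustment above lends itself to a quick numeric check. Below is a minimal sketch on a toy discrete SCM in which a binary entity E confounds a binary input X and label Y; all distributions here are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete SCM: a binary entity E confounds a binary input X and label Y.
p_e = np.array([0.7, 0.3])                    # P(E)
p_x_given_e = np.array([[0.9, 0.1],           # P(X|E): row e, column x
                        [0.2, 0.8]])
p_y_given_xe = rng.dirichlet(np.ones(2), size=(2, 2))  # P(Y|X,E): axes (e, x, y)

x = 0
# Observational P(Y|X=x) mixes over P(E|X=x), so it inherits the E-X correlation.
p_ex = p_e * p_x_given_e[:, x]                # joint P(E, X=x)
p_e_given_x = p_ex / p_ex.sum()               # Bayes' theorem
p_y_obs = p_e_given_x @ p_y_given_xe[:, x, :]

# Backdoor adjustment: P(Y|do(X=x)) = sum_E P(Y|X=x,E) P(E).
p_y_do = p_e @ p_y_given_xe[:, x, :]

# The two generally differ: conditioning leaves the confounding path through E
# open, while do(X) severs it.
print(p_y_obs, p_y_do)
```

Whenever E and X are correlated, as in this toy setup, the conditional and interventional distributions over Y disagree, which is exactly the bias the adjustment removes.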
Since most LLMs follow a sequential structure by stacking neural layers, mitigating the entity bias in one layer will also mitigate it in subsequent layers. The underlying logic is simple: if we block the spurious features in the input, there will be no spurious correlations to capture. Therefore, we propose to mitigate the entity bias in the input layer \\( M \\), which could be an embedding layer or a prompt layer. Obviously, \\( P(M|X,E) \\) can be estimated more accurately and efficiently than \\( P(Y|X,E) \\), because there is no need to run the whole model, which reduces both error propagation and computational cost. To further improve the estimation by retaining as much predictive information as possible, we propose to estimate \\( P(M|do(X)) \\) by perturbing the entity with similar entities rather than masking it. In the following sections, we will show how to realize the proposed causal intervention on both white-box and black-box LLMs." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.294, + 0.768, + 0.309 + ], + "angle": 0, + "content": "3.2 Training-time Intervention" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.314, + 0.885, + 0.619 + ], + "angle": 0, + "content": "For white-box models whose parameters are accessible, we can effectively address their internal bias through training-time intervention. In the case of entity bias identified by the proposed SCM, we realize the causal intervention by perturbing the input entities or entity tokens using their neighboring counterparts in the embedding space, as shown in Fig. 3 (Left). For each entity present in the input text, we first find its top \\( k \\) nearest neighbors according to embedding distance. Then we construct the smallest convex hull5 that covers the original entity and its neighboring entities. Due to the continuous nature of the embedding space, the embeddings within the convex hull approximately represent the same predictive information as a whole.
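The convex-hull construction just described can be sketched in a few lines of NumPy. This is a minimal illustration over a toy embedding table; the helper names (`knn`, `intervene`) are assumptions, and using the mean of the hull's vertices as its "center" is an illustrative simplification rather than the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn(entity_id, emb, k):
    """Indices of the k nearest entities by Euclidean distance (excluding self)."""
    d = np.linalg.norm(emb - emb[entity_id], axis=1)
    d[entity_id] = np.inf
    return np.argsort(d)[:k]

def intervene(entity_id, emb, k=3, training=True):
    """Replace an entity embedding with a point in the convex hull spanned by
    the entity and its k nearest neighbors: a random convex combination during
    training, the vertex average (the hull 'center') at inference."""
    vertices = emb[np.append(knn(entity_id, emb, k), entity_id)]  # (k+1, dim)
    if training:
        w = rng.dirichlet(np.ones(len(vertices)))        # random point inside the hull
    else:
        w = np.full(len(vertices), 1.0 / len(vertices))  # uniform weights: center
    return w @ vertices

emb = rng.normal(size=(100, 16))                    # toy table of 100 entity embeddings
perturbed = intervene(7, emb, k=3, training=True)   # training-time sample
center = intervene(7, emb, k=3, training=False)     # inference-time center
```

Because the weights are a convex combination, every sampled point stays inside the hull, so each coordinate of the perturbed embedding is bounded by the vertex embeddings.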
The entity-specific biasing information, which has the potential to trigger spurious shortcuts, gradually diminishes from the original entity towards the border of the convex hull." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.621, + 0.885, + 0.862 + ], + "angle": 0, + "content": "During training, we introduce perturbations to the entity embedding by replacing it with a random embedding selected from within the convex hull. In this way, the convex hull bounds the predictive information, while random sampling further introduces noise and increases data diversity for robust training. During inference, we replace the original entity embedding with the center of the convex hull, in order to balance the trade-off between predictive and biasing information. Fig. 3 (Right) provides an example of the information preserved through such intervention. By replacing the entity Bill Gates with the center of the convex hull, encompassed by its neighboring entities, such as Steve Jobs and Jeff Bezos, we effectively retain the" + }, + { + "type": "page_footnote", + "bbox": [ + 0.509, + 0.87, + 0.883, + 0.919 + ], + "angle": 0, + "content": "5This convex hull-bounded perturbation is inspired by Dong et al. (2021), where perturbation within a convex hull formed by synonyms is used to improve model robustness against word substitutions." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "15176" + } + ], + [ + { + "type": "image_caption", + "bbox": [ + 0.21, + 0.086, + 0.443, + 0.099 + ], + "angle": 0, + "content": "1. Replace entities with placeholders" + }, + { + "type": "image", + "bbox": [ + 0.125, + 0.1, + 0.549, + 0.281 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.608, + 0.086, + 0.845, + 0.099 + ], + "angle": 0, + "content": "3.
Define placeholders with examples" + }, + { + "type": "image", + "bbox": [ + 0.572, + 0.101, + 0.882, + 0.281 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.168, + 0.292, + 0.828, + 0.308 + ], + "angle": 0, + "content": "Figure 4: In-context intervention for black-box LLMs. We take relation extraction as an example." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.332, + 0.489, + 0.413 + ], + "angle": 0, + "content": "shared predictive information (e.g., person), while mitigating the biasing information (e.g., founder of Microsoft). That is to say, the convex hull-bounded perturbation serves as an effective estimation of \\( P(M|do(X)) \\)." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.454, + 0.344, + 0.468 + ], + "angle": 0, + "content": "3.3 In-context Intervention" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.492, + 0.49, + 0.702 + ], + "angle": 0, + "content": "The rise of Web services powered by black-box LLMs, such as GPT-3.5, introduces new challenges for mitigating entity bias, demanding debiasing methods that require neither accessible model weights nor prediction logits. As discussed in §3.1, a key advantage of our SCM is that the deconfounder operation applies only to the input layer. In the context of black-box LLMs, the input is the user-provided prompt. Thus, we perform the causal intervention solely by modifying prompts to resolve entity bias. We propose a four-step (test-time) in-context intervention technique for black-box LLMs. Fig. 4 shows the whole process." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.71, + 0.49, + 0.919 + ], + "angle": 0, + "content": "First, we replace the original entity mention in the input with abstract placeholders (e.g., [ENTITY]). This step effectively mitigates any biasing information from the original entity names, because the placeholders are semantically neutral. However, this step also eliminates predictive information from entities.
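The remaining steps, described next, restore predictive information by defining each placeholder with similar entities. A minimal sketch of the resulting four-step procedure is below; `ask_llm(prompt) -> str` stands in for an opaque black-box LLM API call, and the exact wording of the queries is an illustrative assumption.

```python
import random

def in_context_intervention(context, question, entities, ask_llm, k=2, seed=0):
    """Sketch of the four-step in-context intervention for a black-box LLM.
    `ask_llm(prompt) -> str` is an opaque API call; `entities` lists the
    entity mentions to abstract away."""
    rng = random.Random(seed)
    definitions = []
    for i, entity in enumerate(entities):
        placeholder = f"[ENTITY{i}]"
        # Step 1: replace the entity mention with a semantically neutral placeholder.
        context = context.replace(entity, placeholder)
        question = question.replace(entity, placeholder)
        # Step 2: ask the LLM for k entities similar to the original one.
        reply = ask_llm(f"Name {k} entities similar to {entity}, comma-separated.")
        similar = [e.strip() for e in reply.split(",")][:k]
        # Step 3: define the placeholder by the original plus generated entities,
        # shuffled so that no position hints at which one is the original.
        pool = [entity] + similar
        rng.shuffle(pool)
        definitions.append(f"Assume {placeholder} can be any of {', '.join(pool)}.")
    # Step 4: prepend the placeholder definitions to the modified context and question.
    return " ".join(definitions) + "\n" + context + "\n" + question
```

With a stub `ask_llm` that returns "Steve Jobs, Jeff Bezos" for the entity Bill Gates, the resulting prompt defines [ENTITY0] by the shuffled set of the three entities before the rewritten context and question.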
We show in §5.3 that, without a proper definition of the placeholder, models can easily fail to answer questions. In the next two steps, we construct definitions to provide predictive information for each placeholder while introducing minimal additional biasing information. Second, we query the LLM to name \\( k \\) entities similar to the" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.331, + 0.885, + 0.655 + ], + "angle": 0, + "content": "original one (e.g., \\( E_{o} \\)). These generated entities (e.g., \\( E_{a} \\) and \\( E_{b} \\)) present predictive information similar to the original entity, and are able to fulfill the same function as neighboring entities in §3.2. Third, we define the placeholder with the original entity and generated entities. For example, we can verbalize the definition as \"Assume [ENTITY] can be any of \\( E_{o} \\), \\( E_{a} \\) and \\( E_{b} \\)\". This definition encourages the LLM to find common properties of the given entities rather than relying on biasing information of one specific entity. The resulting placeholder along with its definition serves as an effective estimation of \\( P(M|do(X)) \\). Finally, we prepend the placeholder definition to the modified context and question, and query the LLM with the new prompt. This four-step adjustment ensures that the resulting prompt is free of specific biasing information pertaining to the original entity while still preserving sufficient predictive information by considering the given entity examples as a whole." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.665, + 0.753, + 0.683 + ], + "angle": 0, + "content": "4 White-Box Experiments" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.691, + 0.884, + 0.723 + ], + "angle": 0, + "content": "In this section, we evaluate our training-time intervention under the white-box setting."
+ }, + { + "type": "title", + "bbox": [ + 0.509, + 0.734, + 0.715, + 0.75 + ], + "angle": 0, + "content": "4.1 Experimental Setup" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.76, + 0.884, + 0.873 + ], + "angle": 0, + "content": "Datasets and Metrics. We evaluate our methods on relation extraction (RE) and machine reading comprehension (MRC). For both tasks, we fine-tune models on an in-distribution (ID) training set and evaluate models on both ID and out-of-distribution (OOD) test sets. For RE, we adopt TACRED (Zhang et al., 2017) as the ID dataset and" + }, + { + "type": "page_footnote", + "bbox": [ + 0.508, + 0.881, + 0.884, + 0.919 + ], + "angle": 0, + "content": "Here, we rely on the entity knowledge possessed by LLMs. However, it is possible to replace the LLM with external databases or tools in this step." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "15177" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.135, + 0.082, + 0.861, + 0.226 + ], + "angle": 0, + "content": "
<table>
<tr><th></th><th colspan="3">RE (F1)</th><th colspan="3">MRC (EM)</th></tr>
<tr><th></th><th>ID</th><th>OOD</th><th>Δ</th><th>ID</th><th>OOD</th><th>Δ</th></tr>
<tr><td>Vanilla Model</td><td>71.1±0.9</td><td>62.3±0.6</td><td>-12.4%</td><td>79.1†±0.1</td><td>63.1†±0.8</td><td>-20.2%</td></tr>
<tr><td>+ Continual Pretraining (Yan et al., 2022)*</td><td>-</td><td>-</td><td>-</td><td>79.6†±0.6</td><td>65.9†±1.1</td><td>-17.2%</td></tr>
<tr><td>+ CoRE (Wang et al., 2022)</td><td>71.3±0.3</td><td>61.2±0.6</td><td>-14.2%</td><td>-</td><td>-</td><td>-</td></tr>
<tr><td>+ Entity Mask (Zhang et al., 2017)</td><td>61.4±0.5</td><td>61.9±0.5</td><td>+0.9%</td><td>75.7±0.6</td><td>62.9±0.4</td><td>-16.9%</td></tr>
<tr><td>+ Entity Substitution (Longpre et al., 2021)</td><td>66.6±0.6</td><td>65.8±0.3</td><td>-1.2%</td><td>76.4±0.8</td><td>70.8±1.5</td><td>-7.3%</td></tr>
<tr><td>+ Ours</td><td>70.8±0.3</td><td>68.0±0.3</td><td>-3.9%</td><td>77.0±0.7</td><td>72.2±0.5</td><td>-6.2%</td></tr>
</table>
" + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.229, + 0.884, + 0.273 + ], + "angle": 0, + "content": "Table 1: Results under white-box setting. We report the average F1/EM score and standard deviation of three runs. \\(\\Delta\\) shows the relative performance change between ID and OOD. The best number of each column is in bold. * Continual pretraining is not directly comparable to finetuning methods. † Numbers copied from Yan et al. (2022)." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.298, + 0.49, + 0.492 + ], + "angle": 0, + "content": "EntRED (Wang et al., 2023) as the OOD dataset, and report micro-F1 score. In both datasets, entities in each sentence are given. For MRC, we adopt TriviaQA (Joshi et al., 2017) as the ID dataset and its answer-substituted version (Yan et al., 2022) as the OOD dataset, and report exact match (EM) score. Following Yan et al. (2022), we hold out \\(10\\%\\) of the training data for development and evaluate models on the original development set. We use the DBName version of their OOD dataset. For all metrics, we report the average score with standard deviation of three runs." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.505, + 0.49, + 0.714 + ], + "angle": 0, + "content": "Baselines. We compare our methods with the following baselines. Entity Mask (Zhang et al., 2017) masks the subject and object entities in the sentence with special tokens. Entity Substitution (Longpre et al., 2021) randomly selects an entity of the same type to substitute the original entity. CoRE (Wang et al., 2022) applies counterfactual inference by computing the difference between the prediction made with the entire sentence and the prediction made with only the entities observed. Continual Pretraining (Yan et al., 2022) introduces an intermediate pretraining stage to the backbone model with the objective of recovering masked entities." 
+ }, + { + "type": "text", + "bbox": [ + 0.113, + 0.727, + 0.49, + 0.919 + ], + "angle": 0, + "content": "Implementation Details. For RE, we apply RoBERTa (Liu et al., 2019) as the backbone model following previous works (Zhou and Chen, 2022; Wang et al., 2022). We use the entity Marker_punct input format from Zhou and Chen (2022) in main experiments, in order to mitigate the impact of explicit entity type information on our analysis of entity bias. For MRC, we apply SpanBERT (Joshi et al., 2020) as the backbone model following Yan et al. (2022). Since entities are not given in MRC datasets, we use the same named entity recognition tool used by Yan et al. to" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.298, + 0.885, + 0.555 + ], + "angle": 0, + "content": "extract entities. Since the detected entities could be noisy and incomplete, we perform our method on the answer-substituted training set, ensuring all answer entities are perturbed as strongly as in Entity Substitution. Since RoBERTa and SpanBERT lack entity-level embeddings, we apply our causal intervention to each token embedding within the entity mention instead. To construct the convex hull, we select neighboring tokens based on their Euclidean distance to the original token in the embedding space. For both tasks, we perform training-time intervention on each entity token with \\( k = 3 \\). While further data augmentation is always possible, for a fair comparison, we finetune all the models with the same amount of data. More implementation details are in Appx. §A.1." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.568, + 0.614, + 0.582 + ], + "angle": 0, + "content": "4.2 Results" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.59, + 0.884, + 0.701 + ], + "angle": 0, + "content": "As shown in Tab. 1, the vanilla RoBERTa and SpanBERT experience significant declines in performance on RE \\((-12.4\\%)\\) and MRC \\((-20.2\\%)\\) when evaluated on OOD test sets.
For both tasks, the OOD test set exhibits lower entity bias; achieving better performance on it suggests that the model relies less on entity bias as a predictive factor." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.703, + 0.885, + 0.848 + ], + "angle": 0, + "content": "CoRE and Continual Pretraining are the only baselines that improve the ID performance. CoRE leads to a slight performance decrease on the OOD test set of RE in exchange, while Continual Pretraining further increases the OOD performance on MRC. Entity Mask successfully narrows or even reverses the relative performance drop under the OOD setting on the two tasks. However, its absolute performance decreases significantly due" + }, + { + "type": "page_footnote", + "bbox": [ + 0.508, + 0.858, + 0.884, + 0.919 + ], + "angle": 0, + "content": "This is because CoRE is designed for a class-balanced setting, but this experiment emphasizes the performance on the raw class distribution. Moreover, we search its bias mitigation weight on the ID development set, which has a notably different entity distribution compared with the OOD test set." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "15178" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.119, + 0.085, + 0.483, + 0.238 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.114, + 0.251, + 0.488, + 0.281 + ], + "angle": 0, + "content": "Figure 5: F1 score of training-time intervention with different \\(k\\) on RE." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.309, + 0.49, + 0.485 + ], + "angle": 0, + "content": "to the loss of predictive information from entities. Moreover, its effectiveness depends on the task property. Unlike in MRC, entities in RE are given and are not answers, so the gap between the ID and OOD performance of Entity Mask is much smaller.
Entity Substitution stands out among all the baselines in terms of OOD performance, with an absolute improvement of 3.5 points on RE and 7.7 points on MRC. However, its ID performance suffers considerably from the distribution shift of entities during training." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.487, + 0.49, + 0.648 + ], + "angle": 0, + "content": "Our training-time intervention achieves the best OOD performance, with an absolute improvement of 2.2 points on RE and 1.4 points on MRC compared with Entity Substitution. At the same time, its ID performance is also better. These results show that our method mitigates entity bias more effectively without losing much predictive information. In other words, the proposed method estimates the parameters of the proposed SCM more accurately." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.664, + 0.229, + 0.679 + ], + "angle": 0, + "content": "4.3 Analysis" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.687, + 0.49, + 0.735 + ], + "angle": 0, + "content": "To provide a comprehensive understanding of our training-time intervention, we further conduct analyses on RE." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.743, + 0.49, + 0.92 + ], + "angle": 0, + "content": "Effect of \\( k \\). The number of neighbors, \\( k \\), plays a crucial role in balancing the predictive information and biasing information from entities. To find the sweet spot of \\( k \\), we examine its influence on model performance as shown in Fig. 5. In general, the ID performance decreases when \\( k \\) increases. As the value of \\( k \\) increases, the resulting convex hull becomes larger, causing the center of the hull to move further away from the original entity. Consequently, both the predictive information and biasing information that contribute to ID performance" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.885, + 0.198 + ], + "angle": 0, + "content": "gradually diminish.
In contrast, the OOD performance is lower when \\( k \\) is either too large or too small. When \\( k \\) is too large, the same problem as in the ID setting also occurs in the OOD setting. When \\( k \\) is too small, the biasing information is not effectively mitigated, because the perturbed entity is too close to the original entity." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.204, + 0.886, + 0.414 + ], + "angle": 0, + "content": "Entity Type as Input. Previous experiments in this section do not explicitly input entity information, as it may disturb the causal analysis. Here, we analyze the effect of entity type information as input. We use the typed-entity Marker_punct input format from Zhou and Chen (2022). The ID and OOD F1 scores of the vanilla RoBERTa model are 74.6 and 68.9 points, respectively. Our training-time intervention further improves the ID performance by 0.7 points and the OOD performance by 2.9 points. These results indicate that information from neighboring entities is complementary to coarse-grained entity type information for precise RE." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.427, + 0.75, + 0.444 + ], + "angle": 0, + "content": "5 Black-Box Experiments" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.454, + 0.884, + 0.503 + ], + "angle": 0, + "content": "In this section, we evaluate our in-context intervention for mitigating entity bias from LLMs under the black-box setting." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.515, + 0.715, + 0.532 + ], + "angle": 0, + "content": "5.1 Experimental Setup" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.543, + 0.885, + 0.88 + ], + "angle": 0, + "content": "Datasets. Following Zhou et al. (2023), we adopt GPT-3.5 text-davinci-003 as the backbone LLM and evaluate the model performance under a zero-shot setting. We use the RE and MRC datasets provided by Zhou et al. (2023). The RE dataset is based on Re-TACRED (Stoica et al., 2021). Zhou et al.
pair each instance's entities with a randomly sampled context that shares the same entity types but possesses different relations. To mitigate the influence of the label no Relation, which can also serve as a signal of abstention, we further filter out all instances whose original or updated labels are no relation. The MRC dataset is based on Natural Questions (Kwiatkowski et al., 2019). Zhou et al. replace the original answer in each instance with a randomly sampled entity of the same type. They only collect instances where the LLM can give the correct answer based on the raw context. Intuitively, LLMs that faithfully capture contextual information should update their answers based on the new context." + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.888, + 0.884, + 0.919 + ], + "angle": 0, + "content": "Metrics. We report the F1 score for RE, and EM score for MRC. To align with previous works, we" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.942 + ], + "angle": 0, + "content": "15179" + } + ], + [ + { + "type": "image_caption", + "bbox": [ + 0.176, + 0.084, + 0.262, + 0.097 + ], + "angle": 0, + "content": "MRC (EM↑)" + }, + { + "type": "image", + "bbox": [ + 0.119, + 0.097, + 0.304, + 0.224 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.368, + 0.084, + 0.453, + 0.097 + ], + "angle": 0, + "content": "MRC (MR↓)" + }, + { + "type": "image", + "bbox": [ + 0.31, + 0.097, + 0.496, + 0.223 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.568, + 0.084, + 0.636, + 0.097 + ], + "angle": 0, + "content": "RE(F1↑)" + }, + { + "type": "image", + "bbox": [ + 0.504, + 0.097, + 0.687, + 0.223 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.757, + 0.084, + 0.829, + 0.097 + ], + "angle": 0, + "content": "RE (MR↓)" + }, + { + "type": "image", + "bbox": [ + 0.695, + 0.097, + 0.878, + 0.223 + ], + "angle": 0, + "content": null + 
}, + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.251, + 0.882, + 0.296 + ], + "angle": 0, + "content": "Figure 6: GPT-3.5 results on MRC and RE under the black-box setting. We report the EM score on MRC and the F1 score on RE, for which higher scores are better. We also report the MR score on both tasks, for which lower scores are better. Our in-context intervention performs consistently better than baselines under all settings." + }, + { + "type": "image", + "bbox": [ + 0.12, + 0.319, + 0.485, + 0.423 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.114, + 0.435, + 0.487, + 0.464 + ], + "angle": 0, + "content": "Figure 7: Ablation study of in-context intervention for GPT-3.5 on RE." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.489, + 0.487, + 0.539 + ], + "angle": 0, + "content": "also report the memorization ratio (MR; Longpre et al. 2021) to measure the model's ability to update answers based on given contexts.\\(^{8}\\)" + }, + { + "type": "text", + "bbox": [ + 0.111, + 0.544, + 0.491, + 0.737 + ], + "angle": 0, + "content": "Baselines. We compare our in-context intervention with the methods introduced by Zhou et al. (2023). Base prompts directly concatenate the context and the question of each instance as the query. Attribute-based prompts append \"in the given context\" to the question. Opinion-based prompts modify the context into a narrator's statement by prepending \"Bob said\" to the context, and then query the LLM about the narrator's opinion by prepending \"What's Bob's opinion on\" to the question. We evaluate all methods with and without specifically designed task instructions following Zhou et al. (2023)." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.743, + 0.489, + 0.872 + ], + "angle": 0, + "content": "Implementation Details. We apply our in-context intervention to attribute-based prompts.
We adopt the backbone LLM to propose two similar entities along with the original entity to define each placeholder. To further eliminate the spurious entity mapping, we shuffle the entities for each placeholder before verbalization. Details of all prompt templates used can be found in Appx. §A.2. Since" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.32, + 0.885, + 0.417 + ], + "angle": 0, + "content": "entities are not given in MRC, we detect named entities and replace them with placeholders using gpt-3.5-turbo as an external tool. Given the potential abundance of entities in long contexts, we do not replace entities that exclusively appear in the context." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.43, + 0.614, + 0.443 + ], + "angle": 0, + "content": "5.2 Results" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.451, + 0.884, + 0.692 + ], + "angle": 0, + "content": "As shown in Fig. 6, all methods benefit from carefully designed task instructions in terms of task performance. The Opinion-based prompt performs the best among all baselines in most cases. Compared with the Base prompt, it significantly improves the EM score by 18.7-21.5 points on MRC and the F1 score by 0.6-4.7 points on RE. Our in-context intervention achieves the highest EM/F1 score and the lowest MR score under all settings. Specifically, without task instruction, our in-context intervention outperforms the best baseline by 20.5 EM points on MRC and reduces the MR score by 17.6 points on RE. These results demonstrate the effectiveness of our causal intervention for addressing entity-based knowledge conflicts in black-box LLMs." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.705, + 0.675, + 0.72 + ], + "angle": 0, + "content": "5.3 Ablation Study" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.727, + 0.884, + 0.919 + ], + "angle": 0, + "content": "We additionally conduct an ablation study on RE to provide a comprehensive understanding of our method, as shown in Fig. 7.
When the placeholder definition is not provided (i.e., w/o definition), no entity information, including both biasing and predictive information, appears in the input. As a result, it successfully blocks any spurious shortcuts, with the MR score dropping to 0. However, the F1 score also drops sharply from 71.8 points to 37.9 points, indicating that some entity information is essential to accurate RE and the LLM cannot understand the placeholders well without their definition." + }, + { + "type": "page_footnote", + "bbox": [ + 0.113, + 0.878, + 0.488, + 0.92 + ], + "angle": 0, + "content": "\\({}^{8}MR = \\frac{P_{o}}{P_{o} + P_{s}}\\) , where \\(P_{o}\\) is the probability that the model generates the original answer and \\(P_{s}\\) is the probability that the model updates the answer correctly." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "15180" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.493, + 0.328 + ], + "angle": 0, + "content": "We further examine the role of original entities in the placeholder definition. On the one hand, we remove the original entities from the definition (i.e., w/o original entity). Results show that our method can still improve F1 while reducing MR. This verifies the effectiveness of using a set of similar entities to represent the predictive information from the original entity. On the other hand, we put the original subject and object entities at the same position (i.e., w/o entity shuffle) in the definition so that the LLM can easily map them. As a result, the MR increases significantly, showing that the LLM can find spurious shortcuts even by mapping the subject entity and the object entity from two entity sets."
+ }, + { + "type": "title", + "bbox": [ + 0.114, + 0.344, + 0.248, + 0.358 + ], + "angle": 0, + "content": "6 Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.373, + 0.49, + 0.631 + ], + "angle": 0, + "content": "In this paper, we analyze the entity bias in LLMs from a causal view. Building upon an SCM whose parameters are easier to estimate, we propose training-time causal intervention for white-box LLMs and in-context causal intervention for black-box LLMs. Both intervention techniques perturb the original entity with neighboring entities to mitigate spurious correlations between specific entities and predictions. Experiments on relation extraction and machine reading comprehension show that the proposed intervention can effectively reduce the conflicts between parametric knowledge and contextual knowledge and significantly improve the performance of LLMs. Future work can apply our causal intervention to more LLMs and tasks to achieve context-faithful answers." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.649, + 0.28, + 0.666 + ], + "angle": 0, + "content": "Acknowledgement" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.678, + 0.49, + 0.92 + ], + "angle": 0, + "content": "We appreciate the reviewers for their insightful comments and suggestions. Fei Wang is supported by the Annenberg Fellowship and the Amazon ML Fellowship. Wenjie Mo is supported by the USC CURVE Fellowship and the Provost's Research Fellowship. Wenxuan Zhou and Muhao Chen are supported by the NSF Grant IIS 2105329, the NSF Grant ITE 2333736, the DARPA MCS program under Contract No. N660011924033 with the United States Office Of Naval Research. This work is also supported in part by a Cisco Research Award, two Amazon Research Awards, and a Keston Research Award. Computing of this work has been partly supported by a subaward of NSF Cloudbank 1925001 through UCSD." 
+ }, + { + "type": "title", + "bbox": [ + 0.51, + 0.084, + 0.608, + 0.099 + ], + "angle": 0, + "content": "Limitation" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.109, + 0.886, + 0.4 + ], + "angle": 0, + "content": "Although we have tried to verify the effectiveness of our method under diverse settings, including different LLMs, different accessibility of model parameters, and different tasks, there are always more options for further investigation, especially nowadays when more and more LLMs are kept produced. Considering the property of the entity bias issue may vary when it comes to different LLMs and datasets from different domains, future work can build better benchmark for more comprehensive evaluation. In this paper, we only consider zero-shot prompting for black-box LLMs, because this will help us to control variables during causal analysis. However, it is possible to combine the proposed causal intervention with cutting-edge LLM inference methods, such as in-context learning (Brown et al., 2020), although the underlying SCM may become more complex." + }, + { + "type": "title", + "bbox": [ + 0.511, + 0.426, + 0.61, + 0.441 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.448, + 0.884, + 0.528 + ], + "angle": 0, + "content": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.537, + 0.885, + 0.643 + ], + "angle": 0, + "content": "Hung-Ting Chen, Michael Zhang, and Eunsol Choi. 2022. Rich knowledge sources bring complex knowledge conflicts: Recalibrating models to reflect conflicting evidence. 
In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2292-2307, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.652, + 0.884, + 0.706 + ], + "angle": 0, + "content": "Xinshuai Dong, Anh Tuan Luu, Rongrong Ji, and Hong Liu. 2021. Towards robustness against natural language word substitutions. In International Conference on Learning Representations." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.714, + 0.885, + 0.807 + ], + "angle": 0, + "content": "Amir Feder, Katherine A Keith, Emaad Manzoor, Reid Pryzant, Dhanya Sridhar, Zach Wood-Doughty, Jacob Eisenstein, Justin Grimmer, Roi Reichart, Margaret E Roberts, et al. 2022. Causal inference in natural language processing: Estimation, prediction, interpretation and beyond. Transactions of the Association for Computational Linguistics, 10:1138-1158." + }, + { + "type": "ref_text", + "bbox": [ + 0.51, + 0.816, + 0.885, + 0.883 + ], + "angle": 0, + "content": "Matthew Finlayson, Aaron Mueller, Sebastian Gehrmann, Stuart Shieber, Tal Linzen, and Yonatan Belinkov. 2021. Causal analysis of syntactic agreement mechanisms in neural language models. arXiv preprint arXiv:2106.06087." + }, + { + "type": "ref_text", + "bbox": [ + 0.51, + 0.891, + 0.887, + 0.92 + ], + "angle": 0, + "content": "Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman," + }, + { + "type": "list", + "bbox": [ + 0.51, + 0.448, + 0.887, + 0.92 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.525, + 0.941 + ], + "angle": 0, + "content": "15181" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.135, + 0.086, + 0.489, + 0.126 + ], + "angle": 0, + "content": "and Phil Blunsom. 2015. Teaching machines to read and comprehend. Advances in neural information processing systems, 28." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.136, + 0.487, + 0.188 + ], + "angle": 0, + "content": "Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know? Transactions of the Association for Computational Linguistics, 8:423-438." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.198, + 0.489, + 0.264 + ], + "angle": 0, + "content": "Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.273, + 0.489, + 0.352 + ], + "angle": 0, + "content": "Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601-1611." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.362, + 0.489, + 0.427 + ], + "angle": 0, + "content": "Jungo Kasai, Keisuke Sakaguchi, Yoichi Takahashi, Ronan Le Bras, Akari Asai, Xinyan Yu, Dragomir Radev, Noah A Smith, Yejin Choi, and Kentaro Inui. 2022. Realtime qa: What's the answer right now? arXiv preprint arXiv:2207.13332." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.437, + 0.489, + 0.514 + ], + "angle": 0, + "content": "Katherine Keith, David Jensen, and Brendan O'Connor. 2020. Text and causal inference: A review of using text to remove confounding from causal estimates. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5332-5344." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.525, + 0.489, + 0.616 + ], + "angle": 0, + "content": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453-466." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.627, + 0.489, + 0.706 + ], + "angle": 0, + "content": "John P Lalor, Yi Yang, Kendall Smith, Nicole Forsgren, and Ahmed Abbasi. 2022. Benchmarking intersectional biases in nlp. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3598-3609." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.715, + 0.489, + 0.78 + ], + "angle": 0, + "content": "Daliang Li, Ankit Singh Rawat, Manzil Zaheer, Xin Wang, Michal Lukasik, Andreas Veit, Felix Yu, and Sanjiv Kumar. 2022. Large language models with controllable working memory. arXiv preprint arXiv:2211.05110." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.791, + 0.489, + 0.87 + ], + "angle": 0, + "content": "Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. TruthfulQA: Measuring how models mimic human falsehoods. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3214-3252, Dublin, Ireland. Association for Computational Linguistics." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.879, + 0.489, + 0.919 + ], + "angle": 0, + "content": "Adam Liska, Tomas Kocisky, Elena Gribovskaya, Tayfun Terzi, Eren Sezener, Devang Agrawal, D'Autume Cyprien De Masson, Tim Scholtes, Manzil Zaheer," + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.489, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.529, + 0.086, + 0.884, + 0.151 + ], + "angle": 0, + "content": "Susannah Young, et al. 2022. Streamingqa: A benchmark for adaptation to new knowledge over time in question answering models. In International Conference on Machine Learning, pages 13604-13622. PMLR." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.162, + 0.884, + 0.227 + ], + "angle": 0, + "content": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.236, + 0.883, + 0.315 + ], + "angle": 0, + "content": "Shayne Longpre, Kartik Perisetla, Anthony Chen, Nikhil Ramesh, Chris DuBois, and Sameer Singh. 2021. Entity-based knowledge conflicts in question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7052-7063." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.324, + 0.883, + 0.39 + ], + "angle": 0, + "content": "Guoshun Nan, Jiaqi Zeng, Rui Qiao, Zhijiang Guo, and Wei Lu. 2021. Uncovering main causalities for long-tailed information extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9683-9695." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.4, + 0.883, + 0.451 + ], + "angle": 0, + "content": "Yulei Niu and Hanwang Zhang. 2021. Introspective distillation for robust question answering. 
Advances in Neural Information Processing Systems, 34:16292-16304." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.461, + 0.883, + 0.501 + ], + "angle": 0, + "content": "Judea Pearl. 2012. The do-calculus revisited. In Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence, pages 3-11." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.511, + 0.883, + 0.59 + ], + "angle": 0, + "content": "Hao Peng, Tianyu Gao, Xu Han, Yankai Lin, Peng Li, Zhiyuan Liu, Maosong Sun, and Jie Zhou. 2020. Learning from context or names? an empirical study on neural relation extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3661-3672." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.599, + 0.883, + 0.716 + ], + "angle": 0, + "content": "Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463-2473, Hong Kong, China. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.726, + 0.883, + 0.818 + ], + "angle": 0, + "content": "Chen Qian, Fuli Feng, Lijie Wen, Chunping Ma, and Pengjun Xie. 2021a. Counterfactual inference for text classification debiasing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5434-5445." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.827, + 0.883, + 0.919 + ], + "angle": 0, + "content": "Kun Qian, Ahmad Beirami, Zhouhan Lin, Ankita De, Alborz Geramifard, Zhou Yu, and Chinnadhurai Sankar. 2021b. Annotation inconsistency and entity bias in MultiWOZ.
In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 326-337, Singapore and Online. Association for Computational Linguistics." + }, + { + "type": "list", + "bbox": [ + 0.512, + 0.086, + 0.884, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.525, + 0.941 + ], + "angle": 0, + "content": "15182" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.086, + 0.49, + 0.166 + ], + "angle": 0, + "content": "Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5418-5426, Online. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.177, + 0.49, + 0.243 + ], + "angle": 0, + "content": "George Stoica, Emmanouil Antonios Platanios, and Barnabás Póczos. 2021. Re-tacred: Addressing shortcomings of the tacred dataset. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 13843-13850." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.255, + 0.488, + 0.321 + ], + "angle": 0, + "content": "Chris Sweeney and Maryam Najafian. 2019. A transparent framework for evaluating unintended demographic bias in word embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1662-1667." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.333, + 0.488, + 0.399 + ], + "angle": 0, + "content": "Bing Tian, Yixin Cao, Yong Zhang, and Chunxiao Xing. 2022. Debiasing nlu models via causal intervention and counterfactual reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 11376-11384." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.411, + 0.488, + 0.516 + ], + "angle": 0, + "content": "Can Udomcharoenchaikit, Wuttikorn Ponwitayarat, Patomporn Payoungkhamdee, Kanruethai Masuk, Weerayut Buaphet, Ekapol Chuangsuwanich, and Sarana Nutanong. 2022. Mitigating spurious correlation in natural language understanding with counterfactual inference. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11308-11321." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.528, + 0.488, + 0.58 + ], + "angle": 0, + "content": "Pranav Narayanan Venkit and Shomir Wilson. 2021. Identification of bias against people with disabilities in sentiment analysis and toxicity detection models. arXiv preprint arXiv:2111.13259." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.593, + 0.488, + 0.646 + ], + "angle": 0, + "content": "Thomas Verma and Judea Pearl. 1990. Equivalence and synthesis of causal models. In Proceedings of the Sixth Annual Conference on Uncertainty in Artificial Intelligence, pages 255-270." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.658, + 0.488, + 0.736 + ], + "angle": 0, + "content": "Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Simas Sakenis, Jason Huang, Yaron Singer, and Stuart Shieber. 2020. Causal mediation analysis for interpreting neural nlp: The case of gender bias. arXiv preprint arXiv:2004.12265." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.748, + 0.488, + 0.827 + ], + "angle": 0, + "content": "Yiwei Wang, Muhao Chen, Wenxuan Zhou, Yujun Cai, Yuxuan Liang, Dayiheng Liu, Baosong Yang, Juncheng Liu, and Bryan Hooi. 2022. Should we rely on entity mentions for relation extraction? debi- aising relation extraction with counterfactual analysis. arXiv preprint arXiv:2205.03784." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.84, + 0.488, + 0.918 + ], + "angle": 0, + "content": "Yiwei Wang, Bryan Hooi, Fei Wang, Yujun Cai, Yuxuan Liang, Wenxuan Zhou, Jing Tang, Manjuan Duan, and Muhao Chen. 2023. How fragile is relation extraction under entity replacements? In Proceedings of the 27th SIGNLL Conference on Computational Natural Language Learning (CoNLL)." + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.49, + 0.918 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.086, + 0.884, + 0.152 + ], + "angle": 0, + "content": "Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2022. Finetuned language models are zero-shot learners. In International Conference on Learning Representations." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.16, + 0.884, + 0.24 + ], + "angle": 0, + "content": "Galen Weld, Peter West, Maria Glenski, David Arbour, Ryan A Rossi, and Tim Althoff. 2022. Adjusting for confounders with text: Challenges and an empirical evaluation framework for causal inference. In Proceedings of the International AAAI Conference on Web and Social Media, volume 16, pages 1109-1120." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.248, + 0.883, + 0.327 + ], + "angle": 0, + "content": "Nan Xu, Fei Wang, Bangzheng Li, Mingtao Dong, and Muhao Chen. 2022. Does your model classify entities reasonably? diagnosing and mitigating spurious correlations in entity typing. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.334, + 0.883, + 0.427 + ], + "angle": 0, + "content": "Jun Yan, Yang Xiao, Sagnik Mukherjee, Bill Yuchen Lin, Robin Jia, and Xiang Ren. 2022. On the robustness of reading comprehension models to entity renaming. 
In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 508-520." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.435, + 0.884, + 0.5 + ], + "angle": 0, + "content": "Dong Zhang, Hanwang Zhang, Jinhui Tang, Xian-Sheng Hua, and Qianru Sun. 2020a. Causal intervention for weakly-supervised semantic segmentation. Advances in Neural Information Processing Systems, 33:655-666." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.509, + 0.884, + 0.575 + ], + "angle": 0, + "content": "Guanhua Zhang, Bing Bai, Junqi Zhang, Kun Bai, Conghui Zhu, and Tiejun Zhao. 2020b. Demographics should not be the reason of toxicity: Mitigating discrimination in text classifications with instance weighting. arXiv preprint arXiv:2004.14088." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.583, + 0.884, + 0.661 + ], + "angle": 0, + "content": "Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D Manning. 2017. Position-aware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 35-45." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.671, + 0.884, + 0.75 + ], + "angle": 0, + "content": "Wenxuan Zhou and Muhao Chen. 2022. An improved baseline for sentence-level relation extraction. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing, pages 161-168." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.758, + 0.883, + 0.824 + ], + "angle": 0, + "content": "Wenxuan Zhou, Sheng Zhang, Hoifung Poon, and Muhao Chen. 2023. Context-faithful prompting for large language models. In *Findings of the 2023 Conference on Empirical Methods in Natural Language Processing*." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.832, + 0.884, + 0.911 + ], + "angle": 0, + "content": "Yongchun Zhu, Qiang Sheng, Juan Cao, Shuokai Li, Danding Wang, and Fuzhen Zhuang. 2022. Generalizing to the future: Mitigating entity bias in fake news detection. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2120-2125." + }, + { + "type": "list", + "bbox": [ + 0.511, + 0.086, + 0.884, + 0.911 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.525, + 0.941 + ], + "angle": 0, + "content": "15183" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.115, + 0.084, + 0.357, + 0.101 + ], + "angle": 0, + "content": "A Implementation Details" + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.114, + 0.356, + 0.13 + ], + "angle": 0, + "content": "A.1 White-Box Experiments" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.138, + 0.489, + 0.297 + ], + "angle": 0, + "content": "For RE, we use RoBERTa-Large as our backbone model, which has 354 million parameters. Our implementation is based on the codebase by Zhou and Chen (2022) with their default hyper-parameters. More specifically, we employ a learning rate of 3e-5, a batch size of 32, and conduct training for a total of 5 epochs. Other method-specific hyperparameters are selected on the development set of TACRED. Finetuning typically takes 1.5 hours on an NVIDIA RTX A5000 GPU." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.301, + 0.49, + 0.476 + ], + "angle": 0, + "content": "For MRC, we use SpanBERT-base-cased as our backbone model, which has 110 million parameters. Our implementation is based on the codebase by Yan et al. (2022) with their default hyperparameters. More specifically, we employ a learning rate of 2e-5, a batch size of 16, and conduct training for a total of 4 epochs. 
Other method-specific hyper-parameters are selected on the hold-out development set of TriviaQA. Finetuning typically takes 3 hours on an NVIDIA RTX A5000 GPU." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.495, + 0.352, + 0.51 + ], + "angle": 0, + "content": "A.2 Black-Box Experiments" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.519, + 0.486, + 0.55 + ], + "angle": 0, + "content": "Our implementation is based on the codebase by Zhou et al. (2023)." + }, + { + "type": "text", + "bbox": [ + 0.133, + 0.553, + 0.338, + 0.566 + ], + "angle": 0, + "content": "The instruction for MRC is" + }, + { + "type": "text", + "bbox": [ + 0.132, + 0.587, + 0.469, + 0.62 + ], + "angle": 0, + "content": "Instruction: read the given information and answer the corresponding question." + }, + { + "type": "text", + "bbox": [ + 0.133, + 0.639, + 0.456, + 0.654 + ], + "angle": 0, + "content": "The prompt without instruction for MRC is" + }, + { + "type": "text", + "bbox": [ + 0.133, + 0.672, + 0.471, + 0.736 + ], + "angle": 0, + "content": "Assume that {ENTITY0} can be any of {entity0Candidates}. [Assume that {ENTITY1} can be any of {entity1Candidates} ...] {context}" + }, + { + "type": "text", + "bbox": [ + 0.133, + 0.738, + 0.469, + 0.783 + ], + "angle": 0, + "content": "Q:{question} based on the given text? Extract the answer from the given text. Do not add other words." + }, + { + "type": "text", + "bbox": [ + 0.134, + 0.787, + 0.155, + 0.8 + ], + "angle": 0, + "content": "A:" + }, + { + "type": "text", + "bbox": [ + 0.134, + 0.818, + 0.321, + 0.832 + ], + "angle": 0, + "content": "The instruction for RE is" + }, + { + "type": "text", + "bbox": [ + 0.133, + 0.853, + 0.468, + 0.884 + ], + "angle": 0, + "content": "Identify the relationship between two entities from a list of options." 
+ }, + { + "type": "text", + "bbox": [ + 0.133, + 0.904, + 0.439, + 0.919 + ], + "angle": 0, + "content": "The prompt without instruction for RE is" + }, + { + "type": "text", + "bbox": [ + 0.527, + 0.09, + 0.865, + 0.154 + ], + "angle": 0, + "content": "Assume that subject_entity is one of {subjCandidates}, while object-entity is one of {objCandidates} in the following text. {context}" + }, + { + "type": "text", + "bbox": [ + 0.529, + 0.155, + 0.864, + 0.202 + ], + "angle": 0, + "content": "Q: Which option indicates the relationship between subject_entity and object-entity in the given text?" + }, + { + "type": "text", + "bbox": [ + 0.53, + 0.203, + 0.666, + 0.219 + ], + "angle": 0, + "content": "Options:{options}" + }, + { + "type": "text", + "bbox": [ + 0.53, + 0.221, + 0.551, + 0.233 + ], + "angle": 0, + "content": "A:" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.248, + 0.882, + 0.278 + ], + "angle": 0, + "content": "The prompt template for detecting entities in MRC is" + }, + { + "type": "text", + "bbox": [ + 0.527, + 0.295, + 0.867, + 0.359 + ], + "angle": 0, + "content": "List named entities in the following sentence. Separate the entities with \\(\\# \\# \\# \\#\\) , if you find multiple entities. Do not add additional words before or after your answers.." + }, + { + "type": "text", + "bbox": [ + 0.528, + 0.361, + 0.61, + 0.375 + ], + "angle": 0, + "content": "{sentence}" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.391, + 0.882, + 0.422 + ], + "angle": 0, + "content": "The prompt template for replacing entities with placeholders in MRC is" + }, + { + "type": "text", + "bbox": [ + 0.527, + 0.438, + 0.866, + 0.486 + ], + "angle": 0, + "content": "Replace the entity {entity_list} in the following paragraph. 
\n{paragraph}" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.502, + 0.882, + 0.532 + ], + "angle": 0, + "content": "The prompt template for finding similar entities is" + }, + { + "type": "text", + "bbox": [ + 0.527, + 0.548, + 0.865, + 0.628 + ], + "angle": 0, + "content": "Name two [{\\entity_type}] entities similar to {\"\\(\\{entity\\}''\\). Separate the entities with \\#\\#\\#, and do not add additional words before or after your answers. Provide random answers if you are not sure." + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.642, + 0.884, + 0.69 + ], + "angle": 0, + "content": "In all the above prompts, variables are surrounded with curly brackets and optional variables are surrounded with square brackets." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "15184" + } + ] +] \ No newline at end of file diff --git a/2023/A Causal View of Entity Bias in (Large) Language Models/729caa97-c496-4007-b8f4-5ff70bb6b7ae_origin.pdf b/2023/A Causal View of Entity Bias in (Large) Language Models/729caa97-c496-4007-b8f4-5ff70bb6b7ae_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..725f7e7e7c38223e1df3accf1114de8a59fcb14b --- /dev/null +++ b/2023/A Causal View of Entity Bias in (Large) Language Models/729caa97-c496-4007-b8f4-5ff70bb6b7ae_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:90964d0f41dcecb380f0e13b6cd6e51f46125dbe8ce9f3b5f9d81b32bc449be8 +size 1174471 diff --git a/2023/A Causal View of Entity Bias in (Large) Language Models/full.md b/2023/A Causal View of Entity Bias in (Large) Language Models/full.md new file mode 100644 index 0000000000000000000000000000000000000000..da77bcc8508e4213543f7872ef8d6ca1d7dd1687 --- /dev/null +++ b/2023/A Causal View of Entity Bias in (Large) Language Models/full.md @@ -0,0 +1,336 @@ +# A Causal View of Entity Bias in (Large) Language Models + +Fei Wang† Wenjie Mo† Yiwei Wang‡ Wenxuan Zhou† 
Muhao Chen†# + +†University of Southern California; ‡University of California, Los Angeles; + +$^{\#}$ University of California, Davis + +{fwang598, jackymo, zhouwenx}@usc.edu; wangyw.evan@gmail.com; + +muhchen@ucdavis.edu + +# Abstract + +Entity bias widely affects pretrained (large) language models, causing them to rely on (biased) parametric knowledge to make unfaithful predictions. Although causality-inspired methods have shown great potential to mitigate entity bias, it is hard to precisely estimate the parameters of underlying causal models in practice. The rise of black-box LLMs also makes the situation even worse, because of their inaccessible parameters and uncalibrated logits. To address these problems, we propose a specific structured causal model (SCM) whose parameters are comparatively easier to estimate. Building upon this SCM, we propose causal intervention techniques to mitigate entity bias for both white-box and black-box settings. The proposed causal intervention perturbs the original entity with neighboring entities. This intervention reduces specific biasing information pertaining to the original entity while still preserving sufficient semantic information from similar entities. Under the white-box setting, our training-time intervention improves OOD performance of PLMs on relation extraction (RE) and machine reading comprehension (MRC) by 5.7 points and by 9.1 points, respectively. Under the black-box setting, our in-context intervention effectively reduces the entity-based knowledge conflicts of GPT-3.5, achieving up to 20.5 points of improvement of exact match accuracy on MRC and up to 17.6 points of reduction in memorization ratio on RE.1 + +# 1 Introduction + +Entity bias (Longpre et al., 2021; Wang et al., 2022; Xu et al., 2022; Peng et al., 2020; Qian et al., 2021b; Hermann et al., 2015) refers to an undesirable phenomenon where models overly rely on prediction shortcuts triggered by specific entities to make spurious predictions. 
For example, given the sentence "Bill Gates went to Microsoft Building 99," models + +Context: Bill Gates went to Microsoft Building 99. + +Question: What's the relation between Bill Gates and + +Microsoft in the given context? + +Option: founder, visitor. + +Answer with one word: founder (GPT-3.5) X + +Assume subject_entity can be any of Bill Gates, Jeff Bezos, and Steve Jobs, while object-entity can be any of Google, Microsoft, and Meta. + +Context: subject entity went to object entity Building 99. + +Question: What's the relation between subject_entity and object-entity in the given context? + +Option: founder, visitor. + +Answer with one word: visitor (GPT-3.5) + +Figure 1: An example of entity bias in GPT-3.5. Our in-context intervention mitigates the conflicts between parametric knowledge and contextual knowledge. + +may be misled by their memory of the entities Bill Gates and Microsoft, saying the relation between them in this context is founder rather than visitor, as shown in Fig. 1. Recent studies show that entity bias widely affects pretrained (large) language models (LLMs; Longpre et al. 2021; Yan et al. 2022; Zhou et al. 2023). These models have a tendency to disregard contextual information that contradicts or is infrequently reported in the pretrained corpus, while excessively relying on (biased) parametric knowledge (Longpre et al., 2021) to make unfaithful predictions and perpetuate bias. + +Prior studies have proposed multiple causality-inspired methods to mitigate entity bias (Zhang et al., 2017; Nan et al., 2021; Wang et al., 2022; Zhu et al., 2022). Despite their potential, the causal models underlying these methods are flawed in practice, primarily because of imprecise parameter estimation. For example, some causal models necessitate estimating the probability distribution + +over labels when given a sentence that is devoid of entities or contextual information (Zhang et al., 2017; Wang et al., 2022). 
These methods either lose predictive information about entities, or are prone to erroneous representations without contextualization. The other critical problem is the difficulty of applying these methods to black-box LLMs, whose parameters are inaccessible and whose logits are uncalibrated.

To address the aforementioned problems, the first contribution of this paper is a causal analysis of entity bias mitigation methods (§3.1). We examine and compare the structured causal models (SCMs) behind existing methods. We find that, among the theoretically equivalent causal models (Verma and Pearl, 1990), there exists a specific SCM whose parameters are comparatively easier to estimate. As shown in Fig. 2, the proposed SCM only requires intervening on input entities to mitigate the presence of spurious features before passing them to the subsequent neural layers. Moreover, it retains the entity type information at an appropriate level of granularity without requiring explicit entity typing.

The second contribution of this paper is a training-time causal intervention technique for mitigating entity bias based on the proposed SCM (§3.2). Specifically, we identify entities that are likely to share similar predictive information with the given entity. During training, we perturb the embedding of the given entity within a convex hull constructed by embeddings of similar entities. During inference, we represent the entity with the center of the convex hull. Taking advantage of the continuous nature of the embedding space, this intervention does not rely on models specifically trained on natural language to estimate the label distribution of unnatural text, nor does it sacrifice predictive entity or contextual information.

The third contribution of this paper is to transform the training-time intervention into an in-context intervention for black-box LLMs, whose parameters are inaccessible and whose logits are uncalibrated (§3.3).
A significant advantage of the proposed SCM is that the causal intervention is carried out at the input layer, enabling its implementation within an in-context setting. Specifically, we replace entities with placeholders and define each placeholder by examples, i.e., a set of similar entities. For example, we can replace Bill Gates in Fig. 1 with subject_entity and prepend the prompt, "Assume that subject_entity can be any of Steve Jobs, Bill Gates, and Jeff Bezos", to the input. This in-context intervention can be applied to any black-box LLM without additional cost.

![](images/bf9d06eb92efa725a6034a50a4f065110b9aba5f3115575234f82e257a19ac56.jpg)

![](images/822404bcab44be672f773ca48fbde5c0e883b30f7145a30b4351ca44fa7346de.jpg)

![](images/fa721ff35c2731ef19e22b55cd75093b559fef91bc441146d20f4632827f9845.jpg)
Figure 2: Structured causal models revealing entity bias.

Experiments on relation extraction (RE) and machine reading comprehension (MRC) show that the proposed causal intervention techniques are effective for both white-box and black-box LLMs. Under the white-box setting (§4), our training-time intervention significantly improves out-of-distribution performance of RoBERTa (Liu et al., 2019) on RE by 5.7 points and SpanBERT (Joshi et al., 2020) on MRC by 9.1 points, compared with the vanilla versions. Under the black-box setting (§5), our in-context intervention effectively reduces the entity-based knowledge conflicts (Longpre et al., 2021) and improves the task performance of GPT-3.5. Specifically, our method outperforms the best baseline by up to 20.5 points of exact match accuracy on MRC and reduces the memorization ratio by up to 17.6 points on RE. Further analyses reveal the crucial role of the number of neighboring entities $k$ in balancing the predictive information and biasing information from entities, and the necessity of entity placeholder definitions for in-context intervention.

# 2 Related Work

Entity Bias in LLMs.
LLMs memorize factual knowledge in their parameters during pretraining (Roberts et al., 2020; Jiang et al., 2020) and show promising results in answering factual questions (Petroni et al., 2019; Brown et al., 2020; Wei et al., 2022). However, the parametric knowledge may be inaccurate due to misinformation in the training corpus (Lin et al., 2022) or outdated as the world evolves (Liska et al., 2022; Kasai et al., 2022). In such scenarios, it is critical for LLMs to update their predictions when provided with contextual evidence. However, previous studies (Longpre et al., 2021; Qian et al., 2021b; Yan et al., 2022) observe that language models may take entities as shortcuts, leading to spurious predictions based solely on parametric knowledge. This bias becomes more prominent when the evidence contains knowledge that is infrequent in, or conflicts with, the training corpus.

To mitigate this bias, previous work (Longpre et al., 2021; Chen et al., 2022; Li et al., 2022; Zhou et al., 2023) introduces the entity substitution technique, which constructs counterfactual data by randomly replacing entities and updates the language models either by finetuning or by in-context learning. Although showing improved results, these techniques are empirical and lack a theoretical foundation. In this paper, we analyze the entity bias problem theoretically from a causal view. Furthermore, we propose a causal intervention method that surpasses the performance of entity substitution.

Debiasing with Causal Intervention. LLMs have been shown to exhibit bias problems, and the literature has paid much attention to mitigating their adverse effects (Sweeney and Najafian, 2019; Zhang et al., 2020b; Venkit and Wilson, 2021; Lalor et al., 2022). Recent debiasing techniques incorporate the concept of counterfactual inference and have been applied in various tasks for bias mitigation (Niu and Zhang, 2021; Qian et al., 2021a; Wang et al., 2022).
One dominant technique is based on causal mediation analysis (Udomcharoenchaikit et al., 2022), which involves decomposing the total effect into the pure direct effect and the total indirect effect. In this context, Wang et al. (2022) utilize the total direct effect and total effect to debias relation extraction. Apart from debiasing, causal mediation analysis can be used to analyze biases in LLMs (Vig et al., 2020; Finlayson et al., 2021).

In addition to intervening on the causal mediator, previous studies have also explored confounder analysis (Keith et al., 2020; Qian et al., 2021a; Feder et al., 2022; Weld et al., 2022). A confounder is a variable that influences both the input and the output, causing a spurious correlation between them.

Typically, the de-confounder process applies the do-calculus (Pearl, 2012) to compute the prediction assuming that the value of the confounder variable is not the observed one but follows its natural distribution (Zhang et al., 2020a; Tian et al., 2022). Our approach is also based on confounder analysis. While nearly all the aforementioned approaches require white-box access to the model, with at least the logits of predictions, this work represents a pilot study of a deconfounder method that applies to purely black-box LLMs.

# 3 Method

In this section, we first analyze methods for mitigating entity bias in a causal view and propose an easy-to-estimate SCM as a theoretical basis (§3.1). Based on the proposed SCM, we design a training-time intervention technique for white-box LLMs (§3.2) and an in-context intervention technique for black-box LLMs (§3.3).

# 3.1 Causal Analysis of Entity Bias

To compare existing methods in the same context, we analyze the structured causal models (SCMs) behind them. Fig. 2 shows two typical SCMs for entity bias mitigation methods, where $X$ refers to the raw input, $E$ refers to entities, and $Y$ refers to the label.
The links $X \rightarrow Y \leftarrow E$ show that LLMs rely on both predictive information from the whole input and the biasing information from specific entities to make the prediction. The links $E \rightarrow X$ and $X \rightarrow E$ assume that the context is written down with the entity in mind, or vice versa. As discussed by Verma and Pearl (1990), we cannot differentiate between these two directions merely based on statistical observations. Indeed, the two SCMs with opposite links between $X$ and $E$ are equivalent according to Bayes' theorem:

$$
\begin{aligned}
P(X) P(E \mid X) P(Y \mid X, E) &= P(Y, X, E) \\
&= P(E) P(X \mid E) P(Y \mid X, E).
\end{aligned}
$$

As revealed by these SCMs, entity bias exists in LLMs because entities serve as either confounders or mediators. Thus, the bias can be mitigated through causal intervention, such as the backdoor adjustment

$$
P(Y \mid do(X)) = \sum_{E} P(Y \mid X, E) P(E),
$$

which eliminates the influence of a specific variable (in this context, $E$) by assigning values to this variable. However, previous SCM-based debiasing methods exhibit divergent performances, since they estimate different (conditional) probabilities using different surrogates when performing the causal intervention. For example, counterfactual analysis by Wang et al. (2022) estimates and deducts the biasing effect of entities on labels by masking the context, while Zhang et al. (2017) and Longpre et al. (2021) directly remove the effect of entities by entity masking or substitution.

![](images/8e8cd6f645fb1ebe10dd5f451789ba5103994691b8b9e54976f1e508ad3d7713.jpg)

![](images/5559d0c93cb9298a3db4eb6479d03fe4fabaa1a8fd6a0b0f1c71dd44810e15ea.jpg)
Figure 3: Left: Training-time intervention with $k = 4$. Right: Example of predictive and biasing information.
None of them estimates the causal effects of entity names precisely, due to the highly complex architectures of LLMs, which accounts for their unsatisfactory performance in mitigating entity bias.

In this work, we consider the SCM in Fig. 2, whose parameters are much easier to estimate in practice. Since most LLMs follow a sequential structure by stacking neural layers, mitigating the entity bias in one layer will also mitigate the entity bias in subsequent layers. The underlying logic is simple: if we block the spurious features in the input, there will be no spurious correlations to capture. Therefore, we propose to mitigate the entity bias in the input layer $M$, which could be an embedding layer or a prompt layer. Obviously, $P(M|X,E)$ can be estimated more accurately and efficiently than $P(Y|X,E)$, because there is no need to run the whole model, ensuring less error propagation and computational cost. To further improve the estimation by retaining as much predictive information as possible, we propose to estimate $P(M|do(X))$ by perturbing the entity with similar entities rather than masking it. In the following sections, we will show how to realize the proposed causal intervention on both white-box and black-box LLMs.

# 3.2 Training-time Intervention

For white-box models whose parameters are accessible, we can effectively address their internal bias through training-time intervention. In the case of entity bias identified by the proposed SCM, we realize the causal intervention by perturbing the input entities or entity tokens using their neighboring counterparts in the embedding space, as shown in Fig. 3 (Left). For each entity present in the input text, we first find its top $k$ nearest neighbors according to embedding distance. Then we construct the smallest convex hull to cover the original entity and the neighboring entities.
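The neighbor search and convex-hull perturbation can be sketched in a few lines. This is a minimal NumPy illustration under our own assumptions, not the authors' implementation: `embedding_table` is assumed to contain the entity's own embedding as a row, and a Dirichlet draw over the hull's vertices yields a random convex combination, i.e., a random point inside the hull.

```python
import numpy as np

def intervene_entity_embedding(entity_emb, embedding_table, k=3, training=True, rng=None):
    """Perturb an entity embedding within the convex hull of itself and its k nearest neighbors."""
    if rng is None:
        rng = np.random.default_rng()
    # k nearest neighbors by Euclidean distance; index 0 of the sort is the
    # entity itself (assumed to be a row of the table), so we skip it.
    dists = np.linalg.norm(embedding_table - entity_emb, axis=1)
    neighbor_ids = np.argsort(dists)[1:k + 1]
    vertices = np.vstack([entity_emb[None, :], embedding_table[neighbor_ids]])
    if training:
        # Training: sample a random convex combination of the vertices.
        weights = rng.dirichlet(np.ones(len(vertices)))
    else:
        # Inference: use the center of the convex hull (uniform average).
        weights = np.full(len(vertices), 1.0 / len(vertices))
    return weights @ vertices
```

At inference the function simply returns the mean of the $k+1$ vertices, i.e., the hull center.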
Due to the continuous nature of the embedding space, the embeddings within the convex hull approximately represent the same predictive information as a whole. The entity-specific biasing information, which has the potential to trigger spurious shortcuts, gradually diminishes from the original entity towards the border of the convex hull.

During training, we introduce perturbations to the entity embedding by replacing it with a random embedding selected from within the convex hull. In this way, the convex hull bounds the predictive information, while random sampling further introduces noise and increases the diversity of data for robust training. During inference, we replace the original entity embedding with the center of the convex hull, in order to balance the trade-off between predictive and biasing information. Fig. 3 (Right) provides an example of the information preserved through such an intervention. By replacing the entity Bill Gates with the center of the convex hull, encompassed by its neighboring entities such as Steve Jobs and Jeff Bezos, we effectively retain the shared predictive information (e.g., person), while mitigating the biasing information (e.g., founder of Microsoft). That is to say, the convex hull-bounded perturbation serves as an effective estimation of $P(M|do(X))$.

![](images/7c8d3d8e5550de1e1a4c33f69110be8de990706e2138b587b755297749d3f07d.jpg)

![](images/403da6bc914c98f8dd45d3130dbb3a8385c8be8e9d55c263417ed0fedccb6b0f.jpg)
Figure 4: In-context intervention for black-box LLMs. We take relation extraction as an example. Step 1 replaces entities with placeholders; Step 3 defines placeholders with examples.

# 3.3 In-context Intervention

The rise of Web services powered by black-box LLMs, such as GPT-3.5, introduces new challenges for mitigating entity bias, demanding debiasing methods that do not require accessible model weights and prediction logits.
As discussed in §3.1, a key advantage of our SCM is that the deconfounder operation applies only to the input layer. In the context of black-box LLMs, the input is the user-provided prompt. Thus, we perform the causal intervention solely by modifying prompts to resolve entity bias. We propose a four-step (test-time) in-context intervention technique for black-box LLMs. Fig. 4 shows the whole process.

First, we replace the original entity mentions in the input with abstract placeholders (e.g., [ENTITY]). This step effectively removes any biasing information from the original entity names, because the placeholders are semantically neutral. However, this step also eliminates predictive information from entities. We show in §5.3 that, without a proper definition for the placeholder, models can easily fail to answer questions. In the next two steps, we construct definitions to provide predictive information for each placeholder while introducing minimal additional biasing information. Second, we query the LLM to name $k$ entities similar to the original one (e.g., $E_{o}$). These generated entities (e.g., $E_{a}$ and $E_{b}$) present similar predictive information as the original entity, and are able to fulfill the same function as the neighboring entities in §3.2. Third, we define the placeholder with the original entity and the generated entities. For example, we can verbalize the definition as "Assume [ENTITY] can be any of $E_{o}$, $E_{a}$ and $E_{b}$". This definition encourages the LLM to find common properties of the given entities rather than relying on biasing information of one specific entity. The resulting placeholder along with its definition serves as an effective estimation of $P(M|do(X))$. Finally, we prepend the placeholder definitions to the modified context and question, and query the LLM with the new prompt.
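For illustration, the four steps can be sketched as a simple prompt-construction routine. This is our own minimal sketch, not the authors' released code: `similar_fn` stands in for the LLM query that names similar entities, and the prompt wording loosely follows Fig. 4; shuffling each entity set (as in our implementation details, §5.1) avoids a spurious one-to-one mapping between placeholders.

```python
import random

def in_context_intervention(context, question, entities, similar_fn, k=2, rng=random):
    """Build an entity-debiased prompt: placeholders + definitions + modified input."""
    definitions = []
    for i, entity in enumerate(entities):
        placeholder = f"[ENTITY_{i}]"
        # Step 1: replace the entity mention with a semantically neutral placeholder.
        context = context.replace(entity, placeholder)
        question = question.replace(entity, placeholder)
        # Step 2: obtain k entities similar to the original one (an LLM call in practice).
        candidates = [entity] + similar_fn(entity, k)
        # Shuffle so the model cannot trivially map original entities across placeholders.
        rng.shuffle(candidates)
        # Step 3: define the placeholder by its entity set.
        definitions.append(f"Assume {placeholder} can be any of {', '.join(candidates)}.")
    # Step 4: prepend the definitions to the modified context and question.
    return " ".join(definitions) + "\n" + context + "\n" + question
```

Applied to the Fig. 1 example, this produces a prompt of the same shape as the intervened query shown there.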
This four-step adjustment ensures that the resulting prompt is free of specific biasing information pertaining to the original entity while still preserving sufficient predictive information by considering given entity examples as a whole. + +# 4 White-Box Experiments + +In this section, we evaluate our training-time intervention under the white-box setting. + +# 4.1 Experimental Setup + +Datasets and Metrics. We evaluate our methods on relation extraction (RE) and machine reading comprehension (MRC). For both tasks, we fine-tune models on an in-distribution (ID) training set and evaluate models on both ID and out-of-distribution (OOD) test sets. For RE, we adopt TACRED (Zhang et al., 2017) as the ID dataset and + +
| Model | RE ID (F1) | RE OOD (F1) | RE Δ | MRC ID (EM) | MRC OOD (EM) | MRC Δ |
| --- | --- | --- | --- | --- | --- | --- |
| Vanilla Model | 71.1±0.9 | 62.3±0.6 | -12.4% | 79.1†±0.1 | 63.1†±0.8 | -20.2% |
| + Continual Pretraining (Yan et al., 2022)* | - | - | - | 79.6†±0.6 | 65.9†±1.1 | -17.2% |
| + CoRE (Wang et al., 2022) | 71.3±0.3 | 61.2±0.6 | -14.2% | - | - | - |
| + Entity Mask (Zhang et al., 2017) | 61.4±0.5 | 61.9±0.5 | +0.9% | 75.7±0.6 | 62.9±0.4 | -16.9% |
| + Entity Substitution (Longpre et al., 2021) | 66.6±0.6 | 65.8±0.3 | -1.2% | 76.4±0.8 | 70.8±1.5 | -7.3% |
| + Ours | 70.8±0.3 | 68.0±0.3 | -3.9% | 77.0±0.7 | 72.2±0.5 | -6.2% |
+ +Table 1: Results under white-box setting. We report the average F1/EM score and standard deviation of three runs. $\Delta$ shows the relative performance change between ID and OOD. The best number of each column is in bold. * Continual pretraining is not directly comparable to finetuning methods. † Numbers copied from Yan et al. (2022). + +EntRED (Wang et al., 2023) as the OOD dataset, and report micro-F1 score. In both datasets, entities in each sentence are given. For MRC, we adopt TriviaQA (Joshi et al., 2017) as the ID dataset and its answer-substituted version (Yan et al., 2022) as the OOD dataset, and report exact match (EM) score. Following Yan et al. (2022), we hold out $10\%$ of the training data for development and evaluate models on the original development set. We use the DBName version of their OOD dataset. For all metrics, we report the average score with standard deviation of three runs. + +Baselines. We compare our methods with the following baselines. Entity Mask (Zhang et al., 2017) masks the subject and object entities in the sentence with special tokens. Entity Substitution (Longpre et al., 2021) randomly selects an entity of the same type to substitute the original entity. CoRE (Wang et al., 2022) applies counterfactual inference by computing the difference between the prediction made with the entire sentence and the prediction made with only the entities observed. Continual Pretraining (Yan et al., 2022) introduces an intermediate pretraining stage to the backbone model with the objective of recovering masked entities. + +Implementation Details. For RE, we apply RoBERTa (Liu et al., 2019) as the backbone model following previous works (Zhou and Chen, 2022; Wang et al., 2022). We use the entity Marker_punct input format from Zhou and Chen (2022) in main experiments, in order to mitigate the impact of explicit entity type information on our analysis of entity bias. 
For MRC, we apply SpanBERT (Joshi et al., 2020) as the backbone model following Yan et al. (2022). Since entities are not given in MRC datasets, we use the same named entity recognition tool used by Yan et al. to extract entities. Since the detected entities could be noisy and incomplete, we perform our method upon the answer-substituted training set, ensuring all answer entities are perturbed as strongly as in Entity Substitution. Since RoBERTa and SpanBERT lack entity-level embeddings, we apply our causal intervention to each token embedding within the entity mention instead. To construct the convex hull, we select neighboring tokens based on their Euclidean distance to the original token in the embedding space. For both tasks, we perform training-time intervention on each entity token with $k = 3$. While further data augmentation is always possible, for a fair comparison, we finetune all the models with the same amount of data. More implementation details are in Appx. §A.1.

# 4.2 Results

As shown in Tab. 1, the vanilla RoBERTa and SpanBERT experience significant declines in performance on RE $(-12.4\%)$ and MRC $(-20.2\%)$ when evaluated on OOD test sets. For both tasks, the OOD test set exhibits lower entity bias, so achieving better performance on it suggests that the model relies less on entity bias as a predictive factor.

CoRE and Continual Pretraining are the only baselines that improve the ID performance. CoRE leads to a slight performance decrease on the OOD test set of RE in exchange, while Continual Pretraining further increases the OOD performance on MRC. Entity Mask successfully narrows down or even reverses the relative performance drop under the OOD setting on the two tasks. However, its absolute performance decreases significantly due to the loss of predictive information from entities.

![](images/1e5e73f04a91dfac533284c7fb68b82174d8773d9d033b143674b9b8251048c4.jpg)
Figure 5: F1 score of training-time intervention with different $k$ on RE.
Moreover, its effectiveness depends on the properties of the task. Unlike in MRC, entities in RE are given and are not the answers, so the gap between the ID and OOD performance of Entity Mask is much smaller. Entity Substitution stands out among all the baselines in terms of OOD performance, with an absolute improvement of 3.5 points on RE and 7.7 points on MRC. However, its ID performance suffers considerably from the distribution shift of entities during training.

Our training-time intervention achieves the best OOD performance, with an absolute improvement of 2.2 points on RE and 1.4 points on MRC compared with Entity Substitution. At the same time, its ID performance is also better. These results show that our method mitigates entity bias more effectively without losing much predictive information. In other words, the proposed method represents a better way to accurately estimate the parameters of the proposed SCM.

# 4.3 Analysis

To provide a comprehensive understanding of our training-time intervention, we further conduct analyses on RE.

Effect of $k$. The number of neighbors, $k$, plays a crucial role in balancing the predictive information and biasing information from entities. To find the sweet spot of $k$, we examine its influence on model performance, as shown in Fig. 5. In general, the ID performance decreases when $k$ increases. As the value of $k$ increases, the resulting convex hull becomes larger, causing the center of the hull to move further away from the original entity. Consequently, both the predictive information and the biasing information that contribute to ID performance gradually diminish. In contrast, the OOD performance is lower when $k$ is either too big or too small. When $k$ is too big, the same problem as in the ID setting also occurs in the OOD setting. When $k$ is too small, the biasing information is not effectively mitigated, because the perturbed entity is too close to the original entity.

Entity Type as Input.
Previous experiments in this section do not explicitly input entity type information, as it may disturb the causal analysis. Here, we analyze the effect of entity type information as input. We use the typed-entity Marker_punct input format from Zhou and Chen (2022). The ID and OOD F1 scores of the vanilla RoBERTa model are 74.6 and 68.9 points, respectively. Our training-time intervention further improves the ID performance by 0.7 points and the OOD performance by 2.9 points. These results indicate that information from neighboring entities is complementary to coarse-grained entity type information for precise RE.

# 5 Black-Box Experiments

In this section, we evaluate our in-context intervention for mitigating entity bias from LLMs under the black-box setting.

# 5.1 Experimental Setup

Datasets. Following Zhou et al. (2023), we adopt GPT-3.5 text-davinci-003 as the backbone LLM and evaluate the model performance under a zero-shot setting. We use the RE and MRC datasets provided by Zhou et al. (2023). The RE dataset is based on Re-TACRED (Stoica et al., 2021). Zhou et al. pair each instance's entities with a randomly sampled context that shares the same entity types but possesses different relations. To mitigate the influence of the label no relation, which can also serve as a signal of abstention, we further filter out all instances whose original or updated labels are no relation. The MRC dataset is based on Natural Questions (Kwiatkowski et al., 2019). Zhou et al. replace the original answer in each instance with a randomly sampled entity of the same type. They only collect instances where the LLM can give the correct answer based on the raw context. Intuitively, LLMs that faithfully capture contextual information should update their answers based on the new context.

Metrics. We report the F1 score for RE and the EM score for MRC.
To align with previous works, we also report the memorization ratio (MR; Longpre et al. 2021) to measure the model's ability to update answers based on given contexts.

![](images/be6c9003fb40edac702a485a798962fd538118535c39d89b507e4846ed2de6dc.jpg)

![](images/bc72c29f86b2b87f5661457131bdf034bd1761395f3cb70403b534e8f09f5462.jpg)

![](images/cd431268605c6800f35fa1004b6cbc4ecbaddfcdcd993a6c0e1d14738e944b64.jpg)

![](images/e81114962483800c37f0c8a36288ff6f38777800b7221cf11cb9b2953493d90d.jpg)
Figure 6: GPT-3.5 results on MRC and RE under the black-box setting. Panels: MRC (EM↑), MRC (MR↓), RE (F1↑), RE (MR↓). We report the EM score on MRC and the F1 score on RE, for which higher scores are better. We also report the MR score on both tasks, for which lower scores are better. Our in-context intervention performs consistently better than baselines under all settings.

![](images/4d847bd6ecc6921a78d6e8692d8fa0ca19bbd734699d2d8f8b462ba5f6d13c98.jpg)
Figure 7: Ablation study of in-context intervention for GPT-3.5 on RE.

Baselines. We compare our in-context intervention with the methods introduced by Zhou et al. (2023). Base prompts directly concatenate the context and the question of each instance as the query. Attribute-based prompts append "in the given context" to the question. Opinion-based prompts modify the context into a narrator's statement by prepending "Bob said" to the context, and then query the LLM about the narrator's opinion by prepending "What's Bob's opinion on" to the question. We evaluate all methods with and without specifically designed task instructions, following Zhou et al. (2023).

Implementation Details. We apply our in-context intervention to attribute-based prompts. We prompt the backbone LLM to propose two similar entities, which, along with the original entity, define each placeholder. To further eliminate spurious entity mapping, we shuffle the entities for each placeholder before verbalization.
Details of all prompt templates used can be found in Appx. §A.2. Since entities are not given in MRC, we detect named entities and replace them with placeholders using gpt-3.5-turbo as an external tool. Given the potential abundance of entities in long contexts, we do not replace entities that appear exclusively in the context.

# 5.2 Results

As shown in Fig. 6, all methods benefit from carefully designed task instructions in terms of task performance. The Opinion-based prompt performs the best among all baselines in most cases. Compared with the Base prompt, it significantly improves the EM score by 18.7-21.5 points on MRC and the F1 score by 0.6-4.7 points on RE. Our in-context intervention achieves the highest EM/F1 score and the lowest MR score under all settings. Specifically, without task instruction, our in-context intervention outperforms the best baseline by 20.5 EM points on MRC and reduces the MR score by 17.6 points on RE. These results demonstrate the effectiveness of our causal intervention for addressing entity-based knowledge conflicts in black-box LLMs.

# 5.3 Ablation Study

We additionally conduct an ablation study on RE to provide a comprehensive understanding of our method, as shown in Fig. 7. When the placeholder definition is not provided (i.e., w/o definition), no entity information, including both biasing and predictive information, appears in the input. As a result, it successfully blocks any spurious shortcuts, with the MR dropping to 0. However, the F1 score also drops sharply from 71.8 points to 37.9 points, indicating that some entity information is essential to accurate RE and that the LLM cannot understand the placeholders well without their definitions.

We further examine the role of original entities in the placeholder definition. On the one hand, we remove the original entities from the definition (i.e., w/o original entity). Results show that our method can still improve F1 while reducing MR.
This verifies the effectiveness of using a set of similar entities to represent the predictive information from the original entity. On the other hand, we put the original subject and object entities at the same position (i.e., w/o entity shuffle) in the definition so that the LLM can easily map them. As a result, the MR increases significantly, showing that the LLM can find spurious shortcuts even by mapping the subject entity and the object entity across the two entity sets.

# 6 Conclusion

In this paper, we analyze the entity bias in LLMs from a causal view. Building upon an SCM whose parameters are easier to estimate, we propose a training-time causal intervention for white-box LLMs and an in-context causal intervention for black-box LLMs. Both intervention techniques perturb the original entity with neighboring entities to mitigate spurious correlations between specific entities and predictions. Experiments on relation extraction and machine reading comprehension show that the proposed intervention can effectively reduce the conflicts between parametric knowledge and contextual knowledge and significantly improve the performance of LLMs. Future work can apply our causal intervention to more LLMs and tasks to achieve context-faithful answers.

# Acknowledgement

We appreciate the reviewers for their insightful comments and suggestions. Fei Wang is supported by the Annenberg Fellowship and the Amazon ML Fellowship. Wenjie Mo is supported by the USC CURVE Fellowship and the Provost's Research Fellowship. Wenxuan Zhou and Muhao Chen are supported by the NSF Grant IIS 2105329, the NSF Grant ITE 2333736, and the DARPA MCS program under Contract No. N660011924033 with the United States Office of Naval Research. This work is also supported in part by a Cisco Research Award, two Amazon Research Awards, and a Keston Research Award. Computing of this work has been partly supported by a subaward of NSF Cloudbank 1925001 through UCSD.
+ +# Limitation + +Although we have sought to verify the effectiveness of our method under diverse settings, including different LLMs, different levels of access to model parameters, and different tasks, there are always more options for further investigation, especially as new LLMs continue to be released. Since the characteristics of entity bias may vary across LLMs and across datasets from different domains, future work can build better benchmarks for more comprehensive evaluation. In this paper, we only consider zero-shot prompting for black-box LLMs, because this helps us control variables during causal analysis. However, it is possible to combine the proposed causal intervention with cutting-edge LLM inference methods, such as in-context learning (Brown et al., 2020), although the underlying SCM may become more complex. + +# References + +Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901. +Hung-Ting Chen, Michael Zhang, and Eunsol Choi. 2022. Rich knowledge sources bring complex knowledge conflicts: Recalibrating models to reflect conflicting evidence. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2292-2307, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. +Xinshuai Dong, Anh Tuan Luu, Rongrong Ji, and Hong Liu. 2021. Towards robustness against natural language word substitutions. In International Conference on Learning Representations. +Amir Feder, Katherine A Keith, Emaad Manzoor, Reid Pryzant, Dhanya Sridhar, Zach Wood-Doughty, Jacob Eisenstein, Justin Grimmer, Roi Reichart, Margaret E Roberts, et al. 2022. Causal inference in natural language processing: Estimation, prediction, interpretation and beyond.
Transactions of the Association for Computational Linguistics, 10:1138-1158. +Matthew Finlayson, Aaron Mueller, Sebastian Gehrmann, Stuart Shieber, Tal Linzen, and Yonatan Belinkov. 2021. Causal analysis of syntactic agreement mechanisms in neural language models. arXiv preprint arXiv:2106.06087. +Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. Advances in neural information processing systems, 28. +Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know? Transactions of the Association for Computational Linguistics, 8:423-438. +Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77. +Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601-1611. +Jungo Kasai, Keisuke Sakaguchi, Yoichi Takahashi, Ronan Le Bras, Akari Asai, Xinyan Yu, Dragomir Radev, Noah A Smith, Yejin Choi, and Kentaro Inui. 2022. RealTime QA: What's the answer right now? arXiv preprint arXiv:2207.13332. +Katherine Keith, David Jensen, and Brendan O'Connor. 2020. Text and causal inference: A review of using text to remove confounding from causal estimates. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5332-5344. +Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural Questions: A benchmark for question answering research.
Transactions of the Association for Computational Linguistics, 7:453-466. +John P Lalor, Yi Yang, Kendall Smith, Nicole Forsgren, and Ahmed Abbasi. 2022. Benchmarking intersectional biases in NLP. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3598-3609. +Daliang Li, Ankit Singh Rawat, Manzil Zaheer, Xin Wang, Michal Lukasik, Andreas Veit, Felix Yu, and Sanjiv Kumar. 2022. Large language models with controllable working memory. arXiv preprint arXiv:2211.05110. +Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. TruthfulQA: Measuring how models mimic human falsehoods. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3214-3252, Dublin, Ireland. Association for Computational Linguistics. +Adam Liska, Tomas Kocisky, Elena Gribovskaya, Tayfun Terzi, Eren Sezener, Devang Agrawal, Cyprien de Masson d'Autume, Tim Scholtes, Manzil Zaheer, Susannah Young, et al. 2022. StreamingQA: A benchmark for adaptation to new knowledge over time in question answering models. In International Conference on Machine Learning, pages 13604-13622. PMLR. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692. +Shayne Longpre, Kartik Perisetla, Anthony Chen, Nikhil Ramesh, Chris DuBois, and Sameer Singh. 2021. Entity-based knowledge conflicts in question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7052-7063. +Guoshun Nan, Jiaqi Zeng, Rui Qiao, Zhijiang Guo, and Wei Lu. 2021. Uncovering main causalities for long-tailed information extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9683-9695. +Yulei Niu and Hanwang Zhang.
2021. Introspective distillation for robust question answering. Advances in Neural Information Processing Systems, 34:16292-16304. +Judea Pearl. 2012. The do-calculus revisited. In Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence, pages 3-11. +Hao Peng, Tianyu Gao, Xu Han, Yankai Lin, Peng Li, Zhiyuan Liu, Maosong Sun, and Jie Zhou. 2020. Learning from context or names? An empirical study on neural relation extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3661-3672. +Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463-2473, Hong Kong, China. Association for Computational Linguistics. +Chen Qian, Fuli Feng, Lijie Wen, Chunping Ma, and Pengjun Xie. 2021a. Counterfactual inference for text classification debiasing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5434-5445. +Kun Qian, Ahmad Beirami, Zhouhan Lin, Ankita De, Alborz Geramifard, Zhou Yu, and Chinnadhurai Sankar. 2021b. Annotation inconsistency and entity bias in MultiWOZ. In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 326-337, Singapore and Online. Association for Computational Linguistics. +Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5418-5426, Online. Association for Computational Linguistics.
+George Stoica, Emmanouil Antonios Platanios, and Barnabás Póczos. 2021. Re-TACRED: Addressing shortcomings of the TACRED dataset. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 13843-13850. +Chris Sweeney and Maryam Najafian. 2019. A transparent framework for evaluating unintended demographic bias in word embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1662-1667. +Bing Tian, Yixin Cao, Yong Zhang, and Chunxiao Xing. 2022. Debiasing NLU models via causal intervention and counterfactual reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 11376-11384. +Can Udomcharoenchaikit, Wuttikorn Ponwitayarat, Patomporn Payoungkhamdee, Kanruethai Masuk, Weerayut Buaphet, Ekapol Chuangsuwanich, and Sarana Nutanong. 2022. Mitigating spurious correlation in natural language understanding with counterfactual inference. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11308-11321. +Pranav Narayanan Venkit and Shomir Wilson. 2021. Identification of bias against people with disabilities in sentiment analysis and toxicity detection models. arXiv preprint arXiv:2111.13259. +Thomas Verma and Judea Pearl. 1990. Equivalence and synthesis of causal models. In Proceedings of the Sixth Annual Conference on Uncertainty in Artificial Intelligence, pages 255-270. +Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Simas Sakenis, Jason Huang, Yaron Singer, and Stuart Shieber. 2020. Causal mediation analysis for interpreting neural NLP: The case of gender bias. arXiv preprint arXiv:2004.12265. +Yiwei Wang, Muhao Chen, Wenxuan Zhou, Yujun Cai, Yuxuan Liang, Dayiheng Liu, Baosong Yang, Juncheng Liu, and Bryan Hooi. 2022. Should we rely on entity mentions for relation extraction? Debiasing relation extraction with counterfactual analysis. arXiv preprint arXiv:2205.03784.
+Yiwei Wang, Bryan Hooi, Fei Wang, Yujun Cai, Yuxuan Liang, Wenxuan Zhou, Jing Tang, Manjuan Duan, and Muhao Chen. 2023. How fragile is relation extraction under entity replacements? In Proceedings of the 27th SIGNLL Conference on Computational Natural Language Learning (CoNLL). +Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2022. Finetuned language models are zero-shot learners. In International Conference on Learning Representations. +Galen Weld, Peter West, Maria Glenski, David Arbour, Ryan A Rossi, and Tim Althoff. 2022. Adjusting for confounders with text: Challenges and an empirical evaluation framework for causal inference. In Proceedings of the International AAAI Conference on Web and Social Media, volume 16, pages 1109-1120. +Nan Xu, Fei Wang, Bangzheng Li, Mingtao Dong, and Muhao Chen. 2022. Does your model classify entities reasonably? Diagnosing and mitigating spurious correlations in entity typing. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. +Jun Yan, Yang Xiao, Sagnik Mukherjee, Bill Yuchen Lin, Robin Jia, and Xiang Ren. 2022. On the robustness of reading comprehension models to entity renaming. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 508-520. +Dong Zhang, Hanwang Zhang, Jinhui Tang, Xian-Sheng Hua, and Qianru Sun. 2020a. Causal intervention for weakly-supervised semantic segmentation. Advances in Neural Information Processing Systems, 33:655-666. +Guanhua Zhang, Bing Bai, Junqi Zhang, Kun Bai, Conghui Zhu, and Tiejun Zhao. 2020b. Demographics should not be the reason of toxicity: Mitigating discrimination in text classifications with instance weighting. arXiv preprint arXiv:2004.14088. +Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D Manning. 2017.
Position-aware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 35-45. +Wenxuan Zhou and Muhao Chen. 2022. An improved baseline for sentence-level relation extraction. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing, pages 161-168. +Wenxuan Zhou, Sheng Zhang, Hoifung Poon, and Muhao Chen. 2023. Context-faithful prompting for large language models. In Findings of the 2023 Conference on Empirical Methods in Natural Language Processing. +Yongchun Zhu, Qiang Sheng, Juan Cao, Shuokai Li, Danding Wang, and Fuzhen Zhuang. 2022. Generalizing to the future: Mitigating entity bias in fake news detection. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2120-2125. + +# A Implementation Details + +# A.1 White-Box Experiments + +For RE, we use RoBERTa-Large as our backbone model, which has 354 million parameters. Our implementation is based on the codebase by Zhou and Chen (2022) with their default hyper-parameters. More specifically, we employ a learning rate of 3e-5, a batch size of 32, and conduct training for a total of 5 epochs. Other method-specific hyper-parameters are selected on the development set of TACRED. Finetuning typically takes 1.5 hours on an NVIDIA RTX A5000 GPU. + +For MRC, we use SpanBERT-base-cased as our backbone model, which has 110 million parameters. Our implementation is based on the codebase by Yan et al. (2022) with their default hyper-parameters. More specifically, we employ a learning rate of 2e-5, a batch size of 16, and conduct training for a total of 4 epochs. Other method-specific hyper-parameters are selected on the hold-out development set of TriviaQA. Finetuning typically takes 3 hours on an NVIDIA RTX A5000 GPU.
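For intuition, the training-time intervention behind these white-box experiments perturbs an entity's embedding within the convex hull spanned by similar entities' embeddings and, at inference, replaces it with the hull's center. The sketch below is a minimal illustration of that operation, not the released code: the function names, the inclusion of the original entity in the hull, and the use of Dirichlet sampling to draw convex weights are our assumptions.

```python
import numpy as np

def perturb_entity_embedding(entity_emb, neighbor_embs, rng):
    # Training time: sample a random point inside the convex hull spanned by
    # the entity embedding and its k neighbors. Dirichlet weights are
    # non-negative and sum to 1, so the result is a convex combination.
    points = np.vstack([entity_emb, neighbor_embs])   # shape (k + 1, d)
    weights = rng.dirichlet(np.ones(len(points)))
    return weights @ points

def hull_center(entity_emb, neighbor_embs):
    # Inference time: represent the entity by the center of the hull,
    # averaging away entity-specific (biasing) information.
    return np.vstack([entity_emb, neighbor_embs]).mean(axis=0)
```

Because the perturbation stays inside the hull, the representation keeps the coarse (type-level) information shared by the neighboring entities while washing out the identity of the specific entity.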
+ +# A.2 Black-Box Experiments + +Our implementation is based on the codebase by Zhou et al. (2023). + +The instruction for MRC is + +Instruction: read the given information and answer the corresponding question. + +The prompt without instruction for MRC is + +Assume that {ENTITY0} can be any of {entity0Candidates}. [Assume that {ENTITY1} can be any of {entity1Candidates} ...] {context} + +Q: {question} based on the given text? Extract the answer from the given text. Do not add other words. + +A: + +The instruction for RE is + +Identify the relationship between two entities from a list of options. + +The prompt without instruction for RE is + +Assume that subject_entity is one of {subjCandidates}, while object-entity is one of {objCandidates} in the following text. {context} + +Q: Which option indicates the relationship between subject_entity and object-entity in the given text? + +Options: {options} + +A: + +The prompt template for detecting entities in MRC is + +List named entities in the following sentence. Separate the entities with ####, if you find multiple entities. Do not add additional words before or after your answers. + +{sentence} + +The prompt template for replacing entities with placeholders in MRC is + +Replace the entity {entity_list} in the following paragraph. +{paragraph} + +The prompt template for finding similar entities is + +Name two [{entity_type}] entities similar to "{entity}". Separate the entities with ###, and do not add additional words before or after your answers. Provide random answers if you are not sure. + +In all the above prompts, variables are surrounded with curly brackets and optional variables are surrounded with square brackets.
\ No newline at end of file diff --git a/2023/A Causal View of Entity Bias in (Large) Language Models/images.zip b/2023/A Causal View of Entity Bias in (Large) Language Models/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..c94494e080ce8bd2456532bb0902683a25e4d1e9 --- /dev/null +++ b/2023/A Causal View of Entity Bias in (Large) Language Models/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:64ecad02ce6744cebabe842156c4f930b44b585ec8c8d8dfe667f8bcd9559e58 +size 301551 diff --git a/2023/A Causal View of Entity Bias in (Large) Language Models/layout.json b/2023/A Causal View of Entity Bias in (Large) Language Models/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d6f5827d1afad18d6d4b341c357f18f5eaa3cf62 --- /dev/null +++ b/2023/A Causal View of Entity Bias in (Large) Language Models/layout.json @@ -0,0 +1,8404 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 117, + 76, + 476, + 94 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 117, + 76, + 476, + 94 + ], + "spans": [ + { + "bbox": [ + 117, + 76, + 476, + 94 + ], + "type": "text", + "content": "A Causal View of Entity Bias in (Large) Language Models" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 111, + 109, + 487, + 124 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 109, + 487, + 124 + ], + "spans": [ + { + "bbox": [ + 111, + 109, + 487, + 124 + ], + "type": "text", + "content": "Fei Wang† Wenjie Mo† Yiwei Wang‡ Wenxuan Zhou† Muhao Chen†#" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 118, + 125, + 478, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 118, + 125, + 478, + 138 + ], + "spans": [ + { + "bbox": [ + 118, + 125, + 478, + 138 + ], + "type": "text", + "content": "†University of Southern California; ‡University of California, Los Angeles;" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 221, + 139, + 375, + 
152 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 221, + 139, + 375, + 152 + ], + "spans": [ + { + "bbox": [ + 221, + 139, + 375, + 152 + ], + "type": "inline_equation", + "content": "^{\\#}" + }, + { + "bbox": [ + 221, + 139, + 375, + 152 + ], + "type": "text", + "content": "University of California, Davis" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 121, + 153, + 474, + 167 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 121, + 153, + 474, + 167 + ], + "spans": [ + { + "bbox": [ + 121, + 153, + 474, + 167 + ], + "type": "text", + "content": "{fwang598, jackymo, zhouwenx}@usc.edu; wangyw.evan@gmail.com;" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 240, + 168, + 356, + 179 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 240, + 168, + 356, + 179 + ], + "spans": [ + { + "bbox": [ + 240, + 168, + 356, + 179 + ], + "type": "text", + "content": "muhchen@ucdavis.edu" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 155, + 212, + 202, + 224 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 155, + 212, + 202, + 224 + ], + "spans": [ + { + "bbox": [ + 155, + 212, + 202, + 224 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 84, + 235, + 274, + 618 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 235, + 274, + 618 + ], + "spans": [ + { + "bbox": [ + 84, + 235, + 274, + 618 + ], + "type": "text", + "content": "Entity bias widely affects pretrained (large) language models, causing them to rely on (biased) parametric knowledge to make unfaithful predictions. Although causality-inspired methods have shown great potential to mitigate entity bias, it is hard to precisely estimate the parameters of underlying causal models in practice. The rise of black-box LLMs also makes the situation even worse, because of their inaccessible parameters and uncalibrated logits. 
To address these problems, we propose a specific structured causal model (SCM) whose parameters are comparatively easier to estimate. Building upon this SCM, we propose causal intervention techniques to mitigate entity bias for both white-box and black-box settings. The proposed causal intervention perturbs the original entity with neighboring entities. This intervention reduces specific biasing information pertaining to the original entity while still preserving sufficient semantic information from similar entities. Under the white-box setting, our training-time intervention improves OOD performance of PLMs on relation extraction (RE) and machine reading comprehension (MRC) by 5.7 points and by 9.1 points, respectively. Under the black-box setting, our in-context intervention effectively reduces the entity-based knowledge conflicts of GPT-3.5, achieving up to 20.5 points of improvement of exact match accuracy on MRC and up to 17.6 points of reduction in memorization ratio on RE.1" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 68, + 628, + 154, + 641 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 628, + 154, + 641 + ], + "spans": [ + { + "bbox": [ + 68, + 628, + 154, + 641 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 66, + 649, + 290, + 743 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 649, + 290, + 743 + ], + "spans": [ + { + "bbox": [ + 66, + 649, + 290, + 743 + ], + "type": "text", + "content": "Entity bias (Longpre et al., 2021; Wang et al., 2022; Xu et al., 2022; Peng et al., 2020; Qian et al., 2021b; Hermann et al., 2015) refers to an undesirable phenomenon where models overly rely on prediction shortcuts triggered by specific entities to make spurious predictions. 
For example, given the sentence \"Bill Gates went to Microsoft Building 99,\" models" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 311, + 222, + 480, + 232 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 222, + 480, + 232 + ], + "spans": [ + { + "bbox": [ + 311, + 222, + 480, + 232 + ], + "type": "text", + "content": "Context: Bill Gates went to Microsoft Building 99." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 312, + 233, + 490, + 242 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 233, + 490, + 242 + ], + "spans": [ + { + "bbox": [ + 312, + 233, + 490, + 242 + ], + "type": "text", + "content": "Question: What's the relation between Bill Gates and" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 243, + 418, + 253 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 243, + 418, + 253 + ], + "spans": [ + { + "bbox": [ + 313, + 243, + 418, + 253 + ], + "type": "text", + "content": "Microsoft in the given context?" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 253, + 392, + 263 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 253, + 392, + 263 + ], + "spans": [ + { + "bbox": [ + 313, + 253, + 392, + 263 + ], + "type": "text", + "content": "Option: founder, visitor." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 264, + 465, + 275 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 264, + 465, + 275 + ], + "spans": [ + { + "bbox": [ + 313, + 264, + 465, + 275 + ], + "type": "text", + "content": "Answer with one word: founder (GPT-3.5) X" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 312, + 293, + 510, + 324 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 293, + 510, + 324 + ], + "spans": [ + { + "bbox": [ + 312, + 293, + 510, + 324 + ], + "type": "text", + "content": "Assume subject_entity can be any of Bill Gates, Jeff Bezos, and Steve Jobs, while object-entity can be any of Google, Microsoft, and Meta." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 314, + 326, + 506, + 335 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 326, + 506, + 335 + ], + "spans": [ + { + "bbox": [ + 314, + 326, + 506, + 335 + ], + "type": "text", + "content": "Context: subject entity went to object entity Building 99." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 314, + 336, + 508, + 356 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 336, + 508, + 356 + ], + "spans": [ + { + "bbox": [ + 314, + 336, + 508, + 356 + ], + "type": "text", + "content": "Question: What's the relation between subject_entity and object-entity in the given context?" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 314, + 357, + 393, + 366 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 357, + 393, + 366 + ], + "spans": [ + { + "bbox": [ + 314, + 357, + 393, + 366 + ], + "type": "text", + "content": "Option: founder, visitor." 
+ } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 314, + 368, + 461, + 378 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 368, + 461, + 378 + ], + "spans": [ + { + "bbox": [ + 314, + 368, + 461, + 378 + ], + "type": "text", + "content": "Answer with one word: visitor (GPT-3.5)" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 302, + 394, + 525, + 431 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 394, + 525, + 431 + ], + "spans": [ + { + "bbox": [ + 302, + 394, + 525, + 431 + ], + "type": "text", + "content": "Figure 1: An example of entity bias in GPT-3.5. Our in-context intervention mitigates the conflicts between parametric knowledge and contextual knowledge." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 302, + 455, + 526, + 618 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 455, + 526, + 618 + ], + "spans": [ + { + "bbox": [ + 302, + 455, + 526, + 618 + ], + "type": "text", + "content": "may be misled by their memory of the entities Bill Gates and Microsoft, saying the relation between them in this context is founder rather than visitor, as shown in Fig. 1. Recent studies show that entity bias widely affects pretrained (large) language models (LLMs; Longpre et al. 2021; Yan et al. 2022; Zhou et al. 2023). These models have a tendency to disregard contextual information that contradicts or is infrequently reported in the pretrained corpus, while excessively relying on (biased) parametric knowledge (Longpre et al., 2021) to make unfaithful predictions and perpetuate bias." 
+ } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 302, + 619, + 525, + 728 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 619, + 525, + 728 + ], + "spans": [ + { + "bbox": [ + 302, + 619, + 525, + 728 + ], + "type": "text", + "content": "Prior studies have proposed multiple causality-inspired methods to mitigate entity bias (Zhang et al., 2017; Nan et al., 2021; Wang et al., 2022; Zhu et al., 2022). Despite their potential, the causal models underlying these methods are flawed in practice, primarily because of imprecise parameter estimation. For example, some causal models necessitate estimating the probability distribution" + } + ] + } + ], + "index": 22 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 67, + 750, + 290, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 750, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 750, + 290, + 772 + ], + "type": "text", + "content": "1Our code is available at https://github.com/ luka-group/Causal-View-of-Entity-Bias" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 302, + 740, + 525, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 740, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 740, + 525, + 772 + ], + "type": "text", + "content": "2Although Zhang et al. (2017) do not mention causal theory, the proposed entity masking does follow a relevant principle to cut off causal links between specific entities and labels." 
+ } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 283, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 283, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 283, + 780, + 312, + 791 + ], + "type": "text", + "content": "15173" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 124, + 795, + 468, + 806 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 124, + 795, + 468, + 806 + ], + "spans": [ + { + "bbox": [ + 124, + 795, + 468, + 806 + ], + "type": "text", + "content": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 15173-15184" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 165, + 807, + 428, + 817 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 165, + 807, + 428, + 817 + ], + "spans": [ + { + "bbox": [ + 165, + 807, + 428, + 817 + ], + "type": "text", + "content": "December 6-10, 2023 ©2023 Association for Computational Linguistics" + } + ] + } + ], + "index": 27 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 66, + 71, + 290, + 191 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 71, + 290, + 191 + ], + "spans": [ + { + "bbox": [ + 66, + 71, + 290, + 191 + ], + "type": "text", + "content": "over labels when given a sentence that is devoid of entities or contextual information (Zhang et al., 2017; Wang et al., 2022). These methods either lose predictive information about entities, or are prone to erroneous representation without contextualization. The other critical problem is the difficulty of applying these methods to black-box LLMs, of which parameters are inaccessible and logits are uncalibrated." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 68, + 194, + 291, + 396 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 194, + 291, + 396 + ], + "spans": [ + { + "bbox": [ + 68, + 194, + 291, + 396 + ], + "type": "text", + "content": "To address the aforementioned problems, the first contribution of this paper is a causal analysis of entity bias mitigation methods (§3.1). We examine and compare the structured causal models (SCMs) behind existing methods. We find that, among the theoretically equivalent causal models (Verma and Pearl, 1990), there exists a specific SCM whose parameters are comparatively easier to estimate. As shown in Fig. 2, the proposed SCM only requires to intervene input entities to mitigate the presence of spurious features before passing them to the subsequent neural layers. Moreover, it retains the entity type information3 at an appropriate level of granularity without requiring explicit entity typing." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 66, + 397, + 291, + 599 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 397, + 291, + 599 + ], + "spans": [ + { + "bbox": [ + 66, + 397, + 291, + 599 + ], + "type": "text", + "content": "The second contribution of this paper is a training-time causal intervention technique for mitigating entity bias based on the proposed SCM (§3.2). Specifically, we identify entities that are likely to share similar predictive information with the given entity. During training, we perturb embedding of the given entity within a convex hull constructed by embeddings of similar entities. During inference, we represent the entity with the center of the convex hull. Taking advantage of the continuous nature of the embedding space, this intervention does not rely on models specifically trained on natural language to estimate the label distribution of unnatural text, nor does it sacrifice predictive entity or contextual information." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 600, + 291, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 600, + 291, + 723 + ], + "spans": [ + { + "bbox": [ + 67, + 600, + 291, + 723 + ], + "type": "text", + "content": "The third contribution of this paper is to transform the training-time intervention into in-context intervention for black-box LLMs whose parameters are inaccessible, and logits are uncalibrated (§3.3). A significant advantage of the proposed SCM is that the causal intervention is carried out at the input layer, enabling its implementation within an in-context setting. Specifically, we replace entities with placeholders and define each placeholder" + } + ] + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 306, + 69, + 404, + 146 + ], + "blocks": [ + { + "bbox": [ + 306, + 69, + 404, + 146 + ], + "lines": [ + { + "bbox": [ + 306, + 69, + 404, + 146 + ], + "spans": [ + { + "bbox": [ + 306, + 69, + 404, + 146 + ], + "type": "image", + "image_path": "bf9d06eb92efa725a6034a50a4f065110b9aba5f3115575234f82e257a19ac56.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 414, + 68, + 514, + 146 + ], + "blocks": [ + { + "bbox": [ + 414, + 68, + 514, + 146 + ], + "lines": [ + { + "bbox": [ + 414, + 68, + 514, + 146 + ], + "spans": [ + { + "bbox": [ + 414, + 68, + 514, + 146 + ], + "type": "image", + "image_path": "822404bcab44be672f773ca48fbde5c0e883b30f7145a30b4351ca44fa7346de.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 305, + 147, + 526, + 212 + ], + "blocks": [ + { + "bbox": [ + 305, + 147, + 526, + 212 + ], + "lines": [ + { + "bbox": [ + 305, + 147, + 526, + 212 + ], + "spans": [ + { + "bbox": [ + 305, + 147, + 526, + 212 + ], + "type": "image", + "image_path": 
"fa721ff35c2731ef19e22b55cd75093b559fef91bc441146d20f4632827f9845.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 220, + 524, + 233 + ], + "lines": [ + { + "bbox": [ + 302, + 220, + 524, + 233 + ], + "spans": [ + { + "bbox": [ + 302, + 220, + 524, + 233 + ], + "type": "text", + "content": "Figure 2: Structured causal models revealing entity bias." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 255, + 526, + 348 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 255, + 526, + 348 + ], + "spans": [ + { + "bbox": [ + 302, + 255, + 526, + 348 + ], + "type": "text", + "content": "by examples - a set of similar entities. For example, we can replace Bill Gates in Fig. 1 with subject_entity and prepend the prompt, \"Assume that subject_entity can be any of Steve Jobs, Bill Gates, and Jeff Bezos\", to the input. This in-context intervention can be applied to any black-box LLM without additional cost." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 349, + 526, + 647 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 349, + 526, + 647 + ], + "spans": [ + { + "bbox": [ + 302, + 349, + 526, + 647 + ], + "type": "text", + "content": "Experiments on relation extraction (RE) and machine reading comprehension (MRC) show that the proposed causal intervention techniques are effective for both white-box and black-box LLMs. Under the white-box setting (" + }, + { + "bbox": [ + 302, + 349, + 526, + 647 + ], + "type": "inline_equation", + "content": "\\S 4" + }, + { + "bbox": [ + 302, + 349, + 526, + 647 + ], + "type": "text", + "content": "), our training-time intervention significantly improves out-of-distribution performance of RoBERTa (Liu et al., 2019) on RE by 5.7 points and SpanBERT (Joshi et al., 2020) on MRC by 9.1 points, compared with the vanilla version. 
Under the black-box setting (" + }, + { + "bbox": [ + 302, + 349, + 526, + 647 + ], + "type": "inline_equation", + "content": "\\S 5" + }, + { + "bbox": [ + 302, + 349, + 526, + 647 + ], + "type": "text", + "content": "), our in-context intervention effectively reduces the entity-based knowledge conflicts (Longpre et al., 2021) and improves the task performance of GPT-3.5. Specifically, our method outperforms the best baseline by up to 20.5 points of exact match accuracy on MRC and reduces the memorization ratio by up to 17.6 points on RE. Further analyses reveal the crucial role of the number of neighboring entities " + }, + { + "bbox": [ + 302, + 349, + 526, + 647 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 302, + 349, + 526, + 647 + ], + "type": "text", + "content": " in balancing the predictive information and biasing information from entities, and the necessity of entity placeholder definition for in-context intervention." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 303, + 658, + 396, + 671 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 658, + 396, + 671 + ], + "spans": [ + { + "bbox": [ + 303, + 658, + 396, + 671 + ], + "type": "text", + "content": "2 Related Work" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 685, + 526, + 752 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 685, + 526, + 752 + ], + "spans": [ + { + "bbox": [ + 302, + 685, + 526, + 752 + ], + "type": "text", + "content": "Entity Bias in LLMs. 
LLMs memorize factual knowledge in their parameters during pretraining (Roberts et al., 2020; Jiang et al., 2020) and show promising results in answering factual questions (Petroni et al., 2019; Brown et al., 2020; Wei" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 67, + 730, + 290, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 730, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 730, + 290, + 772 + ], + "type": "text", + "content": "3Entity type information plays a crucial role in entity-driven tasks. For example, without knowing a more specific location type, it is impossible to differentiate between relations born_in_city and born_in_country." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 315, + 760, + 498, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 760, + 498, + 772 + ], + "spans": [ + { + "bbox": [ + 315, + 760, + 498, + 772 + ], + "type": "text", + "content": "4https://platform.openai.com/docs/models/gpt-3-5" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "15174" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 66, + 71, + 293, + 262 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 71, + 293, + 262 + ], + "spans": [ + { + "bbox": [ + 66, + 71, + 293, + 262 + ], + "type": "text", + "content": "et al., 2022). However, the parametric knowledge may be inaccurate due to the misinformation in the training corpus (Lin et al., 2022) or outdated as the world evolves (Liska et al., 2022; Kasai et al., 2022). 
In such scenarios, it is critical for LLMs to update their predictions when provided with contextual evidence. However, previous studies (Longpre et al., 2021; Qian et al., 2021b; Yan et al., 2022) observe that language models may take entities as shortcuts, leading to spurious predictions based solely on parametric knowledge. This bias becomes more prominent when the evidence contains infrequent or conflicting knowledge compared to the training corpus." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 66, + 263, + 292, + 439 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 263, + 292, + 439 + ], + "spans": [ + { + "bbox": [ + 66, + 263, + 292, + 439 + ], + "type": "text", + "content": "To mitigate this bias, previous work (Longpre et al., 2021; Chen et al., 2022; Li et al., 2022; Zhou et al., 2023) introduces the entity substitution technique, which involves constructing counterfactual data by randomly replacing the entities, and updating the language models either by finetuning or in-context learning. Although these techniques show improved results, they are empirical and lack theoretical grounding. In this paper, we theoretically analyze the entity bias problem from a causal view. Furthermore, we propose a causal intervention method that surpasses the performance of entity substitution." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 66, + 446, + 292, + 689 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 446, + 292, + 689 + ], + "spans": [ + { + "bbox": [ + 66, + 446, + 292, + 689 + ], + "type": "text", + "content": "Debiasing with Causal Intervention. LLMs have been shown to exhibit bias problems, and the literature has paid much attention to mitigating their adverse effects (Sweeney and Najafian, 2019; Zhang et al., 2020b; Venkit and Wilson, 2021; Lalor et al., 2022). 
Recent debiasing techniques incorporate the concept of counterfactual inference, and have been applied in various tasks for bias mitigation (Niu and Zhang, 2021; Qian et al., 2021a; Wang et al., 2022). One dominant technique is based on causal mediation analysis (Udomcharoenchaikit et al., 2022), which involves decomposing the total effect into pure direct effect and total indirect effect. In this context, Wang et al. (2022) utilize total direct effect and total effect to debias relation extraction. Apart from debiasing, causal mediation analysis can be used to analyze biases in LLMs (Vig et al., 2020; Finlayson et al., 2021)." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 66, + 692, + 292, + 774 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 692, + 292, + 774 + ], + "spans": [ + { + "bbox": [ + 66, + 692, + 292, + 774 + ], + "type": "text", + "content": "In addition to intervening on the causal mediator, previous studies have also explored confounder analysis (Keith et al., 2020; Qian et al., 2021a; Feder et al., 2022; Weld et al., 2022). A confounder is a variable that influences both the input and the output, causing a spurious correlation between them." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 302, + 71, + 527, + 221 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 527, + 221 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 527, + 221 + ], + "type": "text", + "content": "Typically, the de-confounder process applies the do-calculus (Pearl, 2012) to compute the prediction assuming that the value of the confounder variable is not the observed one but follows its natural distribution (Zhang et al., 2020a; Tian et al., 2022). Our approach is also based on confounder analysis. 
While nearly all the aforementioned approaches require white-box access to the model, or at least the logits of its predictions, this work presents a pilot study of a deconfounder method that applies to purely black-box LLMs." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 302, + 232, + 364, + 244 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 232, + 364, + 244 + ], + "spans": [ + { + "bbox": [ + 302, + 232, + 364, + 244 + ], + "type": "text", + "content": "3 Method" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 254, + 527, + 349 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 254, + 527, + 349 + ], + "spans": [ + { + "bbox": [ + 302, + 254, + 527, + 349 + ], + "type": "text", + "content": "In this section, we first analyze methods for mitigating entity bias from a causal view and propose an easy-to-estimate SCM as a theoretical basis (§3.1). Based on the proposed SCM, we design a training-time intervention technique for white-box LLMs (§3.2) and an in-context intervention technique for black-box LLMs (§3.3)." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 361, + 473, + 374 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 361, + 473, + 374 + ], + "spans": [ + { + "bbox": [ + 302, + 361, + 473, + 374 + ], + "type": "text", + "content": "3.1 Causal Analysis of Entity Bias" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 380, + 527, + 596 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 380, + 527, + 596 + ], + "spans": [ + { + "bbox": [ + 302, + 380, + 527, + 596 + ], + "type": "text", + "content": "To compare existing methods in the same context, we analyze the structured causal models (SCMs) behind them. Fig. 
2 shows two typical SCMs for entity bias mitigation methods, where " + }, + { + "bbox": [ + 302, + 380, + 527, + 596 + ], + "type": "inline_equation", + "content": "X" + }, + { + "bbox": [ + 302, + 380, + 527, + 596 + ], + "type": "text", + "content": " refers to the raw input, " + }, + { + "bbox": [ + 302, + 380, + 527, + 596 + ], + "type": "inline_equation", + "content": "E" + }, + { + "bbox": [ + 302, + 380, + 527, + 596 + ], + "type": "text", + "content": " refers to entities, and " + }, + { + "bbox": [ + 302, + 380, + 527, + 596 + ], + "type": "inline_equation", + "content": "Y" + }, + { + "bbox": [ + 302, + 380, + 527, + 596 + ], + "type": "text", + "content": " refers to the label. The links " + }, + { + "bbox": [ + 302, + 380, + 527, + 596 + ], + "type": "inline_equation", + "content": "X \\rightarrow Y \\leftarrow E" + }, + { + "bbox": [ + 302, + 380, + 527, + 596 + ], + "type": "text", + "content": " show that LLMs rely on both predictive information from the whole input and the biasing information from specific entities to make the prediction. The links " + }, + { + "bbox": [ + 302, + 380, + 527, + 596 + ], + "type": "inline_equation", + "content": "E \\rightarrow X" + }, + { + "bbox": [ + 302, + 380, + 527, + 596 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 302, + 380, + 527, + 596 + ], + "type": "inline_equation", + "content": "X \\rightarrow E" + }, + { + "bbox": [ + 302, + 380, + 527, + 596 + ], + "type": "text", + "content": " assume that the context is written down with the entity in mind or vice versa. As discussed by Verma and Pearl (1990), we cannot differentiate between these two directions merely based on statistical observations. 
Indeed, the two SCMs with opposite links between " + }, + { + "bbox": [ + 302, + 380, + 527, + 596 + ], + "type": "inline_equation", + "content": "X" + }, + { + "bbox": [ + 302, + 380, + 527, + 596 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 302, + 380, + 527, + 596 + ], + "type": "inline_equation", + "content": "E" + }, + { + "bbox": [ + 302, + 380, + 527, + 596 + ], + "type": "text", + "content": " are equivalent according to Bayes' theorem:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 351, + 609, + 477, + 655 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 351, + 609, + 477, + 655 + ], + "spans": [ + { + "bbox": [ + 351, + 609, + 477, + 655 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} P (X) P (E \\mid X) P (Y \\mid X, E) \\\\ = P (Y, X, E) \\\\ = P (E) P (X \\mid E) P (Y \\mid X, E) \\\\ \\end{array}", + "image_path": "ee9ebfe937b23ab358f5f7799c7312885ff9f6ab46f3e11f5fcf7836338ba7f4.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 669, + 527, + 736 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 669, + 527, + 736 + ], + "spans": [ + { + "bbox": [ + 302, + 669, + 527, + 736 + ], + "type": "text", + "content": "As revealed by these SCMs, entity bias exists in LLMs because entities serve as either confounders or mediators. 
Thus, the bias can be mitigated through causal intervention, such as backdoor adjustment" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 327, + 748, + 501, + 775 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 327, + 748, + 501, + 775 + ], + "spans": [ + { + "bbox": [ + 327, + 748, + 501, + 775 + ], + "type": "interline_equation", + "content": "P (Y | d o (X)) = \\sum_ {E} P (Y | X, E) P (E),", + "image_path": "a2937db22a1e1541432e9a6f933a27fe9a7c0fb052ab471dcb11c8b8424585a6.jpg" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "15175" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 70, + 68, + 314, + 206 + ], + "blocks": [ + { + "bbox": [ + 70, + 68, + 314, + 206 + ], + "lines": [ + { + "bbox": [ + 70, + 68, + 314, + 206 + ], + "spans": [ + { + "bbox": [ + 70, + 68, + 314, + 206 + ], + "type": "image", + "image_path": "8e8cd6f645fb1ebe10dd5f451789ba5103994691b8b9e54976f1e508ad3d7713.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 76, + 213, + 515, + 227 + ], + "lines": [ + { + "bbox": [ + 76, + 213, + 515, + 227 + ], + "spans": [ + { + "bbox": [ + 76, + 213, + 515, + 227 + ], + "type": "text", + "content": "Figure 3: Left: Training-time intervention with " + }, + { + "bbox": [ + 76, + 213, + 515, + 227 + ], + "type": "inline_equation", + "content": "k = 4" + }, + { + "bbox": [ + 76, + 213, + 515, + 227 + ], + "type": "text", + "content": ". Right: Example of predictive and biasing information." 
+ } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 333, + 68, + 527, + 206 + ], + "blocks": [ + { + "bbox": [ + 333, + 68, + 527, + 206 + ], + "lines": [ + { + "bbox": [ + 333, + 68, + 527, + 206 + ], + "spans": [ + { + "bbox": [ + 333, + 68, + 527, + 206 + ], + "type": "image", + "image_path": "5559d0c93cb9298a3db4eb6479d03fe4fabaa1a8fd6a0b0f1c71dd44810e15ea.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 247, + 291, + 464 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 247, + 291, + 464 + ], + "spans": [ + { + "bbox": [ + 67, + 247, + 291, + 464 + ], + "type": "text", + "content": "which eliminates the influence of a specific variable (in this context, " + }, + { + "bbox": [ + 67, + 247, + 291, + 464 + ], + "type": "inline_equation", + "content": "E" + }, + { + "bbox": [ + 67, + 247, + 291, + 464 + ], + "type": "text", + "content": ") by assigning values to this variable. However, previous SCM-based debiasing methods exhibit divergent performances, since they estimate different (conditional) probabilities using different surrogates when performing the causal intervention. For example, counterfactual analysis by Wang et al. (2022) estimates and deducts the biasing effect of entities on labels by masking the context, while Zhang et al. (2017) and Longpre et al. (2021) directly remove the effect of entities by entity masking or substitution. None of them estimates the causal effects of entity names precisely, due to the highly complex architectures of LLMs, which account for their unsatisfactory performance on mitigating entity bias." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 476, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 476, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 476, + 291, + 772 + ], + "type": "text", + "content": "In this work, we consider the SCM in Fig. 2, whose parameters are much easier to estimate in practice. Since most LLMs follow a sequential structure by stacking neural layers, mitigating the entity bias in one layer will also mitigate the entity bias in subsequent layers. The underlying logic is simple - if we block the spurious features in the input, there will be no spurious correlations to capture. Therefore, we propose to mitigate the entity bias in the input layer " + }, + { + "bbox": [ + 67, + 476, + 291, + 772 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 67, + 476, + 291, + 772 + ], + "type": "text", + "content": ", which could be an embedding layer or a prompt layer. Obviously, " + }, + { + "bbox": [ + 67, + 476, + 291, + 772 + ], + "type": "inline_equation", + "content": "P(M|X,E)" + }, + { + "bbox": [ + 67, + 476, + 291, + 772 + ], + "type": "text", + "content": " can be estimated more accurately and efficiently than " + }, + { + "bbox": [ + 67, + 476, + 291, + 772 + ], + "type": "inline_equation", + "content": "P(Y|X,E)" + }, + { + "bbox": [ + 67, + 476, + 291, + 772 + ], + "type": "text", + "content": ", because there is no need to run the whole model, ensuring less error propagation and computational cost. To further improve the estimation by retaining as much predictive information as possible, we propose to estimate " + }, + { + "bbox": [ + 67, + 476, + 291, + 772 + ], + "type": "inline_equation", + "content": "P(M|do(X))" + }, + { + "bbox": [ + 67, + 476, + 291, + 772 + ], + "type": "text", + "content": " by perturbing the entity with similar entities rather than masking it. 
In the following sections, we will show how to realize the proposed causal intervention on both white-box and black-box LLMs." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 302, + 247, + 456, + 259 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 247, + 456, + 259 + ], + "spans": [ + { + "bbox": [ + 302, + 247, + 456, + 259 + ], + "type": "text", + "content": "3.2 Training-time Intervention" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 301, + 264, + 526, + 520 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 264, + 526, + 520 + ], + "spans": [ + { + "bbox": [ + 301, + 264, + 526, + 520 + ], + "type": "text", + "content": "For white-box models of which the parameters are accessible, we can effectively address their internal bias through training-time intervention. In the case of entity bias identified by the proposed SCM, we realize the causal intervention by perturbing the input entities or entity tokens using their neighboring counterparts in the embedding space, as shown in Fig. 3 (Left). For each entity presented in the input text, we first find its top " + }, + { + "bbox": [ + 301, + 264, + 526, + 520 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 301, + 264, + 526, + 520 + ], + "type": "text", + "content": " nearest neighbors according to embedding distance. Then we construct the smallest convex hull5 to cover the original entity and neighboring entities. Due to the continuous nature of the embedding space, the embeddings within the convex hull approximately represent the same predictive information as a whole. The entity-specific biasing information, which has the potential to trigger spurious shortcuts, gradually diminishes from the original entity towards the border of the convex hull." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 522, + 526, + 724 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 522, + 526, + 724 + ], + "spans": [ + { + "bbox": [ + 302, + 522, + 526, + 724 + ], + "type": "text", + "content": "During training, we introduce perturbations to the entity embedding by replacing it with a random embedding selected from within the convex hull. In this way, the convex hull bounds the predictive information, while random sampling further introduces noise and increases the diversity of data for robust training. During inference, we replace the original entity embedding with the center of the convex hull, in order to balance the trade-off between predictive and biasing information. Fig. 3 (Right) provides an example of the information preserved through such intervention. By replacing the entity Bill Gates with the center of the convex hull, encompassed by its neighboring entities, such as Steve Jobs and Jeff Bezos, we effectively retain the" + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 731, + 525, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 731, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 731, + 525, + 772 + ], + "type": "text", + "content": "5This convex hull-bounded perturbation is inspired by Dong et al. (2021), where perturbation within a convex hull formed by synonyms is used to improve model robustness against word substitutions." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "15176" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 74, + 84, + 326, + 236 + ], + "blocks": [ + { + "bbox": [ + 124, + 72, + 263, + 83 + ], + "lines": [ + { + "bbox": [ + 124, + 72, + 263, + 83 + ], + "spans": [ + { + "bbox": [ + 124, + 72, + 263, + 83 + ], + "type": "text", + "content": "1. Replace entities with placeholders" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 74, + 84, + 326, + 236 + ], + "lines": [ + { + "bbox": [ + 74, + 84, + 326, + 236 + ], + "spans": [ + { + "bbox": [ + 74, + 84, + 326, + 236 + ], + "type": "image", + "image_path": "7c8d3d8e5550de1e1a4c33f69110be8de990706e2138b587b755297749d3f07d.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 340, + 84, + 524, + 236 + ], + "blocks": [ + { + "bbox": [ + 361, + 72, + 502, + 83 + ], + "lines": [ + { + "bbox": [ + 361, + 72, + 502, + 83 + ], + "spans": [ + { + "bbox": [ + 361, + 72, + 502, + 83 + ], + "type": "text", + "content": "3. 
Define placeholders with examples" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 340, + 84, + 524, + 236 + ], + "lines": [ + { + "bbox": [ + 340, + 84, + 524, + 236 + ], + "spans": [ + { + "bbox": [ + 340, + 84, + 524, + 236 + ], + "type": "image", + "image_path": "403da6bc914c98f8dd45d3130dbb3a8385c8be8e9d55c263417ed0fedccb6b0f.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 99, + 245, + 492, + 259 + ], + "lines": [ + { + "bbox": [ + 99, + 245, + 492, + 259 + ], + "spans": [ + { + "bbox": [ + 99, + 245, + 492, + 259 + ], + "type": "text", + "content": "Figure 4: In-context intervention for black-box LLMs. We take relation extraction as an example." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 279, + 290, + 347 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 279, + 290, + 347 + ], + "spans": [ + { + "bbox": [ + 67, + 279, + 290, + 347 + ], + "type": "text", + "content": "shared predictive information (e.g., person), while mitigating the biasing information (e.g., founder of Microsoft). That is to say, the convex hull-bounded perturbation serves as an effective estimation of " + }, + { + "bbox": [ + 67, + 279, + 290, + 347 + ], + "type": "inline_equation", + "content": "P(M|do(X))" + }, + { + "bbox": [ + 67, + 279, + 290, + 347 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 68, + 381, + 204, + 393 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 381, + 204, + 393 + ], + "spans": [ + { + "bbox": [ + 68, + 381, + 204, + 393 + ], + "type": "text", + "content": "3.3 In-context Intervention" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 413, + 291, + 590 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 413, + 291, + 590 + ], + "spans": [ + { + "bbox": [ + 67, + 413, + 291, + 590 + ], + "type": "text", + "content": "The rise of Web services powered by black-box LLMs, such as GPT-3.5, introduces new challenges for mitigating entity bias, demanding debiasing methods that do not require accessible model weights and prediction logits. As discussed in §3.1, a key advantage of our SCM is that the deconfounder operation is merely on the input layer. In the context of black-box LLMs, the input is the user-provided prompt. Thus, we perform the causal intervention solely through modifying prompts to resolve entity bias. We propose a four-step (test-time) in-context intervention technique for black-box LLMs. Fig. 4 shows the whole process." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 597, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 597, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 597, + 291, + 772 + ], + "type": "text", + "content": "First, we replace the original entity mention in the input with abstract placeholders (e.g., [ENTITY]). This step effectively mitigates any biasing information from the original entity names, because the placeholders are semantic-neutral. However, this step also eliminates predictive information from entities. We show in §5.3 that, without proper definition for the placeholder, models can easily fail to answer questions. 
In the next two steps, we construct definitions to provide predictive information for each placeholder while introducing minimal additional biasing information. Second, we query the LLM to name " + }, + { + "bbox": [ + 67, + 597, + 291, + 772 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 67, + 597, + 291, + 772 + ], + "type": "text", + "content": " entities similar to the" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 278, + 526, + 550 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 278, + 526, + 550 + ], + "spans": [ + { + "bbox": [ + 302, + 278, + 526, + 550 + ], + "type": "text", + "content": "original one (e.g., " + }, + { + "bbox": [ + 302, + 278, + 526, + 550 + ], + "type": "inline_equation", + "content": "E_{o}" + }, + { + "bbox": [ + 302, + 278, + 526, + 550 + ], + "type": "text", + "content": "). These generated entities (e.g., " + }, + { + "bbox": [ + 302, + 278, + 526, + 550 + ], + "type": "inline_equation", + "content": "E_{a}" + }, + { + "bbox": [ + 302, + 278, + 526, + 550 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 302, + 278, + 526, + 550 + ], + "type": "inline_equation", + "content": "E_{b}" + }, + { + "bbox": [ + 302, + 278, + 526, + 550 + ], + "type": "text", + "content": ") present similar predictive information as the original entity, and are able to fulfill the same function as neighboring entities in §3.2. Third, we define the placeholder with the original entity and generated entities. 
For example, we can verbalize the definition as \"Assume [ENTITY] can be any of " + }, + { + "bbox": [ + 302, + 278, + 526, + 550 + ], + "type": "inline_equation", + "content": "E_{o}" + }, + { + "bbox": [ + 302, + 278, + 526, + 550 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 302, + 278, + 526, + 550 + ], + "type": "inline_equation", + "content": "E_{a}" + }, + { + "bbox": [ + 302, + 278, + 526, + 550 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 302, + 278, + 526, + 550 + ], + "type": "inline_equation", + "content": "E_{b}" + }, + { + "bbox": [ + 302, + 278, + 526, + 550 + ], + "type": "text", + "content": "\". This definition encourages the LLM to find common properties of given entities rather than relying on biasing information of one specific entity. The resulting placeholder along with its definition serves as an effective estimation of " + }, + { + "bbox": [ + 302, + 278, + 526, + 550 + ], + "type": "inline_equation", + "content": "P(M|do(X))" + }, + { + "bbox": [ + 302, + 278, + 526, + 550 + ], + "type": "text", + "content": ". Finally, we prepend the placeholder definition to the modified context and question, and query the LLM with the new prompt. This four-step adjustment ensures that the resulting prompt is free of specific biasing information pertaining to the original entity while still preserving sufficient predictive information by considering given entity examples as a whole." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 559, + 448, + 574 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 559, + 448, + 574 + ], + "spans": [ + { + "bbox": [ + 302, + 559, + 448, + 574 + ], + "type": "text", + "content": "4 White-Box Experiments" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 581, + 525, + 608 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 581, + 525, + 608 + ], + "spans": [ + { + "bbox": [ + 302, + 581, + 525, + 608 + ], + "type": "text", + "content": "In this section, we evaluate our training-time intervention under the white-box setting." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 617, + 425, + 630 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 617, + 425, + 630 + ], + "spans": [ + { + "bbox": [ + 302, + 617, + 425, + 630 + ], + "type": "text", + "content": "4.1 Experimental Setup" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 639, + 525, + 734 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 639, + 525, + 734 + ], + "spans": [ + { + "bbox": [ + 302, + 639, + 525, + 734 + ], + "type": "text", + "content": "Datasets and Metrics. We evaluate our methods on relation extraction (RE) and machine reading comprehension (MRC). For both tasks, we fine-tune models on an in-distribution (ID) training set and evaluate models on both ID and out-of-distribution (OOD) test sets. For RE, we adopt TACRED (Zhang et al., 2017) as the ID dataset and" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 740, + 525, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 740, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 740, + 525, + 772 + ], + "type": "text", + "content": "Here, we rely on the entity knowledge possessed by LLMs. 
However, it is possible to replace the LLM with external databases or tools in this step." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "15177" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 80, + 68, + 512, + 190 + ], + "blocks": [ + { + "bbox": [ + 80, + 68, + 512, + 190 + ], + "lines": [ + { + "bbox": [ + 80, + 68, + 512, + 190 + ], + "spans": [ + { + "bbox": [ + 80, + 68, + 512, + 190 + ], + "type": "table", + "html": "
<table><thead><tr><td></td><td colspan=\"3\">RE (F1)</td><td colspan=\"3\">MRC (EM)</td></tr>
<tr><td></td><td>ID</td><td>OOD</td><td>Δ</td><td>ID</td><td>OOD</td><td>Δ</td></tr></thead>
<tbody><tr><td>Vanilla Model</td><td>71.1±0.9</td><td>62.3±0.6</td><td>-12.4%</td><td>79.1†±0.1</td><td>63.1†±0.8</td><td>-20.2%</td></tr>
<tr><td>+ Continual Pretraining (Yan et al., 2022)*</td><td>-</td><td>-</td><td>-</td><td>79.6†±0.6</td><td>65.9†±1.1</td><td>-17.2%</td></tr>
<tr><td>+ CoRE (Wang et al., 2022)</td><td>71.3±0.3</td><td>61.2±0.6</td><td>-14.2%</td><td>-</td><td>-</td><td>-</td></tr>
<tr><td>+ Entity Mask (Zhang et al., 2017)</td><td>61.4±0.5</td><td>61.9±0.5</td><td>+0.9%</td><td>75.7±0.6</td><td>62.9±0.4</td><td>-16.9%</td></tr>
<tr><td>+ Entity Substitution (Longpre et al., 2021)</td><td>66.6±0.6</td><td>65.8±0.3</td><td>-1.2%</td><td>76.4±0.8</td><td>70.8±1.5</td><td>-7.3%</td></tr>
<tr><td>+ Ours</td><td>70.8±0.3</td><td>68.0±0.3</td><td>-3.9%</td><td>77.0±0.7</td><td>72.2±0.5</td><td>-6.2%</td></tr></tbody></table>
", + "image_path": "bfed6b16267ab5669a18bb0a93289a68731bd6628eaeb168d33cc6461d8768dd.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 192, + 525, + 229 + ], + "lines": [ + { + "bbox": [ + 67, + 192, + 525, + 229 + ], + "spans": [ + { + "bbox": [ + 67, + 192, + 525, + 229 + ], + "type": "text", + "content": "Table 1: Results under white-box setting. We report the average F1/EM score and standard deviation of three runs. " + }, + { + "bbox": [ + 67, + 192, + 525, + 229 + ], + "type": "inline_equation", + "content": "\\Delta" + }, + { + "bbox": [ + 67, + 192, + 525, + 229 + ], + "type": "text", + "content": " shows the relative performance change between ID and OOD. The best number of each column is in bold. * Continual pretraining is not directly comparable to finetuning methods. † Numbers copied from Yan et al. (2022)." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 67, + 250, + 291, + 413 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 250, + 291, + 413 + ], + "spans": [ + { + "bbox": [ + 67, + 250, + 291, + 413 + ], + "type": "text", + "content": "EntRED (Wang et al., 2023) as the OOD dataset, and report micro-F1 score. In both datasets, entities in each sentence are given. For MRC, we adopt TriviaQA (Joshi et al., 2017) as the ID dataset and its answer-substituted version (Yan et al., 2022) as the OOD dataset, and report exact match (EM) score. Following Yan et al. (2022), we hold out " + }, + { + "bbox": [ + 67, + 250, + 291, + 413 + ], + "type": "inline_equation", + "content": "10\\%" + }, + { + "bbox": [ + 67, + 250, + 291, + 413 + ], + "type": "text", + "content": " of the training data for development and evaluate models on the original development set. We use the DBName version of their OOD dataset. For all metrics, we report the average score with standard deviation of three runs." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 424, + 291, + 600 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 424, + 291, + 600 + ], + "spans": [ + { + "bbox": [ + 67, + 424, + 291, + 600 + ], + "type": "text", + "content": "Baselines. We compare our methods with the following baselines. Entity Mask (Zhang et al., 2017) masks the subject and object entities in the sentence with special tokens. Entity Substitution (Longpre et al., 2021) randomly selects an entity of the same type to substitute the original entity. CoRE (Wang et al., 2022) applies counterfactual inference by computing the difference between the prediction made with the entire sentence and the prediction made with only the entities observed. Continual Pretraining (Yan et al., 2022) introduces an intermediate pretraining stage to the backbone model with the objective of recovering masked entities." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 611, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 611, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 611, + 291, + 772 + ], + "type": "text", + "content": "Implementation Details. For RE, we apply RoBERTa (Liu et al., 2019) as the backbone model following previous works (Zhou and Chen, 2022; Wang et al., 2022). We use the entity Marker_punct input format from Zhou and Chen (2022) in main experiments, in order to mitigate the impact of explicit entity type information on our analysis of entity bias. For MRC, we apply SpanBERT (Joshi et al., 2020) as the backbone model following Yan et al. (2022). Since entities are not given in MRC datasets, we use the same named entity recognition tool used by Yan et al. 
to" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 302, + 250, + 526, + 466 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 250, + 526, + 466 + ], + "spans": [ + { + "bbox": [ + 302, + 250, + 526, + 466 + ], + "type": "text", + "content": "extract entities. Since the detected entities could be noisy and incomplete, we perform our method upon the answer-substituted training set, ensuring all answer entities are perturbed as strongly as with Entity Substitution. Since RoBERTa and SpanBERT lack entity-level embeddings, we apply our causal intervention to each token embedding within the entity mention instead. To construct the convex hull, we select neighboring tokens based on their Euclidean distance to the original token in the embedding space. For both tasks, we perform training-time intervention on each entity token with " + }, + { + "bbox": [ + 302, + 250, + 526, + 466 + ], + "type": "inline_equation", + "content": "k = 3" + }, + { + "bbox": [ + 302, + 250, + 526, + 466 + ], + "type": "text", + "content": ". While further data augmentation is always possible, for a fair comparison, we finetune all the models with the same amount of data. More implementation details are in Appx. §A.1." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 477, + 365, + 489 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 477, + 365, + 489 + ], + "spans": [ + { + "bbox": [ + 302, + 477, + 365, + 489 + ], + "type": "text", + "content": "4.2 Results" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 496, + 525, + 589 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 496, + 525, + 589 + ], + "spans": [ + { + "bbox": [ + 302, + 496, + 525, + 589 + ], + "type": "text", + "content": "As shown in Tab.
1, the vanilla RoBERTa and SpanBERT experience significant declines in performance on RE " + }, + { + "bbox": [ + 302, + 496, + 525, + 589 + ], + "type": "inline_equation", + "content": "(-12.4\\%)" + }, + { + "bbox": [ + 302, + 496, + 525, + 589 + ], + "type": "text", + "content": " and MRC " + }, + { + "bbox": [ + 302, + 496, + 525, + 589 + ], + "type": "inline_equation", + "content": "(-20.2\\%)" + }, + { + "bbox": [ + 302, + 496, + 525, + 589 + ], + "type": "text", + "content": " when evaluated on OOD test sets. For both tasks, the OOD test set exhibits lower entity bias, so achieving better performance on it suggests that the model relies less on entity bias as a predictive factor." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 591, + 526, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 591, + 526, + 713 + ], + "spans": [ + { + "bbox": [ + 302, + 591, + 526, + 713 + ], + "type": "text", + "content": "CoRE and Continual Pretraining are the only baselines that improve the ID performance. CoRE leads to a slight performance decrease on the OOD test set of RE in exchange, while Continual Pretraining further increases the OOD performance on MRC. Entity Mask successfully narrows down or even reverses the relative performance drop under the OOD setting on the two tasks. However, its absolute performance decreases significantly due" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 721, + 525, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 721, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 721, + 525, + 772 + ], + "type": "text", + "content": "This is because CoRE is designed for a class-balanced setting, but this experiment emphasizes the performance on the raw class distribution. Moreover, we search its bias mitigation weight on the ID development set, which has a notably different entity distribution compared with the OOD test set." 
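The neighbor-based training-time intervention from the implementation details above can be sketched as follows. This is a minimal NumPy sketch under our own assumptions: the paper states that neighbors are chosen by Euclidean distance in the embedding space (with k = 3) and that the perturbed entity stays within their convex hull; the specific random convex combination and all names here are illustrative.

```python
import numpy as np

def intervene_embedding(emb_matrix, token_id, k=3, rng=None):
    """Perturb one entity token embedding inside the convex hull of its
    k nearest neighbors (Euclidean distance) in the embedding space."""
    rng = np.random.default_rng() if rng is None else rng
    e = emb_matrix[token_id]
    # Euclidean distance from the original token to every vocabulary token.
    dists = np.linalg.norm(emb_matrix - e, axis=1)
    dists[token_id] = np.inf               # exclude the token itself
    neighbor_ids = np.argsort(dists)[:k]   # k nearest neighbors
    # A random convex combination of the original token and its neighbors
    # is guaranteed to lie inside their convex hull.
    verts = np.vstack([e, emb_matrix[neighbor_ids]])
    weights = rng.dirichlet(np.ones(len(verts)))
    return weights @ verts
```

A larger k enlarges the hull and moves its interior further from the original entity, which matches the trade-off analyzed in the effect-of-k discussion below.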
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "15178" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 70, + 71, + 287, + 200 + ], + "blocks": [ + { + "bbox": [ + 70, + 71, + 287, + 200 + ], + "lines": [ + { + "bbox": [ + 70, + 71, + 287, + 200 + ], + "spans": [ + { + "bbox": [ + 70, + 71, + 287, + 200 + ], + "type": "image", + "image_path": "1e5e73f04a91dfac533284c7fb68b82174d8773d9d033b143674b9b8251048c4.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 211, + 290, + 236 + ], + "lines": [ + { + "bbox": [ + 67, + 211, + 290, + 236 + ], + "spans": [ + { + "bbox": [ + 67, + 211, + 290, + 236 + ], + "type": "text", + "content": "Figure 5: F1 score of training-time intervention with different " + }, + { + "bbox": [ + 67, + 211, + 290, + 236 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 67, + 211, + 290, + 236 + ], + "type": "text", + "content": " on RE." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 259, + 291, + 407 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 259, + 291, + 407 + ], + "spans": [ + { + "bbox": [ + 67, + 259, + 291, + 407 + ], + "type": "text", + "content": "to the loss of predictive information from entities. Moreover, its effectiveness depends on the properties of the task. Unlike MRC, entities are given and are not answers in RE, so the gap between ID and OOD performance of Entity Mask is much smaller. 
Entity Substitution stands out among all the baselines in terms of the OOD performance, with an absolute improvement of 3.5 points on RE and 7.7 points on MRC. However, its ID performance suffers a lot from the distribution shift of entities during training." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 409, + 291, + 544 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 409, + 291, + 544 + ], + "spans": [ + { + "bbox": [ + 67, + 409, + 291, + 544 + ], + "type": "text", + "content": "Our training-time intervention achieves the best OOD performance, with an absolute improvement of 2.2 points on RE and 1.4 points on MRC compared with Entity Substitution. At the same time, its ID performance is also better. These results show that our method mitigates entity bias more effectively without losing much predictive information. In other words, the proposed method represents a better way to estimate the parameters of the proposed SCM accurately." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 558, + 136, + 571 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 558, + 136, + 571 + ], + "spans": [ + { + "bbox": [ + 67, + 558, + 136, + 571 + ], + "type": "text", + "content": "4.3 Analysis" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 577, + 291, + 618 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 577, + 291, + 618 + ], + "spans": [ + { + "bbox": [ + 67, + 577, + 291, + 618 + ], + "type": "text", + "content": "To provide a comprehensive understanding of our training-time intervention, we further conduct analyses on RE." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 624, + 291, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 624, + 291, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 624, + 291, + 773 + ], + "type": "text", + "content": "Effect of " + }, + { + "bbox": [ + 67, + 624, + 291, + 773 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 67, + 624, + 291, + 773 + ], + "type": "text", + "content": ". The number of neighbors, " + }, + { + "bbox": [ + 67, + 624, + 291, + 773 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 67, + 624, + 291, + 773 + ], + "type": "text", + "content": ", plays a crucial role in balancing the predictive information and biasing information from entities. To find the sweet spot of " + }, + { + "bbox": [ + 67, + 624, + 291, + 773 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 67, + 624, + 291, + 773 + ], + "type": "text", + "content": ", we examine its influence on model performance as shown in Fig. 5. In general, the ID performance decreases when " + }, + { + "bbox": [ + 67, + 624, + 291, + 773 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 67, + 624, + 291, + 773 + ], + "type": "text", + "content": " increases. As the value of " + }, + { + "bbox": [ + 67, + 624, + 291, + 773 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 67, + 624, + 291, + 773 + ], + "type": "text", + "content": " increases, the resulting convex hull becomes larger, causing the center of the hull to move further away from the original entity. 
Consequently, both the predictive information and biasing information that contribute to ID performance grad" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 71, + 526, + 166 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 166 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 166 + ], + "type": "text", + "content": "ually diminish. In contrast, the OOD performance is lower when " + }, + { + "bbox": [ + 302, + 71, + 526, + 166 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 302, + 71, + 526, + 166 + ], + "type": "text", + "content": " is too big or too small. When " + }, + { + "bbox": [ + 302, + 71, + 526, + 166 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 302, + 71, + 526, + 166 + ], + "type": "text", + "content": " is too big, the same problem under ID setting also happens to the OOD setting. When " + }, + { + "bbox": [ + 302, + 71, + 526, + 166 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 302, + 71, + 526, + 166 + ], + "type": "text", + "content": " is too small, the biasing information is not effectively mitigated, because the perturbed entity is too close to the original entity." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 171, + 527, + 348 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 171, + 527, + 348 + ], + "spans": [ + { + "bbox": [ + 302, + 171, + 527, + 348 + ], + "type": "text", + "content": "Entity Type as Input. Previous experiments in this section do not explicitly input entity information as it may disturb the causal analysis. Here, we analyze the effect of entity type information as input. We use the typed-entity Marker_punct input format from Zhou and Chen (2022). The ID and OOD F1 scores of vanilla RoBERTa model are 74.6 and 68.9 points, respectively. 
Our training-time intervention further improves the ID performance by 0.7 points and the OOD performance by 2.9 points. These results indicate that information from neighboring entities is complementary to coarse-grained entity type information for precise RE." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 359, + 446, + 373 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 359, + 446, + 373 + ], + "spans": [ + { + "bbox": [ + 302, + 359, + 446, + 373 + ], + "type": "text", + "content": "5 Black-Box Experiments" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 381, + 525, + 423 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 381, + 525, + 423 + ], + "spans": [ + { + "bbox": [ + 302, + 381, + 525, + 423 + ], + "type": "text", + "content": "In this section, we evaluate our in-context intervention for mitigating entity bias from LLMs under black-box setting." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 433, + 425, + 447 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 433, + 425, + 447 + ], + "spans": [ + { + "bbox": [ + 302, + 433, + 425, + 447 + ], + "type": "text", + "content": "5.1 Experimental Setup" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 456, + 526, + 740 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 456, + 526, + 740 + ], + "spans": [ + { + "bbox": [ + 302, + 456, + 526, + 740 + ], + "type": "text", + "content": "Datasets. Following Zhou et al. (2023), we adopt GPT-3.5 text-davinci-003 as the backbone LLM and evaluate the model performance under a zero-shot setting. We use the RE and MRC datasets provided by Zhou et al. (2023). The RE dataset is based on Re-TACRED (Stoica et al., 2021). Zhou et al. pair each instance's entities with a randomly sampled context that shares the same entity types but possesses different relations. 
To mitigate the influence of the label no Relation, which can also serve as a signal of abstention, we further filter out all instances whose original or updated labels are no relation. The MRC dataset is based on Natural Questions (Kwiatkowski et al., 2019). Zhou et al. replace the original answer in each instance with a randomly sampled entity of the same type. They only collect instances where the LLM can give the correct answer based on the raw context. Intuitively, LLMs that faithfully capture contextual information should update their answers based on the new context." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 746, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 746, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 746, + 525, + 772 + ], + "type": "text", + "content": "Metrics. We report the F1 score for RE, and EM score for MRC. To align with previous works, we" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 792 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 792 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 792 + ], + "type": "text", + "content": "15179" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 70, + 81, + 180, + 188 + ], + "blocks": [ + { + "bbox": [ + 104, + 70, + 155, + 81 + ], + "lines": [ + { + "bbox": [ + 104, + 70, + 155, + 81 + ], + "spans": [ + { + "bbox": [ + 104, + 70, + 155, + 81 + ], + "type": "text", + "content": "MRC (EM↑)" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 70, + 81, + 180, + 188 + ], + "lines": [ + { + "bbox": [ + 70, + 81, + 180, + 188 + ], + "spans": [ + { + "bbox": [ + 70, + 81, + 180, + 188 + ], + "type": "image", + "image_path": 
"be6c9003fb40edac702a485a798962fd538118535c39d89b507e4846ed2de6dc.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 184, + 81, + 295, + 187 + ], + "blocks": [ + { + "bbox": [ + 218, + 70, + 269, + 81 + ], + "lines": [ + { + "bbox": [ + 218, + 70, + 269, + 81 + ], + "spans": [ + { + "bbox": [ + 218, + 70, + 269, + 81 + ], + "type": "text", + "content": "MRC (MR↓)" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 184, + 81, + 295, + 187 + ], + "lines": [ + { + "bbox": [ + 184, + 81, + 295, + 187 + ], + "spans": [ + { + "bbox": [ + 184, + 81, + 295, + 187 + ], + "type": "image", + "image_path": "bc72c29f86b2b87f5661457131bdf034bd1761395f3cb70403b534e8f09f5462.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 299, + 81, + 408, + 187 + ], + "blocks": [ + { + "bbox": [ + 337, + 70, + 378, + 81 + ], + "lines": [ + { + "bbox": [ + 337, + 70, + 378, + 81 + ], + "spans": [ + { + "bbox": [ + 337, + 70, + 378, + 81 + ], + "type": "text", + "content": "RE(F1↑)" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 299, + 81, + 408, + 187 + ], + "lines": [ + { + "bbox": [ + 299, + 81, + 408, + 187 + ], + "spans": [ + { + "bbox": [ + 299, + 81, + 408, + 187 + ], + "type": "image", + "image_path": "cd431268605c6800f35fa1004b6cbc4ecbaddfcdcd993a6c0e1d14738e944b64.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 413, + 81, + 522, + 187 + ], + "blocks": [ + { + "bbox": [ + 450, + 70, + 493, + 81 + ], + "lines": [ + { + "bbox": [ + 450, + 70, + 493, + 81 + ], + "spans": [ + { + "bbox": [ + 450, + 70, + 493, + 81 + ], + "type": "text", + "content": "RE (MR↓)" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + }, + { + 
"bbox": [ + 413, + 81, + 522, + 187 + ], + "lines": [ + { + "bbox": [ + 413, + 81, + 522, + 187 + ], + "spans": [ + { + "bbox": [ + 413, + 81, + 522, + 187 + ], + "type": "image", + "image_path": "e81114962483800c37f0c8a36288ff6f38777800b7221cf11cb9b2953493d90d.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 71, + 268, + 288, + 355 + ], + "blocks": [ + { + "bbox": [ + 67, + 211, + 524, + 248 + ], + "lines": [ + { + "bbox": [ + 67, + 211, + 524, + 248 + ], + "spans": [ + { + "bbox": [ + 67, + 211, + 524, + 248 + ], + "type": "text", + "content": "Figure 6: GPT-3.5 results on MRC and RE under black-box setting. We report the EM score on MRC and the F1 score on RE, for which higher scores are better. We also report the MR score on both tasks, for which lower scores are better. Our in-context intervention performs consistently better than baselines under all settings." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 71, + 268, + 288, + 355 + ], + "lines": [ + { + "bbox": [ + 71, + 268, + 288, + 355 + ], + "spans": [ + { + "bbox": [ + 71, + 268, + 288, + 355 + ], + "type": "image", + "image_path": "4d847bd6ecc6921a78d6e8692d8fa0ca19bbd734699d2d8f8b462ba5f6d13c98.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 365, + 289, + 390 + ], + "lines": [ + { + "bbox": [ + 67, + 365, + 289, + 390 + ], + "spans": [ + { + "bbox": [ + 67, + 365, + 289, + 390 + ], + "type": "text", + "content": "Figure 7: Ablation study of in-context intervention for GPT-3.5 on RE." 
+ } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 411, + 289, + 453 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 411, + 289, + 453 + ], + "spans": [ + { + "bbox": [ + 67, + 411, + 289, + 453 + ], + "type": "text", + "content": "also report the memorization ratio (MR; Longpre et al. 2021) to measure the model's ability to update answers based on given contexts." + }, + { + "bbox": [ + 67, + 411, + 289, + 453 + ], + "type": "inline_equation", + "content": "^{8}" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 66, + 457, + 292, + 619 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 457, + 292, + 619 + ], + "spans": [ + { + "bbox": [ + 66, + 457, + 292, + 619 + ], + "type": "text", + "content": "Baselines. We compare our in-context intervention with the methods introduced by Zhou et al. (2023). Base prompts directly concatenate the context and the question of each instance as the query. Attribute-based prompts append \"in the given context\" to the question. Opinion-based prompts modify the context into a narrator's statement by prepending \"Bob said\" to the context, and then query the LLM about the narrator's opinion by prepending \"What's Bob's opinion on\" to the question. We evaluate all methods with and without specifically designed task instructions following Zhou et al. (2023)." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 67, + 624, + 290, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 624, + 290, + 733 + ], + "spans": [ + { + "bbox": [ + 67, + 624, + 290, + 733 + ], + "type": "text", + "content": "Implementation Details. We apply our in-context intervention to attribute-based prompts. We use the backbone LLM to propose two similar entities along with the original entity to define each placeholder. 
To further eliminate the spurious entity mapping, we shuffle the entities for each placeholder before verbalization. Details of all prompt templates used can be found in Appx. §A.2. Since" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 269, + 526, + 350 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 269, + 526, + 350 + ], + "spans": [ + { + "bbox": [ + 302, + 269, + 526, + 350 + ], + "type": "text", + "content": "entities are not given in MRC, we detect named entities and replace them with placeholders using gpt-3.5-turbo as an external tool. Given the potential abundance of entities in long contexts, we do not replace entities that exclusively appear in the context." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 361, + 365, + 372 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 361, + 365, + 372 + ], + "spans": [ + { + "bbox": [ + 302, + 361, + 365, + 372 + ], + "type": "text", + "content": "5.2 Results" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 379, + 525, + 581 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 379, + 525, + 581 + ], + "spans": [ + { + "bbox": [ + 302, + 379, + 525, + 581 + ], + "type": "text", + "content": "As shown in Fig. 6, all methods benefit from carefully designed task instructions in terms of task performance. The Opinion-based prompt performs the best among all baselines in most cases. Compared with the Base prompt, it significantly improves the EM score by 18.7-21.5 points on MRC and the F1 score by 0.6-4.7 points on RE. Our in-context intervention achieves the highest EM/F1 score and the lowest MR score under all settings. Specifically, without task instruction, our in-context intervention outperforms the best baseline by 20.5 EM points on MRC and reduces the MR score by 17.6 points on RE. 
These results demonstrate the effectiveness of our causal intervention for addressing entity-based knowledge conflicts in black-box LLMs." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 302, + 592, + 401, + 605 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 592, + 401, + 605 + ], + "spans": [ + { + "bbox": [ + 302, + 592, + 401, + 605 + ], + "type": "text", + "content": "5.3 Ablation Study" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 302, + 611, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 611, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 611, + 525, + 772 + ], + "type": "text", + "content": "In addition, we conduct an ablation study on RE to provide a comprehensive understanding of our method, as shown in Fig. 7. When the placeholder definition is not provided (i.e., w/o definition), no entity information, including both biasing and predictive information, appears in the input. As a result, it successfully blocks any spurious shortcuts, with the MR dropping to 0. However, the F1 score also drops sharply from 71.8 points to 37.9 points, indicating that some entity information is essential to accurate RE and the LLM cannot understand the placeholders well without their definition." 
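The memorization ratio used throughout these results is defined in the paper's footnote as MR = P_o / (P_o + P_s), where P_o is the probability of keeping the original (memorized) answer and P_s the probability of correctly updating it. A simplified count-based version (our own simplification, counting predictions rather than probabilities) can be computed as:

```python
def memorization_ratio(preds, original_answers, substituted_answers):
    """MR = P_o / (P_o + P_s), approximated by counting how often the model
    keeps the original answer (p_o) vs. updates to the substituted one (p_s)."""
    p_o = sum(p == o for p, o in zip(preds, original_answers))
    p_s = sum(p == s for p, s in zip(preds, substituted_answers))
    # Predictions matching neither answer contribute to neither count.
    return p_o / (p_o + p_s) if (p_o + p_s) else 0.0
```

A lower MR indicates that the model updates its answers based on the given context rather than relying on memorized parametric knowledge.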
+ } + ] + } + ], + "index": 18 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 67, + 738, + 290, + 773 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 738, + 290, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 738, + 290, + 773 + ], + "type": "inline_equation", + "content": "{}^{8}MR = \\frac{P_{o}}{P_{o} + P_{s}}" + }, + { + "bbox": [ + 67, + 738, + 290, + 773 + ], + "type": "text", + "content": " , where " + }, + { + "bbox": [ + 67, + 738, + 290, + 773 + ], + "type": "inline_equation", + "content": "P_{o}" + }, + { + "bbox": [ + 67, + 738, + 290, + 773 + ], + "type": "text", + "content": " is the probability that the model generates the original answer and " + }, + { + "bbox": [ + 67, + 738, + 290, + 773 + ], + "type": "inline_equation", + "content": "P_{s}" + }, + { + "bbox": [ + 67, + 738, + 290, + 773 + ], + "type": "text", + "content": " is the probability that the model updates the answer correctly." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "15180" + } + ] + } + ], + "index": 20 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 293, + 275 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 293, + 275 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 293, + 275 + ], + "type": "text", + "content": "We further examine the role of original entities in the placeholder definition. On the one hand, we remove the original entities from the definition (i.e., w/o original entity). Results show that our method can still improve F1 while reducing MR. This verifies the effectiveness of using a set of similar entities to represent the predictive information from the original entity. 
On the other hand, we put the original subject and object entities at the same position (i.e., w/o entity shuffle) in the definition so that the LLM can easily map them. As a result, the MR increases significantly, showing that the LLM can find spurious shortcuts even by mapping the subject entity and the object entity from two entity sets." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 289, + 147, + 301 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 289, + 147, + 301 + ], + "spans": [ + { + "bbox": [ + 67, + 289, + 147, + 301 + ], + "type": "text", + "content": "6 Conclusion" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 313, + 291, + 530 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 313, + 291, + 530 + ], + "spans": [ + { + "bbox": [ + 67, + 313, + 291, + 530 + ], + "type": "text", + "content": "In this paper, we analyze the entity bias in LLMs from a causal view. Building upon an SCM whose parameters are easier to estimate, we propose training-time causal intervention for white-box LLMs and in-context causal intervention for black-box LLMs. Both intervention techniques perturb the original entity with neighboring entities to mitigate spurious correlations between specific entities and predictions. Experiments on relation extraction and machine reading comprehension show that the proposed intervention can effectively reduce the conflicts between parametric knowledge and contextual knowledge and significantly improve the performance of LLMs. Future work can apply our causal intervention to more LLMs and tasks to achieve context-faithful answers." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 68, + 545, + 166, + 560 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 545, + 166, + 560 + ], + "spans": [ + { + "bbox": [ + 68, + 545, + 166, + 560 + ], + "type": "text", + "content": "Acknowledgement" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 570, + 291, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 570, + 291, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 570, + 291, + 773 + ], + "type": "text", + "content": "We appreciate the reviewers for their insightful comments and suggestions. Fei Wang is supported by the Annenberg Fellowship and the Amazon ML Fellowship. Wenjie Mo is supported by the USC CURVE Fellowship and the Provost's Research Fellowship. Wenxuan Zhou and Muhao Chen are supported by the NSF Grant IIS 2105329, the NSF Grant ITE 2333736, the DARPA MCS program under Contract No. N660011924033 with the United States Office Of Naval Research. This work is also supported in part by a Cisco Research Award, two Amazon Research Awards, and a Keston Research Award. Computing of this work has been partly supported by a subaward of NSF Cloudbank 1925001 through UCSD." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 303, + 70, + 361, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 70, + 361, + 83 + ], + "spans": [ + { + "bbox": [ + 303, + 70, + 361, + 83 + ], + "type": "text", + "content": "Limitation" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 91, + 527, + 336 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 91, + 527, + 336 + ], + "spans": [ + { + "bbox": [ + 302, + 91, + 527, + 336 + ], + "type": "text", + "content": "Although we have tried to verify the effectiveness of our method under diverse settings, including different LLMs, different levels of access to model parameters, and different tasks, there are always more options for further investigation, especially now that new LLMs keep being released. Since the properties of the entity bias issue may vary across different LLMs and across datasets from different domains, future work can build better benchmarks for more comprehensive evaluation. In this paper, we only consider zero-shot prompting for black-box LLMs, because this helps us control variables during causal analysis. However, it is possible to combine the proposed causal intervention with cutting-edge LLM inference methods, such as in-context learning (Brown et al., 2020), although the underlying SCM may become more complex."
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 304, + 358, + 362, + 370 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 358, + 362, + 370 + ], + "spans": [ + { + "bbox": [ + 304, + 358, + 362, + 370 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 303, + 376, + 527, + 773 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 304, + 376, + 525, + 444 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 376, + 525, + 444 + ], + "spans": [ + { + "bbox": [ + 304, + 376, + 525, + 444 + ], + "type": "text", + "content": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 304, + 451, + 526, + 540 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 451, + 526, + 540 + ], + "spans": [ + { + "bbox": [ + 304, + 451, + 526, + 540 + ], + "type": "text", + "content": "Hung-Ting Chen, Michael Zhang, and Eunsol Choi. 2022. Rich knowledge sources bring complex knowledge conflicts: Recalibrating models to reflect conflicting evidence. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2292-2307, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 304, + 548, + 525, + 593 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 548, + 525, + 593 + ], + "spans": [ + { + "bbox": [ + 304, + 548, + 525, + 593 + ], + "type": "text", + "content": "Xinshuai Dong, Anh Tuan Luu, Rongrong Ji, and Hong Liu. 2021. Towards robustness against natural language word substitutions. 
In International Conference on Learning Representations." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 304, + 600, + 526, + 678 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 600, + 526, + 678 + ], + "spans": [ + { + "bbox": [ + 304, + 600, + 526, + 678 + ], + "type": "text", + "content": "Amir Feder, Katherine A Keith, Emaad Manzoor, Reid Pryzant, Dhanya Sridhar, Zach Wood-Doughty, Jacob Eisenstein, Justin Grimmer, Roi Reichart, Margaret E Roberts, et al. 2022. Causal inference in natural language processing: Estimation, prediction, interpretation and beyond. Transactions of the Association for Computational Linguistics, 10:1138-1158." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 303, + 686, + 526, + 742 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 686, + 526, + 742 + ], + "spans": [ + { + "bbox": [ + 303, + 686, + 526, + 742 + ], + "type": "text", + "content": "Matthew Finlayson, Aaron Mueller, Sebastian Gehrmann, Stuart Shieber, Tal Linzen, and Yonatan Belinkov. 2021. Causal analysis of syntactic agreement mechanisms in neural language models. arXiv preprint arXiv:2106.06087." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 303, + 749, + 527, + 773 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 749, + 527, + 773 + ], + "spans": [ + { + "bbox": [ + 303, + 749, + 527, + 773 + ], + "type": "text", + "content": "Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman," + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "15181" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 290, + 772 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 80, + 72, + 290, + 105 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 72, + 290, + 105 + ], + "spans": [ + { + "bbox": [ + 80, + 72, + 290, + 105 + ], + "type": "text", + "content": "and Phil Blunsom. 2015. Teaching machines to read and comprehend. Advances in neural information processing systems, 28." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 114, + 289, + 158 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 114, + 289, + 158 + ], + "spans": [ + { + "bbox": [ + 69, + 114, + 289, + 158 + ], + "type": "text", + "content": "Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know? Transactions of the Association for Computational Linguistics, 8:423-438." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 166, + 290, + 222 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 166, + 290, + 222 + ], + "spans": [ + { + "bbox": [ + 69, + 166, + 290, + 222 + ], + "type": "text", + "content": "Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 229, + 290, + 296 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 229, + 290, + 296 + ], + "spans": [ + { + "bbox": [ + 69, + 229, + 290, + 296 + ], + "type": "text", + "content": "Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601-1611." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 304, + 290, + 359 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 304, + 290, + 359 + ], + "spans": [ + { + "bbox": [ + 69, + 304, + 290, + 359 + ], + "type": "text", + "content": "Jungo Kasai, Keisuke Sakaguchi, Yoichi Takahashi, Ronan Le Bras, Akari Asai, Xinyan Yu, Dragomir Radev, Noah A Smith, Yejin Choi, and Kentaro Inui. 2022. Realtime qa: What's the answer right now? arXiv preprint arXiv:2207.13332." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 367, + 290, + 432 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 367, + 290, + 432 + ], + "spans": [ + { + "bbox": [ + 69, + 367, + 290, + 432 + ], + "type": "text", + "content": "Katherine Keith, David Jensen, and Brendan O'Connor. 2020. Text and causal inference: A review of using text to remove confounding from causal estimates. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5332-5344." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 441, + 290, + 518 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 441, + 290, + 518 + ], + "spans": [ + { + "bbox": [ + 69, + 441, + 290, + 518 + ], + "type": "text", + "content": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453-466." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 527, + 290, + 593 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 527, + 290, + 593 + ], + "spans": [ + { + "bbox": [ + 69, + 527, + 290, + 593 + ], + "type": "text", + "content": "John P Lalor, Yi Yang, Kendall Smith, Nicole Forsgren, and Ahmed Abbasi. 2022. Benchmarking intersectional biases in nlp. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3598-3609." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 601, + 290, + 655 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 601, + 290, + 655 + ], + "spans": [ + { + "bbox": [ + 69, + 601, + 290, + 655 + ], + "type": "text", + "content": "Daliang Li, Ankit Singh Rawat, Manzil Zaheer, Xin Wang, Michal Lukasik, Andreas Veit, Felix Yu, and Sanjiv Kumar. 2022. Large language models with controllable working memory. arXiv preprint arXiv:2211.05110." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 665, + 290, + 731 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 665, + 290, + 731 + ], + "spans": [ + { + "bbox": [ + 69, + 665, + 290, + 731 + ], + "type": "text", + "content": "Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. TruthfulQA: Measuring how models mimic human falsehoods. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3214-3252, Dublin, Ireland. Association for Computational Linguistics." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 69, + 739, + 290, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 739, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 739, + 290, + 772 + ], + "type": "text", + "content": "Adam Liska, Tomas Kocisky, Elena Gribovskaya, Tayfun Terzi, Eren Sezener, Devang Agrawal, Cyprien De Masson D'Autume, Tim Scholtes, Manzil Zaheer," + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 525, + 772 + ], + "type": "list", + "angle": 0, + "index": 22, + "blocks": [ + { + "bbox": [ + 314, + 72, + 525, + 126 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 72, + 525, + 126 + ], + "spans": [ + { + "bbox": [ + 314, + 72, + 525, + 126 + ], + "type": "text", + "content": "Susannah Young, et al. 2022. Streamingqa: A benchmark for adaptation to new knowledge over time in question answering models. In International Conference on Machine Learning, pages 13604-13622. PMLR."
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 136, + 525, + 190 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 136, + 525, + 190 + ], + "spans": [ + { + "bbox": [ + 304, + 136, + 525, + 190 + ], + "type": "text", + "content": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 198, + 525, + 264 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 198, + 525, + 264 + ], + "spans": [ + { + "bbox": [ + 304, + 198, + 525, + 264 + ], + "type": "text", + "content": "Shayne Longpre, Kartik Perisetla, Anthony Chen, Nikhil Ramesh, Chris DuBois, and Sameer Singh. 2021. Entity-based knowledge conflicts in question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7052-7063." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 272, + 525, + 327 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 272, + 525, + 327 + ], + "spans": [ + { + "bbox": [ + 304, + 272, + 525, + 327 + ], + "type": "text", + "content": "Guoshun Nan, Jiaqi Zeng, Rui Qiao, Zhijiang Guo, and Wei Lu. 2021. Uncovering main causalities for long-tailed information extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9683-9695." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 336, + 525, + 379 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 336, + 525, + 379 + ], + "spans": [ + { + "bbox": [ + 304, + 336, + 525, + 379 + ], + "type": "text", + "content": "Yulei Niu and Hanwang Zhang. 2021. Introspective distillation for robust question answering. Advances in Neural Information Processing Systems, 34:16292-16304." 
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 387, + 525, + 421 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 387, + 525, + 421 + ], + "spans": [ + { + "bbox": [ + 304, + 387, + 525, + 421 + ], + "type": "text", + "content": "Judea Pearl. 2012. The do-calculus revisited. In Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence, pages 3-11." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 429, + 525, + 496 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 429, + 525, + 496 + ], + "spans": [ + { + "bbox": [ + 304, + 429, + 525, + 496 + ], + "type": "text", + "content": "Hao Peng, Tianyu Gao, Xu Han, Yankai Lin, Peng Li, Zhiyuan Liu, Maosong Sun, and Jie Zhou. 2020. Learning from context or names? an empirical study on neural relation extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3661-3672." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 503, + 525, + 602 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 503, + 525, + 602 + ], + "spans": [ + { + "bbox": [ + 304, + 503, + 525, + 602 + ], + "type": "text", + "content": "Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463-2473, Hong Kong, China. Association for Computational Linguistics."
+ } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 610, + 525, + 687 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 610, + 525, + 687 + ], + "spans": [ + { + "bbox": [ + 304, + 610, + 525, + 687 + ], + "type": "text", + "content": "Chen Qian, Fuli Feng, Lijie Wen, Chunping Ma, and Pengjun Xie. 2021a. Counterfactual inference for text classification debiasing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5434-5445." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 304, + 695, + 525, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 695, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 304, + 695, + 525, + 772 + ], + "type": "text", + "content": "Kun Qian, Ahmad Beirami, Zhouhan Lin, Ankita De, Alborz Geramifard, Zhou Yu, and Chinnadhurai Sankar. 2021b. Annotation inconsistency and entity bias in MultiWOZ. In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 326-337, Singapore and Online. Association for Computational Linguistics." 
+ } + ] + } + ], + "index": 21 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "text", + "content": "15182" + } + ] + } + ], + "index": 23 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 291, + 772 + ], + "type": "list", + "angle": 0, + "index": 10, + "blocks": [ + { + "bbox": [ + 69, + 72, + 291, + 139 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 72, + 291, + 139 + ], + "spans": [ + { + "bbox": [ + 69, + 72, + 291, + 139 + ], + "type": "text", + "content": "Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5418-5426, Online. Association for Computational Linguistics." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 148, + 291, + 204 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 148, + 291, + 204 + ], + "spans": [ + { + "bbox": [ + 69, + 148, + 291, + 204 + ], + "type": "text", + "content": "George Stoica, Emmanouil Antonios Platanios, and Barnabás Póczos. 2021. Re-tacred: Addressing shortcomings of the tacred dataset. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 13843-13850." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 214, + 290, + 269 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 214, + 290, + 269 + ], + "spans": [ + { + "bbox": [ + 69, + 214, + 290, + 269 + ], + "type": "text", + "content": "Chris Sweeney and Maryam Najafian. 2019. A transparent framework for evaluating unintended demographic bias in word embeddings. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1662-1667." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 280, + 290, + 335 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 280, + 290, + 335 + ], + "spans": [ + { + "bbox": [ + 69, + 280, + 290, + 335 + ], + "type": "text", + "content": "Bing Tian, Yixin Cao, Yong Zhang, and Chunxiao Xing. 2022. Debiasing nlu models via causal intervention and counterfactual reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 11376-11384." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 345, + 290, + 433 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 345, + 290, + 433 + ], + "spans": [ + { + "bbox": [ + 69, + 345, + 290, + 433 + ], + "type": "text", + "content": "Can Udomcharoenchaikit, Wuttikorn Ponwitayarat, Patomporn Payoungkhamdee, Kanruethai Masuk, Weerayut Buaphet, Ekapol Chuangsuwanich, and Sarana Nutanong. 2022. Mitigating spurious correlation in natural language understanding with counterfactual inference. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11308-11321." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 444, + 290, + 487 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 444, + 290, + 487 + ], + "spans": [ + { + "bbox": [ + 69, + 444, + 290, + 487 + ], + "type": "text", + "content": "Pranav Narayanan Venkit and Shomir Wilson. 2021. Identification of bias against people with disabilities in sentiment analysis and toxicity detection models. arXiv preprint arXiv:2111.13259." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 498, + 290, + 543 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 498, + 290, + 543 + ], + "spans": [ + { + "bbox": [ + 69, + 498, + 290, + 543 + ], + "type": "text", + "content": "Thomas Verma and Judea Pearl. 1990. Equivalence and synthesis of causal models. In Proceedings of the Sixth Annual Conference on Uncertainty in Artificial Intelligence, pages 255-270." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 553, + 290, + 618 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 553, + 290, + 618 + ], + "spans": [ + { + "bbox": [ + 69, + 553, + 290, + 618 + ], + "type": "text", + "content": "Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Simas Sakenis, Jason Huang, Yaron Singer, and Stuart Shieber. 2020. Causal mediation analysis for interpreting neural nlp: The case of gender bias. arXiv preprint arXiv:2004.12265." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 629, + 290, + 695 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 629, + 290, + 695 + ], + "spans": [ + { + "bbox": [ + 69, + 629, + 290, + 695 + ], + "type": "text", + "content": "Yiwei Wang, Muhao Chen, Wenxuan Zhou, Yujun Cai, Yuxuan Liang, Dayiheng Liu, Baosong Yang, Juncheng Liu, and Bryan Hooi. 2022. Should we rely on entity mentions for relation extraction? debiasing relation extraction with counterfactual analysis. arXiv preprint arXiv:2205.03784." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 706, + 290, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 706, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 706, + 290, + 772 + ], + "type": "text", + "content": "Yiwei Wang, Bryan Hooi, Fei Wang, Yujun Cai, Yuxuan Liang, Wenxuan Zhou, Jing Tang, Manjuan Duan, and Muhao Chen. 2023. How fragile is relation extraction under entity replacements? 
In Proceedings of the 27th SIGNLL Conference on Computational Natural Language Learning (CoNLL)." + } + ] + } + ], + "index": 9 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 525, + 766 + ], + "type": "list", + "angle": 0, + "index": 21, + "blocks": [ + { + "bbox": [ + 304, + 72, + 525, + 127 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 72, + 525, + 127 + ], + "spans": [ + { + "bbox": [ + 304, + 72, + 525, + 127 + ], + "type": "text", + "content": "Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2022. Finetuned language models are zero-shot learners. In International Conference on Learning Representations." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 304, + 134, + 525, + 201 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 134, + 525, + 201 + ], + "spans": [ + { + "bbox": [ + 304, + 134, + 525, + 201 + ], + "type": "text", + "content": "Galen Weld, Peter West, Maria Glenski, David Arbour, Ryan A Rossi, and Tim Althoff. 2022. Adjusting for confounders with text: Challenges and an empirical evaluation framework for causal inference. In Proceedings of the International AAAI Conference on Web and Social Media, volume 16, pages 1109-1120." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 208, + 525, + 275 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 208, + 525, + 275 + ], + "spans": [ + { + "bbox": [ + 304, + 208, + 525, + 275 + ], + "type": "text", + "content": "Nan Xu, Fei Wang, Bangzheng Li, Mingtao Dong, and Muhao Chen. 2022. Does your model classify entities reasonably? diagnosing and mitigating spurious correlations in entity typing. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 280, + 525, + 359 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 280, + 525, + 359 + ], + "spans": [ + { + "bbox": [ + 304, + 280, + 525, + 359 + ], + "type": "text", + "content": "Jun Yan, Yang Xiao, Sagnik Mukherjee, Bill Yuchen Lin, Robin Jia, and Xiang Ren. 2022. On the robustness of reading comprehension models to entity renaming. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 508-520." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 365, + 525, + 420 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 365, + 525, + 420 + ], + "spans": [ + { + "bbox": [ + 304, + 365, + 525, + 420 + ], + "type": "text", + "content": "Dong Zhang, Hanwang Zhang, Jinhui Tang, Xian-Sheng Hua, and Qianru Sun. 2020a. Causal intervention for weakly-supervised semantic segmentation. Advances in Neural Information Processing Systems, 33:655-666." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 428, + 525, + 483 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 428, + 525, + 483 + ], + "spans": [ + { + "bbox": [ + 304, + 428, + 525, + 483 + ], + "type": "text", + "content": "Guanhua Zhang, Bing Bai, Junqi Zhang, Kun Bai, Conghui Zhu, and Tiejun Zhao. 2020b. Demographics should not be the reason of toxicity: Mitigating discrimination in text classifications with instance weighting. arXiv preprint arXiv:2004.14088." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 490, + 525, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 490, + 525, + 555 + ], + "spans": [ + { + "bbox": [ + 304, + 490, + 525, + 555 + ], + "type": "text", + "content": "Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D Manning. 2017. 
Position-aware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 35-45." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 564, + 525, + 630 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 564, + 525, + 630 + ], + "spans": [ + { + "bbox": [ + 304, + 564, + 525, + 630 + ], + "type": "text", + "content": "Wenxuan Zhou and Muhao Chen. 2022. An improved baseline for sentence-level relation extraction. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing, pages 161-168." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 637, + 525, + 692 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 637, + 525, + 692 + ], + "spans": [ + { + "bbox": [ + 304, + 637, + 525, + 692 + ], + "type": "text", + "content": "Wenxuan Zhou, Sheng Zhang, Hoifung Poon, and Muhao Chen. 2023. Context-faithful prompting for large language models. In Findings of the 2023 Conference on Empirical Methods in Natural Language Processing." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 699, + 525, + 766 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 699, + 525, + 766 + ], + "spans": [ + { + "bbox": [ + 304, + 699, + 525, + 766 + ], + "type": "text", + "content": "Yongchun Zhu, Qiang Sheng, Juan Cao, Shuokai Li, Danding Wang, and Fuzhen Zhuang. 2022. Generalizing to the future: Mitigating entity bias in fake news detection. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2120-2125."
+ } + ] + } + ], + "index": 20 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "15183" + } + ] + } + ], + "index": 22 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 68, + 70, + 212, + 84 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 70, + 212, + 84 + ], + "spans": [ + { + "bbox": [ + 68, + 70, + 212, + 84 + ], + "type": "text", + "content": "A Implementation Details" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 68, + 95, + 211, + 109 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 95, + 211, + 109 + ], + "spans": [ + { + "bbox": [ + 68, + 95, + 211, + 109 + ], + "type": "text", + "content": "A.1 White-Box Experiments" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 116, + 290, + 249 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 116, + 290, + 249 + ], + "spans": [ + { + "bbox": [ + 67, + 116, + 290, + 249 + ], + "type": "text", + "content": "For RE, we use RoBERTa-Large as our backbone model, which has 354 million parameters. Our implementation is based on the codebase by Zhou and Chen (2022) with their default hyper-parameters. More specifically, we employ a learning rate of 3e-5, a batch size of 32, and conduct training for a total of 5 epochs. Other method-specific hyperparameters are selected on the development set of TACRED. Finetuning typically takes 1.5 hours on an NVIDIA RTX A5000 GPU." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 253, + 291, + 400 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 253, + 291, + 400 + ], + "spans": [ + { + "bbox": [ + 67, + 253, + 291, + 400 + ], + "type": "text", + "content": "For MRC, we use SpanBERT-base-cased as our backbone model, which has 110 million parameters. Our implementation is based on the codebase by Yan et al. (2022) with their default hyperparameters. More specifically, we employ a learning rate of 2e-5, a batch size of 16, and conduct training for a total of 4 epochs. Other method-specific hyper-parameters are selected on the hold-out development set of TriviaQA. Finetuning typically takes 3 hours on an NVIDIA RTX A5000 GPU." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 68, + 416, + 209, + 428 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 416, + 209, + 428 + ], + "spans": [ + { + "bbox": [ + 68, + 416, + 209, + 428 + ], + "type": "text", + "content": "A.2 Black-Box Experiments" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 436, + 289, + 462 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 436, + 289, + 462 + ], + "spans": [ + { + "bbox": [ + 67, + 436, + 289, + 462 + ], + "type": "text", + "content": "Our implementation is based on the codebase by Zhou et al. (2023)." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 79, + 465, + 201, + 476 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 465, + 201, + 476 + ], + "spans": [ + { + "bbox": [ + 79, + 465, + 201, + 476 + ], + "type": "text", + "content": "The instruction for MRC is" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 78, + 493, + 279, + 521 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 78, + 493, + 279, + 521 + ], + "spans": [ + { + "bbox": [ + 78, + 493, + 279, + 521 + ], + "type": "text", + "content": "Instruction: read the given information and answer the corresponding question." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 79, + 537, + 271, + 550 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 537, + 271, + 550 + ], + "spans": [ + { + "bbox": [ + 79, + 537, + 271, + 550 + ], + "type": "text", + "content": "The prompt without instruction for MRC is" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 79, + 565, + 280, + 618 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 565, + 280, + 618 + ], + "spans": [ + { + "bbox": [ + 79, + 565, + 280, + 618 + ], + "type": "text", + "content": "Assume that {ENTITY0} can be any of {entity0Candidates}. [Assume that {ENTITY1} can be any of {entity1Candidates} ...] {context}" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 79, + 620, + 279, + 658 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 620, + 279, + 658 + ], + "spans": [ + { + "bbox": [ + 79, + 620, + 279, + 658 + ], + "type": "text", + "content": "Q:{question} based on the given text? Extract the answer from the given text. Do not add other words." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 79, + 661, + 92, + 672 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 661, + 92, + 672 + ], + "spans": [ + { + "bbox": [ + 79, + 661, + 92, + 672 + ], + "type": "text", + "content": "A:" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 79, + 687, + 190, + 699 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 687, + 190, + 699 + ], + "spans": [ + { + "bbox": [ + 79, + 687, + 190, + 699 + ], + "type": "text", + "content": "The instruction for RE is" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 79, + 717, + 278, + 743 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 717, + 278, + 743 + ], + "spans": [ + { + "bbox": [ + 79, + 717, + 278, + 743 + ], + "type": "text", + "content": "Identify the relationship between two entities from a list of options." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 79, + 760, + 261, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 760, + 261, + 772 + ], + "spans": [ + { + "bbox": [ + 79, + 760, + 261, + 772 + ], + "type": "text", + "content": "The prompt without instruction for RE is" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 75, + 514, + 129 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 75, + 514, + 129 + ], + "spans": [ + { + "bbox": [ + 313, + 75, + 514, + 129 + ], + "type": "text", + "content": "Assume that subject_entity is one of {subjCandidates}, while object-entity is one of {objCandidates} in the following text. {context}" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 314, + 130, + 514, + 169 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 130, + 514, + 169 + ], + "spans": [ + { + "bbox": [ + 314, + 130, + 514, + 169 + ], + "type": "text", + "content": "Q: Which option indicates the relationship between subject_entity and object-entity in the given text?" 
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 315, + 170, + 396, + 184 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 170, + 396, + 184 + ], + "spans": [ + { + "bbox": [ + 315, + 170, + 396, + 184 + ], + "type": "text", + "content": "Options:{options}" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 315, + 185, + 327, + 195 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 185, + 327, + 195 + ], + "spans": [ + { + "bbox": [ + 315, + 185, + 327, + 195 + ], + "type": "text", + "content": "A:" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 303, + 208, + 524, + 233 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 208, + 524, + 233 + ], + "spans": [ + { + "bbox": [ + 303, + 208, + 524, + 233 + ], + "type": "text", + "content": "The prompt template for detecting entities in MRC is" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 313, + 248, + 515, + 301 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 248, + 515, + 301 + ], + "spans": [ + { + "bbox": [ + 313, + 248, + 515, + 301 + ], + "type": "text", + "content": "List named entities in the following sentence. Separate the entities with " + }, + { + "bbox": [ + 313, + 248, + 515, + 301 + ], + "type": "inline_equation", + "content": "\\# \\# \\# \\#" + }, + { + "bbox": [ + 313, + 248, + 515, + 301 + ], + "type": "text", + "content": " , if you find multiple entities. Do not add additional words before or after your answers.." 
+ } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 314, + 303, + 362, + 315 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 303, + 362, + 315 + ], + "spans": [ + { + "bbox": [ + 314, + 303, + 362, + 315 + ], + "type": "text", + "content": "{sentence}" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 302, + 328, + 524, + 354 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 328, + 524, + 354 + ], + "spans": [ + { + "bbox": [ + 302, + 328, + 524, + 354 + ], + "type": "text", + "content": "The prompt template for replacing entities with placeholders in MRC is" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 313, + 368, + 515, + 408 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 368, + 515, + 408 + ], + "spans": [ + { + "bbox": [ + 313, + 368, + 515, + 408 + ], + "type": "text", + "content": "Replace the entity {entity_list} in the following paragraph. \n{paragraph}" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 303, + 422, + 524, + 447 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 422, + 524, + 447 + ], + "spans": [ + { + "bbox": [ + 303, + 422, + 524, + 447 + ], + "type": "text", + "content": "The prompt template for finding similar entities is" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 313, + 460, + 514, + 528 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 460, + 514, + 528 + ], + "spans": [ + { + "bbox": [ + 313, + 460, + 514, + 528 + ], + "type": "text", + "content": "Name two [{\\entity_type}] entities similar to {\"" + }, + { + "bbox": [ + 313, + 460, + 514, + 528 + ], + "type": "inline_equation", + "content": "\\{entity\\}''" + }, + { + "bbox": [ + 313, + 460, + 514, + 528 + ], + "type": "text", + "content": ". Separate the entities with \\#\\#\\#, and do not add additional words before or after your answers. Provide random answers if you are not sure." 
+ } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 302, + 539, + 525, + 580 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 539, + 525, + 580 + ], + "spans": [ + { + "bbox": [ + 302, + 539, + 525, + 580 + ], + "type": "text", + "content": "In all the above prompts, variables are surrounded with curly brackets and optional variables are surrounded with square brackets." + } + ] + } + ], + "index": 26 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "15184" + } + ] + } + ], + "index": 27 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 11 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2023/A Closer Look into Using Large Language Models for Automatic Evaluation/dbe97091-acd7-407e-a8d4-8552f4605855_content_list.json b/2023/A Closer Look into Using Large Language Models for Automatic Evaluation/dbe97091-acd7-407e-a8d4-8552f4605855_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..23e68eebacafbd57fdd4b31b85ca1a9a57eb8569 --- /dev/null +++ b/2023/A Closer Look into Using Large Language Models for Automatic Evaluation/dbe97091-acd7-407e-a8d4-8552f4605855_content_list.json @@ -0,0 +1,3116 @@ +[ + { + "type": "text", + "text": "A Closer Look into Automatic Evaluation Using Large Language Models", + "text_level": 1, + "bbox": [ + 121, + 89, + 875, + 111 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Cheng-Han Chiang", + "bbox": [ + 245, + 137, + 418, + 154 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "National Taiwan University,", + "bbox": [ + 218, + 154, + 447, + 170 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Taiwan", + "bbox": [ + 302, + 171, + 364, + 185 + ], + "page_idx": 0 + }, + { 
+ "type": "text", + "text": "dcml0714@gmail.com", + "bbox": [ + 242, + 187, + 426, + 203 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Hung-yi Lee", + "bbox": [ + 610, + 137, + 721, + 153 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "National Taiwan University,", + "bbox": [ + 552, + 154, + 779, + 170 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Taiwan", + "bbox": [ + 635, + 171, + 695, + 185 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "hungyilee@ntu.edu.tw", + "bbox": [ + 563, + 187, + 768, + 203 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 260, + 252, + 339, + 268 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Using large language models (LLMs) to evaluate text quality has recently gained popularity. Some prior works explore the idea of using LLMs for evaluation, while they differ in some details of the evaluation process. In this paper, we analyze LLM evaluation (Chiang and Lee, 2023)1 and G-Eval (Liu et al., 2023), and we discuss how those details in the evaluation process change how well the ratings given by LLMs correlate with human ratings. We find that the auto Chain-of-Thought (CoT) used in G-Eval does not always make G-Eval more aligned with human ratings. We also show that forcing the LLM to output only a numeric rating, as in G-Eval, is suboptimal. 
Last, we reveal that asking the LLM to explain its own ratings consistently improves the correlation between the ChatGPT and human ratings and pushes state-of-the-art (SoTA) correlations on two meta-evaluation datasets.", + "bbox": [ + 141, + 281, + 460, + 565 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 114, + 581, + 258, + 596 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Large language models (LLMs) trained with task instructions and human feedback can follow natural language instructions to complete a task (Askell et al., 2021; Sanh et al., 2022; Wei et al., 2022a; Ouyang et al., 2022). Recently, the instruction-following ability of LLMs makes them promising candidates for automatic evaluation (Chiang and Lee, 2023; Liu et al., 2023; Wang et al., 2023; Huang et al., 2023). By simply instructing the LLMs on how to rate and giving the LLMs the sample to be rated, the LLM can follow the instructions and provide a rating of the sample.", + "bbox": [ + 112, + 607, + 489, + 801 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Chiang and Lee (2023) propose LLM evaluation and Liu et al. (2023) propose $G$ -Eval; both of which use LLMs to evaluate samples by giving the LLM instructions, and they both show that some LLMs can yield evaluation results that are aligned to the", + "bbox": [ + 112, + 802, + 487, + 883 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "evaluation results of humans. Still, LLM evaluation and G-Eval differ in some specific design choices in the evaluation procedure. Since Chiang and Lee (2023) and Liu et al. (2023) use distinct tasks, it is hard to know how the differences between LLM evaluation and G-Eval affect the evaluation results. 
This makes practitioners in the future hard to determine how to conduct an automatic evaluation using LLMs.", + "bbox": [ + 507, + 252, + 884, + 394 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Given that LLM evaluation and G-Eval have already received significant attention shortly after publication, these methods will likely revolutionize the evaluation in NLP. Therefore, conducting a detailed analysis of these approaches is essential and timely. This paper aims to identify the crucial components in LLM evaluation and G-Eval that contribute to stronger correlations with human ratings. Based on our analysis, we provide guidelines on how to use LLMs for automatic evaluations. We have the following findings:", + "bbox": [ + 507, + 397, + 885, + 574 + ], + "page_idx": 0 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Auto-CoT (proposed by G-Eval) does not always improve the correlation between LLM and human ratings.", + "- Making the LLMs output only a single numeric rating is suboptimal.", + "- Asking the LLMs to rationalize their own ratings significantly improves the correlation between the LLMs' ratings and human ratings.", + "- On two datasets, we improve the best correlation that ChatGPT's rating can achieve, and some correlations even exceed prior SoTA correlations obtained using the ratings of GPT-4 in Liu et al. (2023)." 
+ ], + "bbox": [ + 531, + 582, + 884, + 818 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "2 Experiment Setup", + "text_level": 1, + "bbox": [ + 507, + 829, + 700, + 847 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Our paper studies what components in LLM evaluation and G-Eval make the ratings generated by LLM correlate with human ratings better, and we aim to improve the correlation.", + "bbox": [ + 507, + 854, + 884, + 917 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "1In this paper, the term LLM evaluation is used to refer to the specific method proposed by Chiang and Lee (2023).", + "bbox": [ + 112, + 892, + 487, + 919 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "8928", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 8928-8942", + "bbox": [ + 216, + 945, + 779, + 958 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "December 6-10, 2023 ©2023 Association for Computational Linguistics", + "bbox": [ + 277, + 958, + 719, + 972 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "2.1 LLM as an Automatic Evaluation Metric", + "text_level": 1, + "bbox": [ + 112, + 84, + 480, + 98 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Both LLM evaluation (Chiang and Lee, 2023) and G-Eval (Liu et al., 2023) propose to ask LLMs to rate a sample regarding some attributes of the sample (e.g., fluency, grammaticality) using a $k$ -point Likert scale. They give the LLMs (1) descriptions of the rating task, (2) the definition and rating criteria of the attribute to be rated, (3) the sample to be rated, and (4) a sentence that prompts the LLM to give the rating2. The LLM outputs a sequence containing the rating. Unless specified, we follow prior works to sample $N = 20$ sequences from the LLM and average those ratings as the final rating. 
While the two methods share the core concept, they differ in two details.", + "bbox": [ + 112, + 105, + 487, + 329 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Difference 1: Auto Chain-of-Thought The task descriptions and rating criteria in LLM evaluation and G-Eval are all human-written. However, Liu et al. (2023) argue that some evaluated attributes require more than simple definition and evaluation criteria, so they use LLMs to determine the evaluation steps. Specifically, they concatenate the task description, definition, and criteria of the attributes and append a line \"Evaluation steps:\" to prompt the LLM. The LLM then generates an ordered list containing the step-by-step evaluation steps. They dub this process auto chain-of-thought $(CoT)$ . G-Eval uses human-written task instructions and auto-CoT-generated evaluation steps to prompt the LLM to rate the sample.", + "bbox": [ + 115, + 331, + 489, + 571 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Difference 2: Prompts for Output At the end of the input to LLMs, G-Eval uses the prompt {\"{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{ score~only}:\" to restrict the LLM to output only the numeric rating; the placeholder will be replaced by the evaluated attributes. In contrast, LLM evaluation uses the following question to ask the LLM to assign the rating: \"How {\"{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{ is the sample? (on a scale of 1-k, with 1 being the lowest)\". The LLM's output form is not restricted.", + "bbox": [ + 112, + 571, + 489, + 733 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.2 Meta-Evaluating an Evaluation Metric", + "text_level": 1, + "bbox": [ + 112, + 744, + 465, + 759 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Given a sample, an evaluation metric assigns it a rating. To evaluate an evaluation metric, we need a dataset containing human ratings for samples in the dataset. 
We calculate the correlation coefficient between the ratings obtained by the evaluation metric and the human ratings. A higher correlation", + "bbox": [ + 112, + 764, + 487, + 860 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "indicates the evaluation metric better aligns with human ratings. We adopt Pearson $r$ and Kendall's $\\tau$ as they are widely used in meta-evaluations (Graham et al., 2015; Bojar et al., 2017; Zhang* et al., 2020). In our paper, all the correlation refers to the correlation coefficient between the ratings of LLM and human ratings. Details on the calculation of correlation coefficients are in Appendix C.", + "bbox": [ + 507, + 84, + 884, + 212 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "We use SummEval (Fabbri et al., 2021) and Topical-Chat (Gopalakrishnan et al., 2019; Mehri and Eskenazi, 2020) as the meta-evaluation datasets, following Liu et al. (2023). SummEval is a meta-evaluation dataset for summarization derived from the CNN/DailyMail dataset (Hermann et al., 2015). Each summary in SummEval is rated by humans based on the coherence, consistency, fluency of the summary, and relevance between the summary and the source document. Topical-Chat is a dataset that evaluates the quality of a response given the dialogue history and a piece of knowledge relating to the dialogue. We follow Zhong et al. (2022) to evaluate the naturalness, coherence, engagingness, and groundedness (whether the response is grounded on the provided knowledge) of the response. The dataset details are in Appendix E.", + "bbox": [ + 505, + 212, + 884, + 487 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.3 Large Language Models", + "text_level": 1, + "bbox": [ + 507, + 499, + 746, + 514 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "An LLM used as an evaluation metric should be affordable and accessible to whoever wants to use it. 
Based on this principle, we use ChatGPT (gpt3.5-turbo-0613) (OpenAI, 2022) for evaluation since it has lower cost and improved performance compared with other GPT-3.5 models. ChatGPT is also used in LLM evaluation and G-Eval. While Liu et al. (2023) further use GPT-4 (OpenAI, 2023) in their experiments, we cannot use GPT-4 in our experiments since most people, including us, have limited or no access to GPT-4, making it utterly unsuitable as an evaluation metric.", + "bbox": [ + 507, + 520, + 882, + 712 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In our preliminary experiments, we also try to use the best open LLM (at the time of writing this manuscript) on Open LLM leaderboard, the falcon-40b-instruct model (Almazrouei et al., 2023), but we find it cannot follow the instructions and rate the samples very well. Hence, we exclude open LLMs in our paper.", + "bbox": [ + 507, + 714, + 882, + 826 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "3 Better Usage of LLM for Evaluation", + "text_level": 1, + "bbox": [ + 507, + 839, + 855, + 854 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "3.1 Is Auto CoT Always Useful?", + "text_level": 1, + "bbox": [ + 507, + 865, + 778, + 881 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Liu et al. (2023) shows that adding the evaluation steps generated by auto CoT improves the correla-", + "bbox": [ + 507, + 887, + 882, + 919 + ], + "page_idx": 1 + }, + { + "type": "page_footnote", + "text": "2In our paper, we use different highlight colors to represent different parts of the prompt, as shown in the above text. 
Additionally, we use cyan to represent the parts generated by auto Chain-of-Thought", + "bbox": [ + 112, + 868, + 487, + 917 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "8929", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 1 + }, + { + "type": "table", + "img_path": "images/da30e5019e1285714b9a99c6e894179409063903f8f633ca89509a46eff4ea6e.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Sec.AblationsCoherenceConsistencyFluencyRelevance
CoTOutputrτrτrτrτ
GPT-4†?‡Score only0.5810.4630.5750.4190.60.4570.5990.409
3.1Score only0.450.3590.370.2860.3190.2030.4030.327
X0.3440.2480.3280.1850.3610.1770.3530.248
3.2XScore only0.3440.2480.3280.1850.3610.1770.3530.248
XFree Text0.460.3420.4760.3340.4770.2730.3240.228
XRate-explain0.5570.440.4730.3370.4510.3060.5090.348
XAnalyze-rate0.6350.4760.5370.340.4790.3020.4440.305
", + "bbox": [ + 129, + 80, + 870, + 234 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Table 1: The Pearson's $r$ and Kendall's $\\tau$ correlation coefficient between LLMs' ratings and human ratings for SummEval. All the results in this table, except the first row, are from ChatGPT. We consider auto CoT + score only using ChatGPT proposed in G-Eval as the baseline of this paper. We boldface the Pearson's $r$ statistically significantly higher than the baseline (except GPT-4). †: results from Liu et al. (2023). Some numbers are different because we re-calculate the correlations based on the GPT-4 responses Liu et al. (2023) released. ‡: The results of GPT-4 cannot serve as a reasonable comparison since we find something odd in the prompts Liu et al. (2023) use, which we elaborate in Appendix A.", + "bbox": [ + 112, + 244, + 884, + 347 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "tion on SummEval when using GPT-4 for evaluation. By scrutinizing their results, we find that the correlations when using auto CoT and not using it often differ by less than 0.02. This raises two questions: (1) Is this difference statistically significant? (2) Does auto CoT yield higher correlations for different LLMs and datasets? To answer these questions, we use ChatGPT to rate the samples in SummEval and Topical-Chat using two sets of prompts, one with the evaluation steps generated using auto CoT and one without those evaluation steps. In this experiment, we follow G-Eval and restrict ChatGPT to output only a numeric score. Following Graham and Baldwin (2014), we use William's test for significance to see if the Pearson's $r$ of using and not using auto CoT is statistically significantly different. We try to follow the prompts used in G-Eval when possible; still, we have to construct some prompts since Liu et al. (2023) only release part of the prompts and some of which are problematic. 
We list all the prompts and how they are obtained in Appendix F.", + "bbox": [ + 115, + 370, + 490, + 722 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The experiment results for SummEval are shown in the block in blue in Table 1. We also list the best results of G-Eval using GPT-4 from Liu et al. (2023) in the first row of Table 1 only for reference. Comparing our results with GPT-4 is unfair since we use ChatGPT, which is weaker than GPT-4. A more reasonable baseline for our paper is the \"auto CoT + score only\" using ChatGPT on the second row, which is the method proposed by G-Eval and shows the highest correlation that ChatGPT can achieve in Liu et al. (2023). The numbers here differ from results in Liu et al. (2023) because", + "bbox": [ + 112, + 726, + 489, + 917 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "we carefully reproduce their results ourselves.", + "bbox": [ + 507, + 370, + 850, + 385 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Back to Table 1, we can see that auto CoT leads to higher correlations for coherence, consistency, and relevance. By William's test, these higher correlations reach statistical significance with $p$ -values less than 0.05. However, using auto CoT results in a lower Pearson's $r$ for fluency, and this inferiority in Pearson's $r$ is also statistically significant.", + "bbox": [ + 507, + 385, + 884, + 498 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The results for Topical-Chat are illustrated in Table 2. For Topical-Chat, the Pearson's $r$ of using and not using auto CoT are very close for all four attributes except groundedness, with differences less than 0.025, and these differences are not statistically significant. For groundedness, auto CoT even drastically decreases the correlation. In summary, using auto CoT does not yield consistent and meaningful improvements compared with not using CoT. 
This should not be surprising since the evaluation steps generated with auto CoT often merely paraphrases the evaluation criterion and instructions given to the LLM.", + "bbox": [ + 507, + 499, + 885, + 708 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.2 Prompt for Outputs", + "text_level": 1, + "bbox": [ + 507, + 720, + 714, + 736 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In this section, we explore if the difference in how ChatGPT is prompted to output makes it's ratings better aligned with human ratings. We use two sets of prompts that share the same task descriptions and evaluation criteria but differ in how they prompt the LLM to generate the output. One uses \"score only\", as in G-Eval. The other replaces the \"score only\" with \"How {{placeholder}}\" is the sample? (on a scale of 1-k, with 1 being the lowest), as in LLM evaluation. We call the latter prompts free text since they do not", + "bbox": [ + 507, + 741, + 884, + 919 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "8930", + "bbox": [ + 480, + 928, + 521, + 940 + ], + "page_idx": 2 + }, + { + "type": "table", + "img_path": "images/3ebb9e36a524e88cb47e148716c33d6eef8920f190ad48fdf8d8efd3429a7754.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Sec.AblationsNaturalnessCoherenceEngagingnessGroundedness
CoTOutputrτrτrτrτ
3.1Score only0.3930.3580.4680.3910.5490.5130.3110.566
X0.4080.3310.4430.4040.5570.5350.3580.582
3.2XScore only0.4080.3310.4430.4040.5570.5350.3580.582
XFree Text0.4640.4760.5240.4260.6110.5570.5630.666
XRate-explain0.5240.470.4770.4160.5670.5240.580.693
XAnalyze-rate0.5730.470.4860.4160.6280.5240.7250.693
", + "bbox": [ + 134, + 80, + 862, + 218 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Table 2: The Pearson's $r$ and Kendall's $\\tau$ correlation coefficient between LLMs' ratings and human ratings for Topical-Chat. All the results in this table, except the first row, are from ChatGPT. We **boldface** the Pearson's $r$ statistically significantly higher than auto CoT + score only. We **underline** the Pearson's $r$ comparable auto CoT + score only.", + "bbox": [ + 112, + 228, + 884, + 285 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "restrict the output form.", + "bbox": [ + 112, + 311, + 294, + 326 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The results for SummEval are shown in the yellow blocks in Table 1, and the results for TopicalChat are shown in Table 2. We find that allowing ChatGPT to respond to the question freely yields Pearson's $r$ and Kendall's $\\tau$ much higher than restricting the model to output a single numeric score for almost all attributes of both datasets. The higher Pearson's $r$ of free text compared with score only is statistically significant. The only exception is the relevance of SummEval, where free text yields slightly lower correlations.", + "bbox": [ + 112, + 331, + 489, + 507 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Initially, we thought ChatGPT aligns better with human ratings in free text because it can generate natural language explanations to justify their rating, making the ratings more correlated with human ratings. However, we observe that the responses of ChatGPT when prompted with free text mostly contain a single numeric rating, which is the same behavior when it is instructed by score only. 
This means that what the model is allowed to generate is more important than what it really generates.", + "bbox": [ + 112, + 512, + 489, + 674 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The above observations make us curious if the correlations can be higher if ChatGPT is instructed to justify its ratings. Inspired by chain-of-thought in Wei et al. (2022b) and Kojima et al. (2022) (not the auto CoT in G-Eval), we ask ChatGPT to provide their reasoning and rationales on the ratings. Instead of asking ChatGPT to output only a score, we construct two types of prompts that ask ChatGPT to rationalize its decision. The first type of prompt, called analyze-rate, asks ChatGPT to analyze the samples regarding the evaluated criteria first and give the rating. The second type of prompt, called rate-explain, asks ChatGPT to provide the numeric ratings first and explain why it gives such a rating. analyze-rate is more like the zero-shot", + "bbox": [ + 112, + 678, + 489, + 920 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "chain-of-thought (Kojima et al., 2022). Refer to Appendix F.1.1 for the exact prompts we use.", + "bbox": [ + 507, + 311, + 882, + 343 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The results of asking ChatGPT to explain/analyze how they rate the sample are shown in the last two rows in Table 1 and Appendix Table 2. We find that for all attributes of both datasets, rate-explain and anlyze-rate both lead to correlations stronger than or at least comparable to the correlation of asking ChatGPT to output only a numeric rating (score only). By asking ChatGPT to explain/analyze, we improve the best correlations that can be achieved by ChatGPT in Liu et al. (2023) (the Auto-CoT + score only). Moreover, when asked to explain/analyze when rating, ChatGPT's correlation can be better than or comparable to the state-of-the-art correlation coefficients obtained from GPT-4 in Liu et al. 
(2023) for coherence of SummEval and three attributes of Topical-Chat. We hypothesize that some attributes (e.g., coherence for SummEval) are harder for ChatGPT to rate, so the correlations for these attributes show a larger improvement when ChatGPT explains how it rates the sample.", + "bbox": [ + 507, + 349, + 884, + 671 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In rate-explain, the output of ChatGPT contains a numeric rating followed by some explanations. As an auto-regressive language model, ChatGPT cannot depend on the explanation when generating the rating due to causal attention. If we stop the generation after ChatGPT generates the ratings, the output of rate-explain will only contain the ratings, just like the output forms in score only. Although the ratings in rate-explain do not depend on ChatGPT's rationales for the ratings, the ratings still correlate better with human ratings, compared with the ratings in score only. We think this is because when ChatGPT knows it needs to explain the ratings, it tends to generate ratings that are easier for it to explain, and a rating that is more", + "bbox": [ + 507, + 678, + 885, + 920 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "8931", + "bbox": [ + 480, + 928, + 519, + 941 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "aligned to humans' rating is easier for ChatGPT to explain.", + "bbox": [ + 112, + 84, + 487, + 116 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.3 Empirical Guidelines", + "text_level": 1, + "bbox": [ + 112, + 128, + 329, + 143 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Based on the analysis and results in this section, we provide the following guideline: Always ask ChatGPT to explain/analyze when rating. We do not see rate-explain to be significantly better (or worse) than analyze-rate, so it is hard to determine which one to use. 
A valid method is sampling some ratings using rate-explain and sampling some ratings using analyze-rate and averaging the ratings from the two prompts as the final rating. Using auto CoT is optional since it does not always lead to higher correlations with human ratings. We also find that using auto CoT does not always improve the correlations when ChatGPT is asked to explain; this result is shown in Appendix Table 3.", + "bbox": [ + 112, + 149, + 489, + 374 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.4 Robustness of the Guidelines", + "text_level": 1, + "bbox": [ + 112, + 386, + 386, + 400 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "LLMs are notorious for their performance fluctuation due to the input prompts, and the sequence generated by LLMs can be different when changing the hyperparameters used in decoding. To verify the validity of our empirical guidelines, we conduct the following two sets of experiments: (1) we vary the temperature used in sampling the output from ChatGPT, and (2) we vary the prompt given to ChatGPT.", + "bbox": [ + 112, + 407, + 489, + 551 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.4.1 Varying the Temperature", + "text_level": 1, + "bbox": [ + 112, + 562, + 371, + 577 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We check if our guideline holds if we change the temperature $T$ during generation. We compare Pearson's $r$ when using the method proposed in G-Eval (Auto-CoT + score only) with rate-explain and analyze-rate under different temperatures used when generating the output from ChatGPT. We follow Chiang and Lee (2023) and use two temperatures: 0.7 and 0.3.", + "bbox": [ + 112, + 581, + 489, + 708 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The results are shown in Appendix Table 5 and summarized as follows: First, when fixing the sampling temperature, we find that rate-explain and analyze-rate always achieve a higher correlation compared with G-Eval. 
This supports our guideline that \"asking the LLM to explain/analyze outperforms the method proposed in G-Eval.\" Next, we observe that the correlation of G-Eval when $T = 0.3$ is much lower than that of $T = 1.0$ . This shows that G-Eval is not robust to the sampling temperature. In contrast, we find that the correlations obtained by rate-explain and analyze-rate do not significantly change for different sampling", + "bbox": [ + 112, + 709, + 489, + 919 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "temperatures for almost all cases. This shows that rate-explain and analyze-rate are more robust than G-Eval with respect to the sampling temperature.", + "bbox": [ + 507, + 84, + 882, + 133 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.4.2 Changing the Prompts", + "text_level": 1, + "bbox": [ + 507, + 145, + 747, + 161 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We check if our guideline holds if we change the prompt given to ChatGPT. In this experiment, we change the prompts to ChatGPT by adding some instructions before the descriptions of the rating task. We try two prompts: (1) the HHH prompt and (2) the human annotator prompt. The HHH prompt is designed by Bai et al. (2022) to align the output of LLMs to be more harmless, honest, and helpful. The human annotator prompt is inspired by Chiang and Lee (2023), who use a similar prompt to make the LLM behave as a human annotator. These two prompts are inserted before the prompt we originally used in our paper. We use these two prompts to inject a persona into the LLM. This is inspired by Zeng et al. (2023), which shows that the output of GPT-3 can be different when prompted with a different persona. 
The prompts are detailed in Appendix F.3.", + "bbox": [ + 507, + 166, + 884, + 455 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The results are shown in Table 6 and summarized as follows: rate-explain and analyze-rate consistently outperform G-Eval when using the human annotator prompt and the HHH prompt. This indicates that our guidelines are robust to different prompts. We also find that the correlations of G-Eval significantly drop when adding the human-annotator prompt or the HHH prompt. On the other hand, the correlations for rate-explain and analyze-rate do not significantly decrease when adding the human-annotator prompt and the HHH prompt. This shows that asking the LLM to explain is more robust to variations in the prompts.", + "bbox": [ + 507, + 457, + 884, + 667 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4 Conclusion", + "text_level": 1, + "bbox": [ + 507, + 682, + 640, + 697 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We study how to better use ChatGPT as an automatic evaluation tool by scrutinizing LLM evaluation and G-Eval. We provide concrete guidelines and show that by using those guidelines, the correlations of several evaluated attributes given by ChatGPT, a publicly usable model, can be higher than or comparable to those of the ratings given by GPT-4, a highly restricted and pricey model. We also show that the evaluation results based on our guidelines improve the best correlation that ChatGPT's ratings can achieve. 
We believe our results and guidelines help future researchers better use LLMs for evaluation.", + "bbox": [ + 507, + 709, + 884, + 917 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "8932", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Limitations", + "text_level": 1, + "bbox": [ + 114, + 84, + 220, + 98 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "There are three main limitations of this paper.", + "bbox": [ + 112, + 109, + 453, + 124 + ], + "page_idx": 5 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. We only use ChatGPT to conduct the experiments in this paper. We explain why we chose ChatGPT in Section 2.3. We believe that using ChatGPT is already enough since we show that the correlations obtained by using ChatGPT are already comparable to or better than the previous SoTA results obtained by GPT-4.", + "2. We only conduct analysis using two tasks, while we know that NLP has more diverse tasks. We do not guarantee that our observations can generalize to all the other datasets. We recommend the users verify the effectiveness of using LLM to evaluate the tasks of interest.", + "3. We cannot fairly compare our results with Liu et al. (2023), the previous SoTA results, due to multiple reasons. We explain those reasons in Appendix A." + ], + "bbox": [ + 127, + 134, + 490, + 445 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Ethics Statement", + "text_level": 1, + "bbox": [ + 114, + 456, + 265, + 470 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Our paper follows the ACL Code of Ethics. We do not see a particular harmful outcome of our paper. 
The code and datasets for reproducing our experiments can be found at https://github.com/d223302/A-Closer-Look-To-LLM-Evaluation/.", + "bbox": [ + 112, + 482, + 489, + 577 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Acknowledgements", + "text_level": 1, + "bbox": [ + 114, + 590, + 285, + 606 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We want to thank the reviewers for providing detailed feedback and actionable suggestions, which helped us strengthen our paper. We also want to thank the senior committee members for monitoring the reviewing process. Cheng-Han Chiang is supported by a Ph.D. scholarship program by Delta Electronics.", + "bbox": [ + 112, + 614, + 489, + 727 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 114, + 753, + 213, + 769 + ], + "page_idx": 5 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Merouane Debbah, Etienne Goffinet, Daniel Heslow, Julien Launay, Quentin Malartic, Badreddine Noune, Baptiste Pannier, and Guilherme Penedo. 2023. Falcon-40B: an open large language model with state-of-the-art performance.", + "Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. 2021. A
arXiv preprint arXiv:2112.00861.", + "Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, Ben Mann, and Jared Kaplan. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback.", + "Ondrej Bojar, Yvette Graham, and Amir Kamran. 2017. Results of the WMT17 metrics shared task. In Proceedings of the Second Conference on Machine Translation, pages 489-513, Copenhagen, Denmark. Association for Computational Linguistics.", + "Cheng-Han Chiang and Hung-yi Lee. 2023. Can large language models be an alternative to human evaluations? In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15607-15631, Toronto, Canada. Association for Computational Linguistics.", + "Alexander R Fabbri, Wojciech Kryscinski, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. Summeval: Re-evaluating summarization evaluation. Transactions of the Association for Computational Linguistics, 9:391-409.", + "Karthik Gopalakrishnan, Behnam Hedayatnia, Qinlang Chen, Anna Gottardi, Sanjeev Kwatra, Anushree Venkatesh, Raefer Gabriel, and Dilek Hakkani-Tur. 2019. Topical-chat: Towards knowledge-grounded open-domain conversations.", + "Yvette Graham and Timothy Baldwin. 2014. Testing for significance of increased correlation with human judgment. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 172-176, Doha, Qatar. Association for Computational Linguistics.", + "Yvette Graham, Timothy Baldwin, and Nitika Mathur. 2015. 
Accurate evaluation of segment-level machine translation metrics. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1183-1191, Denver, Colorado. Association for Computational Linguistics.", + "Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. Advances in neural information processing systems, 28.", + "Fan Huang, Haewoon Kwak, and Jisun An. 2023. Is chatgpt better than human annotators? potential and limitations of chatgpt in explaining implicit hate speech. arXiv preprint arXiv:2302.07736." + ], + "bbox": [ + 510, + 85, + 884, + 917 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "8933", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 5 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In Advances in Neural Information Processing Systems.", + "Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023. Gpteval: Nlg evaluation using gpt-4 with better human alignment. arXiv preprint arXiv:2303.16634.", + "Matouš Macháček and Ondřej Bojar. 2014. Results of the WMT14 metrics shared task. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 293-301, Baltimore, Maryland, USA. Association for Computational Linguistics.", + "Shikib Mehri and Maxine Eskenazi. 2020. Usr: An unsupervised and reference free evaluation metric for dialog generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 681-707.", + "OpenAI. 2022. Chatgpt: Optimizing language models for dialogue. Accessed on January 10, 2023.", + "OpenAI. 2023. 
Gpt-4 technical report.", + "Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744.", + "Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. 2022. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations.", + "Jiaan Wang, Yunlong Liang, Fandong Meng, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. 2023. Is chatgpt a good nlg evaluator? a preliminary study. arXiv preprint arXiv:2303.04048.", + "Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V Le. 2022a. Finetuned language models are zero-shot learners. In International Conference on Learning Representations.", + "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V Le, and Denny Zhou. 2022b. Chain of thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems."
+ ], + "bbox": [ + 115, + 85, + 489, + 917 + ], + "page_idx": 6 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Andy Zeng, Maria Attarian, brian richter, Krzysztof Marcin Choromanski, Adrian Wong, Stefan Welker, Federico Tombari, Aveek Purohit, Michael S Ryoo, Vikas Sindhwani, Johnny Lee, Vincent Vanhoucke, and Pete Florence. 2023. Socratic models: Composing zero-shot multimodal reasoning with language. In The Eleventh International Conference on Learning Representations.", + "Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. 2020. *Bertscore: Evaluating text generation with bert*. In International Conference on Learning Representations.", + "Ming Zhong, Yang Liu, Da Yin, Yuning Mao, Yizhu Jiao, Pengfei Liu, Chenguang Zhu, Heng Ji, and Jiawei Han. 2022. Towards a unified multidimensional evaluator for text generation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2023-2038, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics." + ], + "bbox": [ + 510, + 85, + 882, + 368 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "A Why We Cannot Fairly Compare with the Results in Liu et al. (2023)", + "text_level": 1, + "bbox": [ + 510, + 382, + 875, + 414 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "As a work highly related to G-Eval, we would really like to compare our results with G-Eval. However, we encounter difficulties when comparing our results with those in Liu et al. (2023) for the following reasons.", + "bbox": [ + 510, + 424, + 882, + 502 + ], + "page_idx": 6 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- G-Eval proposes to use GPT-4 as the evaluation tool, while it is currently a highly restricted model, and we only have limited access to it.", + "- G-Eval only releases the prompts for SummEval. 
We need to construct the prompts for Topical-Chat based on the human evaluation instructions released by Mehri and Eskenazi (2020). It is possible that the prompts we use for Topical-Chat are different from the prompts used in Liu et al. (2023), making their results incomparable to ours.", + "- The prompt for fluency in SummEval released by Liu et al. (2023) is problematic, so we need to construct a new prompt for fluency. Refer to Appendix F.1 for detailed explanations. This makes us unable to directly compare our results with the results in Liu et al. (2023).", + "- We cannot reproduce the numbers in the G-Eval paper even when using their official implementation and the GPT-4 responses they release. This means that the only thing we" + ], + "bbox": [ + 531, + 514, + 882, + 917 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "8934", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "do is calculate the correlation coefficient using the data and code released on the official GitHub of G-Eval, but the numbers are quite different from the results in G-Eval's paper. Moreover, the fluency results they provide are obtained without auto CoT, while the results for the other three SummEval attributes use auto CoT. That is why we use a question mark for the auto CoT field in Table 1.", + "bbox": [ + 147, + 84, + 489, + 227 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "- Table 2 in Liu et al. (2023) seems to be wrong. The caption (Spearman's $\rho$ and Kendall's $\tau$ ) does not match the headers ( $r$ and $\rho$ ). 
This makes it hard for us to compare their results with ours reliably.", + "bbox": [ + 136, + 242, + 489, + 322 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "B Supplementary Results for Topical-Chat", + "text_level": 1, + "bbox": [ + 114, + 335, + 383, + 368 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Table 2 shows the supplementary results for Topical-Chat that we referred to in the main content. We plan to move Table 2 to the main content using the additional page in the camera-ready version if the paper is accepted. See how Pearson's $r$ and Kendall's $\tau$ are calculated in Appendix C.", + "bbox": [ + 112, + 378, + 489, + 475 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "B.1 Is Auto CoT Useful When ChatGPT Is Asked to Explain?", + "text_level": 1, + "bbox": [ + 112, + 487, + 465, + 519 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In Table 3, we show the results when we add the evaluation steps generated by auto CoT to the prompt when asking ChatGPT to rate with rate-explain. We find that using auto CoT is worse on groundedness but better for the other three attributes. This again shows that auto CoT is not consistently useful.", + "bbox": [ + 112, + 524, + 489, + 637 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "C Calculation of Correlation Coefficient", + "text_level": 1, + "bbox": [ + 112, + 651, + 478, + 665 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this paper, we calculate Pearson's $r$ and Kendall's $\tau$ between human ratings and ChatGPT's ratings. Whether to use Spearman's rank correlation or Pearson's (linear) correlation to evaluate the alignment between human ratings and an automatic evaluation metric is a long-standing question, but there has been an increasing trend towards Pearson's correlation since 2014 (Macháček and Bojar, 2014; Graham and Baldwin, 2014; Zhang* et al., 2020). 
We use the pearsonr and kendalltau functions in scipy.stats for calculating the correlation coefficients. For each attribute of each sample, the rating of ChatGPT is obtained from 20 samples; we set the decoding temperature to 1 and the top- $p$ in nucleus sampling to 1, following G-Eval (Liu et al., 2023).", + "bbox": [ + 112, + 677, + 489, + 917 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Consider a dataset with $N$ source documents, and each source document has $M$ corresponding target documents. We also have the human ratings for $N \cdot M$ target documents on a specific attribute. While each attribute of each target document is rated by more than one human rater, we average those ratings when calculating the correlation coefficient. So the $N \cdot M$ ratings are the average ratings from different raters. In the case of SummEval, we have $N = 100$ source documents and $M = 16$ summaries generated by 16 summarization models. There are two different methods for calculating correlation coefficients.", + "bbox": [ + 507, + 84, + 884, + 292 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "C.0.1 Method 1: Dataset-Level Correlation Coefficient", + "text_level": 1, + "bbox": [ + 507, + 304, + 865, + 332 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this method, we first obtain the ratings on $N \cdot M$ target documents from ChatGPT. We then calculate the correlation coefficient between the $N \cdot M$ ChatGPT ratings and the $N \cdot M$ average human ratings. 
In this case, the correlation coefficient is calculated between two $N \cdot M$-dimensional vectors, meaning that the correlation coefficient is calculated across the entire dataset.", + "bbox": [ + 507, + 338, + 884, + 467 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "C.0.2 Method 2: Document-Level Correlation Coefficient", + "text_level": 1, + "bbox": [ + 507, + 479, + 882, + 508 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this method, for each source document, we obtain the ratings of its $M$ target documents using ChatGPT. Next, we calculate the correlation coefficient between these $M$ ChatGPT ratings and the corresponding $M$ human ratings. After iterating the above process over all the $N$ source documents, we obtain $N$ correlation coefficients. We average the $N$ correlation coefficients as the final correlation coefficient. In this case, the correlation coefficient is calculated at the document level and averaged over the whole dataset.", + "bbox": [ + 507, + 514, + 882, + 690 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "C.1 How We Calculate the Correlation Coefficient", + "text_level": 1, + "bbox": [ + 507, + 703, + 831, + 733 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In Tables 1 and 2 in this paper, we use Method 1 (Subsection C.0.1) to calculate Pearson's correlation, following the recommendation in Graham et al. (2015). Calculating the correlation coefficient on the dataset level is also used in LLM evaluation (Chiang and Lee, 2023). 
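The two aggregation schemes above can be sketched with scipy (a minimal illustration; the array layout and function names are our own, not from the paper's released code):

```python
import numpy as np
from scipy.stats import kendalltau, pearsonr

def dataset_level_r(llm, human):
    """Method 1: flatten the N x M rating matrices and compute one
    Pearson's r over the whole dataset."""
    return pearsonr(np.ravel(llm), np.ravel(human))[0]

def document_level_tau(llm, human):
    """Method 2: one Kendall's tau per source document (row),
    averaged over the N source documents."""
    taus = [kendalltau(l, h)[0] for l, h in zip(llm, human)]
    return float(np.mean(taus))

# Toy example: N = 2 source documents, M = 4 target documents each.
llm = np.array([[1.0, 2.0, 3.0, 4.0], [2.0, 1.0, 4.0, 5.0]])
human = np.array([[1.0, 3.0, 2.0, 4.0], [1.0, 2.0, 4.0, 5.0]])
print(dataset_level_r(llm, human))
print(document_level_tau(llm, human))
```

Method 1 yields a single coefficient over all N·M pairs, whereas Method 2 rewards metrics that rank the M outputs of each source document correctly even if their absolute scales differ across documents.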
Calculating a single correlation coefficient on the dataset level allows us to use the Williams test to check whether two Pearson's $r$ values are significantly different.", + "bbox": [ + 507, + 741, + 884, + 885 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "For Kendall's $\tau$ in Tables 1 and 2, we follow most prior works (Zhong et al., 2022; Liu et al., 2023) to", + "bbox": [ + 507, + 887, + 882, + 917 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "8935", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/6d4e48286d55641e6ef93a3617592e48d0fbdc04dab5497f8cfab665b573f607.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Sec. | Ablations | Naturalness | Coherence | Engagingness | Groundedness
CoT | Output | r | τ | r | τ | r | τ | r | τ
3.2 | X | Score only | 0.393 | 0.358 | 0.468 | 0.391 | 0.549 | 0.513 | 0.311 | 0.566
rate-explain | 0.554 | 0.478 | 0.512 | 0.429 | 0.613 | 0.566 | 0.555 | 0.664
X | rate-explain | 0.524 | 0.47 | 0.477 | 0.416 | 0.567 | 0.524 | 0.58 | 0.693
", + "bbox": [ + 136, + 80, + 860, + 170 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Table 3: The Pearson's $r$ and Kendall's $\\tau$ correlation coefficient between LLMs' ratings and human ratings for Topical-Chat. All the results in this table, except the first row, are from ChatGPT. We **boldface** the Pearson's $r$ statistically significantly higher than auto CoT + score only. We **underline** the Pearson's $r$ comparable auto CoT + score only.", + "bbox": [ + 112, + 179, + 884, + 236 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "calculate Kendall's $\\tau$ using Method 2 (document-level, Section C.0.2) to understand if ChatGPT can differentiate the quality difference between different system outputs for the same source document.", + "bbox": [ + 110, + 261, + 487, + 325 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "In fact, we find that Pearson's $r$ calculated by Method 1 and Method 2 are highly correlated. In Table 4, we show the result of Topical-Chat while we use Method 2 to calculate Pearson's $r$ ; Kendall's $\\tau$ is still calculated by Method 2. Comparing the results of Pearson's $r$ in Table 2 and Table 4, one can easily see that when a method have significantly higher Pearson's $r$ in Table 2, it will also have significantly higher Pearson's $r$ . We present the $r$ calculated by Method 1 because it makes more sense when calculating statistical significance when the correlation coefficient is calculated at the dataset-level (Graham et al., 2015).", + "bbox": [ + 110, + 326, + 487, + 535 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "D Results of Changing the Temperature and Prompts", + "text_level": 1, + "bbox": [ + 112, + 546, + 478, + 580 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "We show the results of varying the temperature used to sample the ChatGPT output in Table 5. 
In the experiments in this section, we only sample $N = 5$ outputs from ChatGPT since we find that G-Eval and our proposed guidelines are quite robust to the number of samples when $N \geq 5$ .", + "bbox": [ + 112, + 588, + 487, + 684 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "E Datasets", + "text_level": 1, + "bbox": [ + 112, + 696, + 226, + 711 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "E.1 SummEval", + "text_level": 1, + "bbox": [ + 112, + 721, + 252, + 736 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "SummEval (Fabbri et al., 2021) is a dataset for the meta-evaluation of summarization. It contains 100 source documents, each with 16 summaries obtained from different summarization models. Each of the 1600 summaries is rated by three workers recruited on Amazon Mechanical Turk and two experts in summarization. Each summary in SummEval is rated by humans based on the coherence, consistency, and fluency of the summary, and the relevance between the summary and the source document. Each attribute is rated based on a 5-point Likert scale.", + "bbox": [ + 112, + 741, + 489, + 917 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "We download the source documents, summaries, and human ratings from the GitHub repository of G-Eval (https://github.com/nlpyang/geval/tree/8f54105/data). SummEval was released under the MIT License, and our usage for research does not violate the dataset's initial intention.", + "bbox": [ + 507, + 261, + 884, + 357 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "E.2 Topical-Chat", + "text_level": 1, + "bbox": [ + 507, + 372, + 662, + 387 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Topical-Chat (Gopalakrishnan et al., 2019) is a knowledge-grounded open-domain dialogue dataset. The dataset consists of a dialogue context (history), an interesting fact related to the topic of the conversation, and a response. 
Mehri and Eskenazi (2020) release high-quality human annotations on the quality of responses. They construct the dataset as follows: they first sample 60 dialogue contexts from Topical-Chat, and for each dialogue context and corresponding fun fact, they use a transformer model to generate four responses using four decoding methods. Each dialogue context has two additional responses: the human response and the ground truth response. Thus, there are a total of 360 dialogue-response pairs. Those pairs are evaluated based on six attributes, and we follow Zhong et al. (2022) and Liu et al. (2023) to only use four attributes: naturalness, coherence, engagingness, and groundedness (whether the response is grounded on the provided knowledge). We obtain the human ratings of Topical-Chat from the GitHub repository of UniEval (Zhong et al., 2022): https://github.com/maszhongming/UniEval/blob/main/reproduce/data/dialogue/topical chatting.json.", + "bbox": [ + 505, + 394, + 884, + 812 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "F Prompts", + "text_level": 1, + "bbox": [ + 507, + 827, + 623, + 843 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "We list the prompts we use in this section. In the main content of the paper and in the following parts, we use different highlight colors to represent different parts of the prompt. A prompt is composed
Sec. | Ablations | Naturalness | Coherence | Engagingness | Groundedness
CoT | Output | r | τ | r | τ | r | τ | r | τ
GPT-4† | Score only | 0.549 | - | 0.594 | - | 0.627 | - | 0.531 | -
3.1 | Score only | 0.445 | 0.358 | 0.498 | 0.391 | 0.579 | 0.513 | 0.685 | 0.566
X | 0.431 | 0.331 | 0.507 | 0.404 | 0.631 | 0.535 | 0.666 | 0.582
3.2 | X | Score only | 0.431 | 0.331 | 0.507 | 0.404 | 0.631 | 0.535 | 0.666 | 0.582
X | Free Text | 0.572 | 0.476 | 0.523 | 0.426 | 0.676 | 0.557 | 0.747 | 0.666
X | Rate-explain | 0.621 | 0.512 | 0.472 | 0.425 | 0.61 | 0.509 | 0.771 | 0.663
X | Analyze-rate | 0.573 | 0.47 | 0.486 | 0.416 | 0.628 | 0.524 | 0.725 | 0.693
", + "bbox": [ + 122, + 80, + 877, + 234 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Table 4: The Pearson's $r$ and Kendall's $\\tau$ correlation coefficient between LLMs' ratings and human ratings for Topical-Chat. Note that in this table, both Pearson's $r$ and Kendall's $\\tau$ are calculated by Method 2 in Appendix C.0.2. All the results in this table, except the first row, are from ChatGPT. The results of GPT-4 are from Liu et al. (2023) but should not be compared with our results since the prompts they use may be different from the prompt we use. Still, we can see that for naturalness, engagingness, and groundedness, the results of rate-explain and analyze-rate is better or comparable to GPT-4.", + "bbox": [ + 110, + 244, + 884, + 331 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "of four parts: (1) the descriptions of the rating task, (2) the definition and rating criteria of the attribute to be rated, (3) the sample to be rated, and (4) a sentence used to prompt the LLM to give the rating.", + "bbox": [ + 112, + 356, + 487, + 435 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "The prompts for different attributes of the same dataset share the same descriptions of the rating task. Different attributes use different definition and rating criteria. 
In G-Eval, the prompts also include the evaluation steps generated by auto CoT.", + "bbox": [ + 112, + 436, + 487, + 531 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "F.1 Prompts for SummEval", + "text_level": 1, + "bbox": [ + 112, + 544, + 347, + 558 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "The descriptions of the rating task, the definition and rating criteria, the evalua-", + "bbox": [ + 112, + 565, + 490, + 596 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "tion steps for coherence, consistency, and relevance in SummEval are from the prompts released by G-Eval in their GitHub repository (https://github.com/nlpyang/geval/tree/8f54105/prompts/summeval). While G-Eval also releases the prompt they use for fluency, we find something highly problematic in the prompt they use. The prompt for fluency asks the LLM to rate fluency on a scale of 1 to 3 (https://github.com/nlpyang/geval/blob/", + "bbox": [ + 112, + 598, + 495, + 758 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "8f54105061e00377fbbb909153892d5bfb5b3623a/prompts/summeval/fluDetailed.txt), while the original rating scale in SummEval is 1 to 5. We also find that the original rating criteria used in G-Eval for fluency differ largely from the rating criteria of fluency used for human evaluation in SummEval. Through our experiment, we find that the misalignment of evaluation criteria and evaluation scale significantly decreases Pearson's $r$ with human ratings when using analyze-rate to", + "bbox": [ + 112, + 758, + 495, + 917 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "prompt ChatGPT to output. This is likely because ChatGPT tends to stick to the rating criteria when prompted with analyze-rate, and when using rating criteria different from those used to instruct the human raters, the scores generated by ChatGPT deviate more from the human ratings. 
This highlights the importance of giving the LLM the same instructions as those used in the human evaluation, as emphasized in Chiang and Lee (2023).", + "bbox": [ + 507, + 356, + 882, + 516 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "First, we show an example prompt for coherence. This prompt corresponds to the score only + auto CoT in Table 1.", + "bbox": [ + 507, + 518, + 882, + 565 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Coherence", + "text_level": 1, + "bbox": [ + 509, + 581, + 600, + 594 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "You will be given one summary written for a news article.", + "bbox": [ + 507, + 598, + 880, + 627 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Your task is to rate the summary on one metric.", + "bbox": [ + 507, + 630, + 880, + 659 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Please make sure you read and understand these instructions carefully. Please keep this document open while reviewing, and refer to it as needed.", + "bbox": [ + 507, + 662, + 880, + 724 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Evaluation Criteria:", + "bbox": [ + 507, + 726, + 695, + 740 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Coherence (1-5) - the collective quality of all sentences. We align this dimension with the DUC quality question of structure and coherence whereby \"the summary should be well-structured and well-organized. The summary should not just be a heap of related information, but should build from sentence to sentence to a coherent body of information about a topic.\"", + "bbox": [ + 507, + 741, + 880, + 885 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Evaluation Steps:", + "bbox": [ + 509, + 888, + 670, + 903 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "1. 
Read the news article carefully and", + "bbox": [ + 509, + 904, + 880, + 917 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "8937", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 9 + }, + { + "type": "table", + "img_path": "images/a4dd450a4843fc8ab986d86a26fbe1de3d385becff0e9ae96add2b2ac334ff37.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table><tr><td>Auto-CoT</td><td>Output</td><td>Coherence</td><td>Consistency</td><td>Fluency</td><td>Relevance</td></tr>
<tr><td></td><td>Score only</td><td>0.356</td><td>0.290</td><td>0.261</td><td>0.263</td></tr>
<tr><td>X</td><td>Rate-explain</td><td>0.548</td><td>0.482</td><td>0.423</td><td>0.487</td></tr>
<tr><td>X</td><td>Analyze-rate</td><td>0.589</td><td>0.439</td><td>0.438</td><td>0.319</td></tr></table>
", + "bbox": [ + 193, + 143, + 803, + 218 + ], + "page_idx": 10 + }, + { + "type": "table", + "img_path": "images/44437fe3adaba5df160e8b538a01a78f4f10814b4513e52f244696d7f9ed5e52.jpg", + "table_caption": [ + "(a) Temperature $T = 0.3$" + ], + "table_footnote": [], + "table_body": "
<table><tr><td>Auto-CoT</td><td>Output</td><td>Coherence</td><td>Consistency</td><td>Fluency</td><td>Relevance</td></tr>
<tr><td></td><td>Score only</td><td>0.394</td><td>0.256</td><td>0.288</td><td>0.334</td></tr>
<tr><td>X</td><td>Rate-explain</td><td>0.526</td><td>0.468</td><td>0.414</td><td>0.485</td></tr>
<tr><td>X</td><td>Analyze-rate</td><td>0.605</td><td>0.448</td><td>0.441</td><td>0.392</td></tr></table>
", + "bbox": [ + 193, + 247, + 805, + 322 + ], + "page_idx": 10 + }, + { + "type": "table", + "img_path": "images/02fe5cbf4befb09edd95946d99450d1d099d99a0bbf681cb91df9fcb1a628602.jpg", + "table_caption": [ + "(b) Temperature $T = 0.7$" + ], + "table_footnote": [], + "table_body": "
<table><tr><td>Auto-CoT</td><td>Output</td><td>Coherence</td><td>Consistency</td><td>Fluency</td><td>Relevance</td></tr>
<tr><td></td><td>Score only</td><td>0.450</td><td>0.370</td><td>0.319</td><td>0.403</td></tr>
<tr><td>X</td><td>Rate-explain</td><td>0.557</td><td>0.473</td><td>0.452</td><td>0.509</td></tr>
<tr><td>X</td><td>Analyze-rate</td><td>0.635</td><td>0.534</td><td>0.479</td><td>0.444</td></tr></table>
", + "bbox": [ + 193, + 351, + 805, + 426 + ], + "page_idx": 10 + }, + { + "type": "table", + "img_path": "images/3156288a0971415364076d9ec98426acc8b46ac9ba29c4eba29bcaecd6caaa30.jpg", + "table_caption": [ + "(c) Temperature $T = 1.0$ (The result in Table 1)", + "Table 5: Comparing G-Eval (Auto-CoT + score only) with rate-explain and analyze-rate at different temperatures. We boldface Pearson's r statistically significantly higher than the baseline (the first row in each subtable)." + ], + "table_footnote": [], + "table_body": "
<table><tr><td>Auto-CoT</td><td>Output</td><td>Coherence</td><td>Consistency</td><td>Fluency</td><td>Relevance</td></tr>
<tr><td></td><td>Score only</td><td>0.308</td><td>0.248</td><td>0.265</td><td>0.345</td></tr>
<tr><td>X</td><td>Rate-explain</td><td><b>0.526</b></td><td>0.468</td><td>0.414</td><td>0.485</td></tr>
<tr><td>X</td><td>Analyze-rate</td><td>0.589</td><td>0.524</td><td>0.459</td><td>0.416</td></tr></table>
", + "bbox": [ + 193, + 617, + 805, + 690 + ], + "page_idx": 10 + }, + { + "type": "table", + "img_path": "images/02e0915a2af0e474e11e070df2cb7bb086a6575988d27a30b7a030f0f723b3f7.jpg", + "table_caption": [ + "(a) Results when prompted with the human evaluator prompts." + ], + "table_footnote": [], + "table_body": "
<table><tr><td>Auto-CoT</td><td>Output</td><td>Coherence</td><td>Consistency</td><td>Fluency</td><td>Relevance</td></tr>
<tr><td></td><td>Score only</td><td>0.325</td><td>0.206</td><td>0.281</td><td>0.301</td></tr>
<tr><td>X</td><td>Rate-explain</td><td>0.596</td><td>0.465</td><td>0.403</td><td>0.478</td></tr>
<tr><td>X</td><td>Analyze-rate</td><td>0.596</td><td>0.493</td><td>0.475</td><td>0.406</td></tr></table>
", + "bbox": [ + 193, + 720, + 805, + 794 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "(b) Results when prompted with the HHH prompts.", + "bbox": [ + 339, + 797, + 653, + 810 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Table 6: Comparing G-Eval (Auto-CoT + score only) with rate-explain and analyze-rate when using different prompts. We boldface Pearson's r statistically significantly higher than the baseline (the first row in each subtable).", + "bbox": [ + 112, + 822, + 882, + 852 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "8938", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "identify the main topic and key points. 2. Read the summary and compare it to the news article. Check if the summary covers the main topic and key points of the news article, and if it presents them in a clear and logical order.", + "3. Assign a score for coherence on a scale of 1 to 5, where 1 is the lowest and 5 is the highest based on the Evaluation Criteria." + ], + "bbox": [ + 112, + 84, + 485, + 244 + ], + "page_idx": 11 + }, + { + "type": "code", + "sub_type": "code", + "code_caption": [], + "code_body": "Example: \nSource Text: {{Document}} \nSummary: {{Summary}} \nEvaluation Form (scores ONLY): - Coherence:", + "guess_lang": "yaml", + "bbox": [ + 114, + 246, + 393, + 324 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "F.1.1 Different Output Prompts", + "text_level": 1, + "bbox": [ + 114, + 336, + 379, + 351 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "For different output prompts, which is the ablation in Section 3.2 and the last block in Table 1 and 2, we only change the yellow parts (the last part) in the example prompt above. There are four output prompts used in Section 3.2: score only, free text, rate-explain, and analyze-rate. 
The prompts for free text are attribute-dependent, and we list them in Section F.1.2. The output prompts for score only, rate-explain, and analyze-rate are listed as follows:", + "bbox": [ + 110, + 356, + 487, + 500 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Score only", + "text_level": 1, + "bbox": [ + 114, + 513, + 201, + 527 + ], + "page_idx": 11 + }, + { + "type": "code", + "sub_type": "code", + "code_caption": [], + "code_body": "Evaluation Form (scores ONLY): - {Attribute}:", + "guess_lang": "txt", + "bbox": [ + 115, + 529, + 401, + 558 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Rate-explain", + "text_level": 1, + "bbox": [ + 114, + 571, + 218, + 587 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Evaluation Form (Answer by starting with \"Rating:\" and then give the explanation of the rating on the next line by \"Rationale:\"):", + "bbox": [ + 112, + 587, + 485, + 650 + ], + "page_idx": 11 + }, + { + "type": "code", + "sub_type": "code", + "code_caption": [], + "code_body": "- {Attribute}:", + "guess_lang": "txt", + "bbox": [ + 115, + 652, + 245, + 667 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Analyze-rate", + "text_level": 1, + "bbox": [ + 114, + 678, + 220, + 694 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Evaluation Form (Answer by starting with \"Analysis:\" to analyze the given example regarding the evaluation criteria as concise as possible, and then give the numeric rating on the next line by \"Rating:):", + "bbox": [ + 112, + 695, + 485, + 790 + ], + "page_idx": 11 + }, + { + "type": "code", + "sub_type": "code", + "code_caption": [], + "code_body": "- {Attribute}:", + "guess_lang": "txt", + "bbox": [ + 115, + 791, + 245, + 806 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "F.1.2 Attribute-Dependent Prompts", + "text_level": 1, + "bbox": [ + 114, + 818, + 410, + 834 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "The definition and rating criteria of the attribute to be rated, the 
evaluation steps generated by auto CoT, and the output prompt for free text are attribute-dependent, and we list them as follows. We use different colors to denote different parts in the prompt.", + "bbox": [ + 110, + 838, + 489, + 917 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Note that the following prompts are not the complete prompts used as the model input; they need to be used with the descriptions of the rating task and the sample to be rated.", + "bbox": [ + 507, + 84, + 884, + 149 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Coherence", + "text_level": 1, + "bbox": [ + 509, + 164, + 600, + 177 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Evaluation Criteria:", + "bbox": [ + 509, + 180, + 695, + 193 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Coherence (1-5) - the collective quality of all sentences. We align this dimension with the DUC quality question of structure and coherence whereby \"the summary should be well-structured and well-organized. The summary should not just be a heap of related information, but should build from sentence to a coherent body of information about a topic.\"", + "bbox": [ + 507, + 196, + 882, + 356 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Evaluation Steps:", + "text_level": 1, + "bbox": [ + 509, + 373, + 668, + 388 + ], + "page_idx": 11 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. Read the news article carefully and identify the main topic and key points.", + "2. Read the summary and compare it to the news article. Check if the summary covers the main topic and key points of the news article, and if it presents them in a clear and logical order.", + "3. Assign a score for coherence on a scale of 1 to 5, where 1 is the lowest and 5 is the highest based on the Evaluation Criteria." 
+ ], + "bbox": [ + 509, + 388, + 880, + 565 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Question:", + "text_level": 1, + "bbox": [ + 509, + 582, + 594, + 596 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "How coherent is the summary? That is, how well do the sentences in the summary fit together? (On a scale of 1-5, with 1 being the lowest)", + "bbox": [ + 507, + 598, + 882, + 662 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Consistency", + "text_level": 1, + "bbox": [ + 509, + 678, + 608, + 692 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Evaluation Criteria:", + "bbox": [ + 509, + 694, + 695, + 708 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Consistency (1-5) - the factual alignment between the summary and the summarized source. A factually consistent summary contains only statements that are entailed by the source document. Annotators were also asked to penalize summaries that contained hallucinated facts.", + "bbox": [ + 507, + 709, + 880, + 821 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Evaluation Steps:", + "text_level": 1, + "bbox": [ + 509, + 839, + 668, + 853 + ], + "page_idx": 11 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. Read the news article carefully and identify the main facts and details it presents.", + "2. Read the summary and compare it to the" + ], + "bbox": [ + 509, + 854, + 880, + 917 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "8939", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "article. Check if the summary contains any factual errors that are not supported by the article.", + "bbox": [ + 114, + 85, + 485, + 131 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "3. 
Assign a score for consistency based on the Evaluation Criteria.", + "bbox": [ + 114, + 133, + 485, + 162 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Question:", + "text_level": 1, + "bbox": [ + 114, + 181, + 200, + 195 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "How consistent is the summary with the source document in terms of the factual alignment? (On a scale of 1-5, with 1 being the lowest)", + "bbox": [ + 112, + 197, + 485, + 260 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Fluency", + "text_level": 1, + "bbox": [ + 114, + 277, + 181, + 291 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Evaluation Criteria:", + "bbox": [ + 114, + 293, + 300, + 307 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Fluency (1-5): This rating measures the quality of individual sentences, are they well-written and grammatically correct. Consider the quality of individual sentences.", + "bbox": [ + 112, + 309, + 485, + 387 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Evaluation steps:", + "bbox": [ + 114, + 406, + 273, + 420 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. Read the given summary.", + "2. Evaluate the fluency of the summary on a scale of 1-5 based on the criteria provided.", + "3. Provide the rating." + ], + "bbox": [ + 115, + 422, + 485, + 501 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Question:", + "text_level": 1, + "bbox": [ + 114, + 519, + 200, + 532 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Based on the evaluation criteria, how fluent is the summary? 
(On a scale of 1-5, with 1 being the lowest)", + "bbox": [ + 112, + 533, + 485, + 581 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Relevance", + "text_level": 1, + "bbox": [ + 114, + 598, + 200, + 611 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Evaluation Criteria:", + "bbox": [ + 114, + 614, + 300, + 627 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Relevance (1-5) - selection of important content from the source. The summary should include only important information from the source document. Annotators were instructed to penalize summaries which contained redundancies and excess information.", + "bbox": [ + 112, + 630, + 485, + 740 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Evaluation Steps:", + "text_level": 1, + "bbox": [ + 114, + 759, + 273, + 772 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. Read the summary and the source document carefully.", + "2. Compare the summary to the source document and identify the main points of the article.", + "3. Assess how well the summary covers the main points of the article, and how much irrelevant or redundant information it contains." + ], + "bbox": [ + 115, + 775, + 485, + 917 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "4. 
Assign a relevance score from 1 to 5.", + "bbox": [ + 509, + 85, + 872, + 99 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Question:", + "text_level": 1, + "bbox": [ + 509, + 117, + 594, + 131 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "On a scale of 1-5, with 1 being the lowest, is the summary relevant to the source document and does the summary only contain the important information of the source document?", + "bbox": [ + 509, + 133, + 880, + 211 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "F.2 Prompts for Topical-Chat", + "text_level": 1, + "bbox": [ + 509, + 227, + 757, + 241 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "First, we show an example prompt for naturalness. This prompt corresponds to the score only + auto CoT in Table 2.", + "bbox": [ + 507, + 248, + 882, + 294 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Naturalness", + "text_level": 1, + "bbox": [ + 509, + 307, + 608, + 321 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "You will be given a conversation between two individuals. You will then be given one potential response for the next turn in the conversation. The response concerns an interesting fact, which will be provided as well.", + "bbox": [ + 509, + 324, + 880, + 418 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Your task is to rate the responses on one metric.", + "bbox": [ + 509, + 420, + 880, + 450 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Please make sure you read and understand these instructions carefully. 
Please keep this document open while reviewing, and refer to it as needed.", + "bbox": [ + 509, + 453, + 882, + 514 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Evaluation Criteria:", + "text_level": 1, + "bbox": [ + 510, + 533, + 705, + 546 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Naturalness (1-3) Is the response naturally written?", + "bbox": [ + 509, + 550, + 880, + 580 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- A score of 1 (bad) means that the response is unnatural.", + "- A score of 2 (ok) means the response is strange, but not entirely unnatural.", + "- A score of 3 (good) means that the response is natural." + ], + "bbox": [ + 510, + 582, + 880, + 676 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Evaluation Steps:", + "text_level": 1, + "bbox": [ + 510, + 694, + 668, + 709 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. Read the conversation between the two individuals.", + "2. Read the potential response for the next turn in the conversation.", + "3. Evaluate the response based on its naturalness, using the provided criteria.", + "4. Assign a rating score of 1, 2, or 3 based on the evaluation." 
+ ], + "bbox": [ + 510, + 711, + 880, + 837 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Example:", + "text_level": 1, + "bbox": [ + 509, + 854, + 584, + 869 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Conversation History:", + "bbox": [ + 509, + 872, + 704, + 885 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "{{Document}}", + "bbox": [ + 509, + 889, + 618, + 902 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Corresponding Fact:", + "bbox": [ + 509, + 904, + 685, + 917 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "8940", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "{{Fact}}", + "bbox": [ + 112, + 85, + 189, + 99 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Response:", + "bbox": [ + 112, + 102, + 200, + 116 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "{{Response}}", + "bbox": [ + 112, + 117, + 225, + 131 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Evaluation Form (scores ONLY):", + "bbox": [ + 112, + 149, + 393, + 164 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "- Naturalness:", + "bbox": [ + 115, + 166, + 245, + 180 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "F.2.1 Different Output Prompts", + "text_level": 1, + "bbox": [ + 114, + 191, + 379, + 206 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "For Topical-Chat, we also conduct ablations on different output prompts. Those different output prompts for score only, rate-explain, analyze-rate are the same as those listed in Section F.1.1. We do not list them here to save some space. 
The exact prompts we use can be found in the supplementary data of this paper.", + "bbox": [ + 112, + 210, + 485, + 323 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "F.2.2 Attribute-Dependent Prompts", + "text_level": 1, + "bbox": [ + 112, + 332, + 410, + 348 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "The definition and rating criteria of the attribute to be rated, the evaluation steps generated by auto CoT, and the output prompt for free text are attribute-dependent, and we list them as follows. Again, the following prompts are not the complete prompts used as the model input; they need to be used with the descriptions of the rating task and the sample to be rated.", + "bbox": [ + 112, + 351, + 485, + 479 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Naturalness", + "text_level": 1, + "bbox": [ + 114, + 491, + 213, + 504 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Evaluation Criteria:", + "bbox": [ + 112, + 507, + 310, + 521 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Naturalness (1-3) Is the response naturally written?", + "bbox": [ + 112, + 523, + 485, + 552 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "- A score of 1 (bad) means that the response is unnatural.", + "bbox": [ + 114, + 556, + 485, + 585 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "- A score of 2 (ok) means the response is strange, but not entirely unnatural.", + "bbox": [ + 114, + 588, + 485, + 618 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "- A score of 3 (good) means that the response is natural.", + "bbox": [ + 114, + 620, + 485, + 650 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Evaluation Steps:", + "text_level": 1, + "bbox": [ + 114, + 668, + 273, + 683 + ], + "page_idx": 13 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. Read the conversation between the two individuals.", + "2. 
Read the potential response for the next turn in the conversation.", + "3. Evaluate the response based on its naturalness, using the provided criteria.", + "4. Assign a rating score of 1, 2, or 3 based on the evaluation." + ], + "bbox": [ + 114, + 684, + 485, + 810 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Question:", + "text_level": 1, + "bbox": [ + 114, + 829, + 200, + 843 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "How natural is the response? (On a scale of 1-3, with 1 being the lowest)", + "bbox": [ + 112, + 845, + 485, + 876 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Coherence", + "text_level": 1, + "bbox": [ + 114, + 887, + 203, + 901 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Evaluation Criteria:", + "bbox": [ + 114, + 903, + 310, + 917 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Coherence (1-3) Does the response serve as a valid continuation of the conversation history?", + "bbox": [ + 507, + 84, + 880, + 131 + ], + "page_idx": 13 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- A score of 1 (no) means that the response drastically changes topic or ignores the conversation history.", + "- A score of 2 (somewhat) means the response refers to the conversation history in a limited capacity (e.g., in a generic way) and shifts the conversation topic.", + "- A score of 3 (yes) means the response is on topic and strongly acknowledges the conversation history." + ], + "bbox": [ + 507, + 133, + 880, + 309 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Evaluation Steps:", + "text_level": 1, + "bbox": [ + 509, + 326, + 670, + 341 + ], + "page_idx": 13 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. Read the conversation history.", + "2. Read the potential response.", + "3. Evaluate the coherence of the response based on the conversation history.", + "4. Assign a score of 1, 2, or 3 for coherence." 
+ ], + "bbox": [ + 509, + 343, + 880, + 436 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Question:", + "text_level": 1, + "bbox": [ + 509, + 455, + 594, + 469 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Does the response serve as a valid continuation of the conversation history? (On a scale of 1-3, with 1 meaning the response is invalid and 3 meaning the response is coherent)", + "bbox": [ + 507, + 470, + 880, + 550 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Engagingness", + "text_level": 1, + "bbox": [ + 509, + 565, + 621, + 580 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Evaluation Criteria:", + "bbox": [ + 509, + 582, + 705, + 596 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Engagingness (1-3) Is the response dull/interesting?", + "bbox": [ + 509, + 598, + 880, + 629 + ], + "page_idx": 13 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- A score of 1 (dull) means that the response is generic and dull.", + "- A score of 2 (somewhat interesting) \nmeans the response is somewhat interesting and could engage you in the conversation (e.g., an opinion, thought)", + "- A score of 3 (interesting) means the response is very interesting or presents an interesting fact" + ], + "bbox": [ + 509, + 630, + 880, + 772 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Evaluation Steps:", + "text_level": 1, + "bbox": [ + 509, + 791, + 670, + 806 + ], + "page_idx": 13 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. Read the conversation, the corresponding fact and the response carefully.", + "2. Rate the response on a scale of 1-3 for engagingness, according to the criteria above." 
+ ], + "bbox": [ + 509, + 807, + 880, + 885 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Question:", + "bbox": [ + 510, + 904, + 594, + 917 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "8941", + "bbox": [ + 480, + 928, + 517, + 940 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Is the response interesting and engaging? (On a scale of 1-3, with 1 meaning dull and 3 meaning interesting)", + "bbox": [ + 114, + 84, + 485, + 131 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Groundedness", + "text_level": 1, + "bbox": [ + 114, + 142, + 231, + 155 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Evaluation Criteria:", + "bbox": [ + 114, + 158, + 310, + 172 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Groundedness (0-1) given the fact that this response is conditioned on, determine whether this response uses that fact.", + "bbox": [ + 114, + 174, + 485, + 236 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "- A score of 0 (no) means the response does not mention or refer to the fact at all", + "bbox": [ + 114, + 239, + 485, + 284 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "- A score of 1 (yes) means the response uses the fact well", + "bbox": [ + 114, + 287, + 485, + 317 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Evaluation Steps:", + "bbox": [ + 114, + 335, + 273, + 349 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. Read the conversation between the two individuals.", + "2. Identify the fact that is provided for the potential response.", + "3. Read the potential response.", + "4. Determine if the potential response uses or mentions the fact.", + "5. Assign a score of 0 or 1 for groundedness based on whether the response uses the fact." 
+ ], + "bbox": [ + 114, + 351, + 485, + 510 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Question:", + "bbox": [ + 114, + 527, + 200, + 541 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Given the fact that this response is conditioned on, does the response use the fact? (On a scale of 0-1, with 0 meaning no and 1 meaning yes)", + "bbox": [ + 112, + 544, + 485, + 607 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "F.3 Prompts for Section 3.4.2", + "text_level": 1, + "bbox": [ + 114, + 619, + 357, + 633 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "HHH prompts You are an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed.", + "bbox": [ + 112, + 640, + 489, + 750 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Human annotator prompts Assume that you are a professional and careful human evaluator. You are recruited and paid to conduct the following task. 
You need to strictly follow the task instruction and ensure that you are doing the job with high-quality.", + "bbox": [ + 112, + 760, + 485, + 872 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "8942", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 14 + } +] \ No newline at end of file diff --git a/2023/A Closer Look into Using Large Language Models for Automatic Evaluation/dbe97091-acd7-407e-a8d4-8552f4605855_model.json b/2023/A Closer Look into Using Large Language Models for Automatic Evaluation/dbe97091-acd7-407e-a8d4-8552f4605855_model.json new file mode 100644 index 0000000000000000000000000000000000000000..a6beaebd38fcf05a0b052bf031c011f17342359a --- /dev/null +++ b/2023/A Closer Look into Using Large Language Models for Automatic Evaluation/dbe97091-acd7-407e-a8d4-8552f4605855_model.json @@ -0,0 +1,3827 @@ +[ + [ + { + "type": "title", + "bbox": [ + 0.122, + 0.09, + 0.877, + 0.112 + ], + "angle": 0, + "content": "A Closer Look into Automatic Evaluation Using Large Language Models" + }, + { + "type": "text", + "bbox": [ + 0.247, + 0.138, + 0.42, + 0.155 + ], + "angle": 0, + "content": "Cheng-Han Chiang" + }, + { + "type": "text", + "bbox": [ + 0.22, + 0.155, + 0.448, + 0.171 + ], + "angle": 0, + "content": "National Taiwan University," + }, + { + "type": "text", + "bbox": [ + 0.303, + 0.172, + 0.365, + 0.186 + ], + "angle": 0, + "content": "Taiwan" + }, + { + "type": "text", + "bbox": [ + 0.243, + 0.188, + 0.427, + 0.204 + ], + "angle": 0, + "content": "dcml0714@gmail.com" + }, + { + "type": "text", + "bbox": [ + 0.611, + 0.138, + 0.722, + 0.154 + ], + "angle": 0, + "content": "Hung-yi Lee" + }, + { + "type": "text", + "bbox": [ + 0.553, + 0.155, + 0.78, + 0.171 + ], + "angle": 0, + "content": "National Taiwan University," + }, + { + "type": "text", + "bbox": [ + 0.636, + 0.172, + 0.697, + 0.186 + ], + "angle": 0, + "content": "Taiwan" + }, + { + "type": "text", + "bbox": [ + 0.564, + 0.188, + 0.769, + 0.204 + ], + "angle": 0, + 
"content": "hungyilee@ntu.edu.tw" + }, + { + "type": "title", + "bbox": [ + 0.261, + 0.253, + 0.341, + 0.269 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.142, + 0.282, + 0.462, + 0.567 + ], + "angle": 0, + "content": "Using large language models (LLMs) to evaluate text quality has recently gained popularity. Some prior works explore the idea of using LLMs for evaluation, but they differ in some details of the evaluation process. In this paper, we analyze LLM evaluation (Chiang and Lee, 2023)1 and G-Eval (Liu et al., 2023), and we discuss how those details in the evaluation process change how well the ratings given by LLMs correlate with human ratings. We find that the auto Chain-of-Thought (CoT) used in G-Eval does not always make G-Eval more aligned with human ratings. We also show that forcing the LLM to output only a numeric rating, as in G-Eval, is suboptimal. Last, we reveal that asking the LLM to explain its own ratings consistently improves the correlation between ChatGPT's ratings and human ratings and pushes state-of-the-art (SoTA) correlations on two meta-evaluation datasets." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.582, + 0.26, + 0.597 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.608, + 0.49, + 0.802 + ], + "angle": 0, + "content": "Large language models (LLMs) trained with task instructions and human feedback can follow natural language instructions to complete a task (Askell et al., 2021; Sanh et al., 2022; Wei et al., 2022a; Ouyang et al., 2022). Recently, the instruction-following ability of LLMs makes them promising candidates for automatic evaluation (Chiang and Lee, 2023; Liu et al., 2023; Wang et al., 2023; Huang et al., 2023). By simply instructing the LLMs on how to rate and giving the LLMs the sample to be rated, the LLM can follow the instructions and provide a rating of the sample." 
+ }, + { + "type": "text", + "bbox": [ + 0.113, + 0.803, + 0.489, + 0.884 + ], + "angle": 0, + "content": "Chiang and Lee (2023) propose LLM evaluation and Liu et al. (2023) propose G-Eval, both of which use LLMs to evaluate samples by giving the LLM instructions, and they both show that some LLMs can yield evaluation results that are aligned with the" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.253, + 0.885, + 0.395 + ], + "angle": 0, + "content": "evaluation results of humans. Still, LLM evaluation and G-Eval differ in some specific design choices in the evaluation procedure. Since Chiang and Lee (2023) and Liu et al. (2023) use distinct tasks, it is hard to know how the differences between LLM evaluation and G-Eval affect the evaluation results. This makes it hard for future practitioners to determine how to conduct an automatic evaluation using LLMs." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.398, + 0.887, + 0.575 + ], + "angle": 0, + "content": "Given that LLM evaluation and G-Eval have already received significant attention shortly after publication, these methods will likely revolutionize evaluation in NLP. Therefore, conducting a detailed analysis of these approaches is essential and timely. This paper aims to identify the crucial components in LLM evaluation and G-Eval that contribute to stronger correlations with human ratings. Based on our analysis, we provide guidelines on how to use LLMs for automatic evaluations. We have the following findings:" + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.583, + 0.884, + 0.631 + ], + "angle": 0, + "content": "- Auto-CoT (proposed by G-Eval) does not always improve the correlation between LLM and human ratings." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.64, + 0.885, + 0.672 + ], + "angle": 0, + "content": "- Making the LLMs output only a single numeric rating is suboptimal." 
+ }, + { + "type": "text", + "bbox": [ + 0.532, + 0.682, + 0.885, + 0.73 + ], + "angle": 0, + "content": "- Asking the LLMs to rationalize their own ratings significantly improves the correlation between the LLMs' ratings and human ratings." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.74, + 0.885, + 0.819 + ], + "angle": 0, + "content": "- On two datasets, we improve the best correlation that ChatGPT's rating can achieve, and some correlations even exceed prior SoTA correlations obtained using the ratings of GPT-4 in Liu et al. (2023)." + }, + { + "type": "list", + "bbox": [ + 0.532, + 0.583, + 0.885, + 0.819 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.831, + 0.702, + 0.848 + ], + "angle": 0, + "content": "2 Experiment Setup" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.855, + 0.885, + 0.919 + ], + "angle": 0, + "content": "Our paper studies what components in LLM evaluation and G-Eval make the ratings generated by the LLM correlate better with human ratings, and we aim to improve the correlation." + }, + { + "type": "page_footnote", + "bbox": [ + 0.114, + 0.893, + 0.488, + 0.92 + ], + "angle": 0, + "content": "1In this paper, the term LLM evaluation is used to refer to the specific method proposed by Chiang and Lee (2023)."
+ }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.521, + 0.941 + ], + "angle": 0, + "content": "8928" + }, + { + "type": "footer", + "bbox": [ + 0.218, + 0.946, + 0.78, + 0.959 + ], + "angle": 0, + "content": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 8928-8942" + }, + { + "type": "footer", + "bbox": [ + 0.278, + 0.959, + 0.72, + 0.973 + ], + "angle": 0, + "content": "December 6-10, 2023 ©2023 Association for Computational Linguistics" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.114, + 0.085, + 0.482, + 0.099 + ], + "angle": 0, + "content": "2.1 LLM as an Automatic Evaluation Metric" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.106, + 0.489, + 0.33 + ], + "angle": 0, + "content": "Both LLM evaluation (Chiang and Lee, 2023) and G-Eval (Liu et al., 2023) propose to ask LLMs to rate a sample regarding some attributes of the sample (e.g., fluency, grammaticality) using a \\(k\\)-point Likert scale. They give the LLMs (1) descriptions of the rating task, (2) the definition and rating criteria of the attribute to be rated, (3) the sample to be rated, and (4) a sentence that prompts the LLM to give the rating2. The LLM outputs a sequence containing the rating. Unless specified, we follow prior works to sample \\(N = 20\\) sequences from the LLM and average those ratings as the final rating. While the two methods share the core concept, they differ in two details." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.332, + 0.49, + 0.572 + ], + "angle": 0, + "content": "Difference 1: Auto Chain-of-Thought The task descriptions and rating criteria in LLM evaluation and G-Eval are all human-written. However, Liu et al. (2023) argue that some evaluated attributes require more than simple definition and evaluation criteria, so they use LLMs to determine the evaluation steps. 
Specifically, they concatenate the task description, definition, and criteria of the attributes and append a line \"Evaluation steps:\" to prompt the LLM. The LLM then generates an ordered list of step-by-step evaluation steps. They dub this process auto chain-of-thought (CoT). G-Eval uses human-written task instructions and auto-CoT-generated evaluation steps to prompt the LLM to rate the sample." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.573, + 0.49, + 0.734 + ], + "angle": 0, + "content": "Difference 2: Prompts for Output At the end of the input to LLMs, G-Eval uses the prompt \"{{placeholder}} (score only):\" to restrict the LLM to output only the numeric rating; the placeholder is replaced by the evaluated attribute. In contrast, LLM evaluation uses the following question to ask the LLM to assign the rating: \"How {{placeholder}} is the sample? (on a scale of 1-k, with 1 being the lowest)\". The LLM's output form is not restricted." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.745, + 0.466, + 0.76 + ], + "angle": 0, + "content": "2.2 Meta-Evaluating an Evaluation Metric" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.765, + 0.489, + 0.862 + ], + "angle": 0, + "content": "Given a sample, an evaluation metric assigns it a rating. To evaluate an evaluation metric, we need a dataset containing human ratings for samples in the dataset. We calculate the correlation coefficient between the ratings obtained by the evaluation metric and the human ratings. A higher correlation" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.885, + 0.213 + ], + "angle": 0, + "content": "indicates the evaluation metric better aligns with human ratings. We adopt Pearson \\( r \\) and Kendall's \\( \\tau \\) as they are widely used in meta-evaluations (Graham et al., 2015; Bojar et al., 2017; Zhang* et al., 2020).
In our paper, all correlations refer to the correlation coefficient between the LLM's ratings and human ratings. Details on the calculation of correlation coefficients are in Appendix C." + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.214, + 0.885, + 0.488 + ], + "angle": 0, + "content": "We use SummEval (Fabbri et al., 2021) and Topical-Chat (Gopalakrishnan et al., 2019; Mehri and Eskenazi, 2020) as the meta-evaluation datasets, following Liu et al. (2023). SummEval is a meta-evaluation dataset for summarization derived from the CNN/DailyMail dataset (Hermann et al., 2015). Each summary in SummEval is rated by humans based on the coherence, consistency, and fluency of the summary, and the relevance between the summary and the source document. Topical-Chat is a dataset that evaluates the quality of a response given the dialogue history and a piece of knowledge relating to the dialogue. We follow Zhong et al. (2022) to evaluate the naturalness, coherence, engagingness, and groundedness (whether the response is grounded on the provided knowledge) of the response. The dataset details are in Appendix E." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.5, + 0.747, + 0.515 + ], + "angle": 0, + "content": "2.3 Large Language Models" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.521, + 0.884, + 0.713 + ], + "angle": 0, + "content": "An LLM used as an evaluation metric should be affordable and accessible to whoever wants to use it. Based on this principle, we use ChatGPT (gpt-3.5-turbo-0613) (OpenAI, 2022) for evaluation since it has a lower cost and improved performance compared with other GPT-3.5 models. ChatGPT is also used in LLM evaluation and G-Eval. While Liu et al. (2023) further use GPT-4 (OpenAI, 2023) in their experiments, we cannot use GPT-4 in our experiments since most people, including us, have limited or no access to GPT-4, making it utterly unsuitable as an evaluation metric."
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.715, + 0.884, + 0.827 + ], + "angle": 0, + "content": "In our preliminary experiments, we also try to use the best open LLM (at the time of writing this manuscript) on Open LLM leaderboard, the falcon-40b-instruct model (Almazrouei et al., 2023), but we find it cannot follow the instructions and rate the samples very well. Hence, we exclude open LLMs in our paper." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.84, + 0.857, + 0.856 + ], + "angle": 0, + "content": "3 Better Usage of LLM for Evaluation" + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.866, + 0.779, + 0.882 + ], + "angle": 0, + "content": "3.1 Is Auto CoT Always Useful?" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.888, + 0.884, + 0.92 + ], + "angle": 0, + "content": "Liu et al. (2023) shows that adding the evaluation steps generated by auto CoT improves the correla-" + }, + { + "type": "page_footnote", + "bbox": [ + 0.113, + 0.869, + 0.489, + 0.919 + ], + "angle": 0, + "content": "2In our paper, we use different highlight colors to represent different parts of the prompt, as shown in the above text. Additionally, we use cyan to represent the parts generated by auto Chain-of-Thought" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.521, + 0.941 + ], + "angle": 0, + "content": "8929" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.131, + 0.082, + 0.871, + 0.235 + ], + "angle": 0, + "content": "
Sec.AblationsCoherenceConsistencyFluencyRelevance
CoTOutputrτrτrτrτ
GPT-4†?‡Score only0.5810.4630.5750.4190.60.4570.5990.409
3.1Score only0.450.3590.370.2860.3190.2030.4030.327
X0.3440.2480.3280.1850.3610.1770.3530.248
3.2XScore only0.3440.2480.3280.1850.3610.1770.3530.248
XFree Text0.460.3420.4760.3340.4770.2730.3240.228
XRate-explain0.5570.440.4730.3370.4510.3060.5090.348
XAnalyze-rate0.6350.4760.5370.340.4790.3020.4440.305
" + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.245, + 0.885, + 0.348 + ], + "angle": 0, + "content": "Table 1: The Pearson's \\( r \\) and Kendall's \\( \\tau \\) correlation coefficients between LLMs' ratings and human ratings for SummEval. All the results in this table, except the first row, are from ChatGPT. We consider auto CoT + score only using ChatGPT, proposed in G-Eval, as the baseline of this paper. We boldface the Pearson's \\( r \\) statistically significantly higher than the baseline (except GPT-4). †: results from Liu et al. (2023). Some numbers are different because we re-calculate the correlations based on the GPT-4 responses Liu et al. (2023) released. ‡: The results of GPT-4 cannot serve as a reasonable comparison since we find something odd in the prompts Liu et al. (2023) use, which we elaborate on in Appendix A." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.371, + 0.491, + 0.724 + ], + "angle": 0, + "content": "tion on SummEval when using GPT-4 for evaluation. By scrutinizing their results, we find that the correlations when using auto CoT and not using it often differ by less than 0.02. This raises two questions: (1) Is this difference statistically significant? (2) Does auto CoT yield higher correlations for different LLMs and datasets? To answer these questions, we use ChatGPT to rate the samples in SummEval and Topical-Chat using two sets of prompts, one with the evaluation steps generated using auto CoT and one without those evaluation steps. In this experiment, we follow G-Eval and restrict ChatGPT to output only a numeric score. Following Graham and Baldwin (2014), we use Williams' test for significance to see if the Pearson's \\( r \\) of using and not using auto CoT is statistically significantly different. We try to follow the prompts used in G-Eval when possible; still, we have to construct some prompts since Liu et al. (2023) only release part of the prompts, some of which are problematic.
We list all the prompts and how they are obtained in Appendix F." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.727, + 0.49, + 0.919 + ], + "angle": 0, + "content": "The experiment results for SummEval are shown in the block in blue in Table 1. We also list the best results of G-Eval using GPT-4 from Liu et al. (2023) in the first row of Table 1 only for reference. Comparing our results with GPT-4 is unfair since we use ChatGPT, which is weaker than GPT-4. A more reasonable baseline for our paper is the \"auto CoT + score only\" using ChatGPT on the second row, which is the method proposed by G-Eval and shows the highest correlation that ChatGPT can achieve in Liu et al. (2023). The numbers here differ from results in Liu et al. (2023) because" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.371, + 0.852, + 0.386 + ], + "angle": 0, + "content": "we carefully reproduce their results ourselves." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.386, + 0.885, + 0.499 + ], + "angle": 0, + "content": "Back to Table 1, we can see that auto CoT leads to higher correlations for coherence, consistency, and relevance. By William's test, these higher correlations reach statistical significance with \\( p \\)-values less than 0.05. However, using auto CoT results in a lower Pearson's \\( r \\) for fluency, and this inferiority in Pearson's \\( r \\) is also statistically significant." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.5, + 0.886, + 0.709 + ], + "angle": 0, + "content": "The results for Topical-Chat are illustrated in Table 2. For Topical-Chat, the Pearson's \\( r \\) of using and not using auto CoT are very close for all four attributes except groundedness, with differences less than 0.025, and these differences are not statistically significant. For groundedness, auto CoT even drastically decreases the correlation. In summary, using auto CoT does not yield consistent and meaningful improvements compared with not using CoT. 
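The significance test behind these comparisons (Williams' test, as used by Graham and Baldwin, 2014) asks whether one metric's correlation with human ratings, \(r_{13}\), is significantly higher than another's, \(r_{23}\), given the correlation \(r_{12}\) between the two metrics' own ratings over the same \(n\) samples. The sketch below implements the standard formula and is not the authors' code; the one-tailed p-value uses a normal approximation to the t distribution, which is reasonable for the sample sizes in these meta-evaluation datasets.

```python
from math import erf, sqrt

def williams_test(r13, r23, r12, n):
    """r13, r23: correlations of metric 1 / metric 2 with human ratings;
    r12: correlation between the two metrics' ratings; n: sample count."""
    k = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23
    num = (r13 - r23) * sqrt((n - 1) * (1 + r12))
    den = sqrt(2 * k * (n - 1) / (n - 3) + ((r13 + r23) ** 2 / 4) * (1 - r12) ** 3)
    t = num / den
    # One-tailed p-value, normal approximation to t with n-3 dof.
    p = 0.5 * (1 - erf(t / sqrt(2)))
    return t, p

# Illustrative numbers only (not results from the paper):
t, p = williams_test(r13=0.6, r23=0.5, r12=0.7, n=200)
print(f"t = {t:.2f}, one-tailed p = {p:.4f}")
```

Because the two sets of ratings are computed on the same samples, they are dependent, which is why a plain two-sample test on the correlations would be inappropriate here.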
This should not be surprising since the evaluation steps generated with auto CoT often merely paraphrase the evaluation criteria and instructions given to the LLM." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.721, + 0.715, + 0.737 + ], + "angle": 0, + "content": "3.2 Prompt for Outputs" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.743, + 0.885, + 0.92 + ], + "angle": 0, + "content": "In this section, we explore if the difference in how ChatGPT is prompted to output makes its ratings better aligned with human ratings. We use two sets of prompts that share the same task descriptions and evaluation criteria but differ in how they prompt the LLM to generate the output. One uses \"score only\", as in G-Eval. The other replaces the \"score only\" with \"How {{placeholder}} is the sample? (on a scale of 1-k, with 1 being the lowest)\", as in LLM evaluation. We call the latter prompts free text since they do not" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.522, + 0.941 + ], + "angle": 0, + "content": "8930" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.136, + 0.082, + 0.863, + 0.219 + ], + "angle": 0, + "content": "
Sec.AblationsNaturalnessCoherenceEngagingnessGroundedness
CoTOutputrτrτrτrτ
3.1Score only0.3930.3580.4680.3910.5490.5130.3110.566
X0.4080.3310.4430.4040.5570.5350.3580.582
3.2XScore only0.4080.3310.4430.4040.5570.5350.3580.582
XFree Text0.4640.4760.5240.4260.6110.5570.5630.666
XRate-explain0.5240.470.4770.4160.5670.5240.580.693
XAnalyze-rate0.5730.470.4860.4160.6280.5240.7250.693
" + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.229, + 0.885, + 0.286 + ], + "angle": 0, + "content": "Table 2: The Pearson's \\( r \\) and Kendall's \\( \\tau \\) correlation coefficients between LLMs' ratings and human ratings for Topical-Chat. All the results in this table, except the first row, are from ChatGPT. We **boldface** the Pearson's \\( r \\) statistically significantly higher than auto CoT + score only. We **underline** the Pearson's \\( r \\) comparable to auto CoT + score only." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.312, + 0.295, + 0.327 + ], + "angle": 0, + "content": "restrict the output form." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.332, + 0.49, + 0.508 + ], + "angle": 0, + "content": "The results for SummEval are shown in the yellow blocks in Table 1, and the results for Topical-Chat are shown in Table 2. We find that allowing ChatGPT to respond to the question freely yields Pearson's \\( r \\) and Kendall's \\( \\tau \\) much higher than restricting the model to output a single numeric score for almost all attributes of both datasets. The higher Pearson's \\( r \\) of free text compared with score only is statistically significant. The only exception is the relevance of SummEval, where free text yields slightly lower correlations." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.513, + 0.49, + 0.675 + ], + "angle": 0, + "content": "Initially, we thought ChatGPT aligns better with human ratings in free text because it can generate natural language explanations to justify its rating, making the ratings more correlated with human ratings. However, we observe that the responses of ChatGPT when prompted with free text mostly contain a single numeric rating, the same behavior as when it is instructed by score only. This means that what the model is allowed to generate is more important than what it really generates."
+ }, + { + "type": "text", + "bbox": [ + 0.113, + 0.679, + 0.49, + 0.921 + ], + "angle": 0, + "content": "The above observations make us curious if the correlations can be higher if ChatGPT is instructed to justify its ratings. Inspired by chain-of-thought in Wei et al. (2022b) and Kojima et al. (2022) (not the auto CoT in G-Eval), we ask ChatGPT to provide its reasoning and rationales for the ratings. Instead of asking ChatGPT to output only a score, we construct two types of prompts that ask ChatGPT to rationalize its decision. The first type of prompt, called analyze-rate, asks ChatGPT to analyze the samples regarding the evaluated criteria first and give the rating. The second type of prompt, called rate-explain, asks ChatGPT to provide the numeric ratings first and explain why it gives such a rating. analyze-rate is more like the zero-shot" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.312, + 0.883, + 0.344 + ], + "angle": 0, + "content": "chain-of-thought (Kojima et al., 2022). Refer to Appendix F.1.1 for the exact prompts we use." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.35, + 0.885, + 0.672 + ], + "angle": 0, + "content": "The results of asking ChatGPT to explain/analyze how it rates the sample are shown in the last two rows of Table 1 and Table 2. We find that for all attributes of both datasets, rate-explain and analyze-rate both lead to correlations stronger than or at least comparable to the correlation of asking ChatGPT to output only a numeric rating (score only). By asking ChatGPT to explain/analyze, we improve the best correlations that can be achieved by ChatGPT in Liu et al. (2023) (the Auto-CoT + score only). Moreover, when asked to explain/analyze while rating, ChatGPT's correlation can be better than or comparable to the state-of-the-art correlation coefficients obtained from GPT-4 in Liu et al. (2023) for coherence of SummEval and three attributes of Topical-Chat.
We hypothesize that some attributes (e.g., coherence for SummEval) are harder for ChatGPT to rate, so the correlations for these attributes show a larger improvement when ChatGPT explains how it rates the sample." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.679, + 0.887, + 0.921 + ], + "angle": 0, + "content": "In rate-explain, the output of ChatGPT contains a numeric rating followed by some explanations. As an auto-regressive language model, ChatGPT cannot depend on the explanation when generating the rating due to causal attention. If we stop the generation after ChatGPT generates the ratings, the output of rate-explain will only contain the ratings, just like the output forms in score only. Although the ratings in rate-explain do not depend on ChatGPT's rationales for the ratings, the ratings still correlate better with human ratings, compared with the ratings in score only. We think this is because when ChatGPT knows it needs to explain the ratings, it tends to generate ratings that are easier for it to explain, and a rating that is more" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.52, + 0.942 + ], + "angle": 0, + "content": "8931" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.114, + 0.085, + 0.488, + 0.117 + ], + "angle": 0, + "content": "aligned to humans' rating is easier for ChatGPT to explain." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.129, + 0.33, + 0.144 + ], + "angle": 0, + "content": "3.3 Empirical Guidelines" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.15, + 0.49, + 0.375 + ], + "angle": 0, + "content": "Based on the analysis and results in this section, we provide the following guideline: Always ask ChatGPT to explain/analyze when rating. We do not see rate-explain to be significantly better (or worse) than analyze-rate, so it is hard to determine which one to use. 
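For concreteness, the four output-prompt styles compared in this paper (score only, free text, rate-explain, and analyze-rate) might look like the following templates. The wording here is hypothetical and only illustrates the structural differences; the exact prompts are given in the paper's Appendix F.1.1.

```python
# Hypothetical prompt templates for the four output styles; {attribute}
# and {k} are placeholders for the evaluated attribute and scale size.
SCORE_ONLY = "{attribute} (score only):"
FREE_TEXT = ("How {attribute} is the sample? "
             "(on a scale of 1-{k}, with 1 being the lowest)")
RATE_EXPLAIN = ("How {attribute} is the sample? "
                "(on a scale of 1-{k}, with 1 being the lowest) "
                "Give the rating first, then explain why you gave that rating.")
ANALYZE_RATE = ("Analyze the sample with respect to {attribute} first, "
                "then rate it on a scale of 1-{k}, with 1 being the lowest.")

prompt = RATE_EXPLAIN.format(attribute="fluent", k=5)
print(prompt)
```

In rate-explain the numeric rating is generated before the explanation, while in analyze-rate the analysis precedes the rating, mirroring zero-shot chain-of-thought.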
A valid method is to sample some ratings using rate-explain, sample some ratings using analyze-rate, and average the ratings from the two prompts as the final rating. Using auto CoT is optional since it does not always lead to higher correlations with human ratings. We also find that using auto CoT does not always improve the correlations when ChatGPT is asked to explain; this result is shown in Appendix Table 3." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.387, + 0.388, + 0.401 + ], + "angle": 0, + "content": "3.4 Robustness of the Guidelines" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.408, + 0.49, + 0.552 + ], + "angle": 0, + "content": "LLMs are notorious for performance fluctuations caused by the input prompts, and the sequences generated by LLMs can differ when changing the hyperparameters used in decoding. To verify the validity of our empirical guidelines, we conduct the following two sets of experiments: (1) we vary the temperature used in sampling the output from ChatGPT, and (2) we vary the prompt given to ChatGPT." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.563, + 0.373, + 0.578 + ], + "angle": 0, + "content": "3.4.1 Varying the Temperature" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.582, + 0.49, + 0.709 + ], + "angle": 0, + "content": "We check if our guideline holds if we change the temperature \\( T \\) during generation. We compare Pearson's \\( r \\) when using the method proposed in G-Eval (Auto-CoT + score only) with rate-explain and analyze-rate under different temperatures used when generating the output from ChatGPT. We follow Chiang and Lee (2023) and use two temperatures: 0.7 and 0.3."
+ }, + { + "type": "text", + "bbox": [ + 0.113, + 0.711, + 0.49, + 0.92 + ], + "angle": 0, + "content": "The results are shown in Appendix Table 5 and summarized as follows: First, when fixing the sampling temperature, we find that rate-explain and analyze-rate always achieve a higher correlation compared with G-Eval. This supports our guideline that \"asking the LLM to explain/analyze outperforms the method proposed in G-Eval.\" Next, we observe that the correlation of G-Eval when \\( T = 0.3 \\) is much lower than that of \\( T = 1.0 \\). This shows that G-Eval is not robust to the sampling temperature. In contrast, we find that the correlations obtained by rate-explain and analyze-rate do not significantly change for different sampling" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.085, + 0.883, + 0.134 + ], + "angle": 0, + "content": "temperatures for almost all cases. This shows that rate-explain and analyze-rate are more robust than G-Eval with respect to the sampling temperature." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.146, + 0.749, + 0.162 + ], + "angle": 0, + "content": "3.4.2 Changing the Prompts" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.167, + 0.885, + 0.456 + ], + "angle": 0, + "content": "We check if our guideline holds if we change the prompt given to ChatGPT. In this experiment, we change the prompts given to ChatGPT by adding some instructions before the descriptions of the rating task. We try two prompts: (1) the HHH prompts and (2) the human annotator prompts. The HHH prompt is designed by Bai et al. (2022) to align the output of LLMs to be more harmless, honest, and helpful. The human annotator prompt is inspired by Chiang and Lee (2023), who use a similar prompt to make the LLM behave as a human annotator. These two prompts are inserted before the prompt we originally use in our paper. We use these two prompts to inject a persona into the LLM. This is inspired by Zeng et al.
(2023), which shows that the output of GPT-3 can be different when prompted with a different persona. The prompts are detailed in Appendix F.3." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.458, + 0.885, + 0.668 + ], + "angle": 0, + "content": "The results are shown in Table 6 and summarized as follows: rate-explain and analyze-rate consistently outperform G-Eval when using the human annotator prompts and the HHH prompts. This indicates that our guidelines are robust to different prompts. We also find that the correlations of G-Eval significantly drop when adding the human-annotator prompts or HHH prompts. On the other hand, the correlations for rate-explain and analyze-rate do not significantly decrease when adding the human-annotator prompt and the HHH prompt. This shows that asking the LLM to explain is more robust to variations in the prompts." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.683, + 0.642, + 0.698 + ], + "angle": 0, + "content": "4 Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.71, + 0.885, + 0.919 + ], + "angle": 0, + "content": "We study how to better use ChatGPT as an automatic evaluation tool by scrutinizing LLM evaluation and G-Eval. We provide concrete guidelines and show that by using those guidelines, the correlations of several evaluated attributes given by ChatGPT, a publicly usable model, can be higher than or comparable to those of the ratings given by GPT-4, a highly restricted and pricey model. We also show that the evaluation results based on our guidelines improve the best correlation that ChatGPT's rating can achieve. We believe our results and guidelines help future researchers better use LLMs for evaluation."
+ }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.521, + 0.941 + ], + "angle": 0, + "content": "8932" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.115, + 0.085, + 0.221, + 0.099 + ], + "angle": 0, + "content": "Limitations" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.11, + 0.454, + 0.126 + ], + "angle": 0, + "content": "There are three main limitations of this paper." + }, + { + "type": "text", + "bbox": [ + 0.131, + 0.135, + 0.49, + 0.248 + ], + "angle": 0, + "content": "1. We only use ChatGPT to conduct the experiments in this paper. We explain why we chose ChatGPT in Section 2.3. We believe that using ChatGPT is sufficient since we show that the correlations obtained using ChatGPT are already comparable to or better than the previous SoTA results obtained by GPT-4." + }, + { + "type": "text", + "bbox": [ + 0.129, + 0.258, + 0.491, + 0.37 + ], + "angle": 0, + "content": "2. We only conduct the analysis on two tasks, while we know that NLP has more diverse tasks. We do not guarantee that our observations generalize to all other datasets. We recommend that users verify the effectiveness of using an LLM to evaluate the tasks of interest." + }, + { + "type": "text", + "bbox": [ + 0.129, + 0.382, + 0.489, + 0.446 + ], + "angle": 0, + "content": "3. We cannot fairly compare our results with Liu et al. (2023), the previous SoTA results, for multiple reasons. We explain those reasons in Appendix A." + }, + { + "type": "list", + "bbox": [ + 0.129, + 0.135, + 0.491, + 0.446 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.457, + 0.266, + 0.472 + ], + "angle": 0, + "content": "Ethics Statement" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.483, + 0.49, + 0.578 + ], + "angle": 0, + "content": "Our paper follows the ACL Code of Ethics. We do not foresee any particular harmful outcome of our paper.
The code and datasets for reproducing our experiments can be found at https://github.com/d223302/A-Closer-Look-To-LLM-Evaluation/." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.591, + 0.287, + 0.607 + ], + "angle": 0, + "content": "Acknowledgements" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.615, + 0.49, + 0.728 + ], + "angle": 0, + "content": "We want to thank the reviewers for providing detailed feedback and actionable suggestions, which helped us strengthen our paper. We also want to thank the senior committee members for monitoring the reviewing process. Cheng-Han Chiang is supported by a Ph.D. scholarship program from Delta Electronics." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.755, + 0.214, + 0.77 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.116, + 0.777, + 0.49, + 0.87 + ], + "angle": 0, + "content": "Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Merouane Debbah, Etienne Goffinet, Daniel Heslow, Julien Launay, Quentin Malartic, Badreddine Noune, Baptiste Pannier, and Guilherme Penedo. 2023. Falcon-40B: an open large language model with state-of-the-art performance." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.879, + 0.489, + 0.919 + ], + "angle": 0, + "content": "Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. 2021. A" + }, + { + "type": "list", + "bbox": [ + 0.116, + 0.777, + 0.49, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.529, + 0.086, + 0.884, + 0.113 + ], + "angle": 0, + "content": "general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861."
+ }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.122, + 0.885, + 0.279 + ], + "angle": 0, + "content": "Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, Ben Mann, and Jared Kaplan. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.288, + 0.885, + 0.355 + ], + "angle": 0, + "content": "Ondrej Bojar, Yvette Graham, and Amir Kamran. 2017. Results of the WMT17 metrics shared task. In Proceedings of the Second Conference on Machine Translation, pages 489-513, Copenhagen, Denmark. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.363, + 0.885, + 0.443 + ], + "angle": 0, + "content": "Cheng-Han Chiang and Hung-yi Lee. 2023. Can large language models be an alternative to human evaluations? In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15607-15631, Toronto, Canada. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.452, + 0.884, + 0.518 + ], + "angle": 0, + "content": "Alexander R Fabbri, Wojciech Kryscinski, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. Summeval: Re-evaluating summarization evaluation. Transactions of the Association for Computational Linguistics, 9:391-409." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.526, + 0.884, + 0.593 + ], + "angle": 0, + "content": "Karthik Gopalakrishnan, Behnam Hedayatnia, Qinlang Chen, Anna Gottardi, Sanjeev Kwatra, Anushree Venkatesh, Raefer Gabriel, and Dilek Hakkani-Tur. 2019. Topical-Chat: Towards knowledge-grounded open-domain conversations." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.602, + 0.882, + 0.681 + ], + "angle": 0, + "content": "Yvette Graham and Timothy Baldwin. 2014. Testing for significance of increased correlation with human judgment. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 172-176, Doha, Qatar. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.69, + 0.885, + 0.782 + ], + "angle": 0, + "content": "Yvette Graham, Timothy Baldwin, and Nitika Mathur. 2015. Accurate evaluation of segment-level machine translation metrics. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1183-1191, Denver, Colorado. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.79, + 0.884, + 0.857 + ], + "angle": 0, + "content": "Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. Advances in Neural Information Processing Systems, 28." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.866, + 0.884, + 0.919 + ], + "angle": 0, + "content": "Fan Huang, Haewoon Kwak, and Jisun An. 2023. Is ChatGPT better than human annotators? Potential and limitations of ChatGPT in explaining implicit hate speech. arXiv preprint arXiv:2302.07736."
+ }, + { + "type": "list", + "bbox": [ + 0.512, + 0.086, + 0.885, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.521, + 0.941 + ], + "angle": 0, + "content": "8933" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.086, + 0.49, + 0.14 + ], + "angle": 0, + "content": "Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In Advances in Neural Information Processing Systems." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.147, + 0.49, + 0.2 + ], + "angle": 0, + "content": "Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023. Gpteval: Nlg evaluation using gpt-4 with better human alignment. arXiv preprint arXiv:2303.16634." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.208, + 0.49, + 0.275 + ], + "angle": 0, + "content": "Matouš Macháček and Ondřej Bojar. 2014. Results of the WMT14 metrics shared task. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 293-301, Baltimore, Maryland, USA. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.282, + 0.488, + 0.348 + ], + "angle": 0, + "content": "Shikib Mehri and Maxine Eskenazi. 2020. Usr: An unsupervised and reference free evaluation metric for dialog generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 681-707." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.356, + 0.486, + 0.384 + ], + "angle": 0, + "content": "OpenAI. 2022. Chatgpt: Optimizing language models for dialogue. Accessed on January 10, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.391, + 0.377, + 0.405 + ], + "angle": 0, + "content": "OpenAI. 2023. Gpt-4 technical report." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.414, + 0.488, + 0.492 + ], + "angle": 0, + "content": "Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.5, + 0.488, + 0.709 + ], + "angle": 0, + "content": "Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. 2022. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.717, + 0.488, + 0.77 + ], + "angle": 0, + "content": "Jiaan Wang, Yunlong Liang, Fandong Meng, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. 2023. Is chatgpt a good nlg evaluator? a preliminary study. arXiv preprint arXiv:2303.04048." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.779, + 0.488, + 0.845 + ], + "angle": 0, + "content": "Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V Le. 2022a. Finetuned language models are zero-shot learners. In International Conference on Learning Representations." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.853, + 0.488, + 0.919 + ], + "angle": 0, + "content": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V Le, and Denny Zhou. 2022b. Chain of thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems." + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.49, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.086, + 0.884, + 0.191 + ], + "angle": 0, + "content": "Andy Zeng, Maria Attarian, Brian Ichter, Krzysztof Marcin Choromanski, Adrian Wong, Stefan Welker, Federico Tombari, Aveek Purohit, Michael S Ryoo, Vikas Sindhwani, Johnny Lee, Vincent Vanhoucke, and Pete Florence. 2023. Socratic models: Composing zero-shot multimodal reasoning with language. In The Eleventh International Conference on Learning Representations." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.201, + 0.884, + 0.254 + ], + "angle": 0, + "content": "Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.264, + 0.884, + 0.369 + ], + "angle": 0, + "content": "Ming Zhong, Yang Liu, Da Yin, Yuning Mao, Yizhu Jiao, Pengfei Liu, Chenguang Zhu, Heng Ji, and Jiawei Han. 2022. Towards a unified multidimensional evaluator for text generation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2023-2038, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics." 
+ }, + { + "type": "list", + "bbox": [ + 0.512, + 0.086, + 0.884, + 0.369 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.512, + 0.383, + 0.877, + 0.415 + ], + "angle": 0, + "content": "A Why We Cannot Fairly Compare with the Results in Liu et al. (2023)" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.425, + 0.884, + 0.504 + ], + "angle": 0, + "content": "As a work highly related to G-Eval, we would really like to compare our results with G-Eval. However, we encounter difficulties when comparing our results with those in Liu et al. (2023) for the following reasons." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.516, + 0.884, + 0.579 + ], + "angle": 0, + "content": "- G-Eval proposes to use GPT-4 as the evaluation tool, but GPT-4 is currently a highly restricted model, and we only have limited access to it." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.592, + 0.883, + 0.719 + ], + "angle": 0, + "content": "- G-Eval only releases the prompts for SummEval. We need to construct the prompts for Topical-Chat based on the human evaluation instructions released by Mehri and Eskenazi (2020). It is possible that the prompts we use for Topical-Chat are different from the prompts used in Liu et al. (2023), making their results incomparable to ours." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.731, + 0.883, + 0.843 + ], + "angle": 0, + "content": "- The prompt for fluency in SummEval released by Liu et al. (2023) is problematic, so we need to construct a new prompt for fluency. Refer to Appendix F.1 for detailed explanations. This makes us unable to directly compare our results with those in Liu et al. (2023)." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.855, + 0.883, + 0.919 + ], + "angle": 0, + "content": "- We cannot reproduce the numbers in the G-Eval paper even when using their official implementation and the GPT-4 responses they release. 
This means that the only thing we" + }, + { + "type": "list", + "bbox": [ + 0.532, + 0.516, + 0.884, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.521, + 0.941 + ], + "angle": 0, + "content": "8934" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.149, + 0.085, + 0.49, + 0.228 + ], + "angle": 0, + "content": "can do is calculate the correlation coefficient using the data and code released on the official GitHub of G-Eval, but the numbers are quite different from the results in G-Eval's paper. Moreover, the results for fluency they provide do not use auto CoT, while the results for the other three attributes of SummEval do use auto CoT. That is why we use a question mark for the auto CoT field in Table 1." + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.243, + 0.49, + 0.323 + ], + "angle": 0, + "content": "- Table 2 in Liu et al. (2023) seems to be wrong. The caption (Spearman's \\(\\rho\\) and Kendall's \\(\\tau\\)) does not match the headers (\\(r\\) and \\(\\rho\\)). This makes it hard for us to compare their results with ours reliably." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.336, + 0.384, + 0.369 + ], + "angle": 0, + "content": "B Supplementary Results for Topical-Chat" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.379, + 0.49, + 0.476 + ], + "angle": 0, + "content": "Table 2 presents the supplementary results for Topical-Chat that we referred to in the main content. We plan to move Table 2 to the main content using the additional page in the camera-ready version if the paper is accepted. See Appendix C for how Pearson's \\( r \\) and Kendall's \\( \\tau \\) are calculated." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.488, + 0.467, + 0.52 + ], + "angle": 0, + "content": "B.1 Is Auto CoT Useful When ChatGPT Is Asked to Explain?" 
+ }, + { + "type": "text", + "bbox": [ + 0.113, + 0.525, + 0.49, + 0.638 + ], + "angle": 0, + "content": "In Table 3, we show the results when we add the evaluation steps generated by auto CoT to the prompt when prompting ChatGPT with rate-explain. We find that using auto CoT is better on groundedness but worse on the other three attributes. This again shows that auto CoT is not particularly useful." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.652, + 0.48, + 0.667 + ], + "angle": 0, + "content": "C Calculation of Correlation Coefficient" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.678, + 0.49, + 0.919 + ], + "angle": 0, + "content": "In this paper, we calculate Pearson's \\( r \\) and Kendall's \\( \\tau \\) between human ratings and ChatGPT's ratings. Whether to use Spearman's rank correlation or Pearson's (linear) correlation to evaluate the alignment between human ratings and an automatic evaluation metric is a long-standing question, but there has been an increasing trend towards Pearson's correlation since 2014 (Macháček and Bojar, 2014; Graham and Baldwin, 2014; Zhang* et al., 2020). We use pearsonr and kendalltau from scipy.stats to calculate the correlation coefficients. For each attribute of each sample, the rating of ChatGPT is obtained from 20 samples; we set the decoding temperature to 1 and the top-\\( p \\) in nucleus sampling to 1, following G-Eval (Liu et al., 2023)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.885, + 0.293 + ], + "angle": 0, + "content": "Consider a dataset with \\(N\\) source documents, where each source document has \\(M\\) corresponding target documents. We also have the human ratings for \\(N \\cdot M\\) target documents on a specific attribute. Since each attribute of each target document is rated by more than one human rater, we average those ratings when calculating the correlation coefficient. So the \\(N \\cdot M\\) ratings are the average ratings from different raters. 
In the case of SummEval, we have \\(N = 100\\) source documents and \\(M = 16\\) summaries generated by 16 summarization models. There are two different methods for calculating correlation coefficients." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.305, + 0.866, + 0.334 + ], + "angle": 0, + "content": "C.0.1 Method 1: Dataset-Level Correlation Coefficient" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.34, + 0.885, + 0.468 + ], + "angle": 0, + "content": "In this method, we first obtain the ratings on \\( N \\cdot M \\) target documents from ChatGPT. We then calculate the correlation coefficient between the \\( N \\cdot M \\) ChatGPT ratings and the \\( N \\cdot M \\) average human ratings. In this case, the correlation coefficient is calculated between two \\( N \\cdot M \\) vectors, meaning that it is calculated across the entire dataset." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.48, + 0.883, + 0.51 + ], + "angle": 0, + "content": "C.0.2 Method 2: Document-Level Correlation Coefficient" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.515, + 0.884, + 0.692 + ], + "angle": 0, + "content": "In this method, for each source document, we obtain the ratings of its \\(M\\) target documents using ChatGPT. Next, we calculate the correlation coefficient between these \\(M\\) ChatGPT ratings and the corresponding \\(M\\) human ratings. After iterating the above process over all the \\(N\\) source documents, we obtain \\(N\\) correlation coefficients. We average these \\(N\\) correlation coefficients to obtain the final correlation coefficient. In this case, the correlation coefficient is calculated at the document level and averaged over the whole dataset." 
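The two methods above can be sketched with scipy.stats; the snippet below is a minimal illustration on toy placeholder ratings (the array sizes and variable names here are assumptions for demonstration, not the paper's data):

```python
import numpy as np
from scipy.stats import kendalltau, pearsonr

# Toy setup: N source documents, each with M target documents.
# (SummEval in the paper has N = 100 and M = 16.)
N, M = 4, 5
rng = np.random.default_rng(0)
human = rng.random((N, M))              # average human rating per target
llm = human + 0.1 * rng.random((N, M))  # hypothetical LLM ratings

# Method 1 (dataset-level): one coefficient over the flattened N*M vectors.
r_dataset = pearsonr(human.ravel(), llm.ravel())[0]

# Method 2 (document-level): one coefficient per source document,
# then averaged over the N source documents.
tau_document = np.mean([kendalltau(human[i], llm[i])[0] for i in range(N)])
```

Following Appendix C.1, the paper reports Pearson's \( r \) computed as in Method 1 and Kendall's \( \tau \) computed as in Method 2.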
+ }, + { + "type": "title", + "bbox": [ + 0.509, + 0.705, + 0.833, + 0.734 + ], + "angle": 0, + "content": "C.1 How We Calculate the Correlation Coefficient" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.742, + 0.885, + 0.886 + ], + "angle": 0, + "content": "In Tables 1 and 2 in this paper, we use Method 1 (Subsection C.0.1) to calculate Pearson's correlation, following the recommendation in Graham et al. (2015). Calculating the correlation coefficient on the dataset level is also used in LLM evaluation (Chiang and Lee, 2023). Calculating a single correlation coefficient on the dataset level allows us to use the Williams test to test whether two Pearson's \\( r \\) are significantly different." + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.888, + 0.884, + 0.919 + ], + "angle": 0, + "content": "For Kendall's \\(\\tau\\) in Tables 1 and 2, we follow most prior works (Zhong et al., 2022; Liu et al., 2023) to" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.521, + 0.941 + ], + "angle": 0, + "content": "8935" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.137, + 0.082, + 0.861, + 0.171 + ], + "angle": 0, + "content": "
Sec.AblationsNaturalnessCoherenceEngagingnessGroundedness
CoTOutputrτrτrτrτ
3.2XScore only0.3930.3580.4680.3910.5490.5130.3110.566
rate-explain0.5540.4780.5120.4290.6130.5660.5550.664
Xrate-explain0.5240.470.4770.4160.5670.5240.580.693
" + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.18, + 0.885, + 0.237 + ], + "angle": 0, + "content": "Table 3: The Pearson's \\( r \\) and Kendall's \\( \\tau \\) correlation coefficients between LLMs' ratings and human ratings for Topical-Chat. All the results in this table, except the first row, are from ChatGPT. We **boldface** the Pearson's \\( r \\) statistically significantly higher than auto CoT + score only. We **underline** the Pearson's \\( r \\) comparable to auto CoT + score only." + }, + { + "type": "text", + "bbox": [ + 0.112, + 0.262, + 0.489, + 0.326 + ], + "angle": 0, + "content": "calculate Kendall's \\(\\tau\\) using Method 2 (document-level, Section C.0.2) to understand if ChatGPT can differentiate the quality difference between different system outputs for the same source document." + }, + { + "type": "text", + "bbox": [ + 0.112, + 0.327, + 0.489, + 0.536 + ], + "angle": 0, + "content": "In fact, we find that Pearson's \\( r \\) calculated by Method 1 and Method 2 are highly correlated. In Table 4, we show the results on Topical-Chat when we use Method 2 to calculate Pearson's \\( r \\); Kendall's \\( \\tau \\) is still calculated by Method 2. Comparing the results of Pearson's \\( r \\) in Table 2 and Table 4, one can easily see that when a method has significantly higher Pearson's \\( r \\) in Table 2, it will also have significantly higher Pearson's \\( r \\) in Table 4. We present the \\( r \\) calculated by Method 1 because calculating statistical significance makes more sense when the correlation coefficient is calculated at the dataset level (Graham et al., 2015)." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.547, + 0.479, + 0.581 + ], + "angle": 0, + "content": "D Results of Changing the Temperature and Prompts" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.589, + 0.489, + 0.686 + ], + "angle": 0, + "content": "We show the results of varying the temperature used to sample the ChatGPT output in Table 5. 
In the experiments in this section, we only draw \\( N = 5 \\) samples from ChatGPT since we find that G-Eval and our proposed guidelines are quite robust to the number of samples when \\( N \\geq 5 \\)." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.697, + 0.228, + 0.712 + ], + "angle": 0, + "content": "E Datasets" + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.722, + 0.253, + 0.737 + ], + "angle": 0, + "content": "E.1 SummEval" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.743, + 0.49, + 0.919 + ], + "angle": 0, + "content": "SummEval (Fabbri et al., 2021) is a dataset for the meta-evaluation of summarization. It contains 100 source documents, each with 16 summaries obtained from different summarization models. Each of the 1600 summaries is rated by three workers recruited on Amazon Mechanical Turk and two experts in summarization. Each summary in SummEval is rated by humans based on the coherence, consistency, and fluency of the summary, and the relevance between the summary and the source document. Each attribute is rated on a 5-point Likert scale." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.262, + 0.885, + 0.358 + ], + "angle": 0, + "content": "We download the source documents, summaries, and human ratings from the GitHub repository of G-Eval (https://github.com/nlpyang/geval/tree/8f54105/data). SummEval was released under the MIT License, and our usage for research does not violate the dataset's initial intention." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.373, + 0.663, + 0.388 + ], + "angle": 0, + "content": "E.2 Topical-Chat" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.395, + 0.885, + 0.813 + ], + "angle": 0, + "content": "Topical-Chat (Gopalakrishnan et al., 2019) is a knowledge-grounded open-domain dialogue dataset. The dataset consists of a dialogue context (history), an interesting fact related to the topic of the conversation, and a response. 
Mehri and Eskenazi (2020) release high-quality human annotations on the quality of responses. They construct the dataset as follows: they first sample 60 dialogue contexts from Topical-Chat, and for each dialogue context and corresponding fun fact, they use a transformer model to generate four responses using four decoding methods. Each dialogue context has two additional responses: the human response and the ground truth response. Thus, there are a total of 360 dialogue-response pairs. Those pairs are evaluated based on six attributes, and we follow Zhong et al. (2022) and Liu et al. (2023) to only use four attributes: naturalness, coherence, engagingness, and groundedness (whether the response is grounded on the provided knowledge). We obtain the human ratings of Topical-Chat from the GitHub repository of UniEval (Zhong et al., 2022): https://github.com/maszhongming/UniEval/blob/main/reproduce/data/dialogue/topical_chatting.json." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.828, + 0.625, + 0.844 + ], + "angle": 0, + "content": "F Prompts" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.855, + 0.885, + 0.92 + ], + "angle": 0, + "content": "We list the prompts we use in this section. In the main content of the paper and in the following parts, we use different highlight colors to represent different parts of the prompt. A prompt is composed" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.522, + 0.941 + ], + "angle": 0, + "content": "8936" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.123, + 0.082, + 0.878, + 0.235 + ], + "angle": 0, + "content": "
Sec.AblationsNaturalnessCoherenceEngagingnessGroundedness
CoTOutputrτrτrτrτ
GPT-4†Score only0.549-0.594-0.627-0.531-
3.1Score only0.4450.3580.4980.3910.5790.5130.6850.566
X0.4310.3310.5070.4040.6310.5350.6660.582
3.2XScore only0.4310.3310.5070.4040.6310.5350.6660.582
XFree Text0.5720.4760.5230.4260.6760.5570.7470.666
XRate-explain0.6210.5120.4720.4250.610.5090.7710.663
XAnalyze-rate0.5730.470.4860.4160.6280.5240.7250.693
" + }, + { + "type": "table_caption", + "bbox": [ + 0.112, + 0.245, + 0.885, + 0.332 + ], + "angle": 0, + "content": "Table 4: The Pearson's \\( r \\) and Kendall's \\( \\tau \\) correlation coefficients between LLMs' ratings and human ratings for Topical-Chat. Note that in this table, both Pearson's \\( r \\) and Kendall's \\( \\tau \\) are calculated by Method 2 in Appendix C.0.2. All the results in this table, except the first row, are from ChatGPT. The results of GPT-4 are from Liu et al. (2023) but should not be compared with our results since the prompts they use may be different from the prompts we use. Still, we can see that for naturalness, engagingness, and groundedness, the results of rate-explain and analyze-rate are better than or comparable to those of GPT-4." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.357, + 0.489, + 0.436 + ], + "angle": 0, + "content": "of four parts: (1) the descriptions of the rating task, (2) the definition and rating criteria of the attribute to be rated, (3) the sample to be rated, and (4) a sentence used to prompt the LLM to give the rating." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.437, + 0.489, + 0.532 + ], + "angle": 0, + "content": "The prompts for different attributes of the same dataset share the same descriptions of the rating task. Different attributes use different definitions and rating criteria. In G-Eval, the prompts also include the evaluation steps generated by auto CoT." 
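As a rough sketch, the four-part prompt composition described above can be expressed as plain string concatenation (the part texts below are abbreviated placeholders and build_prompt is a hypothetical helper for illustration, not code from the paper):

```python
def build_prompt(task_description: str, criteria: str,
                 sample: str, output_prompt: str) -> str:
    """Join the four prompt parts, separated by blank lines."""
    return "\n\n".join([task_description, criteria, sample, output_prompt])

# Assemble an abbreviated coherence prompt for SummEval-style rating.
prompt = build_prompt(
    "You will be given one summary written for a news article.",
    "Coherence (1-5) - the collective quality of all sentences.",
    "Source Text: {{Document}}\nSummary: {{Summary}}",
    "Evaluation Form (scores ONLY):\n- Coherence:",
)
```

Only the last part (the output prompt) changes across the score only, free text, rate-explain, and analyze-rate ablations described in Section 3.2.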
+ }, + { + "type": "title", + "bbox": [ + 0.114, + 0.545, + 0.348, + 0.56 + ], + "angle": 0, + "content": "F.1 Prompts for SummEval" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.566, + 0.496, + 0.919 + ], + "angle": 0, + "content": "The descriptions of the rating task, the definition and rating criteria, and the evaluation steps for coherence, consistency, and relevance in SummEval are from the prompts released by G-Eval in their GitHub repository (https://github.com/nlpyang/geval/tree/8f54105/prompts/summeval). While G-Eval also releases the prompt they use for fluency, we find something highly problematic in it. The prompt for fluency asks the LLM to rate fluency on a scale of 1 to 3 (https://github.com/nlpyang/geval/blob/8f54105061e00377fbbb909153892d5bfb5b3623a/prompts/summeval/fluDetailed.txt), while the original rating scale in SummEval is 1 to 5. We also find that the rating criteria used in G-Eval for fluency differ largely from the rating criteria of fluency used for the human evaluation in SummEval. Through our experiments, we find that this misalignment of evaluation criteria and evaluation scale significantly decreases Pearson's \\(r\\) with human ratings when using analyze-rate to" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.357, + 0.883, + 0.517 + ], + "angle": 0, + "content": "prompt ChatGPT to output. This is likely because ChatGPT tends to stick to the rating criteria when prompted with analyze-rate, and when the rating criteria differ from the criteria used to instruct the human raters, the scores generated by ChatGPT deviate more from the human ratings. 
This highlights the importance of giving the LLM the same instructions as those used in the human evaluation, as emphasized in Chiang and Lee (2023)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.519, + 0.884, + 0.566 + ], + "angle": 0, + "content": "First, we show an example prompt for coherence. This prompt corresponds to the score only + auto CoT setting in Table 1." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.582, + 0.6, + 0.595 + ], + "angle": 0, + "content": "Coherence" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.599, + 0.882, + 0.628 + ], + "angle": 0, + "content": "You will be given one summary written for a news article." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.631, + 0.882, + 0.661 + ], + "angle": 0, + "content": "Your task is to rate the summary on one metric." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.663, + 0.882, + 0.725 + ], + "angle": 0, + "content": "Please make sure you read and understand these instructions carefully. Please keep this document open while reviewing, and refer to it as needed." + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.727, + 0.697, + 0.741 + ], + "angle": 0, + "content": "Evaluation Criteria:" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.743, + 0.882, + 0.887 + ], + "angle": 0, + "content": "Coherence (1-5) - the collective quality of all sentences. We align this dimension with the DUC quality question of structure and coherence whereby \"the summary should be well-structured and well-organized. The summary should not just be a heap of related information, but should build from sentence to sentence to a coherent body of information about a topic.\"" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.889, + 0.671, + 0.904 + ], + "angle": 0, + "content": "Evaluation Steps:" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.905, + 0.882, + 0.919 + ], + "angle": 0, + "content": "1. 
Read the news article carefully and" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.521, + 0.941 + ], + "angle": 0, + "content": "8937" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.194, + 0.145, + 0.805, + 0.219 + ], + "angle": 0, + "content": "
Auto-CoTOutputCoherenceConsistencyFluencyRelevance
Score only0.3560.2900.2610.263
XRate-explain0.5480.4820.4230.487
XAnalyze-rate0.5890.4390.4380.319
" + }, + { + "type": "table_caption", + "bbox": [ + 0.421, + 0.222, + 0.578, + 0.236 + ], + "angle": 0, + "content": "(a) Temperature \\( T = 0.3 \\)" + }, + { + "type": "table", + "bbox": [ + 0.194, + 0.248, + 0.806, + 0.323 + ], + "angle": 0, + "content": "
Auto-CoTOutputCoherenceConsistencyFluencyRelevance
Score only0.3940.2560.2880.334
XRate-explain0.5260.4680.4140.485
XAnalyze-rate0.6050.4480.4410.392
" + }, + { + "type": "table_caption", + "bbox": [ + 0.421, + 0.326, + 0.578, + 0.339 + ], + "angle": 0, + "content": "(b) Temperature \\( T = 0.7 \\)" + }, + { + "type": "table", + "bbox": [ + 0.194, + 0.353, + 0.806, + 0.427 + ], + "angle": 0, + "content": "
Auto-CoTOutputCoherenceConsistencyFluencyRelevance
Score only0.4500.3700.3190.403
XRate-explain0.5570.4730.4520.509
XAnalyze-rate0.6350.5340.4790.444
" + }, + { + "type": "table_caption", + "bbox": [ + 0.351, + 0.429, + 0.645, + 0.443 + ], + "angle": 0, + "content": "(c) Temperature \\( T = 1.0 \\) (The result in Table 1)" + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.454, + 0.884, + 0.484 + ], + "angle": 0, + "content": "Table 5: Comparing G-Eval (Auto-CoT + score only) with rate-explain and analyze-rate at different temperatures. We boldface Pearson's r statistically significantly higher than the baseline (the first row in each subtable)." + }, + { + "type": "table", + "bbox": [ + 0.194, + 0.618, + 0.806, + 0.691 + ], + "angle": 0, + "content": "
Auto-CoTOutputCoherenceConsistencyFluencyRelevance
Score only0.3080.2480.2650.345
XRate-explain0.5260.4680.4140.485
XAnalyze-rate0.5890.5240.4590.416
" + }, + { + "type": "table_caption", + "bbox": [ + 0.308, + 0.695, + 0.688, + 0.709 + ], + "angle": 0, + "content": "(a) Results when prompted with the human evaluator prompts." + }, + { + "type": "table", + "bbox": [ + 0.194, + 0.721, + 0.806, + 0.795 + ], + "angle": 0, + "content": "
Auto-CoTOutputCoherenceConsistencyFluencyRelevance
Score only0.3250.2060.2810.301
XRate-explain0.5960.4650.4030.478
XAnalyze-rate0.5960.4930.4750.406
" + }, + { + "type": "table_caption", + "bbox": [ + 0.341, + 0.799, + 0.655, + 0.812 + ], + "angle": 0, + "content": "(b) Results when prompted with the HHH prompts." + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.823, + 0.884, + 0.853 + ], + "angle": 0, + "content": "Table 6: Comparing G-Eval (Auto-CoT + score only) with rate-explain and analyze-rate when using different prompts. We boldface Pearson's r statistically significantly higher than the baseline (the first row in each subtable)." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.521, + 0.941 + ], + "angle": 0, + "content": "8938" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.114, + 0.085, + 0.486, + 0.181 + ], + "angle": 0, + "content": "identify the main topic and key points. 2. Read the summary and compare it to the news article. Check if the summary covers the main topic and key points of the news article, and if it presents them in a clear and logical order." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.182, + 0.487, + 0.245 + ], + "angle": 0, + "content": "3. Assign a score for coherence on a scale of 1 to 5, where 1 is the lowest and 5 is the highest based on the Evaluation Criteria." + }, + { + "type": "list", + "bbox": [ + 0.114, + 0.085, + 0.487, + 0.245 + ], + "angle": 0, + "content": null + }, + { + "type": "code", + "bbox": [ + 0.115, + 0.247, + 0.394, + 0.325 + ], + "angle": 0, + "content": "Example: \nSource Text: {{Document}} \nSummary: {{Summary}} \nEvaluation Form (scores ONLY): - Coherence:" + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.337, + 0.38, + 0.353 + ], + "angle": 0, + "content": "F.1.1 Different Output Prompts" + }, + { + "type": "text", + "bbox": [ + 0.112, + 0.357, + 0.489, + 0.501 + ], + "angle": 0, + "content": "For different output prompts, which is the ablation in Section 3.2 and the last block in Table 1 and 2, we only change the yellow parts (the last part) in the example prompt above. 
There are four output prompts used in Section 3.2: score only, free text, rate-explain, and analyze-rate. The prompts for free text are attribute-dependent, and we list them in Appendix F.1.2. The other output prompts are listed as follows:" + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.514, + 0.203, + 0.529 + ], + "angle": 0, + "content": "Score only" + }, + { + "type": "code", + "bbox": [ + 0.116, + 0.53, + 0.402, + 0.56 + ], + "angle": 0, + "content": "Evaluation Form (scores ONLY): - {Attribute}:" + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.573, + 0.22, + 0.588 + ], + "angle": 0, + "content": "Rate-explain" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.588, + 0.487, + 0.651 + ], + "angle": 0, + "content": "Evaluation Form (Answer by starting with \"Rating:\" and then give the explanation of the rating on the next line by \"Rationale:\"):" + }, + { + "type": "code", + "bbox": [ + 0.116, + 0.653, + 0.246, + 0.668 + ], + "angle": 0, + "content": "- {Attribute}:" + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.68, + 0.221, + 0.695 + ], + "angle": 0, + "content": "Analyze-rate" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.696, + 0.487, + 0.791 + ], + "angle": 0, + "content": "Evaluation Form (Answer by starting with \"Analysis:\" to analyze the given example regarding the evaluation criteria as concise as possible, and then give the numeric rating on the next line by \"Rating:\"):" + }, + { + "type": "code", + "bbox": [ + 0.116, + 0.793, + 0.246, + 0.807 + ], + "angle": 0, + "content": "- {Attribute}:" + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.819, + 0.411, + 0.835 + ], + "angle": 0, + "content": "F.1.2 Attribute-Dependent Prompts" + }, + { + "type": "text", + "bbox": [ + 0.112, + 0.839, + 0.49, + 0.919 + ], + "angle": 0, + "content": "The definitions and rating criteria of the attributes to be rated, the evaluation steps generated by auto CoT, and the output prompts for free text are attribute-dependent, and we list them as follows. 
We use different colors to denote different parts in the prompt." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.885, + 0.15 + ], + "angle": 0, + "content": "Note that the following prompts are not the complete prompts used as the model input; they need to be used with the descriptions of the rating task and the sample to be rated." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.165, + 0.6, + 0.178 + ], + "angle": 0, + "content": "Coherence" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.181, + 0.697, + 0.195 + ], + "angle": 0, + "content": "Evaluation Criteria:" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.197, + 0.884, + 0.357 + ], + "angle": 0, + "content": "Coherence (1-5) - the collective quality of all sentences. We align this dimension with the DUC quality question of structure and coherence whereby \"the summary should be well-structured and well-organized. The summary should not just be a heap of related information, but should build from sentence to a coherent body of information about a topic.\"" + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.374, + 0.669, + 0.389 + ], + "angle": 0, + "content": "Evaluation Steps:" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.39, + 0.882, + 0.422 + ], + "angle": 0, + "content": "1. Read the news article carefully and identify the main topic and key points." + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.422, + 0.882, + 0.501 + ], + "angle": 0, + "content": "2. Read the summary and compare it to the news article. Check if the summary covers the main topic and key points of the news article, and if it presents them in a clear and logical order." + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.503, + 0.882, + 0.566 + ], + "angle": 0, + "content": "3. Assign a score for coherence on a scale of 1 to 5, where 1 is the lowest and 5 is the highest based on the Evaluation Criteria." 
+ }, + { + "type": "list", + "bbox": [ + 0.51, + 0.39, + 0.882, + 0.566 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.583, + 0.596, + 0.597 + ], + "angle": 0, + "content": "Question:" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.599, + 0.884, + 0.663 + ], + "angle": 0, + "content": "How coherent is the summary? That is, how well do the sentences in the summary fit together? (On a scale of 1-5, with 1 being the lowest)" + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.679, + 0.609, + 0.693 + ], + "angle": 0, + "content": "Consistency" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.695, + 0.697, + 0.709 + ], + "angle": 0, + "content": "Evaluation Criteria:" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.711, + 0.882, + 0.822 + ], + "angle": 0, + "content": "Consistency (1-5) - the factual alignment between the summary and the summarized source. A factually consistent summary contains only statements that are entailed by the source document. Annotators were also asked to penalize summaries that contained hallucinated facts." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.84, + 0.669, + 0.854 + ], + "angle": 0, + "content": "Evaluation Steps:" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.856, + 0.882, + 0.903 + ], + "angle": 0, + "content": "1. Read the news article carefully and identify the main facts and details it presents." + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.904, + 0.882, + 0.919 + ], + "angle": 0, + "content": "2. Read the summary and compare it to the" + }, + { + "type": "list", + "bbox": [ + 0.51, + 0.856, + 0.882, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.521, + 0.941 + ], + "angle": 0, + "content": "8939" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.115, + 0.086, + 0.486, + 0.132 + ], + "angle": 0, + "content": "article. 
Check if the summary contains any factual errors that are not supported by the article." + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.134, + 0.486, + 0.164 + ], + "angle": 0, + "content": "3. Assign a score for consistency based on the Evaluation Criteria." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.183, + 0.2, + 0.196 + ], + "angle": 0, + "content": "Question:" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.198, + 0.486, + 0.261 + ], + "angle": 0, + "content": "How consistent is the summary with the source document in terms of the factual alignment? (On a scale of 1-5, with 1 being the lowest)" + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.278, + 0.182, + 0.292 + ], + "angle": 0, + "content": "Fluency" + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.294, + 0.301, + 0.308 + ], + "angle": 0, + "content": "Evaluation Criteria:" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.31, + 0.486, + 0.388 + ], + "angle": 0, + "content": "Fluency (1-5): This rating measures the quality of individual sentences, are they well-written and grammatically correct. Consider the quality of individual sentences." + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.407, + 0.274, + 0.421 + ], + "angle": 0, + "content": "Evaluation steps:" + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.423, + 0.354, + 0.437 + ], + "angle": 0, + "content": "1. Read the given summary." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.439, + 0.486, + 0.484 + ], + "angle": 0, + "content": "2. Evaluate the fluency of the summary on a scale of 1-5 based on the criteria provided." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.487, + 0.319, + 0.502 + ], + "angle": 0, + "content": "3. Provide the rating." 
+ }, + { + "type": "list", + "bbox": [ + 0.117, + 0.423, + 0.486, + 0.502 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.52, + 0.2, + 0.533 + ], + "angle": 0, + "content": "Question:" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.535, + 0.486, + 0.582 + ], + "angle": 0, + "content": "Based on the evaluation criteria, how fluent is the summary? (On a scale of 1-5, with 1 being the lowest)" + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.599, + 0.2, + 0.612 + ], + "angle": 0, + "content": "Relevance" + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.615, + 0.301, + 0.629 + ], + "angle": 0, + "content": "Evaluation Criteria:" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.631, + 0.486, + 0.741 + ], + "angle": 0, + "content": "Relevance (1-5) - selection of important content from the source. The summary should include only important information from the source document. Annotators were instructed to penalize summaries which contained redundancies and excess information." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.76, + 0.274, + 0.774 + ], + "angle": 0, + "content": "Evaluation Steps:" + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.776, + 0.486, + 0.806 + ], + "angle": 0, + "content": "1. Read the summary and the source document carefully." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.808, + 0.486, + 0.854 + ], + "angle": 0, + "content": "2. Compare the summary to the source document and identify the main points of the article." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.856, + 0.486, + 0.919 + ], + "angle": 0, + "content": "3. Assess how well the summary covers the main points of the article, and how much irrelevant or redundant information it contains." + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.776, + 0.486, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.086, + 0.873, + 0.1 + ], + "angle": 0, + "content": "4. 
Assign a relevance score from 1 to 5." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.118, + 0.596, + 0.132 + ], + "angle": 0, + "content": "Question:" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.134, + 0.882, + 0.212 + ], + "angle": 0, + "content": "On a scale of 1-5, with 1 being the lowest, is the summary relevant to the source document and does the summary only contain the important information of the source document?" + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.228, + 0.758, + 0.242 + ], + "angle": 0, + "content": "F.2 Prompts for Topical-Chat" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.249, + 0.884, + 0.296 + ], + "angle": 0, + "content": "First, we show an example prompt for naturalness. This prompt corresponds to the score only + auto CoT in Table 2." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.309, + 0.61, + 0.322 + ], + "angle": 0, + "content": "Naturalness" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.325, + 0.882, + 0.419 + ], + "angle": 0, + "content": "You will be given a conversation between two individuals. You will then be given one potential response for the next turn in the conversation. The response concerns an interesting fact, which will be provided as well." + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.422, + 0.882, + 0.451 + ], + "angle": 0, + "content": "Your task is to rate the responses on one metric." + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.454, + 0.884, + 0.516 + ], + "angle": 0, + "content": "Please make sure you read and understand these instructions carefully. Please keep this document open while reviewing, and refer to it as needed." + }, + { + "type": "title", + "bbox": [ + 0.511, + 0.535, + 0.706, + 0.548 + ], + "angle": 0, + "content": "Evaluation Crieteria:" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.551, + 0.882, + 0.581 + ], + "angle": 0, + "content": "Naturalness (1-3) Is the response naturally written??" 
+ }, + { + "type": "text", + "bbox": [ + 0.511, + 0.583, + 0.882, + 0.613 + ], + "angle": 0, + "content": "- A score of 1 (bad) means that the response is unnatural." + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.615, + 0.882, + 0.645 + ], + "angle": 0, + "content": "- A score of 2 (ok) means the response is strange, but not entirely unnatural." + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.647, + 0.882, + 0.677 + ], + "angle": 0, + "content": "- A score of 3 (good) means that the response is natural." + }, + { + "type": "list", + "bbox": [ + 0.511, + 0.583, + 0.882, + 0.677 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.511, + 0.695, + 0.67, + 0.71 + ], + "angle": 0, + "content": "Evaluation Steps:" + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.712, + 0.882, + 0.741 + ], + "angle": 0, + "content": "1. Read the conversation between the two individuals." + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.744, + 0.881, + 0.773 + ], + "angle": 0, + "content": "2. Read the potential response for the next turn in the conversation." + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.776, + 0.881, + 0.806 + ], + "angle": 0, + "content": "3. Evaluate the response based on its naturalness, using the provided criteria." + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.808, + 0.881, + 0.838 + ], + "angle": 0, + "content": "4. Assign a rating score of 1, 2, or 3 based on the evaluation." 
+ }, + { + "type": "list", + "bbox": [ + 0.511, + 0.712, + 0.882, + 0.838 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.856, + 0.586, + 0.87 + ], + "angle": 0, + "content": "Example:" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.873, + 0.705, + 0.887 + ], + "angle": 0, + "content": "Conversation History:" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.89, + 0.62, + 0.903 + ], + "angle": 0, + "content": "{{Document}}" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.905, + 0.686, + 0.919 + ], + "angle": 0, + "content": "Corresponding Fact:" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.521, + 0.941 + ], + "angle": 0, + "content": "8940" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.086, + 0.19, + 0.101 + ], + "angle": 0, + "content": "{{Fact}}" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.103, + 0.2, + 0.117 + ], + "angle": 0, + "content": "Response:" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.118, + 0.226, + 0.133 + ], + "angle": 0, + "content": "{{Response}}" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.15, + 0.394, + 0.165 + ], + "angle": 0, + "content": "Evaluation Form (scores ONLY):" + }, + { + "type": "text", + "bbox": [ + 0.116, + 0.167, + 0.246, + 0.181 + ], + "angle": 0, + "content": "- Naturalness:" + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.192, + 0.38, + 0.208 + ], + "angle": 0, + "content": "F.2.1 Different Output Prompts" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.211, + 0.487, + 0.324 + ], + "angle": 0, + "content": "For Topical-Chat, we also conduct ablations on different output prompts. Those different output prompts for score only, rate-explain, analyze-rate are the same as those listed in Section F.1.1. We do not list them here to save some space. The exact prompts we use can be found in the supplementary data of this paper." 
+ }, + { + "type": "title", + "bbox": [ + 0.114, + 0.334, + 0.411, + 0.349 + ], + "angle": 0, + "content": "F.2.2 Attribute-Dependent Prompts" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.353, + 0.487, + 0.48 + ], + "angle": 0, + "content": "The definition and rating criteria of the attribute to be rated, the evaluation steps generated by auto CoT, and output prompt for text-free are attributedependent, and we list them as follows. Again, the following prompts are not the complete prompts used as the model input; they need to be used with the descriptions of the rating task and the sample to be rated." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.492, + 0.214, + 0.505 + ], + "angle": 0, + "content": "Naturalness" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.508, + 0.312, + 0.522 + ], + "angle": 0, + "content": "Evaluation Crieteria:" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.524, + 0.486, + 0.554 + ], + "angle": 0, + "content": "Naturalness (1-3) Is the response naturally written??" + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.557, + 0.486, + 0.586 + ], + "angle": 0, + "content": "- A score of 1 (bad) means that the response is unnatural." + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.589, + 0.486, + 0.619 + ], + "angle": 0, + "content": "- A score of 2 (ok) means the response is strange, but not entirely unnatural." + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.621, + 0.486, + 0.651 + ], + "angle": 0, + "content": "- A score of 3 (good) means that the response is natural." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.669, + 0.275, + 0.684 + ], + "angle": 0, + "content": "Evaluation Steps:" + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.686, + 0.486, + 0.715 + ], + "angle": 0, + "content": "1. Read the conversation between the two individuals." + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.718, + 0.486, + 0.747 + ], + "angle": 0, + "content": "2. 
Read the potential response for the next turn in the conversation." + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.75, + 0.486, + 0.78 + ], + "angle": 0, + "content": "3. Evaluate the response based on its naturalness, using the provided criteria." + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.782, + 0.486, + 0.812 + ], + "angle": 0, + "content": "4. Assign a rating score of 1, 2, or 3 based on the evaluation." + }, + { + "type": "list", + "bbox": [ + 0.115, + 0.686, + 0.486, + 0.812 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.831, + 0.2, + 0.844 + ], + "angle": 0, + "content": "Question:" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.846, + 0.486, + 0.877 + ], + "angle": 0, + "content": "How natural is the reponse? (On a scale of 1-3, with 1 being the lowest)" + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.888, + 0.204, + 0.902 + ], + "angle": 0, + "content": "Coherence" + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.904, + 0.311, + 0.918 + ], + "angle": 0, + "content": "Evaluation Crieteria:" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.085, + 0.882, + 0.133 + ], + "angle": 0, + "content": "Coherence (1-3) Does the response serve as a valid continuation of the conversation history?" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.134, + 0.882, + 0.181 + ], + "angle": 0, + "content": "- A score of 1 (no) means that the response drastically changes topic or ignores the conversation history." + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.183, + 0.882, + 0.261 + ], + "angle": 0, + "content": "- A score of 2 (somewhat) means the response refers to the conversation history in a limited capacity (e.g., in a generic way) and shifts the conversation topic." + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.263, + 0.882, + 0.31 + ], + "angle": 0, + "content": "- A score of 3 (yes) means the response is on topic and strongly acknowledges the conversation history." 
+ }, + { + "type": "list", + "bbox": [ + 0.509, + 0.134, + 0.882, + 0.31 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.327, + 0.671, + 0.342 + ], + "angle": 0, + "content": "Evaluation Steps:" + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.344, + 0.817, + 0.358 + ], + "angle": 0, + "content": "1. Read the conversation history." + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.36, + 0.797, + 0.374 + ], + "angle": 0, + "content": "2. Read the potential response." + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.376, + 0.882, + 0.406 + ], + "angle": 0, + "content": "3. Evaluate the coherence of the response based on the conversation history." + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.408, + 0.882, + 0.437 + ], + "angle": 0, + "content": "4. Assign a score of 1, 2, or 3 for coherence." + }, + { + "type": "list", + "bbox": [ + 0.51, + 0.344, + 0.882, + 0.437 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.456, + 0.596, + 0.47 + ], + "angle": 0, + "content": "Question:" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.472, + 0.882, + 0.551 + ], + "angle": 0, + "content": "Does the response serve as a valid continuation of the conversation history? (On a scale of 1-3, with 1 meaning the response is invalid and 3 meaning the response is coherent)" + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.567, + 0.622, + 0.581 + ], + "angle": 0, + "content": "Engagingness" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.583, + 0.707, + 0.597 + ], + "angle": 0, + "content": "Evaluation Crieteria:" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.599, + 0.882, + 0.63 + ], + "angle": 0, + "content": "Engagingness (1-3) Is the response dull/interesting?" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.631, + 0.882, + 0.661 + ], + "angle": 0, + "content": "- A score of 1 (dull) means that the response is generic and dull." 
+ }, + { + "type": "text", + "bbox": [ + 0.51, + 0.663, + 0.882, + 0.726 + ], + "angle": 0, + "content": "- A score of 2 (somewhat interesting) \nmeans the response is somewhat interesting and could engage you in the conversation (e.g., an opinion, thought)" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.728, + 0.882, + 0.774 + ], + "angle": 0, + "content": "- A score of 3 (interesting) means the response is very interesting or presents an interesting fact" + }, + { + "type": "list", + "bbox": [ + 0.51, + 0.631, + 0.882, + 0.774 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.792, + 0.671, + 0.807 + ], + "angle": 0, + "content": "Evaluation Steps:" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.808, + 0.882, + 0.839 + ], + "angle": 0, + "content": "1. Read the conversation, the corresponding fact and the response carefully." + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.841, + 0.882, + 0.887 + ], + "angle": 0, + "content": "2. Rate the response on a scale of 1-3 for engagingness, according to the criteria above." + }, + { + "type": "list", + "bbox": [ + 0.51, + 0.808, + 0.882, + 0.887 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.905, + 0.596, + 0.919 + ], + "angle": 0, + "content": "Question:" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.519, + 0.941 + ], + "angle": 0, + "content": "8941" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.115, + 0.085, + 0.486, + 0.133 + ], + "angle": 0, + "content": "Is the response interesting and engaging? 
(On a scale of 1-3, with 1 meaning dull and 3 meaning interesting)" + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.143, + 0.233, + 0.156 + ], + "angle": 0, + "content": "Groundedness" + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.159, + 0.312, + 0.173 + ], + "angle": 0, + "content": "Evaluation Crieteria:" + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.175, + 0.486, + 0.237 + ], + "angle": 0, + "content": "Groundedness (0- 1) given the fact that this response is conditioned on, determine whether this response uses that fact." + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.24, + 0.486, + 0.285 + ], + "angle": 0, + "content": "- A score of 0 (no) means the response does not mention or refer to the fact at all" + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.288, + 0.486, + 0.318 + ], + "angle": 0, + "content": "- A score of 1 (yes) means the response uses the fact well" + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.336, + 0.274, + 0.35 + ], + "angle": 0, + "content": "Evaluation Steps:" + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.352, + 0.486, + 0.382 + ], + "angle": 0, + "content": "1. Read the conversation between the two individuals." + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.384, + 0.486, + 0.414 + ], + "angle": 0, + "content": "2. Identify the fact that is provided for the potential response." + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.417, + 0.402, + 0.431 + ], + "angle": 0, + "content": "3. Read the potential response." + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.433, + 0.486, + 0.462 + ], + "angle": 0, + "content": "4. Determine if the potential response uses or mentions the fact." + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.465, + 0.486, + 0.511 + ], + "angle": 0, + "content": "5. Assign a score of 0 or 1 for groundedness based on whether the response uses the fact." 
+ }, + { + "type": "list", + "bbox": [ + 0.115, + 0.352, + 0.486, + 0.511 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.529, + 0.2, + 0.542 + ], + "angle": 0, + "content": "Question:" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.545, + 0.486, + 0.608 + ], + "angle": 0, + "content": "Given the fact that this response is conditioned on, does the response use the fact? (On a scale of 0-1, with 0 meaning no and 1 meaning yes)" + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.62, + 0.359, + 0.634 + ], + "angle": 0, + "content": "F.3 Prompts for Section 3.4.2" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.641, + 0.49, + 0.751 + ], + "angle": 0, + "content": "HHH prompts You are an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.762, + 0.486, + 0.873 + ], + "angle": 0, + "content": "Human annotator prompts Assume that you are a professional and careful human evaluator. You are recruited and paid to conduct the following task. You need to strictly follow the task instruction and ensure that you are doing the job with high-quality." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.521, + 0.941 + ], + "angle": 0, + "content": "8942" + } + ] +] \ No newline at end of file diff --git a/2023/A Closer Look into Using Large Language Models for Automatic Evaluation/dbe97091-acd7-407e-a8d4-8552f4605855_origin.pdf b/2023/A Closer Look into Using Large Language Models for Automatic Evaluation/dbe97091-acd7-407e-a8d4-8552f4605855_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..083e10c1d7846d76d49fbe7f505844f9813074a1 --- /dev/null +++ b/2023/A Closer Look into Using Large Language Models for Automatic Evaluation/dbe97091-acd7-407e-a8d4-8552f4605855_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bdf2e799ca2e9c75ee6a5260e546f0fd1a9ec7ce73329eda52218c396f55a67a +size 337128 diff --git a/2023/A Closer Look into Using Large Language Models for Automatic Evaluation/full.md b/2023/A Closer Look into Using Large Language Models for Automatic Evaluation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..127363558b68410158a6a9854cc4315d9df6d106 --- /dev/null +++ b/2023/A Closer Look into Using Large Language Models for Automatic Evaluation/full.md @@ -0,0 +1,564 @@ +# A Closer Look into Automatic Evaluation Using Large Language Models + +Cheng-Han Chiang + +National Taiwan University, + +Taiwan + +dcml0714@gmail.com + +Hung-yi Lee + +National Taiwan University, + +Taiwan + +hungyilee@ntu.edu.tw + +# Abstract + +Using large language models (LLMs) to evaluate text quality has recently gained popularity. Some prior works explore the idea of using LLMs for evaluation, while they differ in some details of the evaluation process. In this paper, we analyze LLM evaluation (Chiang and Lee, 2023)1 and G-Eval (Liu et al., 2023), and we discuss how those details in the evaluation process change how well the ratings given by LLMs correlate with human ratings. 
We find that the auto Chain-of-Thought (CoT) used in G-Eval does not always make G-Eval more aligned with human ratings. We also show that forcing the LLM to output only a numeric rating, as in G-Eval, is suboptimal. Last, we reveal that asking the LLM to explain its own ratings consistently improves the correlation between ChatGPT's ratings and human ratings, pushing the correlations to the state of the art (SoTA) on two meta-evaluation datasets.

# 1 Introduction

Large language models (LLMs) trained with task instructions and human feedback can follow natural language instructions to complete a task (Askell et al., 2021; Sanh et al., 2022; Wei et al., 2022a; Ouyang et al., 2022). The instruction-following ability of LLMs has recently made them promising candidates for automatic evaluation (Chiang and Lee, 2023; Liu et al., 2023; Wang et al., 2023; Huang et al., 2023). By simply instructing an LLM on how to rate and giving it the sample to be rated, the LLM can follow the instructions and provide a rating of the sample.

Chiang and Lee (2023) propose LLM evaluation and Liu et al. (2023) propose G-Eval, both of which use LLMs to evaluate samples by giving the LLM instructions, and both show that some LLMs can yield evaluation results that are aligned with the evaluation results of humans. Still, LLM evaluation and G-Eval differ in some specific design choices in the evaluation procedure. Since Chiang and Lee (2023) and Liu et al. (2023) use distinct tasks, it is hard to know how the differences between LLM evaluation and G-Eval affect the evaluation results. This makes it hard for future practitioners to determine how to conduct an automatic evaluation using LLMs.

Given that LLM evaluation and G-Eval have already received significant attention shortly after publication, these methods will likely revolutionize evaluation in NLP. Therefore, conducting a detailed analysis of these approaches is essential and timely.
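The rating loop described above (instruct the LLM on how to rate, give it the sample, and read off a score) can be sketched as follows. This is a minimal illustration, not either paper's exact setup: `query_llm` is a hypothetical stand-in for a real ChatGPT API call, and the prompt wording, the score-parsing regex, and the `kendall_tau` helper are assumptions made for the sake of the example.

```python
import re
from statistics import mean

def build_prompt(task_desc: str, criteria: str, sample: str,
                 attribute: str, k: int = 5) -> str:
    """Assemble a rating prompt from the task description, the rating
    criteria of the attribute, and the sample to be rated."""
    return (f"{task_desc}\n\n{criteria}\n\n{sample}\n\n"
            f"How {attribute} is the sample? "
            f"(on a scale of 1-{k}, with 1 being the lowest)")

def parse_rating(response: str):
    """Pull the first integer out of a free-form LLM response."""
    match = re.search(r"\d+", response)
    return int(match.group()) if match else None

def llm_rate(query_llm, prompt: str, n: int = 20) -> float:
    """Sample n responses from the LLM and average the parsed ratings."""
    ratings = [parse_rating(query_llm(prompt)) for _ in range(n)]
    return mean(r for r in ratings if r is not None)

def kendall_tau(x, y) -> float:
    """Plain O(n^2) Kendall's tau-a between two lists of paired ratings."""
    concordant = discordant = 0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            sign = (x[i] - x[j]) * (y[i] - y[j])
            concordant += sign > 0
            discordant += sign < 0
    return (concordant - discordant) / (len(x) * (len(x) - 1) / 2)
```

Averaging several sampled ratings and correlating them with human ratings, as the experiments in this paper do, then reduces to calls like `llm_rate(query_llm, prompt)` and `kendall_tau(llm_scores, human_scores)`; Pearson's $r$ is available as `statistics.correlation` in the Python 3.10+ standard library.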
This paper aims to identify the crucial components in LLM evaluation and G-Eval that contribute to stronger correlations with human ratings. Based on our analysis, we provide guidelines on how to use LLMs for automatic evaluation. We have the following findings:

- Auto CoT (proposed by G-Eval) does not always improve the correlation between LLM ratings and human ratings.
- Making the LLMs output only a single numeric rating is suboptimal.
- Asking the LLMs to rationalize their own ratings significantly improves the correlation between the LLMs' ratings and human ratings.
- On two datasets, we improve the best correlation that ChatGPT's ratings can achieve, and some correlations even exceed the prior SoTA correlations obtained with GPT-4's ratings in Liu et al. (2023).

# 2 Experiment Setup

Our paper studies which components in LLM evaluation and G-Eval make the ratings generated by the LLM correlate better with human ratings, and we aim to improve the correlation.

# 2.1 LLM as an Automatic Evaluation Metric

Both LLM evaluation (Chiang and Lee, 2023) and G-Eval (Liu et al., 2023) propose to ask LLMs to rate a sample regarding some attributes of the sample (e.g., fluency, grammaticality) using a $k$-point Likert scale. They give the LLMs (1) descriptions of the rating task, (2) the definition and rating criteria of the attribute to be rated, (3) the sample to be rated, and (4) a sentence that prompts the LLM to give the rating. The LLM outputs a sequence containing the rating. Unless specified, we follow prior works and sample $N = 20$ sequences from the LLM, averaging those ratings as the final rating. While the two methods share the core concept, they differ in two details.

**Difference 1: Auto Chain-of-Thought** The task descriptions and rating criteria in LLM evaluation and G-Eval are all human-written. However, Liu et al.
(2023) argue that some evaluated attributes require more than a simple definition and evaluation criteria, so they use LLMs to determine the evaluation steps. Specifically, they concatenate the task description, definition, and criteria of the attributes and append a line "Evaluation steps:" to prompt the LLM. The LLM then generates an ordered list containing the step-by-step evaluation steps. They dub this process auto chain-of-thought (CoT). G-Eval uses human-written task instructions and auto-CoT-generated evaluation steps to prompt the LLM to rate the sample.

**Difference 2: Prompts for Output** At the end of the input to the LLMs, G-Eval uses the prompt "Evaluation Form (scores ONLY): - {Attribute}:" to restrict the LLM to output only the numeric rating; the placeholder {Attribute} will be replaced by the evaluated attribute. In contrast, LLM evaluation uses the following question to ask the LLM to assign the rating: "How {Attribute} is the sample? (on a scale of 1-$k$, with 1 being the lowest)". The LLM's output form is not restricted.

# 2.2 Meta-Evaluating an Evaluation Metric

Given a sample, an evaluation metric assigns it a rating. To evaluate an evaluation metric, we need a dataset containing human ratings for the samples in the dataset. We calculate the correlation coefficient between the ratings obtained by the evaluation metric and the human ratings. A higher correlation
+ +We use SummEval (Fabbri et al., 2021) and Topical-Chat (Gopalakrishnan et al., 2019; Mehri and Eskenazi, 2020) as the meta-evaluation datasets, following Liu et al. (2023). SummEval is a meta-evaluation dataset for summarization derived from the CNN/DailyMail dataset (Hermann et al., 2015). Each summary in SummEval is rated by humans based on the coherence, consistency, fluency of the summary, and relevance between the summary and the source document. Topical-Chat is a dataset that evaluates the quality of a response given the dialogue history and a piece of knowledge relating to the dialogue. We follow Zhong et al. (2022) to evaluate the naturalness, coherence, engagingness, and groundedness (whether the response is grounded on the provided knowledge) of the response. The dataset details are in Appendix E. + +# 2.3 Large Language Models + +An LLM used as an evaluation metric should be affordable and accessible to whoever wants to use it. Based on this principle, we use ChatGPT (gpt3.5-turbo-0613) (OpenAI, 2022) for evaluation since it has lower cost and improved performance compared with other GPT-3.5 models. ChatGPT is also used in LLM evaluation and G-Eval. While Liu et al. (2023) further use GPT-4 (OpenAI, 2023) in their experiments, we cannot use GPT-4 in our experiments since most people, including us, have limited or no access to GPT-4, making it utterly unsuitable as an evaluation metric. + +In our preliminary experiments, we also try to use the best open LLM (at the time of writing this manuscript) on Open LLM leaderboard, the falcon-40b-instruct model (Almazrouei et al., 2023), but we find it cannot follow the instructions and rate the samples very well. Hence, we exclude open LLMs in our paper. + +# 3 Better Usage of LLM for Evaluation + +# 3.1 Is Auto CoT Always Useful? + +Liu et al. (2023) shows that adding the evaluation steps generated by auto CoT improves the correla- + +
| Sec. | CoT | Output | Coherence (r / τ) | Consistency (r / τ) | Fluency (r / τ) | Relevance (r / τ) |
| --- | --- | --- | --- | --- | --- | --- |
| GPT-4† | ?‡ | Score only | 0.581 / 0.463 | 0.575 / 0.419 | 0.600 / 0.457 | 0.599 / 0.409 |
| 3.1 | ✓ | Score only | 0.450 / 0.359 | 0.370 / 0.286 | 0.319 / 0.203 | 0.403 / 0.327 |
| 3.1 | ✗ | Score only | 0.344 / 0.248 | 0.328 / 0.185 | 0.361 / 0.177 | 0.353 / 0.248 |
| 3.2 | ✗ | Score only | 0.344 / 0.248 | 0.328 / 0.185 | 0.361 / 0.177 | 0.353 / 0.248 |
| 3.2 | ✗ | Free text | 0.460 / 0.342 | 0.476 / 0.334 | 0.477 / 0.273 | 0.324 / 0.228 |
| 3.2 | ✗ | Rate-explain | 0.557 / 0.440 | 0.473 / 0.337 | 0.451 / 0.306 | 0.509 / 0.348 |
| 3.2 | ✗ | Analyze-rate | 0.635 / 0.476 | 0.537 / 0.340 | 0.479 / 0.302 | 0.444 / 0.305 |
Table 1: The Pearson's $r$ and Kendall's $\tau$ correlation coefficients between LLMs' ratings and human ratings for SummEval. All the results in this table, except the first row, are from ChatGPT. We consider auto CoT + score only using ChatGPT, proposed in G-Eval, as the baseline of this paper. We boldface the Pearson's $r$ statistically significantly higher than the baseline (except GPT-4). †: results from Liu et al. (2023); some numbers differ because we re-calculate the correlations based on the GPT-4 responses Liu et al. (2023) released. ‡: the results of GPT-4 cannot serve as a reasonable comparison since we find something odd in the prompts Liu et al. (2023) use, which we elaborate on in Appendix A.

tion on SummEval when using GPT-4 for evaluation. By scrutinizing their results, we find that the correlations with and without auto CoT often differ by less than 0.02. This raises two questions: (1) Is this difference statistically significant? (2) Does auto CoT yield higher correlations for different LLMs and datasets? To answer these questions, we use ChatGPT to rate the samples in SummEval and Topical-Chat using two sets of prompts, one with the evaluation steps generated by auto CoT and one without. In this experiment, we follow G-Eval and restrict ChatGPT to output only a numeric score. Following Graham and Baldwin (2014), we use the Williams test to check whether the Pearson's $r$ with and without auto CoT differ statistically significantly. We follow the prompts used in G-Eval when possible; still, we have to construct some prompts ourselves since Liu et al. (2023) only release part of their prompts, and some of the released prompts are problematic. We list all the prompts and how they are obtained in Appendix F.

The experiment results for SummEval are shown in the rows labeled 3.1 in Table 1. We also list the best results of G-Eval using GPT-4 from Liu et al.
(2023) in the first row of Table 1, only for reference. Comparing our results with GPT-4 is unfair since we use ChatGPT, which is weaker than GPT-4. A more reasonable baseline for our paper is "auto CoT + score only" using ChatGPT on the second row, which is the method proposed by G-Eval and shows the highest correlation that ChatGPT achieves in Liu et al. (2023). The numbers here differ from the results in Liu et al. (2023) because we carefully reproduce their results ourselves.

Returning to Table 1, we can see that auto CoT leads to higher correlations for coherence, consistency, and relevance. By the Williams test, these higher correlations reach statistical significance with $p$-values less than 0.05. However, using auto CoT results in a lower Pearson's $r$ for fluency, and this inferiority in Pearson's $r$ is also statistically significant.

The results for Topical-Chat are shown in Table 2. For Topical-Chat, the Pearson's $r$ with and without auto CoT are very close for all attributes except groundedness, with differences less than 0.025, and these differences are not statistically significant. For groundedness, auto CoT even drastically decreases the correlation. In summary, using auto CoT does not yield consistent and meaningful improvements over not using it. This should not be surprising, since the evaluation steps generated with auto CoT often merely paraphrase the evaluation criteria and instructions given to the LLM.

# 3.2 Prompt for Outputs

In this section, we explore whether the way ChatGPT is prompted to produce its output affects how well its ratings align with human ratings. We use two sets of prompts that share the same task descriptions and evaluation criteria but differ in how they prompt the LLM to generate the output. One uses "score only", as in G-Eval. The other replaces "score only" with "How {{placeholder}} is the sample? (on a scale of 1-k, with 1 being the lowest)", as in LLM evaluation.
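The difference between the two prompt styles is only the final line appended to the shared task description and criteria. A minimal sketch of the two suffixes, where the function names, the `coherence` attribute, and the 1-5 scale are our illustrative choices rather than the paper's exact strings:

```python
def score_only_suffix(attribute: str) -> str:
    # G-Eval style: restrict the LLM to output a single numeric rating.
    return f"{attribute} (score only):"

def free_text_suffix(attribute: str, k: int = 5) -> str:
    # LLM-evaluation style: ask a question and leave the output form open.
    return f"How {attribute} is the sample? (on a scale of 1-{k}, with 1 being the lowest)"

# Hypothetical truncated task description for illustration.
base_prompt = "You will be given one summary written for a news article. [...]"
score_only_prompt = base_prompt + "\n\n" + score_only_suffix("coherence")
free_text_prompt = base_prompt + "\n\n" + free_text_suffix("coherence")
print(free_text_prompt)
```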
We call the latter prompts free text since they do not
| Sec. | CoT | Output | Naturalness (r / τ) | Coherence (r / τ) | Engagingness (r / τ) | Groundedness (r / τ) |
| --- | --- | --- | --- | --- | --- | --- |
| 3.1 | ✓ | Score only | 0.393 / 0.358 | 0.468 / 0.391 | 0.549 / 0.513 | 0.311 / 0.566 |
| 3.1 | ✗ | Score only | 0.408 / 0.331 | 0.443 / 0.404 | 0.557 / 0.535 | 0.358 / 0.582 |
| 3.2 | ✗ | Score only | 0.408 / 0.331 | 0.443 / 0.404 | 0.557 / 0.535 | 0.358 / 0.582 |
| 3.2 | ✗ | Free text | 0.464 / 0.476 | 0.524 / 0.426 | 0.611 / 0.557 | 0.563 / 0.666 |
| 3.2 | ✗ | Rate-explain | 0.524 / 0.470 | 0.477 / 0.416 | 0.567 / 0.524 | 0.580 / 0.693 |
| 3.2 | ✗ | Analyze-rate | 0.573 / 0.470 | 0.486 / 0.416 | 0.628 / 0.524 | 0.725 / 0.693 |
Table 2: The Pearson's $r$ and Kendall's $\tau$ correlation coefficients between LLMs' ratings and human ratings for Topical-Chat. All the results in this table are from ChatGPT. We **boldface** the Pearson's $r$ statistically significantly higher than auto CoT + score only. We **underline** the Pearson's $r$ comparable to auto CoT + score only.

restrict the output form.

The results for SummEval are shown in the rows labeled 3.2 in Table 1, and the results for Topical-Chat are shown in Table 2. We find that allowing ChatGPT to respond to the question freely yields Pearson's $r$ and Kendall's $\tau$ much higher than restricting the model to output a single numeric score for almost all attributes of both datasets. The higher Pearson's $r$ of free text compared with score only is statistically significant. The only exception is the relevance of SummEval, where free text yields slightly lower correlations.

Initially, we thought ChatGPT aligns better with human ratings in free text because it can generate natural language explanations to justify its rating, making the ratings more correlated with human ratings. However, we observe that the responses of ChatGPT when prompted with free text mostly contain a single numeric rating, which is the same behavior as when it is instructed with score only. This means that what the model is allowed to generate matters more than what it actually generates.

The above observations make us curious whether the correlations can be higher if ChatGPT is instructed to justify its ratings. Inspired by the chain-of-thought prompting of Wei et al. (2022b) and Kojima et al. (2022) (not the auto CoT in G-Eval), we ask ChatGPT to provide its reasoning and rationale for the ratings. Instead of asking ChatGPT to output only a score, we construct two types of prompts that ask ChatGPT to rationalize its decision.
The first type of prompt, called analyze-rate, asks ChatGPT to first analyze the sample with regard to the evaluated criteria and then give the rating. The second type of prompt, called rate-explain, asks ChatGPT to provide the numeric rating first and then explain why it gives such a rating. analyze-rate is more like the zero-shot chain-of-thought of Kojima et al. (2022). Refer to Appendix F.1.1 for the exact prompts we use.

The results of asking ChatGPT to explain/analyze how it rates the sample are shown in the last two rows of Table 1 and Appendix Table 2. We find that for all attributes of both datasets, rate-explain and analyze-rate both lead to correlations stronger than or at least comparable to those of asking ChatGPT to output only a numeric rating (score only). By asking ChatGPT to explain/analyze, we improve the best correlations that can be achieved by ChatGPT in Liu et al. (2023) (auto CoT + score only). Moreover, when asked to explain/analyze while rating, ChatGPT's correlation can be better than or comparable to the state-of-the-art correlation coefficients obtained from GPT-4 in Liu et al. (2023) for the coherence of SummEval and three attributes of Topical-Chat. We hypothesize that some attributes (e.g., coherence for SummEval) are harder for ChatGPT to rate, so the correlations for these attributes show a larger improvement when ChatGPT explains how it rates the sample.

In rate-explain, the output of ChatGPT contains a numeric rating followed by an explanation. As an auto-regressive language model, ChatGPT cannot condition on the explanation when generating the rating due to causal attention. If we stop the generation right after ChatGPT generates the rating, the output of rate-explain will only contain the rating, just like the output form in score only. Although the ratings in rate-explain do not depend on ChatGPT's rationales for the ratings, they still correlate better with human ratings than the ratings in score only.
We think this is because when ChatGPT knows it needs to explain the ratings, it tends to generate ratings that are easier for it to explain, and a rating that is more aligned with human ratings is easier for ChatGPT to explain.

# 3.3 Empirical Guidelines

Based on the analysis and results in this section, we provide the following guideline: Always ask ChatGPT to explain/analyze when rating. We do not find rate-explain to be significantly better (or worse) than analyze-rate, so it is hard to determine which one to use. A valid method is to sample some ratings using rate-explain, sample some using analyze-rate, and average the ratings from the two prompts as the final rating. Using auto CoT is optional since it does not always lead to higher correlations with human ratings. We also find that using auto CoT does not always improve the correlations when ChatGPT is asked to explain; this result is shown in Appendix Table 3.

# 3.4 Robustness of the Guidelines

LLMs are notorious for performance fluctuations caused by the input prompts, and the sequences generated by LLMs can differ when the decoding hyperparameters change. To verify the validity of our empirical guidelines, we conduct two sets of experiments: (1) we vary the temperature used in sampling the output from ChatGPT, and (2) we vary the prompt given to ChatGPT.

# 3.4.1 Varying the Temperature

We check whether our guideline holds if we change the temperature $T$ during generation. We compare the Pearson's $r$ of the method proposed in G-Eval (auto CoT + score only) with those of rate-explain and analyze-rate under different temperatures used when generating the output from ChatGPT. We follow Chiang and Lee (2023) and use two temperatures: 0.7 and 0.3.
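Since each final rating is an average over several responses sampled at a given temperature, the numeric score must first be parsed out of each (possibly free-text) response. A sketch under our own assumptions: the regex-based parser and the function names are ours, and the canned responses stand in for real API calls:

```python
import re
from statistics import mean
from typing import Optional

def parse_rating(response: str) -> Optional[float]:
    # Take the first number that appears in the response, if any.
    match = re.search(r"\d+(?:\.\d+)?", response)
    return float(match.group()) if match else None

def average_rating(responses: list) -> float:
    # Average the parseable ratings across the sampled responses.
    ratings = [x for x in map(parse_rating, responses) if x is not None]
    return mean(ratings)

# Stand-ins for responses sampled from the LLM at some temperature T.
sampled_responses = [
    "4. The summary is well structured because ...",
    "Rating: 5. Each sentence follows naturally from ...",
    "3",
]
print(average_rating(sampled_responses))  # 4.0 for these three responses
```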
The results are shown in Appendix Table 5 and summarized as follows. First, when fixing the sampling temperature, we find that rate-explain and analyze-rate always achieve a higher correlation than G-Eval. This supports our guideline that asking the LLM to explain/analyze outperforms the method proposed in G-Eval. Next, we observe that the correlation of G-Eval at $T = 0.3$ is much lower than that at $T = 1.0$, which shows that G-Eval is not robust to the sampling temperature. In contrast, the correlations obtained by rate-explain and analyze-rate do not change significantly across sampling temperatures in almost all cases. This shows that rate-explain and analyze-rate are more robust than G-Eval with respect to the sampling temperature.

# 3.4.2 Changing the Prompts

We check whether our guideline holds if we change the prompt given to ChatGPT. In this experiment, we change the prompts by prepending some instructions to the descriptions of the rating task. We try two prompts: (1) the HHH prompt and (2) the human annotator prompt. The HHH prompt is designed by Bai et al. (2022) to align the output of LLMs to be more harmless, honest, and helpful. The human annotator prompt is inspired by Chiang and Lee (2023), who use a similar prompt to make the LLM behave as a human annotator. These two prompts are inserted before the prompt we originally used in our paper. We use them to inject a persona into the LLM, inspired by Zeng et al. (2023), who show that the output of GPT-3 can differ when prompted with a different persona. The prompts are detailed in Appendix F.3.

The results are shown in Table 6 and summarized as follows: rate-explain and analyze-rate consistently outperform G-Eval when using the human annotator prompt and the HHH prompt. This indicates that our guidelines are robust to different prompts.
We also find that the correlations of G-Eval significantly drop when adding the human annotator prompt or the HHH prompt. On the other hand, the correlations of rate-explain and analyze-rate do not significantly decrease when adding the human annotator prompt or the HHH prompt. This shows that asking the LLM to explain is more robust to variations in the prompt.

# 4 Conclusion

We study how to better use ChatGPT as an automatic evaluation tool by scrutinizing LLM evaluation and G-Eval. We provide concrete guidelines and show that by following them, the correlations of several evaluated attributes given by ChatGPT, a publicly usable model, can be higher than or comparable to those obtained from GPT-4, a highly restricted and pricey model. We also show that evaluation based on our guidelines improves the best correlation that ChatGPT's ratings previously achieved. We believe our results and guidelines help future researchers better use LLMs for evaluation.

# Limitations

There are three main limitations of this paper.

1. We only use ChatGPT to conduct the experiments in this paper. We explain why we chose ChatGPT in Section 2.3. We believe that using ChatGPT is sufficient since we show that the correlations obtained with ChatGPT are already comparable to or better than the previous SoTA results obtained with GPT-4.
2. We only conduct the analysis on two tasks, while NLP has far more diverse tasks. We cannot guarantee that our observations generalize to all other datasets. We recommend that users verify the effectiveness of using an LLM to evaluate the tasks of interest.
3. We cannot fairly compare our results with Liu et al. (2023), the previous SoTA results, for multiple reasons, which we explain in Appendix A.

# Ethics Statement

Our paper follows the ACL Code of Ethics. We do not see a particular harmful outcome of our paper.
The code and datasets for reproducing our experiments can be found at https://github.com/d223302/A-Closer-Look-To-LLM-Evaluation/.

# Acknowledgements

We want to thank the reviewers for providing detailed feedback and actionable suggestions, which helped us strengthen our paper. We also want to thank the senior committee members for monitoring the reviewing process. Cheng-Han Chiang is supported by a Ph.D. scholarship program by Delta Electronics.

# References

Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Merouane Debbah, Etienne Goffinet, Daniel Heslow, Julien Launay, Quentin Malartic, Badreddine Noune, Baptiste Pannier, and Guilherme Penedo. 2023. Falcon-40B: An open large language model with state-of-the-art performance.
Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. 2021. A general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, Ben Mann, and Jared Kaplan. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback.
Ondrej Bojar, Yvette Graham, and Amir Kamran. 2017. Results of the WMT17 metrics shared task. In Proceedings of the Second Conference on Machine Translation, pages 489-513, Copenhagen, Denmark. Association for Computational Linguistics.
Cheng-Han Chiang and Hung-yi Lee. 2023. Can large language models be an alternative to human evaluations?
In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15607-15631, Toronto, Canada. Association for Computational Linguistics.
Alexander R. Fabbri, Wojciech Kryscinski, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. SummEval: Re-evaluating summarization evaluation. Transactions of the Association for Computational Linguistics, 9:391-409.
Karthik Gopalakrishnan, Behnam Hedayatnia, Qinlang Chen, Anna Gottardi, Sanjeev Kwatra, Anushree Venkatesh, Raefer Gabriel, and Dilek Hakkani-Tur. 2019. Topical-Chat: Towards knowledge-grounded open-domain conversations.
Yvette Graham and Timothy Baldwin. 2014. Testing for significance of increased correlation with human judgment. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 172-176, Doha, Qatar. Association for Computational Linguistics.
Yvette Graham, Timothy Baldwin, and Nitika Mathur. 2015. Accurate evaluation of segment-level machine translation metrics. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1183-1191, Denver, Colorado. Association for Computational Linguistics.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. Advances in Neural Information Processing Systems, 28.
Fan Huang, Haewoon Kwak, and Jisun An. 2023. Is ChatGPT better than human annotators? Potential and limitations of ChatGPT in explaining implicit hate speech. arXiv preprint arXiv:2302.07736.

Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In Advances in Neural Information Processing Systems.
Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023.
GPTEval: NLG evaluation using GPT-4 with better human alignment. arXiv preprint arXiv:2303.16634.
Matouš Macháček and Ondřej Bojar. 2014. Results of the WMT14 metrics shared task. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 293-301, Baltimore, Maryland, USA. Association for Computational Linguistics.
Shikib Mehri and Maxine Eskenazi. 2020. USR: An unsupervised and reference free evaluation metric for dialog generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 681-707.
OpenAI. 2022. ChatGPT: Optimizing language models for dialogue. Accessed on January 10, 2023.
OpenAI. 2023. GPT-4 technical report.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744.
Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M. Rush. 2022. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations.
Jiaan Wang, Yunlong Liang, Fandong Meng, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. 2023. Is ChatGPT a good NLG evaluator? A preliminary study. arXiv preprint arXiv:2303.04048.
Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022a. Finetuned language models are zero-shot learners. In International Conference on Learning Representations.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022b. Chain of thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems.

Andy Zeng, Maria Attarian, Brian Ichter, Krzysztof Marcin Choromanski, Adrian Wong, Stefan Welker, Federico Tombari, Aveek Purohit, Michael S. Ryoo, Vikas Sindhwani, Johnny Lee, Vincent Vanhoucke, and Pete Florence. 2023. Socratic models: Composing zero-shot multimodal reasoning with language. In The Eleventh International Conference on Learning Representations.
Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In International Conference on Learning Representations.
Ming Zhong, Yang Liu, Da Yin, Yuning Mao, Yizhu Jiao, Pengfei Liu, Chenguang Zhu, Heng Ji, and Jiawei Han. 2022. Towards a unified multidimensional evaluator for text generation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2023-2038, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

# A Why We Cannot Fairly Compare with the Results in Liu et al. (2023)

As a work highly related to G-Eval, we would like to compare our results with theirs. However, we encounter difficulties when comparing our results with those in Liu et al. (2023) for the following reasons.

- G-Eval proposes to use GPT-4 as the evaluation tool, while GPT-4 is currently a highly restricted model to which we only have limited access.
- G-Eval only releases the prompts for SummEval.
We need to construct the prompts for Topical-Chat based on the human evaluation instructions released by Mehri and Eskenazi (2020). It is possible that the prompts we use for Topical-Chat differ from those used in Liu et al. (2023), making their results incomparable to ours.
- The prompt for fluency in SummEval released by Liu et al. (2023) is problematic, so we need to construct a new prompt for fluency. Refer to Appendix F.1 for detailed explanations. This makes us unable to directly compare our results with the results in Liu et al. (2023).
- We cannot reproduce the numbers in the G-Eval paper even when using their official implementation and the GPT-4 responses they release. That is, all we do is calculate the correlation coefficients using the data and code released on the official GitHub of G-Eval, yet the numbers are quite different from the results in G-Eval's paper. Moreover, the fluency results they provide are obtained without auto CoT, while the results of the other three attributes for SummEval use auto CoT. That is why we use a question mark in the auto CoT field in Table 1.
- Table 2 in Liu et al. (2023) seems to be wrong: the caption (Spearman's $\rho$ and Kendall's $\tau$) does not match the headers ($r$ and $\rho$). This makes it hard for us to reliably compare their results with ours.

# B Supplementary Results for Topical-Chat

Table 2 presents the supplementary results for Topical-Chat referred to in the main content. We plan to move Table 2 to the main content using the additional page in the camera-ready version if the paper is accepted. See Appendix C for how Pearson's $r$ and Kendall's $\tau$ are calculated.

# B.1 Is Auto CoT Useful When ChatGPT Is Asked to Explain?

In Table 3, we show the results when we add the evaluation steps generated by auto CoT to the rate-explain prompt. We find that on groundedness, using auto CoT is worse.
However, for the other three attributes, using auto CoT is better. This again shows that auto CoT is not particularly useful.

# C Calculation of Correlation Coefficient

In this paper, we calculate Pearson's $r$ and Kendall's $\tau$ between human ratings and ChatGPT's ratings. Whether to use Spearman's rank correlation or Pearson's (linear) correlation to evaluate the alignment between human ratings and an automatic evaluation metric is a long-standing question, but there has been an increasing trend toward Pearson's correlation since 2014 (Macháček and Bojar, 2014; Graham and Baldwin, 2014; Zhang* et al., 2020). We use pearsonr and kendalltau from scipy.stats to calculate the correlation coefficients. For each attribute of each sample, the rating of ChatGPT is obtained from 20 sampled outputs; we set the decoding temperature to 1 and the top-$p$ in nucleus sampling to 1, following G-Eval (Liu et al., 2023).

Consider a dataset with $N$ source documents, where each source document has $M$ corresponding target documents, and we have human ratings for the $N \cdot M$ target documents on a specific attribute. Since each attribute of each target document is rated by more than one human rater, we average those ratings when calculating the correlation coefficient, so the $N \cdot M$ ratings are the average ratings from different raters. In the case of SummEval, we have $N = 100$ source documents and $M = 16$ summaries generated by 16 summarization models. There are two different methods for calculating correlation coefficients.

# C.0.1 Method 1: Dataset-Level Correlation Coefficient

In this method, we first obtain the ratings of the $N \cdot M$ target documents from ChatGPT. We then calculate the correlation coefficient between the $N \cdot M$ ChatGPT ratings and the $N \cdot M$ average human ratings. In this case, the correlation coefficient is calculated between two $N \cdot M$-dimensional vectors, meaning that it is computed across the entire dataset.
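Method 1 can be sketched as follows; the array shapes mirror the description above, but the sizes are small and the ratings are randomly generated toys rather than real data (for SummEval, $N = 100$ and $M = 16$):

```python
import numpy as np
from scipy.stats import pearsonr

N, M, RATERS = 4, 3, 3  # toy sizes: N sources, M targets each, 3 human raters
rng = np.random.default_rng(0)
llm_ratings = rng.integers(1, 6, size=(N, M)).astype(float)        # 5-point scale
human_ratings = rng.integers(1, 6, size=(N, M, RATERS)).astype(float)

# Average the human raters, then flatten both rating sets into
# N*M-dimensional vectors and correlate them across the whole dataset.
human_avg = human_ratings.mean(axis=-1)
r, _ = pearsonr(llm_ratings.ravel(), human_avg.ravel())
print(f"dataset-level Pearson's r over {N * M} pairs: {r:.3f}")
```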
# C.0.2 Method 2: Document-Level Correlation Coefficient

In this method, for each source document, we obtain the ratings of its $M$ target documents using ChatGPT. Next, we calculate the correlation coefficient between these $M$ ChatGPT ratings and the corresponding $M$ average human ratings. After iterating this process over all $N$ source documents, we obtain $N$ correlation coefficients, which we average to form the final correlation coefficient. In this case, the correlation coefficient is calculated at the document level and averaged over the whole dataset.

# C.1 How We Calculate the Correlation Coefficient

In Tables 1 and 2 of this paper, we use Method 1 (Subsection C.0.1) to calculate Pearson's correlation, following the recommendation in Graham et al. (2015). Calculating the correlation coefficient at the dataset level is also used in LLM evaluation (Chiang and Lee, 2023). Calculating a single correlation coefficient at the dataset level allows us to use the Williams test to check whether two Pearson's $r$ values are significantly different.

For Kendall's $\tau$ in Tables 1 and 2, we follow most prior works (Zhong et al., 2022; Liu et al., 2023) to
| Sec. | CoT | Output | Naturalness (r / τ) | Coherence (r / τ) | Engagingness (r / τ) | Groundedness (r / τ) |
| --- | --- | --- | --- | --- | --- | --- |
| 3.2 | ✗ | Score only | 0.393 / 0.358 | 0.468 / 0.391 | 0.549 / 0.513 | 0.311 / 0.566 |
| | ✓ | Rate-explain | 0.554 / 0.478 | 0.512 / 0.429 | 0.613 / 0.566 | 0.555 / 0.664 |
| | ✗ | Rate-explain | 0.524 / 0.470 | 0.477 / 0.416 | 0.567 / 0.524 | 0.580 / 0.693 |
Table 3: The Pearson's $r$ and Kendall's $\tau$ correlation coefficients between LLMs' ratings and human ratings for Topical-Chat. All the results in this table are from ChatGPT. We **boldface** the Pearson's $r$ statistically significantly higher than auto CoT + score only. We **underline** the Pearson's $r$ comparable to auto CoT + score only.

calculate Kendall's $\tau$ using Method 2 (document level, Section C.0.2) to understand whether ChatGPT can differentiate the quality of different system outputs for the same source document.

In fact, we find that the Pearson's $r$ calculated by Method 1 and Method 2 are highly correlated. In Table 4, we show the results for Topical-Chat when we use Method 2 to calculate Pearson's $r$; Kendall's $\tau$ is still calculated by Method 2. Comparing the Pearson's $r$ in Table 2 and Table 4, one can easily see that when a method has a significantly higher Pearson's $r$ in Table 2, it also has a significantly higher Pearson's $r$ in Table 4. We present the $r$ calculated by Method 1 in the main tables because testing statistical significance makes more sense when the correlation coefficient is calculated at the dataset level (Graham et al., 2015).

# D Results of Changing the Temperature and Prompts

We show the results of varying the temperature used to sample the ChatGPT output in Table 5. In the experiments in this section, we only sample $N = 5$ outputs from ChatGPT since we find that G-Eval and our proposed guidelines are quite robust to the number of samples when $N \geq 5$.

# E Datasets

# E.1 SummEval

SummEval (Fabbri et al., 2021) is a dataset for the meta-evaluation of summarization. It contains 100 source documents, each with 16 summaries obtained from different summarization models. Each of the 1600 summaries is rated by three workers recruited on Amazon Mechanical Turk and two experts in summarization.
Each summary in SummEval is rated by humans based on the coherence, consistency, and fluency of the summary, and the relevance between the summary and the source document. Each attribute is rated on a 5-point Likert scale.

We download the source documents, summaries, and human ratings from the GitHub repository of G-Eval (https://github.com/nlpyang/geval/tree/8f54105/data). SummEval was released under the MIT License, and our usage for research does not violate the dataset's initial intention.

# E.2 Topical-Chat

Topical-Chat (Gopalakrishnan et al., 2019) is a knowledge-grounded open-domain dialogue dataset. The dataset consists of a dialogue context (history), an interesting fact related to the topic of the conversation, and a response. Mehri and Eskenazi (2020) release high-quality human annotations on the quality of the responses. They construct the dataset as follows: they first sample 60 dialogue contexts from Topical-Chat, and for each dialogue context and its corresponding fun fact, they use a transformer model to generate four responses using four decoding methods. Each dialogue context has two additional responses: a human-written response and the ground-truth response. Thus, there are a total of 360 dialogue-response pairs. Those pairs are evaluated based on six attributes, and we follow Zhong et al. (2022) and Liu et al. (2023) to use only four attributes: naturalness, coherence, engagingness, and groundedness (whether the response is grounded on the provided knowledge). We obtain the human ratings of Topical-Chat from the GitHub repository of UniEval (Zhong et al., 2022): https://github.com/maszhongming/UniEval/blob/main/reproduce/data/dialogue/topical_chatting.json.

# F Prompts

We list the prompts we use in this section. In the main content of the paper and in the following parts, we use different highlight colors to represent different parts of the prompt. A prompt is composed
| Sec. | CoT | Output | Naturalness (r / τ) | Coherence (r / τ) | Engagingness (r / τ) | Groundedness (r / τ) |
| --- | --- | --- | --- | --- | --- | --- |
| GPT-4† | | Score only | 0.549 / – | 0.594 / – | 0.627 / – | 0.531 / – |
| 3.1 | ✓ | Score only | 0.445 / 0.358 | 0.498 / 0.391 | 0.579 / 0.513 | 0.685 / 0.566 |
| 3.1 | ✗ | Score only | 0.431 / 0.331 | 0.507 / 0.404 | 0.631 / 0.535 | 0.666 / 0.582 |
| 3.2 | ✗ | Score only | 0.431 / 0.331 | 0.507 / 0.404 | 0.631 / 0.535 | 0.666 / 0.582 |
| 3.2 | ✗ | Free text | 0.572 / 0.476 | 0.523 / 0.426 | 0.676 / 0.557 | 0.747 / 0.666 |
| 3.2 | ✗ | Rate-explain | 0.621 / 0.512 | 0.472 / 0.425 | 0.610 / 0.509 | 0.771 / 0.663 |
| 3.2 | ✗ | Analyze-rate | 0.573 / 0.470 | 0.486 / 0.416 | 0.628 / 0.524 | 0.725 / 0.693 |
Table 4: The Pearson's $r$ and Kendall's $\tau$ correlation coefficients between LLMs' ratings and human ratings for Topical-Chat. Note that in this table, both Pearson's $r$ and Kendall's $\tau$ are calculated by Method 2 in Appendix C.0.2. All the results in this table, except the first row, are from ChatGPT. The results of GPT-4 are from Liu et al. (2023) but should not be compared with our results since the prompts they use may differ from the prompts we use. Still, we can see that for naturalness, engagingness, and groundedness, the results of rate-explain and analyze-rate are better than or comparable to those of GPT-4.

of four parts: (1) the descriptions of the rating task, (2) the definition and rating criteria of the attribute to be rated, (3) the sample to be rated, and (4) a sentence used to prompt the LLM to give the rating.

The prompts for different attributes of the same dataset share the same descriptions of the rating task. Different attributes use different definitions and rating criteria. In G-Eval, the prompts also include the evaluation steps generated by auto CoT.

# F.1 Prompts for SummEval

The descriptions of the rating task, the definitions and rating criteria, and the evaluation steps for coherence, consistency, and relevance in SummEval are from the prompts released by G-Eval in their GitHub repository (https://github.com/nlpyang/geval/tree/8f54105/prompts/summeval). While G-Eval also releases the prompt they use for fluency, we find something highly problematic in it. The prompt for fluency asks the LLM to rate fluency on a scale of 1 to 3 (https://github.com/nlpyang/geval/blob/8f54105061e00377fbbb909153892d5bfb5b3623a/prompts/summeval/fluDetailed.txt), while the original rating scale in SummEval is 1 to 5. We also find that the rating criteria used in G-Eval for fluency differ substantially from the rating criteria of fluency used for human evaluation in SummEval.
Through our experiment, we find that this misalignment of evaluation criteria and evaluation scale significantly decreases Pearson's $r$ with human ratings when using analyze-rate to prompt ChatGPT. This is likely because ChatGPT tends to stick to the rating criteria when prompted with analyze-rate, and when the rating criteria differ from those used to instruct the human raters, the scores generated by ChatGPT deviate more from the human ratings. This highlights the importance of giving the LLM the same instructions as those used in the human evaluation, as emphasized in Chiang and Lee (2023).

First, we show an example prompt for coherence. This prompt corresponds to the score only + auto CoT in Table 1.

# Coherence

You will be given one summary written for a news article.

Your task is to rate the summary on one metric.

Please make sure you read and understand these instructions carefully. Please keep this document open while reviewing, and refer to it as needed.

Evaluation Criteria:

Coherence (1-5) - the collective quality of all sentences. We align this dimension with the DUC quality question of structure and coherence whereby "the summary should be well-structured and well-organized. The summary should not just be a heap of related information, but should build from sentence to sentence to a coherent body of information about a topic."

Evaluation Steps:

1. Read the news article carefully and
| Auto-CoT | Output | Coherence | Consistency | Fluency | Relevance |
| --- | --- | --- | --- | --- | --- |
| ✓ | Score only | 0.356 | 0.290 | 0.261 | 0.263 |
| ✗ | Rate-explain | 0.548 | 0.482 | 0.423 | 0.487 |
| ✗ | Analyze-rate | 0.589 | 0.439 | 0.438 | 0.319 |

(a) Temperature $T = 0.3$
| Auto-CoT | Output | Coherence | Consistency | Fluency | Relevance |
| --- | --- | --- | --- | --- | --- |
| ✓ | Score only | 0.394 | 0.256 | 0.288 | 0.334 |
| ✗ | Rate-explain | 0.526 | 0.468 | 0.414 | 0.485 |
| ✗ | Analyze-rate | 0.605 | 0.448 | 0.441 | 0.392 |

(b) Temperature $T = 0.7$
| Auto-CoT | Output | Coherence | Consistency | Fluency | Relevance |
| --- | --- | --- | --- | --- | --- |
| ✓ | Score only | 0.450 | 0.370 | 0.319 | 0.403 |
| ✗ | Rate-explain | 0.557 | 0.473 | 0.452 | 0.509 |
| ✗ | Analyze-rate | 0.635 | 0.534 | 0.479 | 0.444 |

(c) Temperature $T = 1.0$ (the results in Table 1)

Table 5: Comparing G-Eval (Auto-CoT + score only) with rate-explain and analyze-rate at different temperatures. We boldface Pearson's $r$ statistically significantly higher than the baseline (the first row in each subtable).
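The temperatures compared in Table 5 control how spread out the $N = 20$ sampled ratings are before they are averaged into one final score. A minimal sketch of that averaging step, where `mock_llm` is a stand-in for a single temperature-$T$ ChatGPT call (the real rating distribution is unknown here):

```python
import random
import statistics

def final_rating(sample_once, n=20):
    # Average n independently sampled ratings into one final score,
    # following the N = 20 sampling scheme described in the paper.
    return statistics.mean(sample_once() for _ in range(n))

# Hypothetical stand-in for one API call returning an integer 1-5 rating;
# a higher sampling temperature would widen this distribution.
random.seed(0)
mock_llm = lambda: random.choice([3, 4, 4, 5])
print(final_rating(mock_llm))
```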
| Auto-CoT | Output | Coherence | Consistency | Fluency | Relevance |
| --- | --- | --- | --- | --- | --- |
| ✓ | Score only | 0.308 | 0.248 | 0.265 | 0.345 |
| ✗ | Rate-explain | **0.526** | 0.468 | 0.414 | 0.485 |
| ✗ | Analyze-rate | 0.589 | 0.524 | 0.459 | 0.416 |

(a) Results when prompted with the human evaluator prompts.
| Auto-CoT | Output | Coherence | Consistency | Fluency | Relevance |
| --- | --- | --- | --- | --- | --- |
| ✓ | Score only | 0.325 | 0.206 | 0.281 | 0.301 |
| ✗ | Rate-explain | 0.596 | 0.465 | 0.403 | 0.478 |
| ✗ | Analyze-rate | 0.596 | 0.493 | 0.475 | 0.406 |

(b) Results when prompted with the HHH prompts.

Table 6: Comparing G-Eval (Auto-CoT + score only) with rate-explain and analyze-rate when using different prompts. We boldface Pearson's $r$ statistically significantly higher than the baseline (the first row in each subtable).

identify the main topic and key points.
2. Read the summary and compare it to the news article. Check if the summary covers the main topic and key points of the news article, and if it presents them in a clear and logical order.
3. Assign a score for coherence on a scale of 1 to 5, where 1 is the lowest and 5 is the highest, based on the Evaluation Criteria.

```txt
Example:
Source Text: {{Document}}
Summary: {{Summary}}
Evaluation Form (scores ONLY):
- Coherence:
```

# F.1.1 Different Output Prompts

For different output prompts, which are the ablations in Section 3.2 and the last block in Tables 1 and 2, we only change the yellow part (the last part) of the example prompt above. There are four output prompts used in Section 3.2: score only, free text, rate-explain, and analyze-rate. The prompt for free text is attribute-dependent and is listed in Section F.1.2. The other output prompts are listed as follows:

# Score only

```txt
Evaluation Form (scores ONLY):
- {Attribute}:
```

# Rate-explain

```txt
Evaluation Form (Answer by starting with "Rating:" and then give the explanation of the rating on the next line by "Rationale:"):
- {Attribute}:
```

# Analyze-rate

```txt
Evaluation Form (Answer by starting with "Analysis:" to analyze the given example regarding the evaluation criteria as concise as possible, and then give the numeric rating on the next line by "Rating:"):
- {Attribute}:
```

# F.1.2 Attribute-Dependent Prompts

The definitions and rating criteria of the attributes to be rated, the evaluation steps generated by auto CoT, and the output prompts for free text are attribute-dependent, and we list them as follows.
We use different colors to denote different parts in the prompt.

Note that the following prompts are not the complete prompts used as the model input; they need to be used with the descriptions of the rating task and the sample to be rated.

# Coherence

# Evaluation Criteria:

Coherence (1-5) - the collective quality of all sentences. We align this dimension with the DUC quality question of structure and coherence whereby "the summary should be well-structured and well-organized. The summary should not just be a heap of related information, but should build from sentence to sentence to a coherent body of information about a topic."

# Evaluation Steps:

1. Read the news article carefully and identify the main topic and key points.
2. Read the summary and compare it to the news article. Check if the summary covers the main topic and key points of the news article, and if it presents them in a clear and logical order.
3. Assign a score for coherence on a scale of 1 to 5, where 1 is the lowest and 5 is the highest, based on the Evaluation Criteria.

# Question:

How coherent is the summary? That is, how well do the sentences in the summary fit together? (On a scale of 1-5, with 1 being the lowest)

# Consistency

# Evaluation Criteria:

Consistency (1-5) - the factual alignment between the summary and the summarized source. A factually consistent summary contains only statements that are entailed by the source document. Annotators were also asked to penalize summaries that contained hallucinated facts.

# Evaluation Steps:

1. Read the news article carefully and identify the main facts and details it presents.
2. Read the summary and compare it to the article. Check if the summary contains any factual errors that are not supported by the article.
3. Assign a score for consistency based on the Evaluation Criteria.

# Question:

How consistent is the summary with the source document in terms of the factual alignment?
(On a scale of 1-5, with 1 being the lowest)

# Fluency

# Evaluation Criteria:

Fluency (1-5): This rating measures the quality of individual sentences, are they well-written and grammatically correct. Consider the quality of individual sentences.

# Evaluation Steps:

1. Read the given summary.
2. Evaluate the fluency of the summary on a scale of 1-5 based on the criteria provided.
3. Provide the rating.

# Question:

Based on the evaluation criteria, how fluent is the summary? (On a scale of 1-5, with 1 being the lowest)

# Relevance

# Evaluation Criteria:

Relevance (1-5) - selection of important content from the source. The summary should include only important information from the source document. Annotators were instructed to penalize summaries which contained redundancies and excess information.

# Evaluation Steps:

1. Read the summary and the source document carefully.
2. Compare the summary to the source document and identify the main points of the article.
3. Assess how well the summary covers the main points of the article, and how much irrelevant or redundant information it contains.
4. Assign a relevance score from 1 to 5.

# Question:

On a scale of 1-5, with 1 being the lowest, is the summary relevant to the source document, and does the summary only contain the important information of the source document?

# F.2 Prompts for Topical-Chat

First, we show an example prompt for naturalness. This prompt corresponds to the score only + auto CoT in Table 2.

# Naturalness

You will be given a conversation between two individuals. You will then be given one potential response for the next turn in the conversation. The response concerns an interesting fact, which will be provided as well.

Your task is to rate the responses on one metric.

Please make sure you read and understand these instructions carefully. Please keep this document open while reviewing, and refer to it as needed.
# Evaluation Criteria:

Naturalness (1-3) Is the response naturally written?

- A score of 1 (bad) means that the response is unnatural.
- A score of 2 (ok) means the response is strange, but not entirely unnatural.
- A score of 3 (good) means that the response is natural.

# Evaluation Steps:

1. Read the conversation between the two individuals.
2. Read the potential response for the next turn in the conversation.
3. Evaluate the response based on its naturalness, using the provided criteria.
4. Assign a rating score of 1, 2, or 3 based on the evaluation.

# Example:

Conversation History:

{{Document}}

Corresponding Fact:

{{Fact}}

Response:

{{Response}}

Evaluation Form (scores ONLY):

- Naturalness:

# F.2.1 Different Output Prompts

For Topical-Chat, we also conduct ablations on different output prompts. The output prompts for score only, rate-explain, and analyze-rate are the same as those listed in Section F.1.1, so we do not repeat them here to save space. The exact prompts we use can be found in the supplementary data of this paper.

# F.2.2 Attribute-Dependent Prompts

The definitions and rating criteria of the attributes to be rated, the evaluation steps generated by auto CoT, and the output prompts for free text are attribute-dependent, and we list them as follows. Again, the following prompts are not the complete prompts used as the model input; they need to be used with the descriptions of the rating task and the sample to be rated.

# Naturalness

# Evaluation Criteria:

Naturalness (1-3) Is the response naturally written?

- A score of 1 (bad) means that the response is unnatural.
- A score of 2 (ok) means the response is strange, but not entirely unnatural.
- A score of 3 (good) means that the response is natural.

# Evaluation Steps:

1. Read the conversation between the two individuals.
2. Read the potential response for the next turn in the conversation.
3.
Evaluate the response based on its naturalness, using the provided criteria.
4. Assign a rating score of 1, 2, or 3 based on the evaluation.

# Question:

How natural is the response? (On a scale of 1-3, with 1 being the lowest)

# Coherence

# Evaluation Criteria:

Coherence (1-3) Does the response serve as a valid continuation of the conversation history?

- A score of 1 (no) means that the response drastically changes topic or ignores the conversation history.
- A score of 2 (somewhat) means the response refers to the conversation history in a limited capacity (e.g., in a generic way) and shifts the conversation topic.
- A score of 3 (yes) means the response is on topic and strongly acknowledges the conversation history.

# Evaluation Steps:

1. Read the conversation history.
2. Read the potential response.
3. Evaluate the coherence of the response based on the conversation history.
4. Assign a score of 1, 2, or 3 for coherence.

# Question:

Does the response serve as a valid continuation of the conversation history? (On a scale of 1-3, with 1 meaning the response is invalid and 3 meaning the response is coherent)

# Engagingness

# Evaluation Criteria:

Engagingness (1-3) Is the response dull/interesting?

- A score of 1 (dull) means that the response is generic and dull.
- A score of 2 (somewhat interesting) means the response is somewhat interesting and could engage you in the conversation (e.g., an opinion, thought).
- A score of 3 (interesting) means the response is very interesting or presents an interesting fact.

# Evaluation Steps:

1. Read the conversation, the corresponding fact, and the response carefully.
2. Rate the response on a scale of 1-3 for engagingness, according to the criteria above.

# Question:

Is the response interesting and engaging?
(On a scale of 1-3, with 1 meaning dull and 3 meaning interesting)

# Groundedness

# Evaluation Criteria:

Groundedness (0-1) Given the fact that this response is conditioned on, determine whether this response uses that fact.

- A score of 0 (no) means the response does not mention or refer to the fact at all.
- A score of 1 (yes) means the response uses the fact well.

# Evaluation Steps:

1. Read the conversation between the two individuals.
2. Identify the fact that is provided for the potential response.
3. Read the potential response.
4. Determine if the potential response uses or mentions the fact.
5. Assign a score of 0 or 1 for groundedness based on whether the response uses the fact.

# Question:

Given the fact that this response is conditioned on, does the response use the fact? (On a scale of 0-1, with 0 meaning no and 1 meaning yes)

# F.3 Prompts for Section 3.4.2

# HHH prompts

You are an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed.

# Human annotator prompts

Assume that you are a professional and careful human evaluator. You are recruited and paid to conduct the following task. You need to strictly follow the task instruction and ensure that you are doing the job with high quality.
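Unlike score only, the rate-explain and analyze-rate output prompts produce free-form replies from which the numeric score must be extracted before averaging. A minimal sketch of that extraction step; the reply text and the `parse_rating` helper are illustrative, not from the paper:

```python
import re

def parse_rating(reply):
    # Pull the numeric score from the "Rating:" line that the rate-explain
    # and analyze-rate output prompts ask the model to produce.
    match = re.search(r"Rating:\s*(\d+(?:\.\d+)?)", reply)
    return float(match.group(1)) if match else None

# Hypothetical analyze-rate style reply.
reply = "Analysis: The summary is well organized and stays on topic.\nRating: 4"
print(parse_rating(reply))  # prints 4.0
```

Replies that contain no parsable rating would need to be re-sampled or discarded before the $N = 20$ ratings are averaged.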
\ No newline at end of file diff --git a/2023/A Closer Look into Using Large Language Models for Automatic Evaluation/images.zip b/2023/A Closer Look into Using Large Language Models for Automatic Evaluation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..83ea39b53719a3c37304561d64b56031dc80f3e1 --- /dev/null +++ b/2023/A Closer Look into Using Large Language Models for Automatic Evaluation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:484320df7f6aa802a9a8039b1f26fb43ff98e3fe269f6442274d8941a0305f52 +size 399386 diff --git a/2023/A Closer Look into Using Large Language Models for Automatic Evaluation/layout.json b/2023/A Closer Look into Using Large Language Models for Automatic Evaluation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..ab5a805c8f59b9575b798d45e6edc7166eb5be72 --- /dev/null +++ b/2023/A Closer Look into Using Large Language Models for Automatic Evaluation/layout.json @@ -0,0 +1,12809 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 72, + 75, + 521, + 94 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 72, + 75, + 521, + 94 + ], + "spans": [ + { + "bbox": [ + 72, + 75, + 521, + 94 + ], + "type": "text", + "content": "A Closer Look into Automatic Evaluation Using Large Language Models" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 146, + 116, + 249, + 130 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 146, + 116, + 249, + 130 + ], + "spans": [ + { + "bbox": [ + 146, + 116, + 249, + 130 + ], + "type": "text", + "content": "Cheng-Han Chiang" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 130, + 130, + 266, + 143 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 130, + 266, + 143 + ], + "spans": [ + { + "bbox": [ + 130, + 130, + 266, + 143 + ], + "type": "text", + "content": "National Taiwan University," + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 180, 
+ 144, + 217, + 156 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 180, + 144, + 217, + 156 + ], + "spans": [ + { + "bbox": [ + 180, + 144, + 217, + 156 + ], + "type": "text", + "content": "Taiwan" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 144, + 158, + 254, + 171 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 144, + 158, + 254, + 171 + ], + "spans": [ + { + "bbox": [ + 144, + 158, + 254, + 171 + ], + "type": "text", + "content": "dcml0714@gmail.com" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 363, + 116, + 429, + 129 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 363, + 116, + 429, + 129 + ], + "spans": [ + { + "bbox": [ + 363, + 116, + 429, + 129 + ], + "type": "text", + "content": "Hung-yi Lee" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 329, + 130, + 464, + 143 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 329, + 130, + 464, + 143 + ], + "spans": [ + { + "bbox": [ + 329, + 130, + 464, + 143 + ], + "type": "text", + "content": "National Taiwan University," + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 378, + 144, + 414, + 156 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 378, + 144, + 414, + 156 + ], + "spans": [ + { + "bbox": [ + 378, + 144, + 414, + 156 + ], + "type": "text", + "content": "Taiwan" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 335, + 158, + 457, + 171 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 335, + 158, + 457, + 171 + ], + "spans": [ + { + "bbox": [ + 335, + 158, + 457, + 171 + ], + "type": "text", + "content": "hungyilee@ntu.edu.tw" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 155, + 212, + 202, + 226 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 155, + 212, + 202, + 226 + ], + "spans": [ + { + "bbox": [ + 155, + 212, + 202, + 226 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 84, + 237, 
+ 274, + 476 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 237, + 274, + 476 + ], + "spans": [ + { + "bbox": [ + 84, + 237, + 274, + 476 + ], + "type": "text", + "content": "Using large language models (LLMs) to evaluate text quality has recently gained popularity. Some prior works explore the idea of using LLMs for evaluation, while they differ in some details of the evaluation process. In this paper, we analyze LLM evaluation (Chiang and Lee, 2023)1 and G-Eval (Liu et al., 2023), and we discuss how those details in the evaluation process change how well the ratings given by LLMs correlate with human ratings. We find that the auto Chain-of-Thought (CoT) used in G-Eval does not always make G-Eval more aligned with human ratings. We also show that forcing the LLM to output only a numeric rating, as in G-Eval, is suboptimal. Last, we reveal that asking the LLM to explain its own ratings consistently improves the correlation between the ChatGPT and human ratings and pushes state-of-the-art (SoTA) correlations on two meta-evaluation datasets." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 68, + 489, + 154, + 502 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 489, + 154, + 502 + ], + "spans": [ + { + "bbox": [ + 68, + 489, + 154, + 502 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 67, + 511, + 291, + 674 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 511, + 291, + 674 + ], + "spans": [ + { + "bbox": [ + 67, + 511, + 291, + 674 + ], + "type": "text", + "content": "Large language models (LLMs) trained with task instructions and human feedback can follow natural language instructions to complete a task (Askell et al., 2021; Sanh et al., 2022; Wei et al., 2022a; Ouyang et al., 2022). 
Recently, the instruction-following ability of LLMs makes them promising candidates for automatic evaluation (Chiang and Lee, 2023; Liu et al., 2023; Wang et al., 2023; Huang et al., 2023). By simply instructing the LLMs on how to rate and giving the LLMs the sample to be rated, the LLM can follow the instructions and provide a rating of the sample." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 67, + 675, + 290, + 743 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 675, + 290, + 743 + ], + "spans": [ + { + "bbox": [ + 67, + 675, + 290, + 743 + ], + "type": "text", + "content": "Chiang and Lee (2023) propose LLM evaluation and Liu et al. (2023) propose " + }, + { + "bbox": [ + 67, + 675, + 290, + 743 + ], + "type": "inline_equation", + "content": "G" + }, + { + "bbox": [ + 67, + 675, + 290, + 743 + ], + "type": "text", + "content": "-Eval; both of which use LLMs to evaluate samples by giving the LLM instructions, and they both show that some LLMs can yield evaluation results that are aligned to the" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 212, + 526, + 332 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 212, + 526, + 332 + ], + "spans": [ + { + "bbox": [ + 302, + 212, + 526, + 332 + ], + "type": "text", + "content": "evaluation results of humans. Still, LLM evaluation and G-Eval differ in some specific design choices in the evaluation procedure. Since Chiang and Lee (2023) and Liu et al. (2023) use distinct tasks, it is hard to know how the differences between LLM evaluation and G-Eval affect the evaluation results. This makes practitioners in the future hard to determine how to conduct an automatic evaluation using LLMs." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 334, + 527, + 483 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 334, + 527, + 483 + ], + "spans": [ + { + "bbox": [ + 302, + 334, + 527, + 483 + ], + "type": "text", + "content": "Given that LLM evaluation and G-Eval have already received significant attention shortly after publication, these methods will likely revolutionize the evaluation in NLP. Therefore, conducting a detailed analysis of these approaches is essential and timely. This paper aims to identify the crucial components in LLM evaluation and G-Eval that contribute to stronger correlations with human ratings. Based on our analysis, we provide guidelines on how to use LLMs for automatic evaluations. We have the following findings:" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 490, + 526, + 688 + ], + "type": "list", + "angle": 0, + "index": 20, + "blocks": [ + { + "bbox": [ + 316, + 490, + 525, + 530 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 490, + 525, + 530 + ], + "spans": [ + { + "bbox": [ + 316, + 490, + 525, + 530 + ], + "type": "text", + "content": "- Auto-CoT (proposed by G-Eval) does not always improve the correlation between LLM and human ratings." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 538, + 526, + 565 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 538, + 526, + 565 + ], + "spans": [ + { + "bbox": [ + 316, + 538, + 526, + 565 + ], + "type": "text", + "content": "- Making the LLMs output only a single numeric rating is suboptimal." 
+ } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 573, + 526, + 613 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 573, + 526, + 613 + ], + "spans": [ + { + "bbox": [ + 316, + 573, + 526, + 613 + ], + "type": "text", + "content": "- Asking the LLMs to rationalize their own ratings significantly improves the correlation between the LLMs' ratings and human ratings." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 622, + 526, + 688 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 622, + 526, + 688 + ], + "spans": [ + { + "bbox": [ + 316, + 622, + 526, + 688 + ], + "type": "text", + "content": "- On two datasets, we improve the best correlation that ChatGPT's rating can achieve, and some correlations even exceed prior SoTA correlations obtained using the ratings of GPT-4 in Liu et al. (2023)." + } + ] + } + ], + "index": 19 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 302, + 698, + 417, + 713 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 698, + 417, + 713 + ], + "spans": [ + { + "bbox": [ + 302, + 698, + 417, + 713 + ], + "type": "text", + "content": "2 Experiment Setup" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 302, + 719, + 526, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 719, + 526, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 719, + 526, + 772 + ], + "type": "text", + "content": "Our paper studies what components in LLM evaluation and G-Eval make the ratings generated by LLM correlate with human ratings better, and we aim to improve the correlation." 
+ } + ] + } + ], + "index": 22 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 67, + 751, + 290, + 773 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 751, + 290, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 751, + 290, + 773 + ], + "type": "text", + "content": "1In this paper, the term LLM evaluation is used to refer to the specific method proposed by Chiang and Lee (2023)." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "text", + "content": "8928" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 129, + 795, + 464, + 806 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 129, + 795, + 464, + 806 + ], + "spans": [ + { + "bbox": [ + 129, + 795, + 464, + 806 + ], + "type": "text", + "content": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 8928-8942" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 165, + 806, + 428, + 818 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 165, + 806, + 428, + 818 + ], + "spans": [ + { + "bbox": [ + 165, + 806, + 428, + 818 + ], + "type": "text", + "content": "December 6-10, 2023 ©2023 Association for Computational Linguistics" + } + ] + } + ], + "index": 26 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 286, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 286, + 83 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 286, + 83 + ], + "type": "text", + "content": "2.1 LLM as an Automatic Evaluation Metric" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 89, + 290, + 277 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 89, + 290, + 277 + ], + "spans": [ + { + "bbox": [ + 67, 
+ 89, + 290, + 277 + ], + "type": "text", + "content": "Both LLM evaluation (Chiang and Lee, 2023) and G-Eval (Liu et al., 2023) propose to ask LLMs to rate a sample regarding some attributes of the sample (e.g., fluency, grammaticality) using a " + }, + { + "bbox": [ + 67, + 89, + 290, + 277 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 67, + 89, + 290, + 277 + ], + "type": "text", + "content": "-point Likert scale. They give the LLMs (1) descriptions of the rating task, (2) the definition and rating criteria of the attribute to be rated, (3) the sample to be rated, and (4) a sentence that prompts the LLM to give the rating2. The LLM outputs a sequence containing the rating. Unless specified, we follow prior works to sample " + }, + { + "bbox": [ + 67, + 89, + 290, + 277 + ], + "type": "inline_equation", + "content": "N = 20" + }, + { + "bbox": [ + 67, + 89, + 290, + 277 + ], + "type": "text", + "content": " sequences from the LLM and average those ratings as the final rating. While the two methods share the core concept, they differ in two details." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 279, + 291, + 481 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 279, + 291, + 481 + ], + "spans": [ + { + "bbox": [ + 69, + 279, + 291, + 481 + ], + "type": "text", + "content": "Difference 1: Auto Chain-of-Thought The task descriptions and rating criteria in LLM evaluation and G-Eval are all human-written. However, Liu et al. (2023) argue that some evaluated attributes require more than simple definition and evaluation criteria, so they use LLMs to determine the evaluation steps. Specifically, they concatenate the task description, definition, and criteria of the attributes and append a line \"Evaluation steps:\" to prompt the LLM. The LLM then generates an ordered list containing the step-by-step evaluation steps. 
They dub this process auto chain-of-thought " + }, + { + "bbox": [ + 69, + 279, + 291, + 481 + ], + "type": "inline_equation", + "content": "(CoT)" + }, + { + "bbox": [ + 69, + 279, + 291, + 481 + ], + "type": "text", + "content": ". G-Eval uses human-written task instructions and auto-CoT-generated evaluation steps to prompt the LLM to rate the sample." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 481, + 291, + 617 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 481, + 291, + 617 + ], + "spans": [ + { + "bbox": [ + 67, + 481, + 291, + 617 + ], + "type": "text", + "content": "Difference 2: Prompts for Output At the end of the input to LLMs, G-Eval uses the prompt {\"{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{ score~only}:\" to restrict the LLM to output only the numeric rating; the placeholder will be replaced by the evaluated attributes. In contrast, LLM evaluation uses the following question to ask the LLM to assign the rating: \"How {\"{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{ is the sample? (on a scale of 1-k, with 1 being the lowest)\". The LLM's output form is not restricted." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 626, + 277, + 639 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 626, + 277, + 639 + ], + "spans": [ + { + "bbox": [ + 67, + 626, + 277, + 639 + ], + "type": "text", + "content": "2.2 Meta-Evaluating an Evaluation Metric" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 643, + 290, + 724 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 643, + 290, + 724 + ], + "spans": [ + { + "bbox": [ + 67, + 643, + 290, + 724 + ], + "type": "text", + "content": "Given a sample, an evaluation metric assigns it a rating. To evaluate an evaluation metric, we need a dataset containing human ratings for samples in the dataset. 
We calculate the correlation coefficient between the ratings obtained by the evaluation metric and the human ratings. A higher correlation" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 71, + 526, + 179 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 179 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 179 + ], + "type": "text", + "content": "indicates the evaluation metric better aligns with human ratings. We adopt Pearson " + }, + { + "bbox": [ + 302, + 71, + 526, + 179 + ], + "type": "inline_equation", + "content": "r" + }, + { + "bbox": [ + 302, + 71, + 526, + 179 + ], + "type": "text", + "content": " and Kendall's " + }, + { + "bbox": [ + 302, + 71, + 526, + 179 + ], + "type": "inline_equation", + "content": "\\tau" + }, + { + "bbox": [ + 302, + 71, + 526, + 179 + ], + "type": "text", + "content": " as they are widely used in meta-evaluations (Graham et al., 2015; Bojar et al., 2017; Zhang* et al., 2020). In our paper, all the correlation refers to the correlation coefficient between the ratings of LLM and human ratings. Details on the calculation of correlation coefficients are in Appendix C." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 301, + 179, + 526, + 410 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 179, + 526, + 410 + ], + "spans": [ + { + "bbox": [ + 301, + 179, + 526, + 410 + ], + "type": "text", + "content": "We use SummEval (Fabbri et al., 2021) and Topical-Chat (Gopalakrishnan et al., 2019; Mehri and Eskenazi, 2020) as the meta-evaluation datasets, following Liu et al. (2023). SummEval is a meta-evaluation dataset for summarization derived from the CNN/DailyMail dataset (Hermann et al., 2015). Each summary in SummEval is rated by humans based on the coherence, consistency, fluency of the summary, and relevance between the summary and the source document. 
Topical-Chat is a dataset that evaluates the quality of a response given the dialogue history and a piece of knowledge relating to the dialogue. We follow Zhong et al. (2022) to evaluate the naturalness, coherence, engagingness, and groundedness (whether the response is grounded on the provided knowledge) of the response. The dataset details are in Appendix E." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 420, + 444, + 433 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 420, + 444, + 433 + ], + "spans": [ + { + "bbox": [ + 302, + 420, + 444, + 433 + ], + "type": "text", + "content": "2.3 Large Language Models" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 438, + 525, + 599 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 438, + 525, + 599 + ], + "spans": [ + { + "bbox": [ + 302, + 438, + 525, + 599 + ], + "type": "text", + "content": "An LLM used as an evaluation metric should be affordable and accessible to whoever wants to use it. Based on this principle, we use ChatGPT (gpt-3.5-turbo-0613) (OpenAI, 2022) for evaluation since it has lower cost and improved performance compared with other GPT-3.5 models. ChatGPT is also used in LLM evaluation and G-Eval. While Liu et al. (2023) further use GPT-4 (OpenAI, 2023) in their experiments, we cannot use GPT-4 in our experiments since most people, including us, have limited or no access to GPT-4, making it utterly unsuitable as an evaluation metric." 
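The meta-evaluation in Section 2.2 boils down to computing two correlation coefficients between a metric's ratings and human ratings. A minimal self-contained sketch of that computation, using made-up rating lists rather than data from SummEval or Topical-Chat:

```python
import math

def pearson_r(xs, ys):
    # Pearson's r: covariance of the two rating lists, normalized by
    # the product of their standard deviations.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def kendall_tau(xs, ys):
    # Kendall's tau (tau-a): (concordant - discordant) pairs over all pairs.
    n = len(xs)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical ratings: one list from the metric (e.g., an LLM), one from humans.
metric = [4, 2, 5, 3, 1]
human  = [5, 1, 4, 3, 2]
print(pearson_r(metric, human), kendall_tau(metric, human))
```

In practice, `scipy.stats.pearsonr` and `scipy.stats.kendalltau` do the same job (and also return p-values); the hand-rolled version above just makes the definitions explicit.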
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 601, + 525, + 695 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 601, + 525, + 695 + ], + "spans": [ + { + "bbox": [ + 302, + 601, + 525, + 695 + ], + "type": "text", + "content": "In our preliminary experiments, we also try to use the best open LLM (at the time of writing this manuscript) on the Open LLM Leaderboard, the falcon-40b-instruct model (Almazrouei et al., 2023), but we find it cannot follow the instructions and rate the samples very well. Hence, we exclude open LLMs in our paper." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 706, + 509, + 719 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 706, + 509, + 719 + ], + "spans": [ + { + "bbox": [ + 302, + 706, + 509, + 719 + ], + "type": "text", + "content": "3 Better Usage of LLM for Evaluation" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 728, + 463, + 741 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 728, + 463, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 728, + 463, + 741 + ], + "type": "text", + "content": "3.1 Is Auto CoT Always Useful?" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 746, + 525, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 746, + 525, + 773 + ], + "spans": [ + { + "bbox": [ + 302, + 746, + 525, + 773 + ], + "type": "text", + "content": "Liu et al. (2023) shows that adding the evaluation steps generated by auto CoT improves the correla-" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 67, + 730, + 290, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 730, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 730, + 290, + 772 + ], + "type": "text", + "content": "2In our paper, we use different highlight colors to represent different parts of the prompt, as shown in the above text. 
Additionally, we use cyan to represent the parts generated by auto Chain-of-Thought" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "text", + "content": "8929" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 77, + 68, + 518, + 197 + ], + "blocks": [ + { + "bbox": [ + 77, + 68, + 518, + 197 + ], + "lines": [ + { + "bbox": [ + 77, + 68, + 518, + 197 + ], + "spans": [ + { + "bbox": [ + 77, + 68, + 518, + 197 + ], + "type": "table", + "html": "
<table><tr><td rowspan=2>Sec.</td><td colspan=2>Ablations</td><td colspan=2>Coherence</td><td colspan=2>Consistency</td><td colspan=2>Fluency</td><td colspan=2>Relevance</td></tr>
<tr><td>CoT</td><td>Output</td><td>r</td><td>τ</td><td>r</td><td>τ</td><td>r</td><td>τ</td><td>r</td><td>τ</td></tr>
<tr><td>GPT-4†</td><td>?‡</td><td>Score only</td><td>0.581</td><td>0.463</td><td>0.575</td><td>0.419</td><td>0.6</td><td>0.457</td><td>0.599</td><td>0.409</td></tr>
<tr><td>3.1</td><td></td><td>Score only</td><td>0.45</td><td>0.359</td><td>0.37</td><td>0.286</td><td>0.319</td><td>0.203</td><td>0.403</td><td>0.327</td></tr>
<tr><td></td><td>X</td><td></td><td>0.344</td><td>0.248</td><td>0.328</td><td>0.185</td><td>0.361</td><td>0.177</td><td>0.353</td><td>0.248</td></tr>
<tr><td>3.2</td><td>X</td><td>Score only</td><td>0.344</td><td>0.248</td><td>0.328</td><td>0.185</td><td>0.361</td><td>0.177</td><td>0.353</td><td>0.248</td></tr>
<tr><td></td><td>X</td><td>Free Text</td><td>0.46</td><td>0.342</td><td>0.476</td><td>0.334</td><td>0.477</td><td>0.273</td><td>0.324</td><td>0.228</td></tr>
<tr><td></td><td>X</td><td>Rate-explain</td><td>0.557</td><td>0.44</td><td>0.473</td><td>0.337</td><td>0.451</td><td>0.306</td><td>0.509</td><td>0.348</td></tr>
<tr><td></td><td>X</td><td>Analyze-rate</td><td>0.635</td><td>0.476</td><td>0.537</td><td>0.34</td><td>0.479</td><td>0.302</td><td>0.444</td><td>0.305</td></tr></table>
", + "image_path": "da30e5019e1285714b9a99c6e894179409063903f8f633ca89509a46eff4ea6e.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 206, + 526, + 292 + ], + "lines": [ + { + "bbox": [ + 67, + 206, + 526, + 292 + ], + "spans": [ + { + "bbox": [ + 67, + 206, + 526, + 292 + ], + "type": "text", + "content": "Table 1: The Pearson's " + }, + { + "bbox": [ + 67, + 206, + 526, + 292 + ], + "type": "inline_equation", + "content": "r" + }, + { + "bbox": [ + 67, + 206, + 526, + 292 + ], + "type": "text", + "content": " and Kendall's " + }, + { + "bbox": [ + 67, + 206, + 526, + 292 + ], + "type": "inline_equation", + "content": "\\tau" + }, + { + "bbox": [ + 67, + 206, + 526, + 292 + ], + "type": "text", + "content": " correlation coefficient between LLMs' ratings and human ratings for SummEval. All the results in this table, except the first row, are from ChatGPT. We consider auto CoT + score only using ChatGPT proposed in G-Eval as the baseline of this paper. We boldface the Pearson's " + }, + { + "bbox": [ + 67, + 206, + 526, + 292 + ], + "type": "inline_equation", + "content": "r" + }, + { + "bbox": [ + 67, + 206, + 526, + 292 + ], + "type": "text", + "content": " values statistically significantly higher than the baseline (except GPT-4). †: results from Liu et al. (2023). Some numbers are different because we re-calculate the correlations based on the GPT-4 responses Liu et al. (2023) released. ‡: The results of GPT-4 cannot serve as a reasonable comparison since we find something odd in the prompts Liu et al. (2023) use, which we elaborate on in Appendix A." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 69, + 312, + 292, + 608 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 312, + 292, + 608 + ], + "spans": [ + { + "bbox": [ + 69, + 312, + 292, + 608 + ], + "type": "text", + "content": "tion on SummEval when using GPT-4 for evaluation. 
By scrutinizing their results, we find that the correlations when using auto CoT and not using it often differ by less than 0.02. This raises two questions: (1) Is this difference statistically significant? (2) Does auto CoT yield higher correlations for different LLMs and datasets? To answer these questions, we use ChatGPT to rate the samples in SummEval and Topical-Chat using two sets of prompts, one with the evaluation steps generated using auto CoT and one without those evaluation steps. In this experiment, we follow G-Eval and restrict ChatGPT to output only a numeric score. Following Graham and Baldwin (2014), we use the Williams test for significance to see if the Pearson's " + }, + { + "bbox": [ + 69, + 312, + 292, + 608 + ], + "type": "inline_equation", + "content": "r" + }, + { + "bbox": [ + 69, + 312, + 292, + 608 + ], + "type": "text", + "content": " of using and not using auto CoT is statistically significantly different. We try to follow the prompts used in G-Eval when possible; still, we have to construct some prompts since Liu et al. (2023) only release part of the prompts, some of which are problematic. We list all the prompts and how they are obtained in Appendix F." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 611, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 611, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 611, + 291, + 772 + ], + "type": "text", + "content": "The experiment results for SummEval are shown in the block in blue in Table 1. We also list the best results of G-Eval using GPT-4 from Liu et al. (2023) in the first row of Table 1 only for reference. Comparing our results with GPT-4 is unfair since we use ChatGPT, which is weaker than GPT-4. A more reasonable baseline for our paper is the \"auto CoT + score only\" using ChatGPT on the second row, which is the method proposed by G-Eval and shows the highest correlation that ChatGPT can achieve in Liu et al. (2023). 
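The significance test used here can be sketched in a few lines. This is a generic implementation of the Williams test for comparing two dependent correlations that share one variable (here, the human ratings); the function name and the example numbers are illustrative, not taken from the paper:

```python
import math

def williams_t(r12, r13, r23, n):
    """Williams test statistic for whether correlation r12 differs from r13,
    where variable 1 (e.g., human ratings) is shared by both correlations and
    r23 is the correlation between the two competing rating sets.
    Compare |t| against Student's t with n - 3 degrees of freedom."""
    k = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23
    num = (r12 - r13) * math.sqrt((n - 1) * (1 + r23))
    den = math.sqrt(2 * k * (n - 1) / (n - 3)
                    + ((r12 + r13) ** 2 / 4) * (1 - r23) ** 3)
    return num / den

# Hypothetical example: with auto CoT the metric correlates with humans at 0.45,
# without it at 0.37, the two rating sets correlate at 0.6, over n = 100 samples.
t = williams_t(0.45, 0.37, 0.6, 100)
```

The statistic is zero when the two correlations are equal and antisymmetric in its first two arguments, which makes it easy to sanity-check.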
The numbers here differ from results in Liu et al. (2023) because" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 302, + 312, + 506, + 324 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 312, + 506, + 324 + ], + "spans": [ + { + "bbox": [ + 302, + 312, + 506, + 324 + ], + "type": "text", + "content": "we carefully reproduce their results ourselves." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 302, + 324, + 526, + 419 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 324, + 526, + 419 + ], + "spans": [ + { + "bbox": [ + 302, + 324, + 526, + 419 + ], + "type": "text", + "content": "Returning to Table 1, we can see that auto CoT leads to higher correlations for coherence, consistency, and relevance. By the Williams test, these higher correlations reach statistical significance with " + }, + { + "bbox": [ + 302, + 324, + 526, + 419 + ], + "type": "inline_equation", + "content": "p" + }, + { + "bbox": [ + 302, + 324, + 526, + 419 + ], + "type": "text", + "content": "-values less than 0.05. However, using auto CoT results in a lower Pearson's " + }, + { + "bbox": [ + 302, + 324, + 526, + 419 + ], + "type": "inline_equation", + "content": "r" + }, + { + "bbox": [ + 302, + 324, + 526, + 419 + ], + "type": "text", + "content": " for fluency, and this drop in Pearson's " + }, + { + "bbox": [ + 302, + 324, + 526, + 419 + ], + "type": "inline_equation", + "content": "r" + }, + { + "bbox": [ + 302, + 324, + 526, + 419 + ], + "type": "text", + "content": " is also statistically significant." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 420, + 527, + 596 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 420, + 527, + 596 + ], + "spans": [ + { + "bbox": [ + 302, + 420, + 527, + 596 + ], + "type": "text", + "content": "The results for Topical-Chat are illustrated in Table 2. 
For Topical-Chat, the Pearson's " + }, + { + "bbox": [ + 302, + 420, + 527, + 596 + ], + "type": "inline_equation", + "content": "r" + }, + { + "bbox": [ + 302, + 420, + 527, + 596 + ], + "type": "text", + "content": " of using and not using auto CoT are very close for all attributes except groundedness, with differences less than 0.025, and these differences are not statistically significant. For groundedness, auto CoT even drastically decreases the correlation. In summary, using auto CoT does not yield consistent and meaningful improvements compared with not using CoT. This should not be surprising since the evaluation steps generated with auto CoT often merely paraphrase the evaluation criterion and instructions given to the LLM." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 606, + 425, + 619 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 606, + 425, + 619 + ], + "spans": [ + { + "bbox": [ + 302, + 606, + 425, + 619 + ], + "type": "text", + "content": "3.2 Prompt for Outputs" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 624, + 526, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 624, + 526, + 773 + ], + "spans": [ + { + "bbox": [ + 302, + 624, + 526, + 773 + ], + "type": "text", + "content": "In this section, we explore if the difference in how ChatGPT is prompted to output makes its ratings better aligned with human ratings. We use two sets of prompts that share the same task descriptions and evaluation criteria but differ in how they prompt the LLM to generate the output. One uses \"score only\", as in G-Eval. The other replaces the \"score only\" with \"How {{placeholder}} is the sample? (on a scale of 1-k, with 1 being the lowest)\", as in LLM evaluation. 
We call the latter prompts free text since they do not" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 310, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 310, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 310, + 791 + ], + "type": "text", + "content": "8930" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 80, + 68, + 513, + 184 + ], + "blocks": [ + { + "bbox": [ + 80, + 68, + 513, + 184 + ], + "lines": [ + { + "bbox": [ + 80, + 68, + 513, + 184 + ], + "spans": [ + { + "bbox": [ + 80, + 68, + 513, + 184 + ], + "type": "table", + "html": "
<table><tr><td rowspan=2>Sec.</td><td colspan=2>Ablations</td><td colspan=2>Naturalness</td><td colspan=2>Coherence</td><td colspan=2>Engagingness</td><td colspan=2>Groundedness</td></tr>
<tr><td>CoT</td><td>Output</td><td>r</td><td>τ</td><td>r</td><td>τ</td><td>r</td><td>τ</td><td>r</td><td>τ</td></tr>
<tr><td>3.1</td><td></td><td>Score only</td><td>0.393</td><td>0.358</td><td>0.468</td><td>0.391</td><td>0.549</td><td>0.513</td><td>0.311</td><td>0.566</td></tr>
<tr><td></td><td>X</td><td></td><td>0.408</td><td>0.331</td><td>0.443</td><td>0.404</td><td>0.557</td><td>0.535</td><td>0.358</td><td>0.582</td></tr>
<tr><td>3.2</td><td>X</td><td>Score only</td><td>0.408</td><td>0.331</td><td>0.443</td><td>0.404</td><td>0.557</td><td>0.535</td><td>0.358</td><td>0.582</td></tr>
<tr><td></td><td>X</td><td>Free Text</td><td>0.464</td><td>0.476</td><td>0.524</td><td>0.426</td><td>0.611</td><td>0.557</td><td>0.563</td><td>0.666</td></tr>
<tr><td></td><td>X</td><td>Rate-explain</td><td>0.524</td><td>0.47</td><td>0.477</td><td>0.416</td><td>0.567</td><td>0.524</td><td>0.58</td><td>0.693</td></tr>
<tr><td></td><td>X</td><td>Analyze-rate</td><td>0.573</td><td>0.47</td><td>0.486</td><td>0.416</td><td>0.628</td><td>0.524</td><td>0.725</td><td>0.693</td></tr></table>
", + "image_path": "3ebb9e36a524e88cb47e148716c33d6eef8920f190ad48fdf8d8efd3429a7754.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 192, + 526, + 240 + ], + "lines": [ + { + "bbox": [ + 67, + 192, + 526, + 240 + ], + "spans": [ + { + "bbox": [ + 67, + 192, + 526, + 240 + ], + "type": "text", + "content": "Table 2: The Pearson's " + }, + { + "bbox": [ + 67, + 192, + 526, + 240 + ], + "type": "inline_equation", + "content": "r" + }, + { + "bbox": [ + 67, + 192, + 526, + 240 + ], + "type": "text", + "content": " and Kendall's " + }, + { + "bbox": [ + 67, + 192, + 526, + 240 + ], + "type": "inline_equation", + "content": "\\tau" + }, + { + "bbox": [ + 67, + 192, + 526, + 240 + ], + "type": "text", + "content": " correlation coefficient between LLMs' ratings and human ratings for Topical-Chat. All the results in this table, except the first row, are from ChatGPT. We boldface the Pearson's " + }, + { + "bbox": [ + 67, + 192, + 526, + 240 + ], + "type": "inline_equation", + "content": "r" + }, + { + "bbox": [ + 67, + 192, + 526, + 240 + ], + "type": "text", + "content": " values statistically significantly higher than auto CoT + score only. We underline the Pearson's " + }, + { + "bbox": [ + 67, + 192, + 526, + 240 + ], + "type": "inline_equation", + "content": "r" + }, + { + "bbox": [ + 67, + 192, + 526, + 240 + ], + "type": "text", + "content": " values comparable to auto CoT + score only." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 67, + 262, + 175, + 275 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 262, + 175, + 275 + ], + "spans": [ + { + "bbox": [ + 67, + 262, + 175, + 275 + ], + "type": "text", + "content": "restrict the output form." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 279, + 291, + 427 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 279, + 291, + 427 + ], + "spans": [ + { + "bbox": [ + 67, + 279, + 291, + 427 + ], + "type": "text", + "content": "The results for SummEval are shown in the yellow blocks in Table 1, and the results for Topical-Chat are shown in Table 2. We find that allowing ChatGPT to respond to the question freely yields Pearson's " + }, + { + "bbox": [ + 67, + 279, + 291, + 427 + ], + "type": "inline_equation", + "content": "r" + }, + { + "bbox": [ + 67, + 279, + 291, + 427 + ], + "type": "text", + "content": " and Kendall's " + }, + { + "bbox": [ + 67, + 279, + 291, + 427 + ], + "type": "inline_equation", + "content": "\\tau" + }, + { + "bbox": [ + 67, + 279, + 291, + 427 + ], + "type": "text", + "content": " much higher than restricting the model to output a single numeric score for almost all attributes of both datasets. The higher Pearson's " + }, + { + "bbox": [ + 67, + 279, + 291, + 427 + ], + "type": "inline_equation", + "content": "r" + }, + { + "bbox": [ + 67, + 279, + 291, + 427 + ], + "type": "text", + "content": " of free text compared with score only is statistically significant. The only exception is the relevance of SummEval, where free text yields slightly lower correlations." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 431, + 291, + 567 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 431, + 291, + 567 + ], + "spans": [ + { + "bbox": [ + 67, + 431, + 291, + 567 + ], + "type": "text", + "content": "Initially, we thought ChatGPT aligned better with human ratings in free text because it can generate natural language explanations to justify its rating, making the ratings more correlated with human ratings. 
However, we observe that the responses of ChatGPT when prompted with free text mostly contain a single numeric rating, which is the same behavior when it is instructed by score only. This means that what the model is allowed to generate is more important than what it really generates." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 571, + 291, + 774 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 571, + 291, + 774 + ], + "spans": [ + { + "bbox": [ + 67, + 571, + 291, + 774 + ], + "type": "text", + "content": "The above observations make us curious if the correlations can be higher if ChatGPT is instructed to justify its ratings. Inspired by chain-of-thought in Wei et al. (2022b) and Kojima et al. (2022) (not the auto CoT in G-Eval), we ask ChatGPT to provide its reasoning and rationale for the ratings. Instead of asking ChatGPT to output only a score, we construct two types of prompts that ask ChatGPT to rationalize its decision. The first type of prompt, called analyze-rate, asks ChatGPT to analyze the samples regarding the evaluated criteria first and give the rating. The second type of prompt, called rate-explain, asks ChatGPT to provide the numeric ratings first and explain why it gives such a rating. analyze-rate is more like the zero-shot" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 262, + 525, + 289 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 262, + 525, + 289 + ], + "spans": [ + { + "bbox": [ + 302, + 262, + 525, + 289 + ], + "type": "text", + "content": "
chain-of-thought (Kojima et al., 2022). Refer to Appendix F.1.1 for the exact prompts we use." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 294, + 526, + 565 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 294, + 526, + 565 + ], + "spans": [ + { + "bbox": [ + 302, + 294, + 526, + 565 + ], + "type": "text", + "content": "The results of asking ChatGPT to explain/analyze how it rates the sample are shown in the last two rows in Table 1 and Table 2. We find that for all attributes of both datasets, rate-explain and analyze-rate both lead to correlations stronger than or at least comparable to the correlation of asking ChatGPT to output only a numeric rating (score only). By asking ChatGPT to explain/analyze, we improve the best correlations that can be achieved by ChatGPT in Liu et al. (2023) (the Auto-CoT + score only). Moreover, when asked to explain/analyze when rating, ChatGPT's correlation can be better than or comparable to the state-of-the-art correlation coefficients obtained from GPT-4 in Liu et al. (2023) for coherence of SummEval and three attributes of Topical-Chat. We hypothesize that some attributes (e.g., coherence for SummEval) are harder for ChatGPT to rate, so the correlations for these attributes show a larger improvement when ChatGPT explains how it rates the sample." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 571, + 527, + 774 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 571, + 527, + 774 + ], + "spans": [ + { + "bbox": [ + 302, + 571, + 527, + 774 + ], + "type": "text", + "content": "In rate-explain, the output of ChatGPT contains a numeric rating followed by some explanations. As an auto-regressive language model, ChatGPT cannot depend on the explanation when generating the rating due to causal attention. If we stop the generation after ChatGPT generates the ratings, the output of rate-explain will only contain the ratings, just like the output forms in score only. 
Although the ratings in rate-explain do not depend on ChatGPT's rationales for the ratings, the ratings still correlate better with human ratings, compared with the ratings in score only. We think this is because when ChatGPT knows it needs to explain the ratings, it tends to generate ratings that are easier for it to explain, and a rating that is more" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 792 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 792 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 792 + ], + "type": "text", + "content": "8931" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 290, + 98 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 290, + 98 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 290, + 98 + ], + "type": "text", + "content": "aligned to humans' rating is easier for ChatGPT to explain." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 108, + 196, + 121 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 108, + 196, + 121 + ], + "spans": [ + { + "bbox": [ + 67, + 108, + 196, + 121 + ], + "type": "text", + "content": "3.3 Empirical Guidelines" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 126, + 291, + 315 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 126, + 291, + 315 + ], + "spans": [ + { + "bbox": [ + 67, + 126, + 291, + 315 + ], + "type": "text", + "content": "Based on the analysis and results in this section, we provide the following guideline: Always ask ChatGPT to explain/analyze when rating. We do not see rate-explain to be significantly better (or worse) than analyze-rate, so it is hard to determine which one to use. 
A valid approach is to sample some ratings using rate-explain, sample some using analyze-rate, and average the ratings from the two prompts as the final rating. Using auto CoT is optional since it does not always lead to higher correlations with human ratings. We also find that using auto CoT does not always improve the correlations when ChatGPT is asked to explain; this result is shown in Appendix Table 3." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 325, + 230, + 337 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 325, + 230, + 337 + ], + "spans": [ + { + "bbox": [ + 67, + 325, + 230, + 337 + ], + "type": "text", + "content": "3.4 Robustness of the Guidelines" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 343, + 291, + 464 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 343, + 291, + 464 + ], + "spans": [ + { + "bbox": [ + 67, + 343, + 291, + 464 + ], + "type": "text", + "content": "LLMs are notorious for performance fluctuations caused by the input prompts, and the sequences generated by LLMs can differ when the hyperparameters used in decoding change. To verify the validity of our empirical guidelines, we conduct the following two sets of experiments: (1) we vary the temperature used in sampling the output from ChatGPT, and (2) we vary the prompt given to ChatGPT." 
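The averaging suggested in the guideline above might look like the following sketch. `sample_rating` is a hypothetical callable that queries the LLM once with the given prompt style and returns a parsed numeric rating; it is not part of the paper's released code:

```python
from statistics import mean

def final_rating(sample_rating, n_samples=10):
    # Sample several ratings with each of the two prompt styles,
    # then average everything into one final rating.
    ratings = [sample_rating("rate-explain") for _ in range(n_samples)]
    ratings += [sample_rating("analyze-rate") for _ in range(n_samples)]
    return mean(ratings)

# Toy stand-in: pretend the LLM always answers 4 under rate-explain and 3
# under analyze-rate; the averaged final rating is then 3.5.
print(final_rating(lambda style: 4 if style == "rate-explain" else 3))  # → 3.5
```

Averaging over both prompt styles sidesteps the choice between them, since neither is significantly better than the other.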
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 473, + 221, + 486 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 473, + 221, + 486 + ], + "spans": [ + { + "bbox": [ + 67, + 473, + 221, + 486 + ], + "type": "text", + "content": "3.4.1 Varying the Temperature" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 489, + 291, + 596 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 489, + 291, + 596 + ], + "spans": [ + { + "bbox": [ + 67, + 489, + 291, + 596 + ], + "type": "text", + "content": "We check if our guideline holds if we change the temperature " + }, + { + "bbox": [ + 67, + 489, + 291, + 596 + ], + "type": "inline_equation", + "content": "T" + }, + { + "bbox": [ + 67, + 489, + 291, + 596 + ], + "type": "text", + "content": " during generation. We compare Pearson's " + }, + { + "bbox": [ + 67, + 489, + 291, + 596 + ], + "type": "inline_equation", + "content": "r" + }, + { + "bbox": [ + 67, + 489, + 291, + 596 + ], + "type": "text", + "content": " when using the method proposed in G-Eval (Auto-CoT + score only) with rate-explain and analyze-rate under different temperatures used when generating the output from ChatGPT. We follow Chiang and Lee (2023) and use two temperatures: 0.7 and 0.3." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "text", + "content": "The results are shown in Appendix Table 5 and summarized as follows: First, when fixing the sampling temperature, we find that rate-explain and analyze-rate always achieve a higher correlation compared with G-Eval. 
This supports our guideline that \"asking the LLM to explain/analyze outperforms the method proposed in G-Eval.\" Next, we observe that the correlation of G-Eval when " + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "inline_equation", + "content": "T = 0.3" + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "text", + "content": " is much lower than that of " + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "inline_equation", + "content": "T = 1.0" + }, + { + "bbox": [ + 67, + 597, + 291, + 773 + ], + "type": "text", + "content": ". This shows that G-Eval is not robust to sampling temperature. In contrast, we find that the correlations obtained by rate-explain and analyze-rate do not significantly change for different sampling" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 71, + 525, + 112 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 525, + 112 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 525, + 112 + ], + "type": "text", + "content": "temperatures for almost all cases. This shows that rate-explain and analyze-rate are more robust than G-Eval with respect to the sampling temperature." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 122, + 445, + 136 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 122, + 445, + 136 + ], + "spans": [ + { + "bbox": [ + 302, + 122, + 445, + 136 + ], + "type": "text", + "content": "3.4.2 Changing the Prompts" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 140, + 526, + 383 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 140, + 526, + 383 + ], + "spans": [ + { + "bbox": [ + 302, + 140, + 526, + 383 + ], + "type": "text", + "content": "We check if our guideline holds if we change the prompt given to ChatGPT. In this experiment, we change the prompts to ChatGPT by appending some instructions before the descriptions of the rating task. 
We try two prompts: (1) the HHH prompts and (2) the human annotator prompts. The HHH prompt is designed by Bai et al. (2022) to align the output of LLMs to be more harmless, honest, and helpful. The human annotator prompt is inspired by Chiang and Lee (2023), who use a similar prompt to make the LLM behave as a human annotator. These two prompts are inserted before the prompt we originally use in our paper. We use these two prompts to inject a persona into the LLM. This is inspired by Zeng et al. (2023), which shows that the output of GPT-3 can be different when prompted with a different persona. The prompts are detailed in Appendix F.3." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 385, + 526, + 561 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 385, + 526, + 561 + ], + "spans": [ + { + "bbox": [ + 302, + 385, + 526, + 561 + ], + "type": "text", + "content": "The results are shown in Table 6 and summarized as follows: rate-explain and analyze-rate consistently outperform G-Eval when using the human annotator prompts and the HHH prompts. This indicates that our guidelines are robust to different prompts. We also find that the correlations of G-Eval significantly drop when adding the human-annotator prompts or HHH prompts. On the other hand, the correlations of rate-explain and analyze-rate do not significantly decrease when adding the human-annotator prompt and the HHH prompt. This shows that asking the LLM to explain is more robust to variations in the prompts." 
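The persona-injection setup in Section 3.4.2 amounts to prepending a fixed preamble to the original rating prompt. A sketch, where the HHH and human-annotator strings are abbreviated placeholders (the actual texts are in the paper's Appendix F.3):

```python
# Placeholder persona preambles; the real HHH and human-annotator prompts
# are given in the paper's Appendix F.3, not reproduced here.
PERSONAS = {
    "none": "",
    "hhh": "You are a helpful, honest, and harmless assistant.\n\n",
    "human-annotator": "You are a human annotator hired to rate text samples.\n\n",
}

def build_prompt(persona, rating_prompt):
    # Prepend the chosen persona preamble to the unchanged rating prompt.
    return PERSONAS[persona] + rating_prompt

prompt = build_prompt("hhh", "Rate the coherence of the following summary (1-5): ...")
```

Because the rating prompt itself is untouched, any change in the correlations can be attributed to the prepended persona alone.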
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 574, + 381, + 587 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 574, + 381, + 587 + ], + "spans": [ + { + "bbox": [ + 302, + 574, + 381, + 587 + ], + "type": "text", + "content": "4 Conclusion" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 597, + 526, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 597, + 526, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 597, + 526, + 772 + ], + "type": "text", + "content": "We study how to better use ChatGPT as an automatic evaluation tool by scrutinizing LLM evaluation and G-Eval. We provide concrete guidelines and show that by using those guidelines, the correlations of several evaluated attributes given by ChatGPT, a publicly usable model, can be higher than or comparable to those given by GPT-4, a highly restricted and pricey model. We also show that the evaluation results based on our guidelines improve the best correlations that ChatGPT's ratings can achieve. We believe our results and guidelines help future researchers better use LLMs for evaluation." 
+ } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "text", + "content": "8932" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "bbox": [ + 68, + 71, + 131, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 71, + 131, + 83 + ], + "spans": [ + { + "bbox": [ + 68, + 71, + 131, + 83 + ], + "type": "text", + "content": "Limitations" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 92, + 270, + 105 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 92, + 270, + 105 + ], + "spans": [ + { + "bbox": [ + 67, + 92, + 270, + 105 + ], + "type": "text", + "content": "There are three main limitations of this paper." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 76, + 113, + 292, + 375 + ], + "type": "list", + "angle": 0, + "index": 5, + "blocks": [ + { + "bbox": [ + 77, + 113, + 291, + 208 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 113, + 291, + 208 + ], + "spans": [ + { + "bbox": [ + 77, + 113, + 291, + 208 + ], + "type": "text", + "content": "1. We only use ChatGPT to conduct the experiments in this paper. We explain why we chose ChatGPT in Section 2.3. We believe that using ChatGPT is already enough since we show that the correlations obtained by using ChatGPT are already comparable to or better than the previous SoTA results obtained by GPT-4." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 76, + 216, + 292, + 311 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 216, + 292, + 311 + ], + "spans": [ + { + "bbox": [ + 76, + 216, + 292, + 311 + ], + "type": "text", + "content": "2. 
We only conduct analyses on two tasks, while NLP has many more diverse tasks. We do not guarantee that our observations generalize to all other datasets. We recommend that users verify the effectiveness of using LLMs to evaluate their tasks of interest." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 76, + 321, + 290, + 375 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 321, + 290, + 375 + ], + "spans": [ + { + "bbox": [ + 76, + 321, + 290, + 375 + ], + "type": "text", + "content": "3. We cannot fairly compare our results with Liu et al. (2023), the previous SoTA results, for multiple reasons. We explain those reasons in Appendix A." + } + ] + } + ], + "index": 4 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 68, + 384, + 158, + 396 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 384, + 158, + 396 + ], + "spans": [ + { + "bbox": [ + 68, + 384, + 158, + 396 + ], + "type": "text", + "content": "Ethics Statement" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 406, + 291, + 486 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 406, + 291, + 486 + ], + "spans": [ + { + "bbox": [ + 67, + 406, + 291, + 486 + ], + "type": "text", + "content": "Our paper follows the ACL Code of Ethics. We do not foresee any particular harmful outcome of our paper. The code and datasets for reproducing our experiments can be found at https://github.com/d223302/A-Closer-Look-To-LLM-Evaluation/."
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 68, + 497, + 170, + 510 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 497, + 170, + 510 + ], + "spans": [ + { + "bbox": [ + 68, + 497, + 170, + 510 + ], + "type": "text", + "content": "Acknowledgements" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 67, + 517, + 291, + 612 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 517, + 291, + 612 + ], + "spans": [ + { + "bbox": [ + 67, + 517, + 291, + 612 + ], + "type": "text", + "content": "We want to thank the reviewers for providing detailed feedback and actionable suggestions, which helped us strengthen our paper. We also want to thank the senior committee members for monitoring the reviewing process. Cheng-Han Chiang is supported by a Ph.D. scholarship program by Delta Electronics." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 68, + 634, + 127, + 647 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 634, + 127, + 647 + ], + "spans": [ + { + "bbox": [ + 68, + 634, + 127, + 647 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 69, + 653, + 291, + 772 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 69, + 653, + 291, + 731 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 653, + 291, + 731 + ], + "spans": [ + { + "bbox": [ + 69, + 653, + 291, + 731 + ], + "type": "text", + "content": "Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Merouane Debbah, Etienne Goffinet, Daniel Heslow, Julien Launay, Quentin Malartic, Badreddine Noune, Baptiste Pannier, and Guilherme Penedo. 2023. Falcon-40B: an open large language model with state-of-the-art performance."
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 69, + 739, + 290, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 739, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 739, + 290, + 772 + ], + "type": "text", + "content": "Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. 2021. A" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 526, + 772 + ], + "type": "list", + "angle": 0, + "index": 24, + "blocks": [ + { + "bbox": [ + 314, + 72, + 525, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 72, + 525, + 95 + ], + "spans": [ + { + "bbox": [ + 314, + 72, + 525, + 95 + ], + "type": "text", + "content": "general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 102, + 526, + 234 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 102, + 526, + 234 + ], + "spans": [ + { + "bbox": [ + 304, + 102, + 526, + 234 + ], + "type": "text", + "content": "Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, Ben Mann, and Jared Kaplan. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback." 
+ } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 242, + 526, + 298 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 242, + 526, + 298 + ], + "spans": [ + { + "bbox": [ + 304, + 242, + 526, + 298 + ], + "type": "text", + "content": "Ondrej Bojar, Yvette Graham, and Amir Kamran. 2017. Results of the WMT17 metrics shared task. In Proceedings of the Second Conference on Machine Translation, pages 489-513, Copenhagen, Denmark. Association for Computational Linguistics." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 305, + 526, + 372 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 305, + 526, + 372 + ], + "spans": [ + { + "bbox": [ + 304, + 305, + 526, + 372 + ], + "type": "text", + "content": "Cheng-Han Chiang and Hung-yi Lee. 2023. Can large language models be an alternative to human evaluations? In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15607-15631, Toronto, Canada. Association for Computational Linguistics." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 380, + 525, + 435 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 380, + 525, + 435 + ], + "spans": [ + { + "bbox": [ + 304, + 380, + 525, + 435 + ], + "type": "text", + "content": "Alexander R Fabbri, Wojciech Kryscinski, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. Summeval: Re-evaluating summarization evaluation. Transactions of the Association for Computational Linguistics, 9:391-409." 
+ } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 442, + 525, + 498 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 442, + 525, + 498 + ], + "spans": [ + { + "bbox": [ + 304, + 442, + 525, + 498 + ], + "type": "text", + "content": "Karthik Gopalakrishnan, Behnam Hedayatnia, Qinlang Chen, Anna Gottardi, Sanjeev Kwatra, Anushree Venkatesh, Raefer Gabriel, and Dilek Hakkani-Tur. 2019. Topical-chat: Towards knowledge-grounded open-domain conversations." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 506, + 524, + 572 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 506, + 524, + 572 + ], + "spans": [ + { + "bbox": [ + 304, + 506, + 524, + 572 + ], + "type": "text", + "content": "Yvette Graham and Timothy Baldwin. 2014. Testing for significance of increased correlation with human judgment. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 172-176, Doha, Qatar. Association for Computational Linguistics." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 304, + 580, + 526, + 657 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 580, + 526, + 657 + ], + "spans": [ + { + "bbox": [ + 304, + 580, + 526, + 657 + ], + "type": "text", + "content": "Yvette Graham, Timothy Baldwin, and Nitika Mathur. 2015. Accurate evaluation of segment-level machine translation metrics. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1183-1191, Denver, Colorado. Association for Computational Linguistics." 
+ } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 304, + 664, + 525, + 720 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 664, + 525, + 720 + ], + "spans": [ + { + "bbox": [ + 304, + 664, + 525, + 720 + ], + "type": "text", + "content": "Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. Advances in neural information processing systems, 28." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 304, + 728, + 525, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 728, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 304, + 728, + 525, + 772 + ], + "type": "text", + "content": "Fan Huang, Haewoon Kwak, and Jisun An. 2023. Is chatgpt better than human annotators? potential and limitations of chatgpt in explaining implicit hate speech. arXiv preprint arXiv:2302.07736." + } + ] + } + ], + "index": 23 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "text", + "content": "8933" + } + ] + } + ], + "index": 25 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 291, + 772 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 69, + 72, + 291, + 117 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 72, + 291, + 117 + ], + "spans": [ + { + "bbox": [ + 69, + 72, + 291, + 117 + ], + "type": "text", + "content": "Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In Advances in Neural Information Processing Systems." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 123, + 291, + 168 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 123, + 291, + 168 + ], + "spans": [ + { + "bbox": [ + 69, + 123, + 291, + 168 + ], + "type": "text", + "content": "Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023. Gpteval: Nlg evaluation using gpt-4 with better human alignment. arXiv preprint arXiv:2303.16634." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 174, + 291, + 231 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 174, + 291, + 231 + ], + "spans": [ + { + "bbox": [ + 69, + 174, + 291, + 231 + ], + "type": "text", + "content": "Matouš Macháček and Ondřej Bojar. 2014. Results of the WMT14 metrics shared task. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 293-301, Baltimore, Maryland, USA. Association for Computational Linguistics." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 237, + 290, + 292 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 237, + 290, + 292 + ], + "spans": [ + { + "bbox": [ + 69, + 237, + 290, + 292 + ], + "type": "text", + "content": "Shikib Mehri and Maxine Eskenazi. 2020. Usr: An unsupervised and reference free evaluation metric for dialog generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 681-707." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 299, + 289, + 322 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 299, + 289, + 322 + ], + "spans": [ + { + "bbox": [ + 69, + 299, + 289, + 322 + ], + "type": "text", + "content": "OpenAI. 2022. Chatgpt: Optimizing language models for dialogue. Accessed on January 10, 2023." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 328, + 224, + 340 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 328, + 224, + 340 + ], + "spans": [ + { + "bbox": [ + 69, + 328, + 224, + 340 + ], + "type": "text", + "content": "OpenAI. 2023. Gpt-4 technical report." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 348, + 290, + 413 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 348, + 290, + 413 + ], + "spans": [ + { + "bbox": [ + 69, + 348, + 290, + 413 + ], + "type": "text", + "content": "Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 420, + 290, + 596 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 420, + 290, + 596 + ], + "spans": [ + { + "bbox": [ + 69, + 420, + 290, + 596 + ], + "type": "text", + "content": "Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. 2022. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 602, + 290, + 647 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 602, + 290, + 647 + ], + "spans": [ + { + "bbox": [ + 69, + 602, + 290, + 647 + ], + "type": "text", + "content": "Jiaan Wang, Yunlong Liang, Fandong Meng, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. 2023. Is chatgpt a good nlg evaluator? a preliminary study. arXiv preprint arXiv:2303.04048." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 655, + 290, + 710 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 655, + 290, + 710 + ], + "spans": [ + { + "bbox": [ + 69, + 655, + 290, + 710 + ], + "type": "text", + "content": "Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V Le. 2022a. Finetuned language models are zero-shot learners. In International Conference on Learning Representations." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 69, + 717, + 290, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 717, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 717, + 290, + 772 + ], + "type": "text", + "content": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V Le, and Denny Zhou. 2022b. Chain of thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems."
+ } + ] + } + ], + "index": 10 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 525, + 310 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 304, + 72, + 525, + 160 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 72, + 525, + 160 + ], + "spans": [ + { + "bbox": [ + 304, + 72, + 525, + 160 + ], + "type": "text", + "content": "Andy Zeng, Maria Attarian, Brian Ichter, Krzysztof Marcin Choromanski, Adrian Wong, Stefan Welker, Federico Tombari, Aveek Purohit, Michael S Ryoo, Vikas Sindhwani, Johnny Lee, Vincent Vanhoucke, and Pete Florence. 2023. Socratic models: Composing zero-shot multimodal reasoning with language. In The Eleventh International Conference on Learning Representations." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 169, + 525, + 213 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 169, + 525, + 213 + ], + "spans": [ + { + "bbox": [ + 304, + 169, + 525, + 213 + ], + "type": "text", + "content": "Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. 2020. *Bertscore: Evaluating text generation with bert*. In International Conference on Learning Representations." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 222, + 525, + 310 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 222, + 525, + 310 + ], + "spans": [ + { + "bbox": [ + 304, + 222, + 525, + 310 + ], + "type": "text", + "content": "Ming Zhong, Yang Liu, Da Yin, Yuning Mao, Yizhu Jiao, Pengfei Liu, Chenguang Zhu, Heng Ji, and Jiawei Han. 2022. Towards a unified multidimensional evaluator for text generation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2023-2038, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics."
+ } + ] + } + ], + "index": 14 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 322, + 521, + 349 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 322, + 521, + 349 + ], + "spans": [ + { + "bbox": [ + 304, + 322, + 521, + 349 + ], + "type": "text", + "content": "A Why We Cannot Fairly Compare with the Results in Liu et al. (2023)" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 357, + 525, + 423 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 357, + 525, + 423 + ], + "spans": [ + { + "bbox": [ + 304, + 357, + 525, + 423 + ], + "type": "text", + "content": "As a work highly related to G-Eval, we would really like to compare our results with G-Eval. However, we encounter difficulties when comparing our results with those in Liu et al. (2023) for the following reasons." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 433, + 525, + 772 + ], + "type": "list", + "angle": 0, + "index": 22, + "blocks": [ + { + "bbox": [ + 316, + 433, + 525, + 486 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 433, + 525, + 486 + ], + "spans": [ + { + "bbox": [ + 316, + 433, + 525, + 486 + ], + "type": "text", + "content": "- G-Eval proposes to use GPT-4 as the evaluation tool, while it is currently a highly restricted model, and we only have limited access to it." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 497, + 525, + 604 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 497, + 525, + 604 + ], + "spans": [ + { + "bbox": [ + 316, + 497, + 525, + 604 + ], + "type": "text", + "content": "- G-Eval only releases the prompts for SummEval. We need to construct the prompts for Topical-Chat based on the human evaluation instructions released by Mehri and Eskenazi (2020). It is possible that the prompts we use for Topical-Chat are different from the prompts used in Liu et al. (2023), making their results incomparable to ours." 
+ } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 614, + 525, + 708 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 614, + 525, + 708 + ], + "spans": [ + { + "bbox": [ + 316, + 614, + 525, + 708 + ], + "type": "text", + "content": "- The fluency prompts for SummEval released by Liu et al. (2023) are problematic, so we need to construct new prompts for fluency. Refer to Appendix F.1 for detailed explanations. This makes us unable to directly compare our results with the results in Liu et al. (2023)." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 719, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 719, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 316, + 719, + 525, + 772 + ], + "type": "text", + "content": "- We cannot reproduce the numbers in the G-Eval paper even when using their official implementation and the GPT-4 responses they release. This means that the only thing we" + } + ] + } + ], + "index": 21 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "text", + "content": "8934" + } + ] + } + ], + "index": 23 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "bbox": [ + 88, + 71, + 291, + 191 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 71, + 291, + 191 + ], + "spans": [ + { + "bbox": [ + 88, + 71, + 291, + 191 + ], + "type": "text", + "content": "do is calculate the correlation coefficient using the data and code released on the official GitHub of G-Eval, but the numbers are quite different from the results in G-Eval's paper.
Moreover, the fluency results they provide do not use auto CoT, while the results for the other three attributes of SummEval do use auto CoT. That is why we use a question mark for the auto CoT field in Table 1." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 81, + 204, + 291, + 271 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 204, + 291, + 271 + ], + "spans": [ + { + "bbox": [ + 81, + 204, + 291, + 271 + ], + "type": "text", + "content": "- Table 2 in Liu et al. (2023) seems to be wrong. The caption (Spearman's " + }, + { + "bbox": [ + 81, + 204, + 291, + 271 + ], + "type": "inline_equation", + "content": "\\rho" + }, + { + "bbox": [ + 81, + 204, + 291, + 271 + ], + "type": "text", + "content": " and Kendall's " + }, + { + "bbox": [ + 81, + 204, + 291, + 271 + ], + "type": "inline_equation", + "content": "\\tau" + }, + { + "bbox": [ + 81, + 204, + 291, + 271 + ], + "type": "text", + "content": ") does not match the headers (" + }, + { + "bbox": [ + 81, + 204, + 291, + 271 + ], + "type": "inline_equation", + "content": "r" + }, + { + "bbox": [ + 81, + 204, + 291, + 271 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 81, + 204, + 291, + 271 + ], + "type": "inline_equation", + "content": "\\rho" + }, + { + "bbox": [ + 81, + 204, + 291, + 271 + ], + "type": "text", + "content": "). This makes it hard for us to compare their results with ours reliably."
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 68, + 282, + 228, + 310 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 282, + 228, + 310 + ], + "spans": [ + { + "bbox": [ + 68, + 282, + 228, + 310 + ], + "type": "text", + "content": "B Supplementary Results for Topical-Chat" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 318, + 291, + 400 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 318, + 291, + 400 + ], + "spans": [ + { + "bbox": [ + 67, + 318, + 291, + 400 + ], + "type": "text", + "content": "Table 2 presents the supplementary results for Topical-Chat that we referred to in the main content. We plan to move Table 2 to the main content using the additional page in the camera-ready version if the paper is accepted. See how Pearson's " + }, + { + "bbox": [ + 67, + 318, + 291, + 400 + ], + "type": "inline_equation", + "content": "r" + }, + { + "bbox": [ + 67, + 318, + 291, + 400 + ], + "type": "text", + "content": " and Kendall's " + }, + { + "bbox": [ + 67, + 318, + 291, + 400 + ], + "type": "inline_equation", + "content": "\\tau" + }, + { + "bbox": [ + 67, + 318, + 291, + 400 + ], + "type": "text", + "content": " are calculated in Appendix C." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 410, + 277, + 437 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 410, + 277, + 437 + ], + "spans": [ + { + "bbox": [ + 67, + 410, + 277, + 437 + ], + "type": "text", + "content": "B.1 Is Auto CoT Useful When ChatGPT Is Asked to Explain?" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 441, + 291, + 536 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 441, + 291, + 536 + ], + "spans": [ + { + "bbox": [ + 67, + 441, + 291, + 536 + ], + "type": "text", + "content": "In Table 3, we show the results when we add the evaluation steps generated by auto CoT while prompting ChatGPT with rate-explain.
We find that on groundedness, using auto CoT is better. However, for the other three attributes, using auto CoT is worse. This again shows that auto CoT is not particularly useful." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 548, + 285, + 560 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 548, + 285, + 560 + ], + "spans": [ + { + "bbox": [ + 67, + 548, + 285, + 560 + ], + "type": "text", + "content": "C Calculation of Correlation Coefficient" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 570, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 570, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 570, + 291, + 772 + ], + "type": "text", + "content": "In this paper, we calculate Pearson's " + }, + { + "bbox": [ + 67, + 570, + 291, + 772 + ], + "type": "inline_equation", + "content": "r" + }, + { + "bbox": [ + 67, + 570, + 291, + 772 + ], + "type": "text", + "content": " and Kendall's " + }, + { + "bbox": [ + 67, + 570, + 291, + 772 + ], + "type": "inline_equation", + "content": "\\tau" + }, + { + "bbox": [ + 67, + 570, + 291, + 772 + ], + "type": "text", + "content": " between human ratings and ChatGPT's ratings. Whether to use Spearman's rank correlation or Pearson's (linear) correlation to evaluate the alignment between human ratings and an automatic evaluation metric is a long-standing question, but there has been an increasing trend towards Pearson's correlation since 2014 (Macháček and Bojar, 2014; Graham and Baldwin, 2014; Zhang* et al., 2020). We use pearsonr and kendalltau in scipy.stats for calculating the correlation coefficients.
For each attribute of each sample, the rating of ChatGPT is obtained by 20 samples; we set the decoding temperature to 1 and the top- " + }, + { + "bbox": [ + 67, + 570, + 291, + 772 + ], + "type": "inline_equation", + "content": "p" + }, + { + "bbox": [ + 67, + 570, + 291, + 772 + ], + "type": "text", + "content": " in nucleus sampling to 1, following G-Eval (Liu et al., 2023)." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 71, + 526, + 246 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 246 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 246 + ], + "type": "text", + "content": "Consider a dataset with " + }, + { + "bbox": [ + 302, + 71, + 526, + 246 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 302, + 71, + 526, + 246 + ], + "type": "text", + "content": " source documents, and each source document has " + }, + { + "bbox": [ + 302, + 71, + 526, + 246 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 302, + 71, + 526, + 246 + ], + "type": "text", + "content": " corresponding target documents. We also have the human ratings for " + }, + { + "bbox": [ + 302, + 71, + 526, + 246 + ], + "type": "inline_equation", + "content": "N \\cdot M" + }, + { + "bbox": [ + 302, + 71, + 526, + 246 + ], + "type": "text", + "content": " target documents on a specific attribute. While each attribute of each target document is rated by more than one human rater, we average those ratings when calculating the correlation coefficient. So the " + }, + { + "bbox": [ + 302, + 71, + 526, + 246 + ], + "type": "inline_equation", + "content": "N \\cdot M" + }, + { + "bbox": [ + 302, + 71, + 526, + 246 + ], + "type": "text", + "content": " ratings are the average ratings from different raters. 
In the case of SummEval, we have " + }, + { + "bbox": [ + 302, + 71, + 526, + 246 + ], + "type": "inline_equation", + "content": "N = 100" + }, + { + "bbox": [ + 302, + 71, + 526, + 246 + ], + "type": "text", + "content": " source documents and " + }, + { + "bbox": [ + 302, + 71, + 526, + 246 + ], + "type": "inline_equation", + "content": "M = 16" + }, + { + "bbox": [ + 302, + 71, + 526, + 246 + ], + "type": "text", + "content": " summaries generated by 16 summarization models. There are two different methods for calculating correlation coefficients." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 256, + 515, + 280 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 256, + 515, + 280 + ], + "spans": [ + { + "bbox": [ + 302, + 256, + 515, + 280 + ], + "type": "text", + "content": "C.0.1 Method 1: Dataset-Level Correlation Coefficient" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 285, + 526, + 393 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 285, + 526, + 393 + ], + "spans": [ + { + "bbox": [ + 302, + 285, + 526, + 393 + ], + "type": "text", + "content": "In this method, we first obtain the ratings on " + }, + { + "bbox": [ + 302, + 285, + 526, + 393 + ], + "type": "inline_equation", + "content": "N \\cdot M" + }, + { + "bbox": [ + 302, + 285, + 526, + 393 + ], + "type": "text", + "content": " target documents from ChatGPT. We then calculate the correlation coefficient between the " + }, + { + "bbox": [ + 302, + 285, + 526, + 393 + ], + "type": "inline_equation", + "content": "N \\cdot M" + }, + { + "bbox": [ + 302, + 285, + 526, + 393 + ], + "type": "text", + "content": " ChatGPT's ratings and the " + }, + { + "bbox": [ + 302, + 285, + 526, + 393 + ], + "type": "inline_equation", + "content": "N \\cdot M" + }, + { + "bbox": [ + 302, + 285, + 526, + 393 + ], + "type": "text", + "content": " average human ratings. 
In this case, the correlation coefficient is calculated between two " + }, + { + "bbox": [ + 302, + 285, + 526, + 393 + ], + "type": "inline_equation", + "content": "N \\cdot M" + }, + { + "bbox": [ + 302, + 285, + 526, + 393 + ], + "type": "text", + "content": "-dimensional vectors, meaning that the correlation coefficient is calculated across the entire dataset." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 403, + 525, + 428 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 403, + 525, + 428 + ], + "spans": [ + { + "bbox": [ + 302, + 403, + 525, + 428 + ], + "type": "text", + "content": "C.0.2 Method 2: Document-Level Correlation Coefficient" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 433, + 525, + 581 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 433, + 525, + 581 + ], + "spans": [ + { + "bbox": [ + 302, + 433, + 525, + 581 + ], + "type": "text", + "content": "In this method, for each source document, we obtain the ratings of its " + }, + { + "bbox": [ + 302, + 433, + 525, + 581 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 302, + 433, + 525, + 581 + ], + "type": "text", + "content": " target documents using ChatGPT. Next, we calculate the correlation coefficient between these " + }, + { + "bbox": [ + 302, + 433, + 525, + 581 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 302, + 433, + 525, + 581 + ], + "type": "text", + "content": " ChatGPT ratings and the corresponding " + }, + { + "bbox": [ + 302, + 433, + 525, + 581 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 302, + 433, + 525, + 581 + ], + "type": "text", + "content": " human ratings.
After iterating the above process over all the " + }, + { + "bbox": [ + 302, + 433, + 525, + 581 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 302, + 433, + 525, + 581 + ], + "type": "text", + "content": " source documents, we obtain the " + }, + { + "bbox": [ + 302, + 433, + 525, + 581 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 302, + 433, + 525, + 581 + ], + "type": "text", + "content": " correlation coefficients. We average the " + }, + { + "bbox": [ + 302, + 433, + 525, + 581 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 302, + 433, + 525, + 581 + ], + "type": "text", + "content": " correlation coefficients to obtain the final correlation coefficient. In this case, the correlation coefficient is calculated at the document level and averaged over the whole dataset." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 592, + 495, + 617 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 592, + 495, + 617 + ], + "spans": [ + { + "bbox": [ + 302, + 592, + 495, + 617 + ], + "type": "text", + "content": "C.1 How We Calculate the Correlation Coefficient" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 624, + 526, + 745 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 624, + 526, + 745 + ], + "spans": [ + { + "bbox": [ + 302, + 624, + 526, + 745 + ], + "type": "text", + "content": "In Tables 1 and 2 in this paper, we use Method 1 (Subsection C.0.1) to calculate Pearson's correlation, following the recommendation in Graham et al. (2015). Calculating the correlation coefficient at the dataset level is also used in LLM evaluation (Chiang and Lee, 2023).
Calculating a single correlation coefficient at the dataset level also allows us to use Williams' test to check whether two Pearson's $r$ are significantly different.

For Kendall's $\tau$ in Tables 1 and 2, we follow most prior works (Zhong et al., 2022; Liu et al., 2023) to
| Sec. | CoT | Output | Naturalness $r$ | Naturalness $\tau$ | Coherence $r$ | Coherence $\tau$ | Engagingness $r$ | Engagingness $\tau$ | Groundedness $r$ | Groundedness $\tau$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 3.2 | ✗ | Score only | 0.393 | 0.358 | 0.468 | 0.391 | 0.549 | 0.513 | 0.311 | 0.566 |
| | | rate-explain | 0.554 | 0.478 | 0.512 | 0.429 | 0.613 | 0.566 | 0.555 | 0.664 |
| | ✗ | rate-explain | 0.524 | 0.470 | 0.477 | 0.416 | 0.567 | 0.524 | 0.580 | 0.693 |

Table 3: The Pearson's $r$ and Kendall's $\tau$ correlation coefficients between LLMs' ratings and human ratings for Topical-Chat. All the results in this table, except the first row, are from ChatGPT. We boldface the Pearson's $r$ statistically significantly higher than auto CoT + score only. We underline the Pearson's $r$ comparable to auto CoT + score only.
calculate Kendall's $\tau$ using Method 2 (document-level, Section C.0.2) to understand whether ChatGPT can differentiate the quality difference between different system outputs for the same source document.

In fact, we find that the Pearson's $r$ calculated by Method 1 and by Method 2 are highly correlated. In Table 4, we show the results on Topical-Chat when we use Method 2 to calculate Pearson's $r$; Kendall's $\tau$ is still calculated by Method 2.
Comparing the results of Pearson's $r$ in Table 2 and Table 4, one can easily see that when a method has a significantly higher Pearson's $r$ in Table 2, it also has a significantly higher Pearson's $r$ in Table 4. We present the $r$ calculated by Method 1 because calculating statistical significance makes more sense when the correlation coefficient is calculated at the dataset level (Graham et al., 2015).

D Results of Changing the Temperature and Prompts

We show the results of varying the temperature used to sample the ChatGPT output in Table 5.
In the experiments in this section, we only sample $N = 5$ samples from ChatGPT, since we find that G-Eval and our proposed guidelines are quite robust to the number of samples when $N \geq 5$.

E Datasets

E.1 SummEval

SummEval (Fabbri et al., 2021) is a dataset for the meta-evaluation of summarization. It contains 100 source documents, each with 16 summaries obtained from different summarization models. Each of the 1600 summaries is rated by three workers recruited on Amazon Mechanical Turk and two experts in summarization. Each summary in SummEval is rated by humans based on the coherence, consistency, and fluency of the summary, and the relevance between the summary and the source document. Each attribute is rated on a 5-point Likert scale.
We download the source documents, summaries, and human ratings from the GitHub repository of G-Eval (https://github.com/nlpyang/geval/tree/8f54105/data). SummEval was released under the MIT License, and our usage for research does not violate the dataset's initial intention.

E.2 Topical-Chat

Topical-Chat (Gopalakrishnan et al., 2019) is a knowledge-grounded open-domain dialogue dataset. Each instance consists of a dialogue context (history), an interesting fact related to the topic of the conversation, and a response. Mehri and Eskenazi (2020) release high-quality human annotations on the quality of the responses. They construct the dataset as follows: they first sample 60 dialogue contexts from Topical-Chat, and for each dialogue context and its corresponding fun fact, they use a transformer model to generate four responses using four decoding methods. Each dialogue context has two additional responses: the human response and the ground-truth response. Thus, there are a total of 360 dialogue-response pairs. Those pairs are evaluated based on six attributes, and we follow Zhong et al. (2022) and Liu et al.
(2023) to only use four attributes: naturalness, coherence, engagingness, and groundedness (whether the response is grounded on the provided knowledge). We obtain the human ratings of Topical-Chat from the GitHub repository of UniEval (Zhong et al., 2022): https://github.com/maszhongming/UniEval/blob/main/reproduce/data/dialogue/topical chatting.json.

F Prompts

We list the prompts we use in this section. In the main content of the paper and in the following parts, we use different highlight colors to represent different parts of the prompt. A prompt is composed
| Sec. | CoT | Output | Naturalness $r$ | $\tau$ | Coherence $r$ | $\tau$ | Engagingness $r$ | $\tau$ | Groundedness $r$ | $\tau$ |
|---|---|---|---|---|---|---|---|---|---|---|
| GPT-4† | | Score only | 0.549 | - | 0.594 | - | 0.627 | - | 0.531 | - |
| 3.1 | | Score only | 0.445 | 0.358 | 0.498 | 0.391 | 0.579 | 0.513 | 0.685 | 0.566 |
| | ✗ | | 0.431 | 0.331 | 0.507 | 0.404 | 0.631 | 0.535 | 0.666 | 0.582 |
| 3.2 | ✗ | Score only | 0.431 | 0.331 | 0.507 | 0.404 | 0.631 | 0.535 | 0.666 | 0.582 |
| | ✗ | Free Text | 0.572 | 0.476 | 0.523 | 0.426 | 0.676 | 0.557 | 0.747 | 0.666 |
| | ✗ | Rate-explain | 0.621 | 0.512 | 0.472 | 0.425 | 0.610 | 0.509 | 0.771 | 0.663 |
| | ✗ | Analyze-rate | 0.573 | 0.470 | 0.486 | 0.416 | 0.628 | 0.524 | 0.725 | 0.693 |

Table 4: The Pearson's $r$ and Kendall's $\tau$ correlation coefficients between LLMs' ratings and human ratings for Topical-Chat. Note that in this table, both Pearson's $r$ and Kendall's $\tau$ are calculated by Method 2 in Appendix C.0.2. All the results in this table, except the first row, are from ChatGPT. The results of GPT-4 are from Liu et al. (2023) but should not be compared with our results since the prompts they use may differ from ours. Still, we can see that for naturalness, engagingness, and groundedness, the results of rate-explain and analyze-rate are better than or comparable to GPT-4.
of four parts: (1) the descriptions of the rating task, (2) the definition and rating criteria of the attribute to be rated, (3) the sample to be rated, and (4) a sentence used to prompt the LLM to give the rating.

The prompts for different attributes of the same dataset share the same descriptions of the rating task. Different attributes use different definitions and rating criteria. In G-Eval, the prompts also include the evaluation steps generated by auto CoT.
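The four-part composition above amounts to simple string assembly. The following is a hypothetical sketch, assuming plain string concatenation; the function name and placeholder texts are ours, not the paper's actual prompts.

```python
# Hypothetical sketch of assembling the four prompt parts described above.
def build_prompt(task_description: str, criteria: str,
                 sample: str, output_prompt: str) -> str:
    # (1) rating-task description, (2) attribute definition and rating
    # criteria, (3) the sample to be rated, (4) the output prompt that
    # elicits the rating (e.g., "scores ONLY" vs. rate-explain).
    return "\n\n".join([task_description, criteria, sample, output_prompt])

prompt = build_prompt(
    "You will be given one summary written for a news article.",
    "Coherence (1-5) - the collective quality of all sentences.",
    "Source Text: {{Document}}\nSummary: {{Summary}}",
    "Evaluation Form (scores ONLY):\n- Coherence:",
)
```

Swapping only the fourth argument reproduces the output-prompt ablation (score only, free text, rate-explain, analyze-rate) while the first three parts stay fixed.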
F.1 Prompts for SummEval

The descriptions of the rating task, the definition and rating criteria, and the evaluation steps for coherence, consistency, and relevance in SummEval are from the prompts released by G-Eval in their GitHub repository (https://github.com/nlpyang/geval/tree/8f54105/prompts/summeval). While G-Eval also releases the prompt they use for fluency, we find something highly problematic in it: the prompt for fluency asks the LLM to rate fluency on a scale of 1 to 3 (https://github.com/nlpyang/geval/blob/8f54105061e00377fbbb909153892d5bfb5b3623a/prompts/summeval/fluDetailed.txt), while the original rating scale in SummEval is 1 to 5. We also find that the rating criteria used in G-Eval for fluency differ largely from the rating criteria of fluency used for the human evaluation in SummEval. Through our experiments, we find that this misalignment of evaluation criteria and evaluation scale significantly decreases the Pearson's $r$ with human ratings when using analyze-rate to prompt ChatGPT. This is likely because ChatGPT tends to stick to the rating criteria when prompted with analyze-rate, and when the rating criteria differ from those used to instruct the human raters, the scores generated by ChatGPT deviate more from the human ratings. This highlights the importance of giving the LLM the same instructions as those used in the human evaluation, as emphasized in Chiang and Lee (2023).

First, we show an example prompt for coherence. This prompt corresponds to score only + auto CoT in Table 1.
Coherence

You will be given one summary written for a news article.

Your task is to rate the summary on one metric.

Please make sure you read and understand these instructions carefully. Please keep this document open while reviewing, and refer to it as needed.

Evaluation Criteria:

Coherence (1-5) - the collective quality of all sentences.
We align this dimension with the DUC quality question of structure and coherence, whereby "the summary should be well-structured and well-organized. The summary should not just be a heap of related information, but should build from sentence to sentence to a coherent body of information about a topic."

Evaluation Steps:

1. Read the news article carefully and
| Auto-CoT | Output | Coherence | Consistency | Fluency | Relevance |
|---|---|---|---|---|---|
| | Score only | 0.356 | 0.290 | 0.261 | 0.263 |
| ✗ | Rate-explain | 0.548 | 0.482 | 0.423 | 0.487 |
| ✗ | Analyze-rate | 0.589 | 0.439 | 0.438 | 0.319 |

(a) Temperature $T = 0.3$
| Auto-CoT | Output | Coherence | Consistency | Fluency | Relevance |
|---|---|---|---|---|---|
| | Score only | 0.394 | 0.256 | 0.288 | 0.334 |
| ✗ | Rate-explain | 0.526 | 0.468 | 0.414 | 0.485 |
| ✗ | Analyze-rate | 0.605 | 0.448 | 0.441 | 0.392 |

(b) Temperature $T = 0.7$
| Auto-CoT | Output | Coherence | Consistency | Fluency | Relevance |
|---|---|---|---|---|---|
| | Score only | 0.450 | 0.370 | 0.319 | 0.403 |
| ✗ | Rate-explain | 0.557 | 0.473 | 0.452 | 0.509 |
| ✗ | Analyze-rate | 0.635 | 0.534 | 0.479 | 0.444 |

(c) Temperature $T = 1.0$ (the result in Table 1)

Table 5: Comparing G-Eval (Auto-CoT + score only) with rate-explain and analyze-rate at different temperatures. We boldface the Pearson's $r$ statistically significantly higher than the baseline (the first row in each subtable).
| Auto-CoT | Output | Coherence | Consistency | Fluency | Relevance |
|---|---|---|---|---|---|
| | Score only | 0.308 | 0.248 | 0.265 | 0.345 |
| ✗ | Rate-explain | **0.526** | 0.468 | 0.414 | 0.485 |
| ✗ | Analyze-rate | 0.589 | 0.524 | 0.459 | 0.416 |

(a) Results when prompted with the human evaluator prompts.
| Auto-CoT | Output | Coherence | Consistency | Fluency | Relevance |
|---|---|---|---|---|---|
| | Score only | 0.325 | 0.206 | 0.281 | 0.301 |
| ✗ | Rate-explain | 0.596 | 0.465 | 0.403 | 0.478 |
| ✗ | Analyze-rate | 0.596 | 0.493 | 0.475 | 0.406 |

(b) Results when prompted with the HHH prompts.

Table 6: Comparing G-Eval (Auto-CoT + score only) with rate-explain and analyze-rate when using different prompts. We boldface the Pearson's $r$ statistically significantly higher than the baseline (the first row in each subtable).

identify the main topic and key points.
2. Read the summary and compare it to the news article. Check if the summary covers the main topic and key points of the news article, and if it presents them in a clear and logical order.
3. Assign a score for coherence on a scale of 1 to 5, where 1 is the lowest and 5 is the highest, based on the Evaluation Criteria.

Example:
Source Text: {{Document}}
Summary: {{Summary}}
Evaluation Form (scores ONLY):
- Coherence:

F.1.1 Different Output Prompts

For different output prompts, which is the ablation in Section 3.2 and the last block in Tables 1 and 2, we only change the yellow part (the last part) in the example prompt above. There are four output prompts used in Section 3.2: score only, free text, rate-explain, and analyze-rate.
The prompts for free text are attribute-dependent, and we list them in Section F.1.2. The corresponding output prompts for the other settings are listed as follows:

Score only

Evaluation Form (scores ONLY):
- {Attribute}:

Rate-explain

Evaluation Form (Answer by starting with "Rating:" and then give the explanation of the rating on the next line by "Rationale:"):
- {Attribute}:
"code", + "guess_lang": "txt" + }, + { + "bbox": [ + 68, + 571, + 131, + 584 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 571, + 131, + 584 + ], + "spans": [ + { + "bbox": [ + 68, + 571, + 131, + 584 + ], + "type": "text", + "content": "Analyze-rate" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 67, + 585, + 289, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 585, + 289, + 665 + ], + "spans": [ + { + "bbox": [ + 67, + 585, + 289, + 665 + ], + "type": "text", + "content": "Evaluation Form (Answer by starting with \"Analysis:\" to analyze the given example regarding the evaluation criteria as concise as possible, and then give the numeric rating on the next line by \"Rating:):" + } + ] + } + ], + "index": 12 + }, + { + "type": "code", + "bbox": [ + 69, + 666, + 146, + 678 + ], + "blocks": [ + { + "bbox": [ + 69, + 666, + 146, + 678 + ], + "lines": [ + { + "bbox": [ + 69, + 666, + 146, + 678 + ], + "spans": [ + { + "bbox": [ + 69, + 666, + 146, + 678 + ], + "type": "text", + "content": "- {Attribute}:" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "code_body" + } + ], + "index": 13, + "sub_type": "code", + "guess_lang": "txt" + }, + { + "bbox": [ + 68, + 688, + 244, + 702 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 688, + 244, + 702 + ], + "spans": [ + { + "bbox": [ + 68, + 688, + 244, + 702 + ], + "type": "text", + "content": "F.1.2 Attribute-Dependent Prompts" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 66, + 705, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 705, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 66, + 705, + 291, + 772 + ], + "type": "text", + "content": "The definition and rating criteria of the attribute to be rated, the evaluation steps generated by auto CoT, and output prompt for text-free are attributedependent, and we list them as follows. 
We use different colors to denote different parts in the prompt." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 71, + 526, + 126 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 126 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 126 + ], + "type": "text", + "content": "Note that the following prompts are not the complete prompts used as the model input; they need to be used with the descriptions of the rating task and the sample to be rated." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 303, + 138, + 357, + 149 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 138, + 357, + 149 + ], + "spans": [ + { + "bbox": [ + 303, + 138, + 357, + 149 + ], + "type": "text", + "content": "Coherence" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 303, + 152, + 414, + 163 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 152, + 414, + 163 + ], + "spans": [ + { + "bbox": [ + 303, + 152, + 414, + 163 + ], + "type": "text", + "content": "Evaluation Criteria:" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 302, + 165, + 525, + 300 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 165, + 525, + 300 + ], + "spans": [ + { + "bbox": [ + 302, + 165, + 525, + 300 + ], + "type": "text", + "content": "Coherence (1-5) - the collective quality of all sentences. We align this dimension with the DUC quality question of structure and coherence whereby \"the summary should be well-structured and well-organized. 
The summary should not just be a heap of related information, but should build from sentence to a coherent body of information about a topic.\"" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 303, + 314, + 398, + 327 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 314, + 398, + 327 + ], + "spans": [ + { + "bbox": [ + 303, + 314, + 398, + 327 + ], + "type": "text", + "content": "Evaluation Steps:" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 303, + 327, + 524, + 476 + ], + "type": "list", + "angle": 0, + "index": 24, + "blocks": [ + { + "bbox": [ + 303, + 327, + 524, + 354 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 327, + 524, + 354 + ], + "spans": [ + { + "bbox": [ + 303, + 327, + 524, + 354 + ], + "type": "text", + "content": "1. Read the news article carefully and identify the main topic and key points." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 303, + 354, + 524, + 421 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 354, + 524, + 421 + ], + "spans": [ + { + "bbox": [ + 303, + 354, + 524, + 421 + ], + "type": "text", + "content": "2. Read the summary and compare it to the news article. Check if the summary covers the main topic and key points of the news article, and if it presents them in a clear and logical order." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 303, + 423, + 524, + 476 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 423, + 524, + 476 + ], + "spans": [ + { + "bbox": [ + 303, + 423, + 524, + 476 + ], + "type": "text", + "content": "3. Assign a score for coherence on a scale of 1 to 5, where 1 is the lowest and 5 is the highest based on the Evaluation Criteria." 
+ } + ] + } + ], + "index": 23 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 303, + 490, + 354, + 502 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 490, + 354, + 502 + ], + "spans": [ + { + "bbox": [ + 303, + 490, + 354, + 502 + ], + "type": "text", + "content": "Question:" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 302, + 503, + 525, + 557 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 503, + 525, + 557 + ], + "spans": [ + { + "bbox": [ + 302, + 503, + 525, + 557 + ], + "type": "text", + "content": "How coherent is the summary? That is, how well do the sentences in the summary fit together? (On a scale of 1-5, with 1 being the lowest)" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 303, + 571, + 362, + 582 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 571, + 362, + 582 + ], + "spans": [ + { + "bbox": [ + 303, + 571, + 362, + 582 + ], + "type": "text", + "content": "Consistency" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 303, + 584, + 414, + 596 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 584, + 414, + 596 + ], + "spans": [ + { + "bbox": [ + 303, + 584, + 414, + 596 + ], + "type": "text", + "content": "Evaluation Criteria:" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 302, + 597, + 524, + 691 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 597, + 524, + 691 + ], + "spans": [ + { + "bbox": [ + 302, + 597, + 524, + 691 + ], + "type": "text", + "content": "Consistency (1-5) - the factual alignment between the summary and the summarized source. A factually consistent summary contains only statements that are entailed by the source document. Annotators were also asked to penalize summaries that contained hallucinated facts." 
+ } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 303, + 706, + 398, + 718 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 706, + 398, + 718 + ], + "spans": [ + { + "bbox": [ + 303, + 706, + 398, + 718 + ], + "type": "text", + "content": "Evaluation Steps:" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 303, + 719, + 524, + 772 + ], + "type": "list", + "angle": 0, + "index": 33, + "blocks": [ + { + "bbox": [ + 303, + 719, + 524, + 759 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 719, + 524, + 759 + ], + "spans": [ + { + "bbox": [ + 303, + 719, + 524, + 759 + ], + "type": "text", + "content": "1. Read the news article carefully and identify the main facts and details it presents." + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 303, + 760, + 524, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 760, + 524, + 772 + ], + "spans": [ + { + "bbox": [ + 303, + 760, + 524, + 772 + ], + "type": "text", + "content": "2. Read the summary and compare it to the" + } + ] + } + ], + "index": 32 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "text", + "content": "8939" + } + ] + } + ], + "index": 34 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "bbox": [ + 68, + 72, + 289, + 111 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 72, + 289, + 111 + ], + "spans": [ + { + "bbox": [ + 68, + 72, + 289, + 111 + ], + "type": "text", + "content": "article. Check if the summary contains any factual errors that are not supported by the article." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 68, + 112, + 289, + 137 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 112, + 289, + 137 + ], + "spans": [ + { + "bbox": [ + 68, + 112, + 289, + 137 + ], + "type": "text", + "content": "3. Assign a score for consistency based on the Evaluation Criteria." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 68, + 153, + 119, + 164 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 153, + 119, + 164 + ], + "spans": [ + { + "bbox": [ + 68, + 153, + 119, + 164 + ], + "type": "text", + "content": "Question:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 166, + 289, + 219 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 166, + 289, + 219 + ], + "spans": [ + { + "bbox": [ + 67, + 166, + 289, + 219 + ], + "type": "text", + "content": "How consistent is the summary with the source document in terms of the factual alignment? (On a scale of 1-5, with 1 being the lowest)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 68, + 233, + 108, + 245 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 233, + 108, + 245 + ], + "spans": [ + { + "bbox": [ + 68, + 233, + 108, + 245 + ], + "type": "text", + "content": "Fluency" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 68, + 247, + 179, + 259 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 247, + 179, + 259 + ], + "spans": [ + { + "bbox": [ + 68, + 247, + 179, + 259 + ], + "type": "text", + "content": "Evaluation Criteria:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 260, + 289, + 326 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 260, + 289, + 326 + ], + "spans": [ + { + "bbox": [ + 67, + 260, + 289, + 326 + ], + "type": "text", + "content": "Fluency (1-5): This rating measures the quality of individual sentences, are they well-written and grammatically correct. 
Consider the quality of individual sentences." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 68, + 342, + 163, + 354 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 342, + 163, + 354 + ], + "spans": [ + { + "bbox": [ + 68, + 342, + 163, + 354 + ], + "type": "text", + "content": "Evaluation steps:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 355, + 289, + 422 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 69, + 355, + 210, + 367 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 355, + 210, + 367 + ], + "spans": [ + { + "bbox": [ + 69, + 355, + 210, + 367 + ], + "type": "text", + "content": "1. Read the given summary." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 369, + 289, + 407 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 369, + 289, + 407 + ], + "spans": [ + { + "bbox": [ + 69, + 369, + 289, + 407 + ], + "type": "text", + "content": "2. Evaluate the fluency of the summary on a scale of 1-5 based on the criteria provided." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 69, + 409, + 189, + 422 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 409, + 189, + 422 + ], + "spans": [ + { + "bbox": [ + 69, + 409, + 189, + 422 + ], + "type": "text", + "content": "3. Provide the rating." 
+ } + ] + } + ], + "index": 10 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 68, + 437, + 119, + 448 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 437, + 119, + 448 + ], + "spans": [ + { + "bbox": [ + 68, + 437, + 119, + 448 + ], + "type": "text", + "content": "Question:" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 67, + 449, + 289, + 489 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 449, + 289, + 489 + ], + "spans": [ + { + "bbox": [ + 67, + 449, + 289, + 489 + ], + "type": "text", + "content": "Based on the evaluation criteria, how fluent is the summary? (On a scale of 1-5, with 1 being the lowest)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 68, + 503, + 119, + 514 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 503, + 119, + 514 + ], + "spans": [ + { + "bbox": [ + 68, + 503, + 119, + 514 + ], + "type": "text", + "content": "Relevance" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 68, + 517, + 179, + 528 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 517, + 179, + 528 + ], + "spans": [ + { + "bbox": [ + 68, + 517, + 179, + 528 + ], + "type": "text", + "content": "Evaluation Criteria:" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 67, + 530, + 289, + 623 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 530, + 289, + 623 + ], + "spans": [ + { + "bbox": [ + 67, + 530, + 289, + 623 + ], + "type": "text", + "content": "Relevance (1-5) - selection of important content from the source. The summary should include only important information from the source document. Annotators were instructed to penalize summaries which contained redundancies and excess information." 
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 68, + 639, + 163, + 650 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 639, + 163, + 650 + ], + "spans": [ + { + "bbox": [ + 68, + 639, + 163, + 650 + ], + "type": "text", + "content": "Evaluation Steps:" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 69, + 652, + 289, + 772 + ], + "type": "list", + "angle": 0, + "index": 21, + "blocks": [ + { + "bbox": [ + 69, + 652, + 289, + 677 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 652, + 289, + 677 + ], + "spans": [ + { + "bbox": [ + 69, + 652, + 289, + 677 + ], + "type": "text", + "content": "1. Read the summary and the source document carefully." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 69, + 679, + 289, + 718 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 679, + 289, + 718 + ], + "spans": [ + { + "bbox": [ + 69, + 679, + 289, + 718 + ], + "type": "text", + "content": "2. Compare the summary to the source document and identify the main points of the article." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 69, + 719, + 289, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 719, + 289, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 719, + 289, + 772 + ], + "type": "text", + "content": "3. Assess how well the summary covers the main points of the article, and how much irrelevant or redundant information it contains." + } + ] + } + ], + "index": 20 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 303, + 72, + 519, + 84 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 72, + 519, + 84 + ], + "spans": [ + { + "bbox": [ + 303, + 72, + 519, + 84 + ], + "type": "text", + "content": "4. Assign a relevance score from 1 to 5." 
+ } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 303, + 99, + 354, + 111 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 99, + 354, + 111 + ], + "spans": [ + { + "bbox": [ + 303, + 99, + 354, + 111 + ], + "type": "text", + "content": "Question:" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 303, + 112, + 524, + 178 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 112, + 524, + 178 + ], + "spans": [ + { + "bbox": [ + 303, + 112, + 524, + 178 + ], + "type": "text", + "content": "On a scale of 1-5, with 1 being the lowest, is the summary relevant to the source document and does the summary only contain the important information of the source document?" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 303, + 191, + 451, + 203 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 191, + 451, + 203 + ], + "spans": [ + { + "bbox": [ + 303, + 191, + 451, + 203 + ], + "type": "text", + "content": "F.2 Prompts for Topical-Chat" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 302, + 209, + 525, + 248 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 209, + 525, + 248 + ], + "spans": [ + { + "bbox": [ + 302, + 209, + 525, + 248 + ], + "type": "text", + "content": "First, we show an example prompt for naturalness. This prompt corresponds to the score only + auto CoT in Table 2." 
+ } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 303, + 259, + 362, + 270 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 259, + 362, + 270 + ], + "spans": [ + { + "bbox": [ + 303, + 259, + 362, + 270 + ], + "type": "text", + "content": "Naturalness" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 303, + 273, + 524, + 352 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 273, + 524, + 352 + ], + "spans": [ + { + "bbox": [ + 303, + 273, + 524, + 352 + ], + "type": "text", + "content": "You will be given a conversation between two individuals. You will then be given one potential response for the next turn in the conversation. The response concerns an interesting fact, which will be provided as well." + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 303, + 354, + 524, + 379 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 354, + 524, + 379 + ], + "spans": [ + { + "bbox": [ + 303, + 354, + 524, + 379 + ], + "type": "text", + "content": "Your task is to rate the responses on one metric." + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 303, + 381, + 525, + 433 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 381, + 525, + 433 + ], + "spans": [ + { + "bbox": [ + 303, + 381, + 525, + 433 + ], + "type": "text", + "content": "Please make sure you read and understand these instructions carefully. Please keep this document open while reviewing, and refer to it as needed." 
+ } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 304, + 449, + 420, + 460 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 449, + 420, + 460 + ], + "spans": [ + { + "bbox": [ + 304, + 449, + 420, + 460 + ], + "type": "text", + "content": "Evaluation Crieteria:" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 303, + 463, + 524, + 488 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 463, + 524, + 488 + ], + "spans": [ + { + "bbox": [ + 303, + 463, + 524, + 488 + ], + "type": "text", + "content": "Naturalness (1-3) Is the response naturally written??" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 304, + 490, + 524, + 569 + ], + "type": "list", + "angle": 0, + "index": 36, + "blocks": [ + { + "bbox": [ + 304, + 490, + 524, + 515 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 490, + 524, + 515 + ], + "spans": [ + { + "bbox": [ + 304, + 490, + 524, + 515 + ], + "type": "text", + "content": "- A score of 1 (bad) means that the response is unnatural." + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 304, + 517, + 524, + 542 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 517, + 524, + 542 + ], + "spans": [ + { + "bbox": [ + 304, + 517, + 524, + 542 + ], + "type": "text", + "content": "- A score of 2 (ok) means the response is strange, but not entirely unnatural." + } + ] + } + ], + "index": 34 + }, + { + "bbox": [ + 304, + 544, + 524, + 569 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 544, + 524, + 569 + ], + "spans": [ + { + "bbox": [ + 304, + 544, + 524, + 569 + ], + "type": "text", + "content": "- A score of 3 (good) means that the response is natural." 
+ } + ] + } + ], + "index": 35 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 304, + 584, + 398, + 597 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 584, + 398, + 597 + ], + "spans": [ + { + "bbox": [ + 304, + 584, + 398, + 597 + ], + "type": "text", + "content": "Evaluation Steps:" + } + ] + } + ], + "index": 37 + }, + { + "bbox": [ + 304, + 598, + 524, + 704 + ], + "type": "list", + "angle": 0, + "index": 42, + "blocks": [ + { + "bbox": [ + 304, + 598, + 524, + 623 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 598, + 524, + 623 + ], + "spans": [ + { + "bbox": [ + 304, + 598, + 524, + 623 + ], + "type": "text", + "content": "1. Read the conversation between the two individuals." + } + ] + } + ], + "index": 38 + }, + { + "bbox": [ + 304, + 625, + 524, + 650 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 625, + 524, + 650 + ], + "spans": [ + { + "bbox": [ + 304, + 625, + 524, + 650 + ], + "type": "text", + "content": "2. Read the potential response for the next turn in the conversation." + } + ] + } + ], + "index": 39 + }, + { + "bbox": [ + 304, + 652, + 524, + 677 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 652, + 524, + 677 + ], + "spans": [ + { + "bbox": [ + 304, + 652, + 524, + 677 + ], + "type": "text", + "content": "3. Evaluate the response based on its naturalness, using the provided criteria." + } + ] + } + ], + "index": 40 + }, + { + "bbox": [ + 304, + 679, + 524, + 704 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 679, + 524, + 704 + ], + "spans": [ + { + "bbox": [ + 304, + 679, + 524, + 704 + ], + "type": "text", + "content": "4. Assign a rating score of 1, 2, or 3 based on the evaluation." 
+ } + ] + } + ], + "index": 41 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 303, + 719, + 348, + 731 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 719, + 348, + 731 + ], + "spans": [ + { + "bbox": [ + 303, + 719, + 348, + 731 + ], + "type": "text", + "content": "Example:" + } + ] + } + ], + "index": 43 + }, + { + "bbox": [ + 303, + 734, + 419, + 745 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 734, + 419, + 745 + ], + "spans": [ + { + "bbox": [ + 303, + 734, + 419, + 745 + ], + "type": "text", + "content": "Conversation History:" + } + ] + } + ], + "index": 44 + }, + { + "bbox": [ + 303, + 748, + 368, + 759 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 748, + 368, + 759 + ], + "spans": [ + { + "bbox": [ + 303, + 748, + 368, + 759 + ], + "type": "text", + "content": "{{Document}}" + } + ] + } + ], + "index": 45 + }, + { + "bbox": [ + 303, + 761, + 408, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 761, + 408, + 772 + ], + "spans": [ + { + "bbox": [ + 303, + 761, + 408, + 772 + ], + "type": "text", + "content": "Corresponding Fact:" + } + ] + } + ], + "index": 46 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "text", + "content": "8940" + } + ] + } + ], + "index": 47 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 72, + 113, + 84 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 72, + 113, + 84 + ], + "spans": [ + { + "bbox": [ + 67, + 72, + 113, + 84 + ], + "type": "text", + "content": "{{Fact}}" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 86, + 119, + 98 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 86, + 
119, + 98 + ], + "spans": [ + { + "bbox": [ + 67, + 86, + 119, + 98 + ], + "type": "text", + "content": "Response:" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 99, + 134, + 111 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 99, + 134, + 111 + ], + "spans": [ + { + "bbox": [ + 67, + 99, + 134, + 111 + ], + "type": "text", + "content": "{{Response}}" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 126, + 234, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 126, + 234, + 138 + ], + "spans": [ + { + "bbox": [ + 67, + 126, + 234, + 138 + ], + "type": "text", + "content": "Evaluation Form (scores ONLY):" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 140, + 146, + 152 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 140, + 146, + 152 + ], + "spans": [ + { + "bbox": [ + 69, + 140, + 146, + 152 + ], + "type": "text", + "content": "- Naturalness:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 68, + 161, + 226, + 174 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 161, + 226, + 174 + ], + "spans": [ + { + "bbox": [ + 68, + 161, + 226, + 174 + ], + "type": "text", + "content": "F.2.1 Different Output Prompts" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 177, + 289, + 272 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 177, + 289, + 272 + ], + "spans": [ + { + "bbox": [ + 67, + 177, + 289, + 272 + ], + "type": "text", + "content": "For Topical-Chat, we also conduct ablations on different output prompts. Those different output prompts for score only, rate-explain, analyze-rate are the same as those listed in Section F.1.1. We do not list them here to save some space. The exact prompts we use can be found in the supplementary data of this paper." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 280, + 244, + 293 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 280, + 244, + 293 + ], + "spans": [ + { + "bbox": [ + 67, + 280, + 244, + 293 + ], + "type": "text", + "content": "F.2.2 Attribute-Dependent Prompts" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 296, + 289, + 403 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 296, + 289, + 403 + ], + "spans": [ + { + "bbox": [ + 67, + 296, + 289, + 403 + ], + "type": "text", + "content": "The definition and rating criteria of the attribute to be rated, the evaluation steps generated by auto CoT, and output prompt for text-free are attributedependent, and we list them as follows. Again, the following prompts are not the complete prompts used as the model input; they need to be used with the descriptions of the rating task and the sample to be rated." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 68, + 413, + 127, + 424 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 413, + 127, + 424 + ], + "spans": [ + { + "bbox": [ + 68, + 413, + 127, + 424 + ], + "type": "text", + "content": "Naturalness" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 427, + 185, + 439 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 427, + 185, + 439 + ], + "spans": [ + { + "bbox": [ + 67, + 427, + 185, + 439 + ], + "type": "text", + "content": "Evaluation Crieteria:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 67, + 440, + 289, + 465 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 440, + 289, + 465 + ], + "spans": [ + { + "bbox": [ + 67, + 440, + 289, + 465 + ], + "type": "text", + "content": "Naturalness (1-3) Is the response naturally written??" 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 68, + 468, + 289, + 492 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 468, + 289, + 492 + ], + "spans": [ + { + "bbox": [ + 68, + 468, + 289, + 492 + ], + "type": "text", + "content": "- A score of 1 (bad) means that the response is unnatural." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 68, + 495, + 289, + 520 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 495, + 289, + 520 + ], + "spans": [ + { + "bbox": [ + 68, + 495, + 289, + 520 + ], + "type": "text", + "content": "- A score of 2 (ok) means the response is strange, but not entirely unnatural." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 68, + 522, + 289, + 547 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 522, + 289, + 547 + ], + "spans": [ + { + "bbox": [ + 68, + 522, + 289, + 547 + ], + "type": "text", + "content": "- A score of 3 (good) means that the response is natural." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 68, + 562, + 163, + 575 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 562, + 163, + 575 + ], + "spans": [ + { + "bbox": [ + 68, + 562, + 163, + 575 + ], + "type": "text", + "content": "Evaluation Steps:" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 68, + 576, + 289, + 682 + ], + "type": "list", + "angle": 0, + "index": 20, + "blocks": [ + { + "bbox": [ + 68, + 576, + 289, + 601 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 576, + 289, + 601 + ], + "spans": [ + { + "bbox": [ + 68, + 576, + 289, + 601 + ], + "type": "text", + "content": "1. Read the conversation between the two individuals." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 68, + 603, + 289, + 628 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 603, + 289, + 628 + ], + "spans": [ + { + "bbox": [ + 68, + 603, + 289, + 628 + ], + "type": "text", + "content": "2. 
Read the potential response for the next turn in the conversation." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 68, + 630, + 289, + 655 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 630, + 289, + 655 + ], + "spans": [ + { + "bbox": [ + 68, + 630, + 289, + 655 + ], + "type": "text", + "content": "3. Evaluate the response based on its naturalness, using the provided criteria." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 68, + 657, + 289, + 682 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 657, + 289, + 682 + ], + "spans": [ + { + "bbox": [ + 68, + 657, + 289, + 682 + ], + "type": "text", + "content": "4. Assign a rating score of 1, 2, or 3 based on the evaluation." + } + ] + } + ], + "index": 19 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 68, + 698, + 119, + 709 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 698, + 119, + 709 + ], + "spans": [ + { + "bbox": [ + 68, + 698, + 119, + 709 + ], + "type": "text", + "content": "Question:" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 67, + 711, + 289, + 737 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 711, + 289, + 737 + ], + "spans": [ + { + "bbox": [ + 67, + 711, + 289, + 737 + ], + "type": "text", + "content": "How natural is the response? 
(On a scale of 1-3, with 1 being the lowest)" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 68, + 746, + 121, + 758 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 746, + 121, + 758 + ], + "spans": [ + { + "bbox": [ + 68, + 746, + 121, + 758 + ], + "type": "text", + "content": "Coherence" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 68, + 760, + 185, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 760, + 185, + 772 + ], + "spans": [ + { + "bbox": [ + 68, + 760, + 185, + 772 + ], + "type": "text", + "content": "Evaluation Criteria:" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 302, + 71, + 524, + 111 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 524, + 111 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 524, + 111 + ], + "type": "text", + "content": "Coherence (1-3) Does the response serve as a valid continuation of the conversation history?" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 302, + 112, + 524, + 260 + ], + "type": "list", + "angle": 0, + "index": 29, + "blocks": [ + { + "bbox": [ + 302, + 112, + 524, + 152 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 112, + 524, + 152 + ], + "spans": [ + { + "bbox": [ + 302, + 112, + 524, + 152 + ], + "type": "text", + "content": "- A score of 1 (no) means that the response drastically changes topic or ignores the conversation history." + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 302, + 153, + 524, + 219 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 153, + 524, + 219 + ], + "spans": [ + { + "bbox": [ + 302, + 153, + 524, + 219 + ], + "type": "text", + "content": "- A score of 2 (somewhat) means the response refers to the conversation history in a limited capacity (e.g., in a generic way) and shifts the conversation topic." 
+ } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 302, + 221, + 524, + 260 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 221, + 524, + 260 + ], + "spans": [ + { + "bbox": [ + 302, + 221, + 524, + 260 + ], + "type": "text", + "content": "- A score of 3 (yes) means the response is on topic and strongly acknowledges the conversation history." + } + ] + } + ], + "index": 28 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 303, + 275, + 399, + 287 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 275, + 399, + 287 + ], + "spans": [ + { + "bbox": [ + 303, + 275, + 399, + 287 + ], + "type": "text", + "content": "Evaluation Steps:" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 303, + 289, + 524, + 367 + ], + "type": "list", + "angle": 0, + "index": 35, + "blocks": [ + { + "bbox": [ + 304, + 289, + 486, + 301 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 289, + 486, + 301 + ], + "spans": [ + { + "bbox": [ + 304, + 289, + 486, + 301 + ], + "type": "text", + "content": "1. Read the conversation history." + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 304, + 302, + 474, + 314 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 302, + 474, + 314 + ], + "spans": [ + { + "bbox": [ + 304, + 302, + 474, + 314 + ], + "type": "text", + "content": "2. Read the potential response." + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 303, + 316, + 524, + 341 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 316, + 524, + 341 + ], + "spans": [ + { + "bbox": [ + 303, + 316, + 524, + 341 + ], + "type": "text", + "content": "3. Evaluate the coherence of the response based on the conversation history." 
+ } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 303, + 343, + 524, + 367 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 343, + 524, + 367 + ], + "spans": [ + { + "bbox": [ + 303, + 343, + 524, + 367 + ], + "type": "text", + "content": "4. Assign a score of 1, 2, or 3 for coherence." + } + ] + } + ], + "index": 34 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 303, + 383, + 354, + 395 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 383, + 354, + 395 + ], + "spans": [ + { + "bbox": [ + 303, + 383, + 354, + 395 + ], + "type": "text", + "content": "Question:" + } + ] + } + ], + "index": 36 + }, + { + "bbox": [ + 302, + 396, + 524, + 463 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 396, + 524, + 463 + ], + "spans": [ + { + "bbox": [ + 302, + 396, + 524, + 463 + ], + "type": "text", + "content": "Does the response serve as a valid continuation of the conversation history? (On a scale of 1-3, with 1 meaning the response is invalid and 3 meaning the response is coherent)" + } + ] + } + ], + "index": 37 + }, + { + "bbox": [ + 303, + 476, + 370, + 488 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 476, + 370, + 488 + ], + "spans": [ + { + "bbox": [ + 303, + 476, + 370, + 488 + ], + "type": "text", + "content": "Engagingness" + } + ] + } + ], + "index": 38 + }, + { + "bbox": [ + 303, + 490, + 420, + 502 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 490, + 420, + 502 + ], + "spans": [ + { + "bbox": [ + 303, + 490, + 420, + 502 + ], + "type": "text", + "content": "Evaluation Crieteria:" + } + ] + } + ], + "index": 39 + }, + { + "bbox": [ + 303, + 503, + 524, + 529 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 503, + 524, + 529 + ], + "spans": [ + { + "bbox": [ + 303, + 503, + 524, + 529 + ], + "type": "text", + "content": "Engagingness (1-3) Is the response dull/interesting?" 
+ } + ] + } + ], + "index": 40 + }, + { + "bbox": [ + 303, + 530, + 524, + 650 + ], + "type": "list", + "angle": 0, + "index": 44, + "blocks": [ + { + "bbox": [ + 303, + 530, + 524, + 555 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 530, + 524, + 555 + ], + "spans": [ + { + "bbox": [ + 303, + 530, + 524, + 555 + ], + "type": "text", + "content": "- A score of 1 (dull) means that the response is generic and dull." + } + ] + } + ], + "index": 41 + }, + { + "bbox": [ + 303, + 557, + 524, + 610 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 557, + 524, + 610 + ], + "spans": [ + { + "bbox": [ + 303, + 557, + 524, + 610 + ], + "type": "text", + "content": "- A score of 2 (somewhat interesting) \nmeans the response is somewhat interesting and could engage you in the conversation (e.g., an opinion, thought)" + } + ] + } + ], + "index": 42 + }, + { + "bbox": [ + 303, + 612, + 524, + 650 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 612, + 524, + 650 + ], + "spans": [ + { + "bbox": [ + 303, + 612, + 524, + 650 + ], + "type": "text", + "content": "- A score of 3 (interesting) means the response is very interesting or presents an interesting fact" + } + ] + } + ], + "index": 43 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 303, + 666, + 399, + 678 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 666, + 399, + 678 + ], + "spans": [ + { + "bbox": [ + 303, + 666, + 399, + 678 + ], + "type": "text", + "content": "Evaluation Steps:" + } + ] + } + ], + "index": 45 + }, + { + "bbox": [ + 303, + 679, + 524, + 745 + ], + "type": "list", + "angle": 0, + "index": 48, + "blocks": [ + { + "bbox": [ + 303, + 679, + 524, + 705 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 679, + 524, + 705 + ], + "spans": [ + { + "bbox": [ + 303, + 679, + 524, + 705 + ], + "type": "text", + "content": "1. 
Read the conversation, the corresponding fact and the response carefully." + } + ] + } + ], + "index": 46 + }, + { + "bbox": [ + 303, + 707, + 524, + 745 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 707, + 524, + 745 + ], + "spans": [ + { + "bbox": [ + 303, + 707, + 524, + 745 + ], + "type": "text", + "content": "2. Rate the response on a scale of 1-3 for engagingness, according to the criteria above." + } + ] + } + ], + "index": 47 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 304, + 761, + 354, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 761, + 354, + 772 + ], + "spans": [ + { + "bbox": [ + 304, + 761, + 354, + 772 + ], + "type": "text", + "content": "Question:" + } + ] + } + ], + "index": 49 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 308, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 308, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 308, + 791 + ], + "type": "text", + "content": "8941" + } + ] + } + ], + "index": 50 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "bbox": [ + 68, + 71, + 289, + 111 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 71, + 289, + 111 + ], + "spans": [ + { + "bbox": [ + 68, + 71, + 289, + 111 + ], + "type": "text", + "content": "Is the response interesting and engaging? 
(On a scale of 1-3, with 1 meaning dull and 3 meaning interesting)" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 68, + 120, + 138, + 131 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 120, + 138, + 131 + ], + "spans": [ + { + "bbox": [ + 68, + 120, + 138, + 131 + ], + "type": "text", + "content": "Groundedness" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 68, + 133, + 185, + 145 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 133, + 185, + 145 + ], + "spans": [ + { + "bbox": [ + 68, + 133, + 185, + 145 + ], + "type": "text", + "content": "Evaluation Crieteria:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 68, + 147, + 289, + 199 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 147, + 289, + 199 + ], + "spans": [ + { + "bbox": [ + 68, + 147, + 289, + 199 + ], + "type": "text", + "content": "Groundedness (0- 1) given the fact that this response is conditioned on, determine whether this response uses that fact." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 68, + 201, + 289, + 239 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 201, + 289, + 239 + ], + "spans": [ + { + "bbox": [ + 68, + 201, + 289, + 239 + ], + "type": "text", + "content": "- A score of 0 (no) means the response does not mention or refer to the fact at all" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 68, + 242, + 289, + 267 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 242, + 289, + 267 + ], + "spans": [ + { + "bbox": [ + 68, + 242, + 289, + 267 + ], + "type": "text", + "content": "- A score of 1 (yes) means the response uses the fact well" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 68, + 282, + 163, + 294 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 282, + 163, + 294 + ], + "spans": [ + { + "bbox": [ + 68, + 282, + 163, + 294 + ], + "type": "text", + "content": "Evaluation Steps:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 68, + 296, + 289, + 429 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 68, + 296, + 289, + 321 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 296, + 289, + 321 + ], + "spans": [ + { + "bbox": [ + 68, + 296, + 289, + 321 + ], + "type": "text", + "content": "1. Read the conversation between the two individuals." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 68, + 322, + 289, + 348 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 322, + 289, + 348 + ], + "spans": [ + { + "bbox": [ + 68, + 322, + 289, + 348 + ], + "type": "text", + "content": "2. Identify the fact that is provided for the potential response." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 68, + 350, + 239, + 362 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 350, + 239, + 362 + ], + "spans": [ + { + "bbox": [ + 68, + 350, + 239, + 362 + ], + "type": "text", + "content": "3. 
Read the potential response." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 68, + 364, + 289, + 388 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 364, + 289, + 388 + ], + "spans": [ + { + "bbox": [ + 68, + 364, + 289, + 388 + ], + "type": "text", + "content": "4. Determine if the potential response uses or mentions the fact." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 68, + 391, + 289, + 429 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 391, + 289, + 429 + ], + "spans": [ + { + "bbox": [ + 68, + 391, + 289, + 429 + ], + "type": "text", + "content": "5. Assign a score of 0 or 1 for groundedness based on whether the response uses the fact." + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 68, + 444, + 119, + 455 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 444, + 119, + 455 + ], + "spans": [ + { + "bbox": [ + 68, + 444, + 119, + 455 + ], + "type": "text", + "content": "Question:" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 67, + 458, + 289, + 511 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 458, + 289, + 511 + ], + "spans": [ + { + "bbox": [ + 67, + 458, + 289, + 511 + ], + "type": "text", + "content": "Given the fact that this response is conditioned on, does the response use the fact? 
(On a scale of 0-1, with 0 meaning no and 1 meaning yes)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 68, + 521, + 213, + 533 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 521, + 213, + 533 + ], + "spans": [ + { + "bbox": [ + 68, + 521, + 213, + 533 + ], + "type": "text", + "content": "F.3 Prompts for Section 3.4.2" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 67, + 539, + 291, + 631 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 539, + 291, + 631 + ], + "spans": [ + { + "bbox": [ + 67, + 539, + 291, + 631 + ], + "type": "text", + "content": "HHH prompts You are an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 67, + 640, + 289, + 734 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 640, + 289, + 734 + ], + "spans": [ + { + "bbox": [ + 67, + 640, + 289, + 734 + ], + "type": "text", + "content": "Human annotator prompts Assume that you are a professional and careful human evaluator. You are recruited and paid to conduct the following task. You need to strictly follow the task instruction and ensure that you are doing the job with high-quality." 
+ } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "text", + "content": "8942" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 14 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2023/A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction/03295168-adb4-4f17-ac96-deb081e11468_content_list.json b/2023/A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction/03295168-adb4-4f17-ac96-deb081e11468_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..191633141942d5e8644165c05d7ba88354a28ce8 --- /dev/null +++ b/2023/A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction/03295168-adb4-4f17-ac96-deb081e11468_content_list.json @@ -0,0 +1,1929 @@ +[ + { + "type": "text", + "text": "A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction", + "text_level": 1, + "bbox": [ + 188, + 79, + 811, + 118 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Ruihao Shui", + "bbox": [ + 277, + 126, + 389, + 140 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "National University of Singapore ruihaoshui@u.nus.edu", + "bbox": [ + 198, + 143, + 468, + 174 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Yixin Cao", + "bbox": [ + 620, + 126, + 710, + 140 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Singapore Management University yxcao@smu.edu.sg", + "bbox": [ + 522, + 142, + 806, + 175 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Wang Xiang*", + "bbox": [ + 275, + 187, + 393, + 204 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "University of Science and Technology of China 
xiangwang1223@gmail.com", + "bbox": [ + 137, + 204, + 524, + 237 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Tat-Seng Chua", + "bbox": [ + 600, + 187, + 732, + 204 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "National University of Singapore dcscts@nus.edu.sg", + "bbox": [ + 529, + 205, + 801, + 237 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 260, + 252, + 341, + 267 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Large language models (LLMs) have demonstrated great potential for domain-specific applications, such as the law domain. However, recent disputes over GPT-4's law evaluation raise questions concerning their performance in real-world legal tasks. To systematically investigate their competency in the law, we design practical baseline solutions based on LLMs and test on the task of legal judgment prediction. In our solutions, LLMs can work alone to answer open questions or coordinate with an information retrieval (IR) system to learn from similar cases or solve simplified multi-choice questions. We show that similar cases and multi-choice options, namely label candidates, included in prompts can help LLMs recall domain knowledge that is critical for expertise legal reasoning. We additionally present an intriguing paradox wherein an IR system surpasses the performance of LLM+IR due to limited gains acquired by weaker LLMs from powerful IR systems. In such cases, the role of LLMs becomes redundant. Our evaluation pipeline can be easily extended into other tasks to facilitate evaluations in other domains. 
Code is available at https://github.com/srhthu/LM-CompEval-Legal", + "bbox": [ + 141, + 282, + 460, + 667 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 114, + 682, + 258, + 697 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Large language models have achieved great success in various Natural Language Processing (NLP) tasks (Brown et al., 2020; Touvron et al., 2023), while there are still some disputes over the potential for domain-specific applications (Martínez, 2023). Focusing on the law domain, the leading LLM, GPT-4 (OpenAI, 2023), was claimed to pass the Uniform Bar Exam (UBE) with a 90th percentile score. Although inspiring, however, this result was pointed out to be overestimated (Martínez, 2023).", + "bbox": [ + 112, + 709, + 490, + 872 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/99ef283638db9ae2fce19004b145e404d3b53a5a36b8bac09db03b2bc2773c2c.jpg", + "image_caption": [ + "Figure 1: The task of Legal Judgment Prediction and the evaluation settings. Different colors refer to different charges. For similar cases, \"T\" refers to true similar cases with the same charges as the query cases, while \"F\" refers to false similar cases. For task settings, \"ZS\" is the abbreviation for zero-shot and \"FS\" for few-shot." + ], + "image_footnote": [], + "bbox": [ + 512, + 250, + 884, + 448 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "This raises an interesting question: How exactly LLMs perform in various real-world legal tasks?", + "bbox": [ + 507, + 577, + 882, + 609 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "In this paper, we design practical baseline solutions based on LLMs and systematically investigate their competency in the law, to shed light on other domains as well. We attribute the main issues of the previous benchmark as follows. First, UBE is too general and not subject to any legal jurisdiction (Martínez, 2023). 
Second, UBE contains multi-choice questions and open-ended questions that require human experts to evaluate. To avoid human evaluation, some datasets (Hendrycks et al., 2020) replace open-ended questions with multi-choice questions. However, in real-world applications, there are not only multi-choice but also open questions. Using multi-choice questions only may not be comprehensive enough. Third, specifically in but not limited to common law (Shulayeva et al., 2017; Xiao et al., 2019), similar cases are always introduced as evidence to support expertise legal reasoning (Zhong et al., 2020b), which are not fully", + "bbox": [ + 505, + 613, + 884, + 919 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "*Xiang Wang is also affiliated with Institute of Artificial Intelligence, Institute of Dataspace, Hefei Comprehensive National Science Center.", + "bbox": [ + 112, + 879, + 487, + 917 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "7337", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 7337-7348 December 6-10, 2023 ©2023 Association for Computational Linguistics", + "bbox": [ + 216, + 945, + 779, + 973 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "studied in previous benchmark (Hendrycks et al., 2020).", + "bbox": [ + 112, + 84, + 489, + 115 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "For the first issue, we choose legal judgment prediction (LJP) (Xiao et al., 2018; Chalkidis et al., 2019; Zhong et al., 2020a) as the example task for investigation. It is a real-world problem to determine the charges committed by the defendants under a juridical system, as shown in Figure 1. LJP is typically formulated as a classification task to predict the most possible one from a list of predefined charges. 
Then, for the second and third issues, we design four settings derived from two work scenarios of LLMs to cover open and multichoice questions and the usage of similar cases. In the first scenario, LLMs work alone without explicit knowledge in prompts, assuming all domain knowledge is implicitly stored in parameters. In the second scenario, LLMs coordinate with an information retrieval (IR) system that enriches prompts with similar demonstrations and label candidates to benefit expertise reasoning. Specifically, demonstrations consist of pairs of similar cases and their charges, which are retrieved by the IR system based on similarity of case facts. Labels of the retrieved cases can form label candidates, shown as circles of different colors in Figure 1, to hint LLM with label information and narrow down label space (Ma et al., 2023).", + "bbox": [ + 112, + 117, + 489, + 533 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The four evaluation settings in Figure 1 can be categorized based on the presence of two elements in prompts: demonstrations (similar cases) and label candidates. Demonstrations convert the setting from zero-shot to few-shot prompting, while label candidates simplify the task from open questions to multi-choice questions1. The first scenario corresponds to the first setting, where neither element is present, while the second scenario encompasses the remaining three settings. We evaluate five up-to-date LLMs of the close-source GPT-3 (Brown et al., 2020) family, ChatGPT and GPT-4 (OpenAI, 2023), and open-source LLMs including Vicuna (Chiang et al., 2023), ChatGLM (Du et al., 2022) and BLOOMZ (Muennighoff et al., 2022). 
The evaluation is conducted on a Chinese LJP dataset, namely CAIL (Xiao et al., 2018), which contains cases of 112 criminal law charges2.", + "bbox": [ + 112, + 536, + 489, + 824 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "We highlight our key findings as follows:", + "bbox": [ + 131, + 826, + 438, + 841 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "1. Similar cases and label candidates can help", + "bbox": [ + 131, + 841, + 485, + 858 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "LLMs recall domain knowledge that is critical for expertise legal reasoning.", + "bbox": [ + 544, + 84, + 880, + 116 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "2. Label candidates result in more consistent outputs, indicating LLMs gain greater confidence in their domain knowledge (Jiang et al., 2021).", + "3. Irrelevant demonstrations formed by fixed cases hardly improve performance. This excludes their effect on task illustration.", + "4. Paradox: An IR system can outperform LLM+IR since weaker LLMs acquire limited gains from informative documents retrieved by a powerful IR system. Thus, it is critical to adapte LLMs to generate with retrieved documents.", + "5. More similar cases introduce more knowledge and noise simultaneously, whose final outcome depends on LLMs." + ], + "bbox": [ + 522, + 117, + 884, + 357 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The main contributions are summarized in three aspects:", + "bbox": [ + 509, + 357, + 880, + 390 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- We investigate the law competency of LLMs on the task of legal judgment prediction.", + "- We propose practical baseline solutions for LLMs that tackle two scenarios: working alone or in coordination with an IR system.", + "- We evaluate five LLMs and conduct comprehensive analysis to demystify their characteristics of expertise reasoning." 
+ ], + "bbox": [ + 531, + 398, + 882, + 546 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2 Baseline Method", + "text_level": 1, + "bbox": [ + 507, + 558, + 690, + 571 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The goal of legal judgment prediction is to determine the committed charges given case facts. To harness LLMs for LJP, we adopt in-context learning (Brown et al., 2020) and use LLMs to generate the charges conditioned on prompts (Section 2.1). To enhance LLMs, we incorporate label candidates and demonstrations consisting of similar cases into prompts, which are acquired by an IR system (Section 2.2). This derives four settings of baseline solutions, namely zero-shot open questions, few-shot open questions, zero-shot multi-choice questions, and few-shot multi-choice questions. The multi-choice settings employ label candidates while few-shot settings include demonstrations, as shown in Figure 1. Finally, we introduce how to simulate IR systems with different capabilities to understand their effects (Section 2.3).", + "bbox": [ + 507, + 582, + 884, + 854 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.1 LLM Prompting", + "text_level": 1, + "bbox": [ + 507, + 866, + 687, + 881 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Prompt Design. A prompt begins with an instruction to illustrate the task followed by label", + "bbox": [ + 507, + 887, + 882, + 917 + ], + "page_idx": 1 + }, + { + "type": "page_footnote", + "text": "1It is not strict multi-choice questions. 
LLMs can generate correct answers even though ground-truth labels are absent in candidates.", + "bbox": [ + 112, + 866, + 487, + 903 + ], + "page_idx": 1 + }, + { + "type": "page_footnote", + "text": "After filtering less frequent (article, charge) pairs", + "bbox": [ + 134, + 904, + 440, + 917 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "7338", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "candidates and task demonstrations in the form of input-output pairs. The templates of prompts are displayed in Appendix A.1.", + "bbox": [ + 112, + 84, + 487, + 131 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Parsing. We adopt one automatic parsing function for all LLMs to map LLM outputs to predefined charge labels. No ad hoc heuristics are employed for a fair comparison. Specifically, we use the BM25 algorithm3 to measure text similarity between outputs and pre-defined charges and predict the most similar charges. BM25 is robust and yields comparable performances to neural similarity methods like text2vec4 in our pilot experiments.", + "bbox": [ + 112, + 133, + 487, + 292 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Inference. Sampling is enabled during generation for consistent results, as inspired by Wang et al. (2022). Five outputs are sampled for each prompt with the temperature of 0.8. Their similarity scores of pre-defined labels are averaged.", + "bbox": [ + 112, + 294, + 487, + 375 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.2 IR System for Knowledge Incorporation", + "text_level": 1, + "bbox": [ + 112, + 387, + 475, + 401 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "IR systems are utilized to retrieve similar cases, commonly referenced by lawyers and judges, to inform their judgments. 
In addition to providing demonstrations, these similar cases can also aid in generating potential labels by incorporating the labels from the top similar cases. By employing these smaller sets of predefined charges, namely label candidates, complex open questions can be simplified into multiple-choice questions. This approach is effective in enhancing LM prompting (Ma et al., 2023), as including hundreds of charges directly in prompts is impractical.", + "bbox": [ + 112, + 409, + 487, + 601 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Implementation of IR System. We use the BM25 algorithm to measure the semantic similarity between cases. Similar cases are retrieved from the training dataset. To guarantee that the demonstrations exemplify one of the multi-choice options, we exclude demonstrations with labels that are not among the candidate options5.", + "bbox": [ + 112, + 602, + 487, + 715 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.3 Simulation of IR Systems", + "text_level": 1, + "bbox": [ + 112, + 728, + 359, + 743 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "To investigate the effects of IR capabilities, we simulate a series of IR systems of different capabilities as measured by Precision@1 $^{6}$ . Then the top retrieved cases are used as demonstrations. We consider cases with identical charges to the query cases as true similar cases and vice versa.", + "bbox": [ + 112, + 750, + 487, + 845 + ], + "page_idx": 2 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "3https://pypi.org/project/rank-bm25/", + "4https://github.com/crownpku/text2vec", + "5This condition is not violated for the top four similar cases without filtering.", + "The accuracy of the top one retrieved case." + ], + "bbox": [ + 115, + 853, + 485, + 917 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Realistic Simulation. 
We prioritize the returning of true similar cases for easy query cases, rather than the returning in a random manner. The query difficulty is measured by the Precision@10 of the BM25 retriever described in Section 2.2. The motivation is that queries with shadow linguistic features are more possible to get relevant retrieval results than complex or obscure queries. For a specific value (e.g., a%) of Precision@1 to be simulated, the top a% of easy test cases are assured to have a true similar case, while the rest are assigned false similar cases.", + "bbox": [ + 505, + 84, + 884, + 275 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3 Experimental Setup", + "text_level": 1, + "bbox": [ + 507, + 291, + 715, + 307 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1 Models", + "text_level": 1, + "bbox": [ + 507, + 319, + 613, + 332 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Below is a concise introduction to the five LLMs to be evaluated.", + "bbox": [ + 507, + 340, + 880, + 370 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "GPT-4 (OpenAI, 2023) and ChatGPT are available from OpenAI API and the versions of gpt-4-0314 and gpt-3.5-turbo-0301 are used. For technological details, ChatGPT is claimed to be a sibling model to InstructGPT (Ouyang et al., 2022) that is trained to follow instructions and align to human preferences with the RLHF algorithm (Christiano et al., 2017).", + "bbox": [ + 507, + 373, + 882, + 501 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Vicuna-13B (Chiang et al., 2023) is a LLaMA model (Touvron et al., 2023) fine-tuned on 70K public user-shared conversations with ChatGPT. 
It can be viewed to learn distilled knowledge (Hinton et al., 2015) of ChatGPT.", + "bbox": [ + 507, + 502, + 880, + 582 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "ChatGLM-6B7 is a dialog language model based on the GLM (Du et al., 2022) architecture and supports English and Chinese.", + "bbox": [ + 507, + 583, + 880, + 631 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "BLOOMZ (Muennighoff et al., 2022) is an instruction fine-tuned BLOOM (Scao et al., 2022), a multilingual language model. We use the bloomz-7b1-mt version that is tuned for multilingual prompts. Except for BLOOMZ, Vicuna and ChatGLM are mainly fine-tuned on conversational data.", + "bbox": [ + 507, + 633, + 882, + 744 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.2 Dataset and Pre-processing", + "text_level": 1, + "bbox": [ + 507, + 759, + 769, + 774 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The Chinese LJP dataset, CAIL (Xiao et al., 2018), is used in our experiments. Each sample consists of the case facts and the committed charge as the label. As the original dataset is very large (~100K for training and ~20K for test), we randomly sample a balanced small test set from the original test set. Five cases are sampled for each charge, accounting", + "bbox": [ + 507, + 781, + 884, + 894 + ], + "page_idx": 2 + }, + { + "type": "page_footnote", + "text": "7https://github.com/THUDM/ChatGLM-6B", + "bbox": [ + 529, + 904, + 800, + 917 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "7339", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 2 + }, + { + "type": "table", + "img_path": "images/534e04af4b4c52ed8454eb1cad6294e2caec99b499fba08b07118ed2be61461d.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table><tr><td>Tokenizer</td><td>Median</td><td>≤500 (%)</td><td>≤1000 (%)</td></tr><tr><td>ChatGPT</td><td>396.5</td><td>68.75</td><td>92.32</td></tr><tr><td>Vicuna</td><td>496.0</td><td>50.89</td><td>86.96</td></tr><tr><td>ChatGLM</td><td>206.5</td><td>91.07</td><td>98.57</td></tr><tr><td>BLOOMZ</td><td>210.5</td><td>90.54</td><td>98.93</td></tr></table>
", + "bbox": [ + 139, + 80, + 463, + 166 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Table 1: Statistics of the number of tokens across tokenizers. The last two columns present the percentages of test samples with token counts below the specified values.", + "bbox": [ + 112, + 175, + 489, + 219 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "for a total of 560 test cases across 112 charges. Similarly, we also sample the training and validation sets with 10 cases per charge. The training set is used to retrieve similar cases (Section 2.3), while the validation set is used to determine the optimal $k$ of the kNN algorithm.", + "bbox": [ + 112, + 248, + 487, + 344 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Truncation. Since some cases have very long descriptions, we truncate the case facts of demonstrations to 500 tokens and those of test samples to 1000 tokens. It is worth noting that, for a fair comparison, the text is tokenized by the tokenizer of each model before truncation. Petrov et al. (2023) recently addressed the issue that a tokenizer can lead to different performance across languages. This suggests that performance on a particular language can also be influenced by differences in the language-encoding efficiency of each model's tokenizer.", + "bbox": [ + 115, + 347, + 489, + 539 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Table 1 shows the statistics of the number of tokens processed by different tokenizers8. The most efficient tokenizers for Chinese are those of ChatGLM and BLOOMZ, as indicated by the medians of token counts. In contrast, the tokenizer of ChatGPT produces about $2 \\times$ as many tokens and that of Vicuna about $2.5 \\times$. The truncation lengths are sufficient to accommodate most samples.", + "bbox": [ + 112, + 542, + 489, + 671 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "4 LLM vs. 
LLM with IR System", + "text_level": 1, + "bbox": [ + 112, + 689, + 413, + 707 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "We initially present the overall results, highlighting the importance of label candidates and similar cases, and conduct a comparative analysis of the models. Subsequently, we investigate the relationship between label candidates and self-consistency to unveil their actual effects on expertise reasoning. Additionally, we perform an ablation study by replacing similar cases with fixed cases as demonstrations to further understand their impact.", + "bbox": [ + 112, + 719, + 489, + 865 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/620fe7dc4baab808679f7c64609c21920fadcfb6307c9e362caf890ff9646183.jpg", + "image_caption": [ + "Figure 2: The macro comparison between the four settings. “+Label” refers to zero-shot multi-choice questions; “+Sim Case” refers to few-shot open questions; and “+Label +Sim Case” refers to few-shot multi-choice questions. Multiple points for a model in the last two settings correspond to runs with different numbers of demonstrations." + ], + "image_footnote": [], + "bbox": [ + 521, + 87, + 870, + 265 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/058621cf7b920696a5ded18ebf7ce22d573e13f2fbf3fb20ad1dab34e7e7a260.jpg", + "image_caption": [ + "Figure 3: Comparison of the models under each setting. Few-shot performances are averaged over 1-shot to 4-shot." 
+ ], + "image_footnote": [], + "bbox": [ + 519, + 400, + 705, + 564 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/927a75a027158091b5c7362bb8583d03d067384c89367a300592ad4e44d85ee6.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 709, + 399, + 870, + 564 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "4.1 Overall Results", + "text_level": 1, + "bbox": [ + 507, + 637, + 678, + 651 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The macro comparison between the four settings is shown in Figure 2, where each point represents the performance of one specific run of one model.", + "bbox": [ + 507, + 659, + 882, + 707 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Significance of label candidates and similar cases. In comparison to the zero-shot open question setting where LLMs work alone, the inclusion of label candidates, similar cases, or both yields noteworthy improvements. This highlights the effectiveness of our baseline solutions that leverage IR systems to expand the capabilities of LLMs in legal domains. These findings align with previous research that has also recognized the significance of the two components (Ma et al., 2023; Liu et al., 2021).", + "bbox": [ + 505, + 709, + 884, + 884 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The effects of label candidates and similar cases differ slightly in terms of performance mean and", + "bbox": [ + 507, + 887, + 882, + 917 + ], + "page_idx": 3 + }, + { + "type": "page_footnote", + "text": "8GPT-4 and ChatGPT have the same results. Following OpenAI's guidance, we use the Python package tiktoken for tokenization.", + "bbox": [ + 112, + 879, + 487, + 917 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "7340", + "bbox": [ + 480, + 927, + 521, + 940 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "variance. 
Label candidates contribute to a higher mean performance, while similar cases introduce greater variance. Examining the model performances in the third setting (+Sim Case) displayed in Figure 2, GPT-4 and ChatGPT exhibit more significant improvements from similar cases compared to their smaller counterparts. They also gain more benefit from similar cases than from label candidates. This observation can be attributed to the varying difficulty levels of knowledge utilization. While the knowledge within label candidates is readily accessible and straightforward, leveraging similar cases requires stronger language understanding and few-shot learning abilities.", + "bbox": [ + 112, + 84, + 492, + 311 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Furthermore, the coexistence of label candidates and similar cases further enhances the performance of GPT-4 and ChatGPT, but it diminishes the performance of Vicuna, ChatGLM, and BLOOMZ. This suggests that smaller LLMs may encounter challenges in effectively managing knowledge in multiple forms simultaneously, leading to confusion.", + "bbox": [ + 112, + 317, + 489, + 447 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Model comparison. The performances of the models under zero-shot and few-shot prompting are shown in Figure 3, where few-shot performances are averaged over 1-shot to 4-shot.", + "bbox": [ + 112, + 453, + 489, + 519 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The zero-shot setting emphasizes the ability to understand instructions. When only instructions are available, BLOOMZ performs better than ChatGPT, indicating superior multilingual instruction-following ability. This result is reasonable as BLOOMZ is the only smaller LLM that is fine-tuned on multilingual instructions. Once provided with explicit domain knowledge, ChatGPT outperforms all smaller LLMs. The same holds between BLOOMZ and ChatGLM: ChatGLM overtakes BLOOMZ once the knowledge of label candidates is provided. 
BLOOMZ performs worst when prompted with two forms of knowledge, indicating that BLOOMZ is not very robust to prompts. Among the three smaller LLMs, ChatGLM is the most robust to various forms of knowledge.", + "bbox": [ + 112, + 525, + 489, + 783 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The significant effects of label candidates and similar cases can be explained by their activating the LLM's memory of relevant domain knowledge. This view is supported by two pieces of evidence: the relationship between label candidates and self-consistency (Section 4.2) and the negligible effect of irrelevant cases used as fixed demonstrations (Section 4.3).", + "bbox": [ + 112, + 790, + 489, + 919 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/797c37324a6cae109c7d203b9697dd318c533ebb16e08e086e16c03012b985f5.jpg", + "image_caption": [ + "Figure 4: Changes of performance and self-consistency after adding label candidates. The change of each model is illustrated by an arrow pointing from the open question setting to the multi-choice setting." + ], + "image_footnote": [], + "bbox": [ + 526, + 87, + 848, + 236 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.2 Label Candidates Enhance Self-consistency and Confidence", + "text_level": 1, + "bbox": [ + 507, + 326, + 806, + 359 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "To further understand the effect of label candidates, we propose a metric to measure the self-consistency of LLMs, calculated as the count of the majority prediction among the sampled outputs. Consistent outputs indicate a high level of confidence in LLMs, which is often associated with a better grasp of knowledge (Jiang et al., 2021, 2023).", + "bbox": [ + 505, + 363, + 884, + 475 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The changes in performance and self-consistency after introducing label candidates are shown in Figure 4 as the arrows. 
We observe that the incorporation of label candidates leads to more consistent outputs (8 of 10 cases) and higher confidence in LLMs, with the exceptions of zero-shot GPT-4 (a slight decrease) and few-shot BLOOMZ. In the zero-shot setting, label candidates significantly boost LLM performances. We postulate that label candidates help by eliciting pre-stored domain knowledge with concise charge names. Moreover, self-consistency also correlates with model performances (7 of 10 cases). Such correlation is also observed in other tasks like question answering (Jiang et al., 2021). It is worth noting that label candidates decrease both the self-consistency and the performance of few-shot prompted BLOOMZ, which also aligns with the correlation.", + "bbox": [ + 507, + 478, + 884, + 766 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.3 Domain Knowledge Is More Critical Than Task Illustration", + "text_level": 1, + "bbox": [ + 507, + 780, + 880, + 810 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "There is a possible argument that similar demonstrations can help LLMs understand instructions and tasks. To disentangle their effects on task illustration and provision of domain knowledge, we", + "bbox": [ + 507, + 818, + 884, + 883 + ], + "page_idx": 4 + }, + { + "type": "page_footnote", + "text": "For example, if the five sampled outputs are mapped to labels of (a,a,a,b,c), the consistency score is 3.", + "bbox": [ + 507, + 892, + 882, + 917 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "7341", + "bbox": [ + 480, + 927, + 517, + 940 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/eea72bf54c5ffa53aceb9880908d9a5af00872456134b4818d6da649d2fe31a9.jpg", + "image_caption": [ + "Figure 5: The effects of fixed (irrelevant) and similar cases as demonstrations. 
Split at the baseline setting of zero-shot open questions, the left part shows fixed demonstrations with increasing numbers of demonstrations, while the right part shows similar demonstrations. The shaded area represents the range of standard deviation." + ], + "image_footnote": [], + "bbox": [ + 122, + 86, + 475, + 227 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "experiment with irrelevant demonstrations fixed for all test samples. We manually select two common cases with frequent charges in the original dataset as the fixed demonstrations. The 1-shot performance was averaged over the two demonstrations.", + "bbox": [ + 112, + 372, + 487, + 451 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We compare the effects of fixed and similar demonstrations with the baseline setting of zero-shot open questions in Figure 5. The change of performance from center to left shows that fixed demonstrations hardly benefit LLMs and sometimes harm the performance (e.g., ChatGLM). This indicates that LLMs can basically understand instructions and do not need general demonstrations for task clarification, implying that the main challenge of expertise reasoning is to recall domain knowledge instead of understanding a specific task.", + "bbox": [ + 112, + 455, + 489, + 631 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We inspect the notable performance drop of ChatGLM resulting from fixed demonstrations. We find that ChatGLM tends to analyze the cases of both demonstrations and test samples and then answer with both of their charges. Its wordy style seems to result from the fine-tuning dialog corpus where an assistant LLM is supposed to provide rich information. 
In contrast, similar cases seem to encourage more concise outputs following the format of demonstrations.", + "bbox": [ + 112, + 633, + 489, + 791 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "5 Paradox of Information Retrieval System", + "text_level": 1, + "bbox": [ + 112, + 809, + 436, + 841 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "The significance of similar demonstrations illustrated in Section 4.3 has motivated research focusing on prompting-oriented IR systems (Rubin et al., 2021; Sun et al., 2023) to retrieve high qual", + "bbox": [ + 112, + 854, + 489, + 917 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/0474ae253fa7623cb6310b53e5d59e27aaf151b2cd9ad07d38f2f015a7f64a32.jpg", + "image_caption": [ + "Figure 6: The performance of ChatGPT coordinated with a series of simulated IR systems with varying capabilities as measured by Precision@1. The vertical blue line represents the threshold of IR capability at which IR systems overtake ChatGPT. The performance of ChatGPT in the real setting (1-shot open questions) is indicated by the red plus sign." + ], + "image_footnote": [], + "bbox": [ + 521, + 87, + 870, + 206 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "ity demonstrations. However, we raise an intuitive question: Do LLMs gain substantial improvement from IR systems compared to the kNN baseline that harnesses IR systems for classification tasks? The question is inspired by our observation that the BM25 retriever achieves a Precision@1 of $48.03\\%$ and a prediction accuracy of $57.68\\%$ by majority vote over the top $k = 17$ retrieved similar cases.", + "bbox": [ + 505, + 349, + 882, + 476 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "This observation suggests a paradoxical scenario wherein an IR system outperforms the combination of LLM and IR, with the LLM taking on the leading role and the IR serving as a supporting role. 
In such a scenario, the LLM becomes redundant due to its failure to fully utilize the informative retrieved documents.", + "bbox": [ + 505, + 478, + 882, + 588 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "To investigate the paradox, instead of experimenting with different IR systems, we manipulate the BM25 retriever to simulate a series of IR systems with different capabilities measured by Precision@1, as described in Section 2.3. We take a case study of ChatGPT, whose 1-shot performance under different IR systems (indexed by Precision@1) is shown in Figure 6.", + "bbox": [ + 505, + 590, + 882, + 717 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Results Although the performance of ChatGPT enhanced by IR systems improves with IR capability, it will eventually underperform the IR system once the IR capability surpasses a certain threshold. In the ideal situation where true similar cases are always retrieved, ChatGPT is unable to attain $100\\%$ accuracy and lags significantly behind the optimal IR system. According to Appendix A.4, none of the smaller LLMs is comparable to the BM25 retriever.", + "bbox": [ + 507, + 719, + 884, + 862 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Discussion The findings demonstrate that LLMs face challenges in effectively leveraging informa", + "bbox": [ + 507, + 864, + 882, + 896 + ], + "page_idx": 5 + }, + { + "type": "page_footnote", + "text": "It is identical to the precision of $k\mathrm{NN}$ with $k = 1$.", + "bbox": [ + 526, + 903, + 836, + 917 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "7342", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/172f2228b736de67a5247c124932299a26a9018979977cbb67db93e147fb597c.jpg", + "image_caption": [ + "Figure 7: Performance vs. the number of similar demonstrations of the five LLMs." 
+ ], + "image_footnote": [], + "bbox": [ + 122, + 86, + 480, + 227 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "tive retrieved documents. This underscores the need for significant research efforts to enhance the synergy between auto-regressive language models and retrieval by conditioning model outputs more on retrieved documents. Previous work has explored the augmentation of LLMs with retrieval at both the pre-training and fine-tuning stages (Borgeaud et al., 2022; Wang et al., 2023). Moreover, the marginal and inadequate improvement with retrieval indicates the limited legal reasoning ability of existing general LLMs. There is a need for future efforts to enhance domain-specific reasoning abilities of pre-trained foundation models.", + "bbox": [ + 112, + 297, + 489, + 505 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "6 Ablation Study", + "text_level": 1, + "bbox": [ + 112, + 518, + 280, + 533 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "6.1 More Demonstrations Are Not Always Better", + "text_level": 1, + "bbox": [ + 112, + 544, + 460, + 575 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "The impact of the number of similar demonstrations $(n)$ is depicted in Figure 7. It is evident that GPT-4 and ChatGPT demonstrate proficiency in handling larger numbers of demonstrations, leading to enhanced performance, whereas Vicuna, ChatGLM and BLOOMZ experience varying degrees of performance degradation with increasing numbers. Notably, ChatGLM displays the least sensitivity to $n$ . Furthermore, even ChatGPT's performance declines when $n$ is increased from three to four. The performance improvement resulting from larger values of $n$ can be attributed to the increased recall of true similar cases. 
Conversely, the decline in performance can be attributed to the noise introduced by more false similar cases.", + "bbox": [ + 112, + 581, + 489, + 820 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Performance variations. The changes in performance after including an additional demonstration are visualized using heat maps in Figure 8. For each model, the three heat maps stand for the variations from k-shot to $(\mathrm{k} + 1)$ -shot, which are denoted below. For each heat map, the two rows indicate", + "bbox": [ + 112, + 822, + 489, + 917 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "the inclusion of a new demonstration with true (T) or false (F) similar cases, while the columns indicate the combinations of existing demonstrations. Take the second heat map as an example. The cell in the column of (F, T) and the row of (T) displays the performance variation between 2-shot of (F, T) demonstrations and 3-shot of (F, T, T) demonstrations. Purple represents performance improvement, while green represents performance decline.", + "bbox": [ + 507, + 84, + 884, + 228 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "For ChatGPT and BLOOMZ, the second rows of their three heat maps are mainly in purple, indicating significant enhancements resulting from the inclusion of true similar cases. However, the first rows of BLOOMZ's heat maps display a deeper green color than those of ChatGPT, suggesting that BLOOMZ experiences a greater degree of performance decline caused by the inclusion of false similar cases. These findings indicate different sensitivity to false similar demonstrations. Powerful language models like GPT-4 and ChatGPT exhibit robustness to noise in false similar cases, allowing them to remain focused on relevant information in true similar cases. In contrast, weaker LLMs are susceptible to the influence of such noise. 
Overall, ChatGPT performs better when provided with more similar demonstrations, whereas BLOOMZ demonstrates the opposite, as shown in Figure 7.", + "bbox": [ + 507, + 230, + 884, + 520 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "The conclusion is that increased numbers of demonstrations have both positive and negative implications for expertise reasoning. However, LLMs could potentially gain from additional demonstrations in tasks that require clear task illustration.", + "bbox": [ + 507, + 521, + 882, + 602 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "6.2 The Impact of Absent Ground Truth Labels", + "text_level": 1, + "bbox": [ + 507, + 619, + 843, + 650 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We manually incorporate ground-truth labels into label candidates in cases where they are absent, which may occur due to the limited recall capability of the IR system described in Section 2.2. The test samples are categorized into two groups, namely \"Easy\" and \"Hard\", based on whether their ground-truth labels are retrieved by the IR system. The original performance of the two groups and the performance of the \"Hard\" group with modified prompts to include ground truth labels, namely \"Hard+GT\", are displayed in Figure 9.", + "bbox": [ + 505, + 659, + 882, + 835 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "The performance gaps between the \"Easy\" and \"Hard+GT\" groups suggest that challenging samples for IR systems are also difficult for LLMs. 
However, this gap is insignificant for the powerful GPT-4, which perceives them as equally challeng", + "bbox": [ + 507, + 839, + 884, + 917 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "7343", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/fceac87db227ad2a57df6f7e774a27f1bdaa0fa61b7ab09823a4b90fdef37bc8.jpg", + "image_caption": [ + "Figure 8: Heat maps of performance variations resulting from the inclusion of an additional demonstration. \"T\" corresponds to demonstrations with true similar cases, while \"F\" represents those with false similar cases. Each row represents the included new demonstration, while each column indicates the status of existing demonstrations." + ], + "image_footnote": [], + "bbox": [ + 179, + 84, + 833, + 227 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/37506c50c53710f25935e164f14860b2a53239585ed1d5a3b8f8ac353fd38898.jpg", + "image_caption": [ + "Figure 9: The performance of \"Easy\" and \"Hard\" samples under the setting of zero-shot multi-choice questions. \"Hard+GT\" refers to the \"Hard\" group after the absent ground-truth labels are included in the label candidates." + ], + "image_footnote": [], + "bbox": [ + 122, + 311, + 478, + 432 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "ing. The improvement of \"Hard+GT\" compared to \"Hard\" is notable in GPT-4, ChatGPT and ChatGLM but inconspicuous in Vicuna, which has inferior legal competency. 
Considering the relatively small size of the \"Hard\" group (79/560), the absence of ground truth labels does not have a significant impact, especially for weaker LLMs.", + "bbox": [ + 112, + 536, + 489, + 650 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6.3 Incorporation of Law Articles", + "text_level": 1, + "bbox": [ + 112, + 668, + 394, + 683 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We examine the effect of incorporating into prompts the legal articles that explicitly define the charges. For each charge retrieved by the IR system11, ChatGPT is required to determine whether the defendant is guilty of the particular charge by answering with a yes or no. We find that $94.46\\%$ of the ground truth charges are accurately detected, while only $27.31\\%$ of the detected charges are correct. The high recall and low precision indicate a substantial difference between ChatGPT and legal experts in the ability to distinguish charges and make precise judgments.", + "bbox": [ + 112, + 693, + 489, + 885 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "7 Discussion", + "text_level": 1, + "bbox": [ + 509, + 309, + 636, + 323 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We compare the LLMs with supervised baselines. We fine-tune BERT (Devlin et al., 2018) on the same training set and achieve an accuracy of $68\\%$, comparable to ChatGPT but lower than GPT-4. Since LLMs are not fine-tuned on the specific LJP task, this result highlights the remarkable superiority of LLMs in acquiring significant knowledge and leveraging transfer learning (Raffel et al., 2020).", + "bbox": [ + 507, + 334, + 884, + 462 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "However, we observe that BERT's performance improves to $89\\%$ when trained with the original training set ( $\\sim 10\\mathrm{K}$ ). We find that certain knowledge is present in shallow features, which can be easily learned with supervision. 
These superficial features can result in biased supervised models. Fortunately, unsupervised pre-training objectives make LLMs more robust and less vulnerable to this issue. This depicts a promising future for NLP applications in various domains.", + "bbox": [ + 507, + 463, + 882, + 621 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "8 Conclusion", + "text_level": 1, + "bbox": [ + 507, + 636, + 640, + 651 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "To address the deficiency in evaluating the competency of LLMs in the field of law, we focused on the task of legal judgment prediction and devised four settings to facilitate a thorough evaluation that encompassed both open and multiple-choice questions and incorporated similar cases to aid in the decision-making process.", + "bbox": [ + 507, + 661, + 884, + 772 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "The evaluation results revealed different behaviors among the prominent LLMs, namely GPT-4 and ChatGPT, compared to their smaller counterparts. Both GPT-4 and ChatGPT exhibited remarkable proficiency in effectively leveraging domain knowledge in various formats. Among the smaller LLMs, ChatGLM displayed greater robustness, while BLOOMZ showcased superior zero-shot ability.", + "bbox": [ + 507, + 774, + 885, + 917 + ], + "page_idx": 7 + }, + { + "type": "page_footnote", + "text": "11We also include the ground truth charge.", + "bbox": [ + 131, + 903, + 384, + 917 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "7344", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We presented an intriguing paradox wherein LLMs could become redundant in the presence of a powerful IR system. 
When improving IR systems to benefit LLMs, it is crucial for researchers to acknowledge this paradoxical scenario and prevent a great disparity between LLMs and IR systems.", + "bbox": [ + 112, + 84, + 489, + 181 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Limitations", + "text_level": 1, + "bbox": [ + 112, + 198, + 220, + 212 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "One limitation of this paper is the use of the closed-source GPT-4 and ChatGPT, whose availability depends on the commercial company OpenAI. According to OpenAI, the ChatGPT and GPT-4 versions used in this paper, namely gpt-3.5-turbo-0301 and gpt-4-0314, will be deprecated and not available after September 13th, 2023.", + "bbox": [ + 112, + 228, + 489, + 355 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Another limitation pertains to the selection of LLMs. Due to the rapid emergence of new LLMs, we are unable to include all of them under time constraints. Rather than covering more models, we focus on designing comprehensive evaluation settings and conducting insightful analyses to shed light on other domains.", + "bbox": [ + 112, + 357, + 489, + 470 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Ethics Statement", + "text_level": 1, + "bbox": [ + 114, + 488, + 265, + 504 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "The task of legal judgment prediction is used to evaluate LLM's competency in the law. The primary objective of this task is to assist judges and lawyers in comprehending lengthy legal documents by offering them a supplementary tool. It is important to note that this task does not seek to replace the roles of judges and lawyers, nor does it aim to determine the guilt or charges of defendants through machine learning algorithms. Additionally, there is research focused on interpreting LJP models, aiming to enhance the transparency of black-box models for improved utilization by legal practitioners. 
The paper utilizes a public and anonymized dataset to avoid the potential issue of personal information leakage.", + "bbox": [ + 112, + 518, + 489, + 760 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Acknowledgements", + "text_level": 1, + "bbox": [ + 114, + 777, + 285, + 793 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "We thank all reviewers for their constructive comments. This research is supported by NExT Research Center, the National Natural Science Foundation of China (9227010114) and the University Synergy Innovation Program of Anhui Province (GXXT-2022-040).", + "bbox": [ + 112, + 806, + 489, + 903 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 510, + 83, + 608, + 98 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. 2022. Improving language models by retrieving from trillions of tokens. In International conference on machine learning, pages 2206-2240. PMLR.", + "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901.", + "Ilias Chalkidis, Ion Androutsopoulos, and Nikolaos Aletras. 2019. Neural legal judgment prediction in English. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4317-4323.", + "Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. 
Vicuna: An open-source chatbot impressing gpt-4 with $90\\%$* chatgpt quality.", + "Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30.", + "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.", + "Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. 2022. Glm: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 320-335.", + "Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300.", + "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.", + "Zhengbao Jiang, Jun Araki, Haibo Ding, and Graham Neubig. 2021. How can we know when language models know? on the calibration of language models for question answering. Transactions of the Association for Computational Linguistics, 9:962-977.", + "Zhengbao Jiang, Frank F Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Active retrieval augmented generation. arXiv preprint arXiv:2305.06983." + ], + "bbox": [ + 510, + 105, + 885, + 917 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "7345", + "bbox": [ + 480, + 928, + 519, + 941 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2021. What makes good in-context examples for gpt-3? 
arXiv preprint arXiv:2101.06804.", + "Yubo Ma, Yixin Cao, YongChing Hong, and Aixin Sun. 2023. Large language model is not a good few-shot information extractor, but a good reranker for hard samples! arXiv preprint arXiv:2303.08559.", + "Eric Martínez. 2023. Re-evaluating gpt-4's bar exam performance. Available at SSRN 4441311.", + "Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, et al. 2022. Crosslingual generalization through multitask finetuning. arXiv preprint arXiv:2211.01786.", + "OpenAI. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774.", + "Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744.", + "Aleksandar Petrov, Emanuele La Malfa, Philip HS Torr, and Adel Bibi. 2023. Language model tokenizers introduce unfairness between languages. arXiv preprint arXiv:2305.15425.", + "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485-5551.", + "Ohad Rubin, Jonathan Herzig, and Jonathan Berant. 2021. Learning to retrieve prompts for in-context learning. arXiv preprint arXiv:2112.08633.", + "Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Galle, et al. 2022. Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100.", + "Olga Shulayeva, Advaith Siddharthan, and Adam Wyner. 2017. Recognizing cited facts and principles in legal judgements. 
Artificial Intelligence and Law, 25(1):107-126.", + "Xiaofei Sun, Linfeng Dong, Xiaoya Li, Zhen Wan, Shuhe Wang, Tianwei Zhang, Jiwei Li, Fei Cheng, Lingjuan Lyu, Fei Wu, and Guoyin Wang. 2023. Pushing the limits of chatgpt on nlp tasks.", + "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro," + ], + "bbox": [ + 115, + 85, + 485, + 917 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.", + "Boxin Wang, Wei Ping, Peng Xu, Lawrence McAfee, Zihan Liu, Mohammad Shoeybi, Yi Dong, Oleksii Kuchaiev, Bo Li, Chaowei Xiao, et al. 2023. Shall we pretrain autoregressive language models with retrieval? a comprehensive study. arXiv preprint arXiv:2304.06762.", + "Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171.", + "Chaojun Xiao, Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Zhiyuan Liu, Maosong Sun, Yansong Feng, Xianpei Han, Zhen Hu, Heng Wang, et al. 2018. Cail2018: A large-scale legal dataset for judgment prediction. arXiv preprint arXiv:1807.02478.", + "Chaojun Xiao, Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Zhiyuan Liu, Maosong Sun, Tianyang Zhang, Xianpei Han, Zhen Hu, Heng Wang, et al. 2019. Cail2019-scm: A dataset of similar case matching in legal domain. arXiv preprint arXiv:1911.08962.", + "Haoxi Zhong, Yuzhong Wang, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, and Maosong Sun. 2020a. Iteratively questioning and answering for interpretable legal judgment prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 1250-1257.", + "Haoxi Zhong, Chaojun Xiao, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, and Maosong Sun. 2020b. 
How does nlp benefit legal system: A summary of legal artificial intelligence. arXiv preprint arXiv:2004.12158." + ], + "bbox": [ + 510, + 85, + 880, + 605 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "7346", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "A Appendix", + "text_level": 1, + "bbox": [ + 114, + 84, + 236, + 99 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "A.1 Prompt Templates", + "text_level": 1, + "bbox": [ + 114, + 110, + 307, + 124 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "The prompt template is shown in Figure 10. The translation of the original Chinese prompt is displayed using orange text. The setting of zero-shot open questions use a longer instruction that appends \"Output the charge name directly\" to the instruction in Figure A.1.", + "bbox": [ + 112, + 131, + 487, + 227 + ], + "page_idx": 10 + }, + { + "type": "image", + "img_path": "images/2f571fc406aeb44a37b39fa8436dcaf4cb565f2f8dbdaee851cee92ed3b8a327.jpg", + "image_caption": [ + "Figure 10: The prompt template in Chinese and English." + ], + "image_footnote": [], + "bbox": [ + 119, + 243, + 468, + 382 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "A.2 Robust to Fixed Demonstrations", + "text_level": 1, + "bbox": [ + 114, + 439, + 418, + 454 + ], + "page_idx": 10 + }, + { + "type": "table", + "img_path": "images/ac846278a1128c582af753f3fa49f81f2815f8dfaa13c2157b3810c6f4b2d04b.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Model | 1shot | 2shot
GPT-4 | 49.59 / 48.84 | 50.69
ChatGPT | 47.01 / 46.57 | 47.55
Vicuna-13B | 22.74 / 29.38 | 28.37
ChatGLM-6B | 22.39 / 25.14 | 21.36
BLOOMZ-7B | 36.65 / 43.94 | 42.24
", + "bbox": [ + 147, + 468, + 453, + 569 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "We examine the effects of the two fixed cases mentioned in Section 4.3 in Table 2. We find that GPT-4 and ChatGPT are robust to the selection of the fixed demonstration in 1-shot setting, while Vicuna, ChatGLM and BLOOMZ are less robust.", + "bbox": [ + 112, + 627, + 485, + 708 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "A.3 Comparison with Supervised Baselines", + "text_level": 1, + "bbox": [ + 114, + 720, + 468, + 736 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "To understand the performance of supervised finetuning (SFT) baselines on LJP, we experiment on three models: BERT $^{12}$ , XLM-RoBERTa $^{13}$ and DeBERTa $^{14}$ . These models are fine-tuned on two datasets of different sizes: the original CAIL dataset (~100k samples) and the sampled training set (1120 samples) that is used as retrieval corpus described in Section 3.2, denoted as CAIL_few.", + "bbox": [ + 112, + 741, + 487, + 869 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "The SFT models are evaluated on the same evaluation dataset described in Section 3.2. The smaller training set aims to compare the few-shot performance of SFT baselines and LLMs in low data scenario.", + "bbox": [ + 507, + 84, + 882, + 162 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "The results of SFT models are shown in Figure 3. Considering the highest accuracy of GPT-4 being $74.46\\%$ (multi-choice, 4shot), GPT-4 can outperform supervised baselines in low data scenario. If there is abundant training data, supervised baselines are still better than GPT-4 by $15\\%$ .", + "bbox": [ + 507, + 165, + 882, + 261 + ], + "page_idx": 10 + }, + { + "type": "table", + "img_path": "images/9ff0ebb167c5e4bf5cbfeac94073da6a547845d2e4d07cd2fc8c9b9ce5e51518.jpg", + "table_caption": [ + "Table 2: The classification accuracy scores with prompts consisting of fixed cases." 
+ ], + "table_footnote": [], + "table_body": "
Model | CAIL | CAIL_few
BERT | 89.64 | 68.04
XLM-RoBERTa | 88.75 | 66.43
DeBERTa | 88.57 | 30.89
", + "bbox": [ + 542, + 271, + 848, + 340 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Table 3: Prediction accuracy of SFT models fine-tuned on two training datasets of different sizes.", + "bbox": [ + 507, + 350, + 880, + 379 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "A.4 Detailed Results", + "text_level": 1, + "bbox": [ + 507, + 406, + 687, + 420 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "The specific values of performances displayed in Figure 2 are presented in Table 4. Besides, we also provide the performance of the F1 score in Table 5.", + "bbox": [ + 507, + 426, + 882, + 474 + ], + "page_idx": 10 + }, + { + "type": "page_footnote", + "text": "12bert-base-chinese", + "bbox": [ + 132, + 877, + 273, + 890 + ], + "page_idx": 10 + }, + { + "type": "page_footnote", + "text": "13xIm-roberta-base", + "bbox": [ + 132, + 891, + 265, + 903 + ], + "page_idx": 10 + }, + { + "type": "page_footnote", + "text": "$^{14}$ microsoft/mdeberta-v3-base", + "bbox": [ + 132, + 904, + 341, + 917 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "7347", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 10 + }, + { + "type": "table", + "img_path": "images/f4de87165a1fedaa2c922df6fe9b182d5d6f862f35ff28c6fa9598eaeea95886.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Model | Open Questions | Multiple-choice Questions
      | 0shot | 1shot | 2shot | 3shot | 4shot | 0shot | 1shot | 2shot | 3shot | 4shot
GPT-4 | 55.18 | 64.82 | 69.11 | 69.82 | 71.96 | 63.93 | 71.25 | 72.50 | 73.75 | 74.46
ChatGPT | 46.61 | 60.00 | 62.86 | 64.82 | 66.96 | 61.61 | 64.46 | 66.96 | 70.36 | 67.14
Vicuna-13B | 28.21 | 50.36 | 49.64 | 51.79 | 35.89 | 47.86 | 44.82 | 43.39 | 35.71 | 19.46
ChatGLM-6B | 41.43 | 51.79 | 50.00 | 50.36 | 50.54 | 55.71 | 50.54 | 49.64 | 49.46 | 47.32
BLOOMZ-7B | 49.82 | 54.82 | 52.68 | 52.50 | 51.25 | 53.39 | 31.96 | 31.07 | 27.32 | 26.61
", + "bbox": [ + 127, + 216, + 868, + 334 + ], + "page_idx": 11 + }, + { + "type": "table", + "img_path": "images/7dc56daa2ba608a95e02a93d3ad5bc4b3b63d157d6d482ad9dbe66ee92f49115.jpg", + "table_caption": [ + "Table 4: The classification accuracy scores of all models under the four settings." + ], + "table_footnote": [], + "table_body": "
Model | Open Questions | Multiple-choice Questions
      | 0shot | 1shot | 2shot | 3shot | 4shot | 0shot | 1shot | 2shot | 3shot | 4shot
GPT-4 | 50.52 | 62.72 | 67.54 | 68.61 | 71.02 | 62.31 | 70.42 | 71.81 | 73.24 | 74.00
ChatGPT | 43.14 | 58.42 | 61.86 | 64.40 | 66.16 | 60.67 | 63.51 | 66.85 | 69.59 | 66.62
Vicuna-13B | 25.50 | 48.85 | 47.64 | 49.49 | 39.82 | 44.70 | 41.73 | 41.48 | 35.03 | 21.61
ChatGLM-6B | 41.89 | 50.30 | 47.76 | 48.59 | 48.67 | 53.74 | 49.26 | 47.56 | 47.61 | 45.32
BLOOMZ-7B | 46.90 | 53.28 | 51.06 | 50.90 | 49.26 | 50.68 | 29.25 | 27.92 | 25.27 | 23.37
", + "bbox": [ + 127, + 637, + 868, + 753 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Table 5: The classification F1 scores of all models under the four settings.", + "bbox": [ + 247, + 764, + 746, + 778 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "7348", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 11 + } +] \ No newline at end of file diff --git a/2023/A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction/03295168-adb4-4f17-ac96-deb081e11468_model.json b/2023/A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction/03295168-adb4-4f17-ac96-deb081e11468_model.json new file mode 100644 index 0000000000000000000000000000000000000000..6b7f3acb31bf4d003b227e20b44b2b70a3832663 --- /dev/null +++ b/2023/A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction/03295168-adb4-4f17-ac96-deb081e11468_model.json @@ -0,0 +1,2424 @@ +[ + [ + { + "type": "title", + "bbox": [ + 0.189, + 0.08, + 0.812, + 0.12 + ], + "angle": 0, + "content": "A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction" + }, + { + "type": "text", + "bbox": [ + 0.278, + 0.127, + 0.391, + 0.141 + ], + "angle": 0, + "content": "Ruihao Shui" + }, + { + "type": "text", + "bbox": [ + 0.199, + 0.144, + 0.47, + 0.175 + ], + "angle": 0, + "content": "National University of Singapore ruihaoshui@u.nus.edu" + }, + { + "type": "text", + "bbox": [ + 0.621, + 0.127, + 0.712, + 0.141 + ], + "angle": 0, + "content": "Yixin Cao" + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.143, + 0.808, + 0.177 + ], + "angle": 0, + "content": "Singapore Management University yxcao@smu.edu.sg" + }, + { + "type": "text", + "bbox": [ + 0.276, + 0.188, + 0.394, + 0.205 + ], + "angle": 0, + "content": "Wang Xiang*" + }, + { + "type": "text", + "bbox": [ + 0.139, + 0.205, + 0.525, + 0.238 + ], + "angle": 0, + "content": "University of Science and Technology of China xiangwang1223@gmail.com" 
+ }, + { + "type": "text", + "bbox": [ + 0.601, + 0.188, + 0.734, + 0.205 + ], + "angle": 0, + "content": "Tat-Seng Chua" + }, + { + "type": "text", + "bbox": [ + 0.53, + 0.206, + 0.802, + 0.238 + ], + "angle": 0, + "content": "National University of Singapore dcscts@nus.edu.sg" + }, + { + "type": "title", + "bbox": [ + 0.261, + 0.253, + 0.342, + 0.268 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.142, + 0.284, + 0.461, + 0.668 + ], + "angle": 0, + "content": "Large language models (LLMs) have demonstrated great potential for domain-specific applications, such as the law domain. However, recent disputes over GPT-4's law evaluation raise questions concerning their performance in real-world legal tasks. To systematically investigate their competency in the law, we design practical baseline solutions based on LLMs and test on the task of legal judgment prediction. In our solutions, LLMs can work alone to answer open questions or coordinate with an information retrieval (IR) system to learn from similar cases or solve simplified multi-choice questions. We show that similar cases and multi-choice options, namely label candidates, included in prompts can help LLMs recall domain knowledge that is critical for expertise legal reasoning. We additionally present an intriguing paradox wherein an IR system surpasses the performance of LLM+IR due to limited gains acquired by weaker LLMs from powerful IR systems. In such cases, the role of LLMs becomes redundant. Our evaluation pipeline can be easily extended into other tasks to facilitate evaluations in other domains. 
Code is available at https://github.com/srhthu/LM-CompEval-Legal" + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.683, + 0.26, + 0.699 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.711, + 0.491, + 0.873 + ], + "angle": 0, + "content": "Large language models have achieved great success in various Natural Language Processing (NLP) tasks (Brown et al., 2020; Touvron et al., 2023), while there are still some disputes over the potential for domain-specific applications (Martínez, 2023). Focusing on the law domain, the leading LLM, GPT-4 (OpenAI, 2023), was claimed to pass the Uniform Bar Exam (UBE) with a 90th percentile score. Although inspiring, however, this result was pointed out to be overestimated (Martínez, 2023)." + }, + { + "type": "image", + "bbox": [ + 0.513, + 0.251, + 0.885, + 0.449 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.457, + 0.886, + 0.543 + ], + "angle": 0, + "content": "Figure 1: The task of Legal Judgment Prediction and the evaluation settings. Different colors refer to different charges. For similar cases, \"T\" refers to true similar cases with the same charges as the query cases, while \"F\" refers to false similar cases. For task settings, \"ZS\" is the abbreviation for zero-shot and \"FS\" for few-shot." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.578, + 0.883, + 0.61 + ], + "angle": 0, + "content": "This raises an interesting question: How exactly LLMs perform in various real-world legal tasks?" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.614, + 0.885, + 0.92 + ], + "angle": 0, + "content": "In this paper, we design practical baseline solutions based on LLMs and systematically investigate their competency in the law, to shed light on other domains as well. We attribute the main issues of the previous benchmark as follows. First, UBE is too general and not subject to any legal jurisdiction (Martínez, 2023). 
Second, UBE contains multi-choice questions and open-ended questions that require human experts to evaluate. To avoid human evaluation, some datasets (Hendrycks et al., 2020) replace open-ended questions with multi-choice questions. However, in real-world applications, there are not only multi-choice but also open questions. Using multi-choice questions only may not be comprehensive enough. Third, specifically in but not limited to common law (Shulayeva et al., 2017; Xiao et al., 2019), similar cases are always introduced as evidence to support expertise legal reasoning (Zhong et al., 2020b), which are not fully" + }, + { + "type": "page_footnote", + "bbox": [ + 0.114, + 0.881, + 0.488, + 0.918 + ], + "angle": 0, + "content": "*Xiang Wang is also affiliated with Institute of Artificial Intelligence, Institute of Dataspace, Hefei Comprehensive National Science Center." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.521, + 0.941 + ], + "angle": 0, + "content": "7337" + }, + { + "type": "footer", + "bbox": [ + 0.218, + 0.946, + 0.781, + 0.974 + ], + "angle": 0, + "content": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 7337-7348 December 6-10, 2023 ©2023 Association for Computational Linguistics" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.114, + 0.085, + 0.49, + 0.116 + ], + "angle": 0, + "content": "studied in previous benchmark (Hendrycks et al., 2020)." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.118, + 0.49, + 0.535 + ], + "angle": 0, + "content": "For the first issue, we choose legal judgment prediction (LJP) (Xiao et al., 2018; Chalkidis et al., 2019; Zhong et al., 2020a) as the example task for investigation. It is a real-world problem to determine the charges committed by the defendants under a juridical system, as shown in Figure 1. LJP is typically formulated as a classification task to predict the most possible one from a list of predefined charges. 
Then, for the second and third issues, we design four settings derived from two work scenarios of LLMs to cover open and multichoice questions and the usage of similar cases. In the first scenario, LLMs work alone without explicit knowledge in prompts, assuming all domain knowledge is implicitly stored in parameters. In the second scenario, LLMs coordinate with an information retrieval (IR) system that enriches prompts with similar demonstrations and label candidates to benefit expertise reasoning. Specifically, demonstrations consist of pairs of similar cases and their charges, which are retrieved by the IR system based on similarity of case facts. Labels of the retrieved cases can form label candidates, shown as circles of different colors in Figure 1, to hint LLM with label information and narrow down label space (Ma et al., 2023)." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.537, + 0.49, + 0.825 + ], + "angle": 0, + "content": "The four evaluation settings in Figure 1 can be categorized based on the presence of two elements in prompts: demonstrations (similar cases) and label candidates. Demonstrations convert the setting from zero-shot to few-shot prompting, while label candidates simplify the task from open questions to multi-choice questions1. The first scenario corresponds to the first setting, where neither element is present, while the second scenario encompasses the remaining three settings. We evaluate five up-to-date LLMs of the close-source GPT-3 (Brown et al., 2020) family, ChatGPT and GPT-4 (OpenAI, 2023), and open-source LLMs including Vicuna (Chiang et al., 2023), ChatGLM (Du et al., 2022) and BLOOMZ (Muennighoff et al., 2022). The evaluation is conducted on a Chinese LJP dataset, namely CAIL (Xiao et al., 2018), which contains cases of 112 criminal law charges2." 
+ }, + { + "type": "text", + "bbox": [ + 0.132, + 0.827, + 0.44, + 0.842 + ], + "angle": 0, + "content": "We highlight our key findings as follows:" + }, + { + "type": "text", + "bbox": [ + 0.132, + 0.843, + 0.486, + 0.859 + ], + "angle": 0, + "content": "1. Similar cases and label candidates can help" + }, + { + "type": "text", + "bbox": [ + 0.545, + 0.085, + 0.882, + 0.117 + ], + "angle": 0, + "content": "LLMs recall domain knowledge that is critical for expertise legal reasoning." + }, + { + "type": "text", + "bbox": [ + 0.525, + 0.118, + 0.884, + 0.165 + ], + "angle": 0, + "content": "2. Label candidates result in more consistent outputs, indicating LLMs gain greater confidence in their domain knowledge (Jiang et al., 2021)." + }, + { + "type": "text", + "bbox": [ + 0.525, + 0.166, + 0.885, + 0.212 + ], + "angle": 0, + "content": "3. Irrelevant demonstrations formed by fixed cases hardly improve performance. This excludes their effect on task illustration." + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.214, + 0.884, + 0.308 + ], + "angle": 0, + "content": "4. Paradox: An IR system can outperform LLM+IR since weaker LLMs acquire limited gains from informative documents retrieved by a powerful IR system. Thus, it is critical to adapte LLMs to generate with retrieved documents." + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.31, + 0.884, + 0.358 + ], + "angle": 0, + "content": "5. More similar cases introduce more knowledge and noise simultaneously, whose final outcome depends on LLMs." + }, + { + "type": "list", + "bbox": [ + 0.524, + 0.118, + 0.885, + 0.358 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.359, + 0.882, + 0.391 + ], + "angle": 0, + "content": "The main contributions are summarized in three aspects:" + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.399, + 0.882, + 0.43 + ], + "angle": 0, + "content": "- We investigate the law competency of LLMs on the task of legal judgment prediction." 
+ }, + { + "type": "text", + "bbox": [ + 0.532, + 0.441, + 0.882, + 0.489 + ], + "angle": 0, + "content": "- We propose practical baseline solutions for LLMs that tackle two scenarios: working alone or in coordination with an IR system." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.499, + 0.884, + 0.547 + ], + "angle": 0, + "content": "- We evaluate five LLMs and conduct comprehensive analysis to demystify their characteristics of expertise reasoning." + }, + { + "type": "list", + "bbox": [ + 0.532, + 0.399, + 0.884, + 0.547 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.559, + 0.691, + 0.573 + ], + "angle": 0, + "content": "2 Baseline Method" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.583, + 0.885, + 0.856 + ], + "angle": 0, + "content": "The goal of legal judgment prediction is to determine the committed charges given case facts. To harness LLMs for LJP, we adopt in-context learning (Brown et al., 2020) and use LLMs to generate the charges conditioned on prompts (Section 2.1). To enhance LLMs, we incorporate label candidates and demonstrations consisting of similar cases into prompts, which are acquired by an IR system (Section 2.2). This derives four settings of baseline solutions, namely zero-shot open questions, few-shot open questions, zero-shot multi-choice questions, and few-shot multi-choice questions. The multi-choice settings employ label candidates while few-shot settings include demonstrations, as shown in Figure 1. Finally, we introduce how to simulate IR systems with different capabilities to understand their effects (Section 2.3)." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.867, + 0.688, + 0.882 + ], + "angle": 0, + "content": "2.1 LLM Prompting" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.888, + 0.884, + 0.919 + ], + "angle": 0, + "content": "Prompt Design. 
A prompt begins with an instruction to illustrate the task followed by label" + }, + { + "type": "page_footnote", + "bbox": [ + 0.114, + 0.868, + 0.488, + 0.904 + ], + "angle": 0, + "content": "1It is not strict multi-choice questions. LLMs can generate correct answers even though ground-truth labels are absent in candidates." + }, + { + "type": "page_footnote", + "bbox": [ + 0.136, + 0.905, + 0.442, + 0.919 + ], + "angle": 0, + "content": "After filtering less frequent (article, charge) pairs" + }, + { + "type": "list", + "bbox": [ + 0.114, + 0.868, + 0.488, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.521, + 0.941 + ], + "angle": 0, + "content": "7338" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.489, + 0.132 + ], + "angle": 0, + "content": "candidates and task demonstrations in the form of input-output pairs. The templates of prompts are displayed in Appendix A.1." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.134, + 0.489, + 0.293 + ], + "angle": 0, + "content": "Parsing. We adopt one automatic parsing function for all LLMs to map LLM outputs to predefined charge labels. No ad hoc heuristics are employed for a fair comparison. Specifically, we use the BM25 algorithm3 to measure text similarity between outputs and pre-defined charges and predict the most similar charges. BM25 is robust and yields comparable performances to neural similarity methods like text2vec4 in our pilot experiments." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.295, + 0.489, + 0.376 + ], + "angle": 0, + "content": "Inference. Sampling is enabled during generation for consistent results, as inspired by Wang et al. (2022). Five outputs are sampled for each prompt with the temperature of 0.8. Their similarity scores of pre-defined labels are averaged." 
+ }, + { + "type": "title", + "bbox": [ + 0.114, + 0.388, + 0.476, + 0.403 + ], + "angle": 0, + "content": "2.2 IR System for Knowledge Incorporation" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.41, + 0.489, + 0.602 + ], + "angle": 0, + "content": "IR systems are utilized to retrieve similar cases, commonly referenced by lawyers and judges, to inform their judgments. In addition to providing demonstrations, these similar cases can also aid in generating potential labels by incorporating the labels from the top similar cases. By employing these smaller sets of predefined charges, namely label candidates, complex open questions can be simplified into multiple-choice questions. This approach is effective in enhancing LM prompting (Ma et al., 2023), as including hundreds of charges directly in prompts is impractical." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.604, + 0.489, + 0.716 + ], + "angle": 0, + "content": "Implementation of IR System. We use the BM25 algorithm to measure the semantic similarity between cases. Similar cases are retrieved from the training dataset. To guarantee that the demonstrations exemplify one of the multi-choice options, we exclude demonstrations with labels that are not among the candidate options5." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.729, + 0.36, + 0.744 + ], + "angle": 0, + "content": "2.3 Simulation of IR Systems" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.751, + 0.489, + 0.846 + ], + "angle": 0, + "content": "To investigate the effects of IR capabilities, we simulate a series of IR systems of different capabilities as measured by Precision@1\\(^{6}\\). Then the top retrieved cases are used as demonstrations. We consider cases with identical charges to the query cases as true similar cases and vice versa." 
+ }, + { + "type": "text", + "bbox": [ + 0.137, + 0.854, + 0.36, + 0.868 + ], + "angle": 0, + "content": "3https://pypi.org/project/rank-bm25/" + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.868, + 0.375, + 0.881 + ], + "angle": 0, + "content": "4https://github.com/crownpku/text2vec" + }, + { + "type": "text", + "bbox": [ + 0.116, + 0.881, + 0.486, + 0.905 + ], + "angle": 0, + "content": "5This condition is not violated for the top four similar cases without filtering." + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.906, + 0.403, + 0.918 + ], + "angle": 0, + "content": "The accuracy of the top one retrieved case." + }, + { + "type": "list", + "bbox": [ + 0.116, + 0.854, + 0.486, + 0.918 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.085, + 0.885, + 0.277 + ], + "angle": 0, + "content": "Realistic Simulation. We prioritize the returning of true similar cases for easy query cases, rather than the returning in a random manner. The query difficulty is measured by the Precision@10 of the BM25 retriever described in Section 2.2. The motivation is that queries with shadow linguistic features are more possible to get relevant retrieval results than complex or obscure queries. For a specific value (e.g., a%) of Precision@1 to be simulated, the top a% of easy test cases are assured to have a true similar case, while the rest are assigned false similar cases." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.292, + 0.716, + 0.309 + ], + "angle": 0, + "content": "3 Experimental Setup" + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.32, + 0.614, + 0.333 + ], + "angle": 0, + "content": "3.1 Models" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.341, + 0.881, + 0.372 + ], + "angle": 0, + "content": "Below is a concise introduction to the five LLMs to be evaluated." 
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.374, + 0.884, + 0.502 + ], + "angle": 0, + "content": "GPT-4 (OpenAI, 2023) and ChatGPT are available from OpenAI API and the versions of gpt-4-0314 and gpt-3.5-turbo-0301 are used. For technological details, ChatGPT is claimed to be a sibling model to InstructGPT (Ouyang et al., 2022) that is trained to follow instructions and align to human preferences with the RLHF algorithm (Christiano et al., 2017)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.504, + 0.882, + 0.583 + ], + "angle": 0, + "content": "Vicuna-13B (Chiang et al., 2023) is a LLaMA model (Touvron et al., 2023) fine-tuned on 70K public user-shared conversations with ChatGPT. It can be viewed to learn distilled knowledge (Hinton et al., 2015) of ChatGPT." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.585, + 0.882, + 0.632 + ], + "angle": 0, + "content": "ChatGLM-6B7 is a dialog language model based on the GLM (Du et al., 2022) architecture and supports English and Chinese." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.634, + 0.884, + 0.745 + ], + "angle": 0, + "content": "BLOOMZ (Muennighoff et al., 2022) is an instruction fine-tuned BLOOM (Scao et al., 2022), a multilingual language model. We use the bloomz-7b1-mt version that is tuned for multilingual prompts. Except for BLOOMZ, Vicuna and ChatGLM are mainly fine-tuned on conversational data." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.76, + 0.771, + 0.775 + ], + "angle": 0, + "content": "3.2 Dataset and Pre-processing" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.782, + 0.885, + 0.895 + ], + "angle": 0, + "content": "The Chinese LJP dataset, CAIL (Xiao et al., 2018), is used in our experiments. Each sample consists of the case facts and the committed charge as the label. As the original dataset is very large (~100K for training and ~20K for test), we randomly sample a balanced small test set from the original test set. 
Five cases are sampled for each charge, accounting" + }, + { + "type": "page_footnote", + "bbox": [ + 0.531, + 0.905, + 0.8, + 0.918 + ], + "angle": 0, + "content": "7https://github.com/THUDM/ChatGLM-6B" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.521, + 0.941 + ], + "angle": 0, + "content": "7339" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.14, + 0.082, + 0.465, + 0.167 + ], + "angle": 0, + "content": "
Tokenizer | Median | <=500 | <=1000
ChatGPT | 396.5 | 68.75 | 92.32
Vicuna | 496.0 | 50.89 | 86.96
ChatGLM | 206.5 | 91.07 | 98.57
BLOOMZ | 210.5 | 90.54 | 98.93
" + }, + { + "type": "table_caption", + "bbox": [ + 0.114, + 0.176, + 0.49, + 0.22 + ], + "angle": 0, + "content": "Table 1: Statistics of the number of tokens across tokenizers. The last two columns present the ratios of test samples with token counts below the specified values." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.249, + 0.489, + 0.345 + ], + "angle": 0, + "content": "for 560 test cases in total for 112 charges. Similarly, we also sample the training and validation sets with 10 cases per charge. The training set is used to retrieve similar cases (Section 2.3), while the validation set is used to determine the optimal \\( k \\) of the kNN algorithm." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.348, + 0.49, + 0.541 + ], + "angle": 0, + "content": "Truncation. Since some cases have very long descriptions, we truncate the case facts of demonstrations to 500 tokens and those of test samples to 1000 tokens. It is worth noting that the text is tokenized by the tokenizer of each model before truncation for a fair comparison. Recently, Petrov et al. (2023) address the issue that a tokenizer can lead to different performances of different languages. This suggests that the performance on a particular language can also be influenced by tokenizers from various models with varying language encoding efficiency." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.543, + 0.49, + 0.673 + ], + "angle": 0, + "content": "Table 1 shows the statistics of the number of tokens processed by different tokenizers8. The most efficient tokenizers for Chinese are those of ChatGLM and BLOOMZ, indicated by the medians of token numbers. In contrast, the tokenizer of ChatGPT produces \\(2 \\times\\) tokens and that of Vicuna produces \\(2.5 \\times\\) tokens. The truncation length is proper to accommodate most samples." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.69, + 0.414, + 0.708 + ], + "angle": 0, + "content": "4 LLM vs. 
LLM with IR System" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.72, + 0.49, + 0.866 + ], + "angle": 0, + "content": "We initially present the overall results, highlighting the importance of label candidates and similar cases, and conduct a comparative analysis of the models. Subsequently, we investigate the relationship between label candidates and self-consistency to unveil their actual effects on expertise reasoning. Additionally, we perform an ablation study by replacing similar cases with fixed cases as demonstrations to further understand their impact." + }, + { + "type": "image", + "bbox": [ + 0.522, + 0.089, + 0.872, + 0.266 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.281, + 0.885, + 0.381 + ], + "angle": 0, + "content": "Figure 2: The macro comparison between the four settings. “+Label” refers to zero-shot multi-choice questions; “+Sim Case” refers to few-shot open questions; and “+Label +Sim Case” refers to few-shot multi-choice questions. Multiple points for a model in the last two settings refer to runs with different numbers of demonstrations." + }, + { + "type": "image", + "bbox": [ + 0.521, + 0.401, + 0.707, + 0.565 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.71, + 0.4, + 0.872, + 0.565 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.509, + 0.581, + 0.885, + 0.611 + ], + "angle": 0, + "content": "Figure 3: Comparison of the models under each setting. Few-shot performances are averaged over 1-shot to 4-shot." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.638, + 0.679, + 0.652 + ], + "angle": 0, + "content": "4.1 Overall Results" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.661, + 0.883, + 0.708 + ], + "angle": 0, + "content": "The macro comparison between the four settings is shown in Figure 2, where each point represents the performance of one specific run of one model." 
+ }, + { + "type": "text", + "bbox": [ + 0.507, + 0.71, + 0.885, + 0.885 + ], + "angle": 0, + "content": "Significance of label candidates and similar cases. In comparison to the zero-shot open question setting where LLMs work alone, the inclusion of label candidates, similar cases, or both demonstrates noteworthy enhancements. This highlights the effectiveness of our baseline solutions that leverage IR systems to expand the capabilities of LLMs in legal domains. These findings align with previous research that has also recognized the significance of the two components (Ma et al., 2023; Liu et al., 2021)." + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.888, + 0.884, + 0.919 + ], + "angle": 0, + "content": "The effects of label candidates and similar cases differ slightly in terms of performance mean and" + }, + { + "type": "page_footnote", + "bbox": [ + 0.114, + 0.88, + 0.488, + 0.918 + ], + "angle": 0, + "content": "GPT-4 and ChatGPT have the same results. Following OpenAI's guidance, we use the python package tiktoken for tokenization" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.522, + 0.941 + ], + "angle": 0, + "content": "7340" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.493, + 0.312 + ], + "angle": 0, + "content": "variance. Label candidates contribute to a higher mean performance, while similar cases introduce greater variance. Examining the model performances in the third setting (+Sim Case) displayed in Figure 2, GPT-4 and ChatGPT exhibit more significant improvements from similar cases compared to their smaller counterparts. They also gain more benefit from similar cases than from label candidates. This observation can be attributed to the varying difficulty levels of knowledge utilization. While the knowledge within label candidates is readily accessible and straightforward, leveraging similar cases requires stronger language understanding and few-shot learning abilities." 
+ }, + { + "type": "text", + "bbox": [ + 0.113, + 0.318, + 0.49, + 0.448 + ], + "angle": 0, + "content": "Furthermore, the coexistence of label candidates and similar cases further enhances the performance of GPT-4 and ChatGPT, but it diminishes the performance of Vicuna, ChatGLM, and BLOOMZ. This suggests that smaller LLMs may encounter challenges in effectively managing knowledge in multiple forms simultaneously, leading to confusion." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.454, + 0.49, + 0.52 + ], + "angle": 0, + "content": "Model comparison. The performances of the models under zero-shot and few-shot prompting is shown in Figure 3, where few-shot performances are averaged among 1-shot to 4-shot." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.526, + 0.49, + 0.784 + ], + "angle": 0, + "content": "The zero-shot setting emphasizes the ability to understand instructions. When only instructions are available, BLOOMZ performs better than ChatGPT, indicating a superior multilingual instruction following ability. This result is reasonable as BLOOMZ is the only smaller LLM that is fine-tuned on multilingual instructions. Once provided with explicit domain knowledge, ChatGPT outperforms all smaller LLMs. The case is the same for BLOOMZ and ChatGLM, where ChatGLM overtakes BLOOMZ with knowledge of label candidates. BLOOMZ performs worst when prompted with two forms of knowledge, indicating that BLOOMZ is not very robust to prompts. Among the three smaller LLMs, ChatGLM is the most robust to various forms of knowledge." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.791, + 0.49, + 0.92 + ], + "angle": 0, + "content": "The significant effects of label candidates and similar cases can be explained as they activate LLM's memory of relevant domain knowledge. 
This view can be supported by two pieces of evidence about the relationship between label candidates and self-consistency (Section 4.2) and the negligible effect of irrelevant cases as fixed demonstrations (Section 4.3)." + }, + { + "type": "image", + "bbox": [ + 0.527, + 0.089, + 0.85, + 0.237 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.243, + 0.885, + 0.302 + ], + "angle": 0, + "content": "Figure 4: Changes of performance and self-consistency after adding label candidates. The change of each model is illustrated by an arrow pointing from the open question setting to the multi-choice setting." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.327, + 0.808, + 0.36 + ], + "angle": 0, + "content": "4.2 Label Candidates Enhance Self-consistency and Confidence" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.365, + 0.885, + 0.476 + ], + "angle": 0, + "content": "To further understand the effect of label candidates, we propose a metric to measure the self-consistency of LLMs, calculated as the count of the majority prediction among the sampled outputs. Consistent outputs indicate a high level of confidence in LLMs, which is often associated with a better grasp of knowledge (Jiang et al., 2021, 2023)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.479, + 0.885, + 0.768 + ], + "angle": 0, + "content": "The changes in performance and self-consistency after introducing label candidates are shown in Figure 4 as the arrows. We observe that the incorporation of label candidates leads to more consistent outputs (8 of 10 cases) and higher confidence in LLMs, except for zero-shot GPT-4 (a slight decrease) and few-shot BLOOMZ. In the zero-shot setting, label candidates significantly boost LLM performances. We postulate that label candidates help by eliciting pre-stored domain knowledge with concise charge names. Besides, the self-consistency also correlates with model performances (7 of 10 cases). 
Such correlation is also observed in other tasks like question answering (Jiang et al., 2021). It is worth noting that label candidates decrease both self-consistency and performance of few-shot prompted BLOOMZ, which also aligns with the correlation." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.781, + 0.882, + 0.811 + ], + "angle": 0, + "content": "4.3 Domain Knowledge Is More Critical Than Task Illustration" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.819, + 0.885, + 0.884 + ], + "angle": 0, + "content": "There is a possible argument that similar demonstrations can help LLMs understand instructions and tasks. To disentangle their effects on task illustration and provision of domain knowledge, we" + }, + { + "type": "page_footnote", + "bbox": [ + 0.509, + 0.893, + 0.883, + 0.919 + ], + "angle": 0, + "content": "For example, if the five sampled outputs are mapped to labels of (a,a,a,b,c), the consistency score is 3." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.519, + 0.941 + ], + "angle": 0, + "content": "7341" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.124, + 0.087, + 0.476, + 0.228 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.245, + 0.49, + 0.344 + ], + "angle": 0, + "content": "Figure 5: The effects of fixed (irrelevant) and similar cases as demonstrations. Divided by the baseline setting of zero-shot open questions, the left part refers to fixed demonstrations with increasing numbers of demonstrations, while the right part refers to similar demonstrations. The shadow area represents the range of standard deviation." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.373, + 0.489, + 0.452 + ], + "angle": 0, + "content": "experiment with irrelevant demonstrations fixed for all test samples. We manually select two common cases with frequent charges in the original dataset as the fixed demonstrations. 
The 1-shot performance is averaged over the two demonstrations." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.456, + 0.49, + 0.632 + ], + "angle": 0, + "content": "We compare the effects of fixed and similar demonstrations with the baseline setting of zero-shot open questions in Figure 5. The change of performance from center to left demonstrates that fixed demonstrations hardly benefit LLMs and sometimes harm the performance (e.g., ChatGLM). This indicates that LLMs can basically understand instructions and do not need general demonstrations for task clarification, implying that the main challenge of expertise reasoning is to recall domain knowledge instead of understanding a specific task." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.634, + 0.49, + 0.793 + ], + "angle": 0, + "content": "We inspect the notable performance drop of ChatGLM resulting from fixed demonstrations. We find that ChatGLM tends to analyze the cases of both demonstrations and test samples and then answer with both of their charges. Its wordy style seems to result from the fine-tuning dialog corpus where an assistant LLM is supposed to provide rich information. In contrast, similar cases seem to encourage more concise outputs following the format of demonstrations." 
+ }, + { + "type": "title", + "bbox": [ + 0.114, + 0.81, + 0.438, + 0.843 + ], + "angle": 0, + "content": "5 Paradox of Information Retrieval System" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.855, + 0.49, + 0.919 + ], + "angle": 0, + "content": "The significance of similar demonstrations illustrated in Section 4.3 has motivated research focusing on prompting-oriented IR systems (Rubin et al., 2021; Sun et al., 2023) to retrieve high qual" + }, + { + "type": "image", + "bbox": [ + 0.522, + 0.089, + 0.872, + 0.208 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.224, + 0.885, + 0.325 + ], + "angle": 0, + "content": "Figure 6: The performance of ChatGPT coordinated with a series of simulated IR systems with varying capabilities as measured by Precision@1. The vertical blue line represents the threshold of IR capability at which IR systems overtake ChatGPT. The performance of ChatGPT in the real setting (1-shot open questions) is indicated by the red plus sign." + }, + { + "type": "text", + "bbox": [ + 0.506, + 0.35, + 0.883, + 0.477 + ], + "angle": 0, + "content": "ity demonstrations. However, we raise an intuitive question: Do LLMs gain substantial improvement from IR systems compared to the kNN baseline that harnesses IR systems for classification tasks? The question is inspired by our observation that the BM25 retriever achieves \\(48.03\\%\\) of Precision@1 and \\(57.68\\%\\) prediction accuracy by majority vote of top \\(k = 17\\) retrieved similar cases." + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.479, + 0.884, + 0.589 + ], + "angle": 0, + "content": "This observation suggests a paradoxical scenario wherein an IR system outperforms the combination of LLM and IR, with the LLM taking on the leading role and the IR serving as a supporting role. In such a scenario, the LLM becomes redundant due to its failure to fully utilize the informative retrieved documents." 
+ }, + { + "type": "text", + "bbox": [ + 0.507, + 0.591, + 0.884, + 0.718 + ], + "angle": 0, + "content": "To investigate the paradox, instead of experimenting with different IR systems, we manipulate the BM25 retriever to simulate a series of IR systems with different capabilities measured by Precision@1 as described by Section 2.3. We take a case study of ChatGPT, whose 1-shot performance under different IR systems (denoted as Precision@1) is shown in Figure 6." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.72, + 0.885, + 0.863 + ], + "angle": 0, + "content": "Results Although the performance of ChatGPT enhanced by IR systems improves with IR capability, it will eventually underperform the IR system once the IR capability surpasses a certain threshold. In the ideal situation where true similar cases are always retrieved, ChatGPT is unable to attain \\(100\\%\\) accuracy and lags significantly behind the optimal IR system. According to Appendix A.4, all smaller LLMs are not comparable to the BM25 retriever." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.865, + 0.884, + 0.897 + ], + "angle": 0, + "content": "Discussion The findings demonstrate that LLMs face challenges in effectively leveraging informa" + }, + { + "type": "page_footnote", + "bbox": [ + 0.527, + 0.904, + 0.838, + 0.918 + ], + "angle": 0, + "content": "It is identical to the precision of \\(k\\mathrm{NN}\\) with \\(k = 1\\)" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.521, + 0.941 + ], + "angle": 0, + "content": "7342" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.124, + 0.087, + 0.481, + 0.228 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.114, + 0.243, + 0.49, + 0.271 + ], + "angle": 0, + "content": "Figure 7: Performance vs. the number of similar demonstrations of the five LLMs." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.298, + 0.49, + 0.506 + ], + "angle": 0, + "content": "tive retrieved documents. 
This underscores the need for significant research efforts to enhance the synergy between auto-regressive language models and retrieval by conditioning model outputs more on retrieved documents. Previous work has explored the augmentation of LLMs with retrieval at both the pre-training and fine-tuning stages (Borgeaud et al., 2022; Wang et al., 2023). Moreover, the marginal and inadequate improvement with retrieval indicates the limited legal reasoning ability of existing general LLMs. There is a need for future efforts to enhance domain-specific reasoning abilities of pre-trained foundation models." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.519, + 0.281, + 0.535 + ], + "angle": 0, + "content": "6 Ablation Study" + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.545, + 0.462, + 0.576 + ], + "angle": 0, + "content": "6.1 More Demonstrations Are Not Always Better" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.582, + 0.49, + 0.821 + ], + "angle": 0, + "content": "The impact of the number of similar demonstrations \\((n)\\) is depicted in Figure 7. It is evident that GPT-4 and ChatGPT demonstrate proficiency in handling larger numbers of demonstrations, leading to enhanced performance, whereas Vicuna, ChatGLM and BLOOMZ experience varying degrees of performance degradation with increasing numbers. Notably, ChatGLM displays the least sensitivity to \\(n\\). Furthermore, even ChatGPT's performance declines when \\(n\\) is increased from three to four. The performance improvement resulting from larger values of \\(n\\) can be attributed to the increased recall of true similar cases. Conversely, the decline in performance can be attributed to the noise introduced by more false similar cases." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.824, + 0.49, + 0.919 + ], + "angle": 0, + "content": "Performance variations. The changes in performance after including an additional demonstration are visualized using heat maps in Figure 8. 
For each model, the three heat maps stand for the variations from k-shot to \\((\\mathrm{k} + 1)\\) -shot, which are denoted below. For each heat map, the two rows indicate" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.885, + 0.229 + ], + "angle": 0, + "content": "the inclusion of a new demonstration with true (T) or false (F) similar cases, while the columns indicate the combinations of existing demonstrations. Take the second heat map as an example. The cell in the column of (F, T) and the row of (T) displays the performance variation between 2-shot of (F, T) demonstrations and 3-shot of (F, T, T) demonstrations. Purple represents performance improvement, while green represents performance decline." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.231, + 0.885, + 0.521 + ], + "angle": 0, + "content": "For ChatGPT and BLOOMZ, the second rows of their three heat maps are mainly in purple, indicating significant enhancements resulting from the inclusion of true similar cases. However, the first rows of BLOOMZ display a deeper green color than those of ChatGPT, suggesting that BLOOMZ experiences a greater degree of performance decline caused by the inclusion of false similar cases. These findings indicate different sensitivities to false similar demonstrations. Powerful language models like GPT-4 and ChatGPT exhibit robustness to noise in false similar cases, allowing them to remain focused on relevant information in true similar cases. In contrast, weaker LLMs are susceptible to the influence of such noise. Overall, ChatGPT performs better when provided with more similar demonstrations, whereas BLOOMZ demonstrates the opposite, as shown in Figure 7." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.523, + 0.884, + 0.604 + ], + "angle": 0, + "content": "The conclusion is that increased numbers of demonstrations have both positive and negative implications for expertise reasoning. 
However, LLMs could potentially gain from additional demonstrations in tasks that require clear task illustration." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.62, + 0.844, + 0.651 + ], + "angle": 0, + "content": "6.2 The Impact of Absent Ground Truth Labels" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.66, + 0.884, + 0.837 + ], + "angle": 0, + "content": "We manually incorporate ground-truth labels into label candidates in cases where they are absent, which may occur due to the limited recall capability of the IR system described in Section 2.2. The test samples are categorized into two groups, namely \"Easy\" and \"Hard\", based on the retrieval of their ground truth labels by the IR system. The original performance of the two groups and the performance of the \"Hard\" group with modified prompts to include ground truth labels, namely \"Hard+GT\", are displayed in Figure 9." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.84, + 0.885, + 0.919 + ], + "angle": 0, + "content": "The performance gaps between the \"Easy\" and \"Hard+GT\" groups suggest that challenging samples for IR systems are also difficult for LLMs. However, this gap is insignificant for the powerful GPT-4, which perceives them as equal challeng" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.521, + 0.941 + ], + "angle": 0, + "content": "7343" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.18, + 0.085, + 0.835, + 0.228 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.241, + 0.885, + 0.288 + ], + "angle": 0, + "content": "Figure 8: Heat maps of performance variations resulting from the inclusion of an additional demonstration. \"T\" corresponds to demonstrations with true similar cases, while \"F\" represents those with false similar cases. Each row represents the included new demonstration, while each column indicates the status of existing demonstrations." 
+ }, + { + "type": "image", + "bbox": [ + 0.124, + 0.312, + 0.48, + 0.434 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.114, + 0.449, + 0.49, + 0.507 + ], + "angle": 0, + "content": "Figure 9: The performance of \"Easy\" and \"Hard\" samples under the setting of zero-shot multi-choice questions. \"Hard+GT\" refers to improvement of including the absent ground truth labels in label candidates." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.537, + 0.49, + 0.651 + ], + "angle": 0, + "content": "ing. The improvement of \"Hard+GT\" compared to \"Hard\" is notable in GPT-4, ChatGPT and ChatGLM but inconspicuous in Vicuna with inferior competency in the law. Considering the relatively small size of the \"Hard\" group (79/560), the absence of ground truth labels does not have a significant impact, especially for weaker LLMs." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.669, + 0.396, + 0.684 + ], + "angle": 0, + "content": "6.3 Incorporation of Law Articles" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.694, + 0.49, + 0.887 + ], + "angle": 0, + "content": "We examine the effect of incorporating legal articles that explicitly define the charges into prompts. For each charge retrieved by the IR system11, ChatGPT is required to determine whether the defendant is guilty for the particular charge by answering with a yes or no. We find that \\(94.46\\%\\) of the ground truth charges are accurately detected, while only \\(27.31\\%\\) of the detected charges are correct. The high recall and low precision indicate a substantial difference between ChatGPT and legal experts in the ability to distinguish charges and make precise judgments." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.31, + 0.637, + 0.324 + ], + "angle": 0, + "content": "7 Discussion" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.335, + 0.885, + 0.463 + ], + "angle": 0, + "content": "We compare the LLMs with supervised baselines. 
We fine-tune BERT (Devlin et al., 2018) on the same training set and achieve an accuracy of \\(68\\%\\), comparable to ChatGPT but lower than GPT-4. Since LLMs are not fine-tuned on the specific LJP task, this result highlights the remarkable superiority of LLMs in acquiring significant knowledge and leveraging transfer learning (Raffel et al., 2020)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.464, + 0.884, + 0.623 + ], + "angle": 0, + "content": "However, we observe that BERT's performance improves to \\(89\\%\\) when trained with the original training set (\\(\\sim 10\\mathrm{K}\\)). We find that certain knowledge is present in shallow features, which can be easily learned with supervision. These superficial features can result in biased supervised models. Fortunately, unsupervised pre-training objectives make LLMs more robust and less vulnerable to this issue. This depicts a promising future for NLP applications in various domains." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.637, + 0.642, + 0.652 + ], + "angle": 0, + "content": "8 Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.662, + 0.885, + 0.774 + ], + "angle": 0, + "content": "To address the deficiency in evaluating the competency of LLMs in the field of law, we focused on the task of legal judgment prediction and devised four settings to facilitate a thorough evaluation that encompassed both open and multiple-choice questions and incorporated similar cases to aid in the decision-making process." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.775, + 0.886, + 0.919 + ], + "angle": 0, + "content": "The evaluation results revealed different behaviors among the prominent LLMs, namely GPT-4 and ChatGPT, compared to their smaller counterparts. Both GPT-4 and ChatGPT exhibited remarkable proficiency in effectively leveraging domain knowledge in various formats. Among the smaller LLMs, ChatGLM displayed greater robustness, while BLOOMZ showcased superior zero-shot ability.
+ }, + { + "type": "page_footnote", + "bbox": [ + 0.132, + 0.904, + 0.385, + 0.919 + ], + "angle": 0, + "content": "we also include the ground truth charge" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.521, + 0.941 + ], + "angle": 0, + "content": "7344" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.49, + 0.182 + ], + "angle": 0, + "content": "We presented an intriguing paradox wherein LLMs could become abundant in the presence of a powerful IR system. When improving IR systems to benefit LLMs, it is crucial for researchers to acknowledge this paradoxical scenario and prevent great disparity between LLMs and IR systems." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.199, + 0.221, + 0.214 + ], + "angle": 0, + "content": "Limitations" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.229, + 0.49, + 0.356 + ], + "angle": 0, + "content": "One limitation of this paper is the use of the close-source GPT-4 and ChatGPT whose availability depends on the commercial company OpenAI. According to OpenAI, the ChatGPT and GPT-4 versions used in this paper, namely gpt-3.5-turbo-0301 and gpt-4-0314, will be deprecated and not available after September 13th, 2023." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.359, + 0.49, + 0.472 + ], + "angle": 0, + "content": "Another limitation pertains to the selection of LLMs. Due to the rapid emergence of new LLMs, we are not able to include all of them with the constraint of limited time. Instead of more models, we focus more on designing comprehensive evaluation settings and conducting insightful analyses to shed light on other domains." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.489, + 0.266, + 0.505 + ], + "angle": 0, + "content": "Ethics Statement" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.519, + 0.49, + 0.761 + ], + "angle": 0, + "content": "The task of legal judgment prediction is used to evaluate LLM's competency in the law. 
The primary objective of this task is to assist judges and lawyers in comprehending lengthy legal documents by offering them a supplementary tool. It is important to note that this task does not seek to replace the roles of judges and lawyers, nor does it aim to determine the guilt or charges of defendants through machine learning algorithms. Additionally, there is research focused on interpreting LJP models, aiming to enhance the transparency of black-box models for improved utilization by legal practitioners. The paper utilizes a public and anonymized dataset to exclude the potential issue of personal information leakage." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.778, + 0.287, + 0.794 + ], + "angle": 0, + "content": "Acknowledgements" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.807, + 0.49, + 0.904 + ], + "angle": 0, + "content": "We thank all reviewers for their constructive comments. This research is supported by NExT Research Center, the National Natural Science Foundation of China (9227010114) and the University Synergy Innovation Program of Anhui Province (GXXT-2022-040)." + }, + { + "type": "title", + "bbox": [ + 0.511, + 0.084, + 0.61, + 0.099 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.106, + 0.886, + 0.2 + ], + "angle": 0, + "content": "Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. 2022. Improving language models by retrieving from trillions of tokens. In International conference on machine learning, pages 2206-2240. PMLR." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.207, + 0.885, + 0.288 + ], + "angle": 0, + "content": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. 
Advances in neural information processing systems, 33:1877-1901." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.295, + 0.885, + 0.361 + ], + "angle": 0, + "content": "Ilias Chalkidis, Ion Androutsopoulos, and Nikolaos Aletras. 2019. Neural legal judgment prediction in english. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4317-4323." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.369, + 0.885, + 0.45 + ], + "angle": 0, + "content": "Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An opensource chatbot impressing gpt-4 with \\(90\\%\\) * chatgpt quality." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.457, + 0.885, + 0.512 + ], + "angle": 0, + "content": "Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.519, + 0.885, + 0.572 + ], + "angle": 0, + "content": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.58, + 0.885, + 0.661 + ], + "angle": 0, + "content": "Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. 2022. Glm: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 320-335." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.668, + 0.885, + 0.722 + ], + "angle": 0, + "content": "Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.729, + 0.885, + 0.771 + ], + "angle": 0, + "content": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.778, + 0.885, + 0.845 + ], + "angle": 0, + "content": "Zhengbao Jiang, Jun Araki, Haibo Ding, and Graham Neubig. 2021. How can we know when language models know? on the calibration of language models for question answering. Transactions of the Association for Computational Linguistics, 9:962-977." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.852, + 0.885, + 0.919 + ], + "angle": 0, + "content": "Zhengbao Jiang, Frank F Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Active retrieval augmented generation. arXiv preprint arXiv:2305.06983." + }, + { + "type": "list", + "bbox": [ + 0.511, + 0.106, + 0.886, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.521, + 0.942 + ], + "angle": 0, + "content": "7345" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.086, + 0.487, + 0.139 + ], + "angle": 0, + "content": "Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2021. What makes good in-context examples for gpt-3? arXiv preprint arXiv:2101.06804." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.149, + 0.487, + 0.202 + ], + "angle": 0, + "content": "Yubo Ma, Yixin Cao, YongChing Hong, and Aixin Sun. 2023. 
Large language model is not a good few-shot information extractor, but a good reranker for hard samples! arXiv preprint arXiv:2303.08559." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.212, + 0.486, + 0.238 + ], + "angle": 0, + "content": "Eric Martínez. 2023. Re-evaluating gpt-4's bar exam performance. Available at SSRN 4441311." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.248, + 0.487, + 0.326 + ], + "angle": 0, + "content": "Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, et al. 2022. Crosslingual generalization through multitask finetuning. arXiv preprint arXiv:2211.01786." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.337, + 0.486, + 0.363 + ], + "angle": 0, + "content": "OpenAI. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.374, + 0.487, + 0.453 + ], + "angle": 0, + "content": "Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.463, + 0.487, + 0.515 + ], + "angle": 0, + "content": "Aleksandar Petrov, Emanuele La Malfa, Philip HS Torr, and Adel Bibi. 2023. Language model tokenizers introduce unfairness between languages. arXiv preprint arXiv:2305.15425." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.525, + 0.487, + 0.604 + ], + "angle": 0, + "content": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485-5551." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.614, + 0.487, + 0.655 + ], + "angle": 0, + "content": "Ohad Rubin, Jonathan Herzig, and Jonathan Berant. 2021. Learning to retrieve prompts for in-context learning. arXiv preprint arXiv:2112.08633." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.664, + 0.487, + 0.743 + ], + "angle": 0, + "content": "Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Galle, et al. 2022. Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.753, + 0.487, + 0.805 + ], + "angle": 0, + "content": "Olga Shulayeva, Advaith Siddharthan, and Adam Wyner. 2017. Recognizing cited facts and principles in legal judgements. Artificial Intelligence and Law, 25(1):107-126." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.816, + 0.487, + 0.87 + ], + "angle": 0, + "content": "Xiaofei Sun, Linfeng Dong, Xiaoya Li, Zhen Wan, Shuhe Wang, Tianwei Zhang, Jiwei Li, Fei Cheng, Lingjuan Lyu, Fei Wu, and Guoyin Wang. 2023. Pushing the limits of chatgpt on nlp tasks." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.879, + 0.487, + 0.919 + ], + "angle": 0, + "content": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro," + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.487, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.529, + 0.086, + 0.882, + 0.125 + ], + "angle": 0, + "content": "Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.136, + 0.882, + 0.214 + ], + "angle": 0, + "content": "Boxin Wang, Wei Ping, Peng Xu, Lawrence McAfee, Zihan Liu, Mohammad Shoeybi, Yi Dong, Oleksii Kuchaiev, Bo Li, Chaowei Xiao, et al. 2023. Shall we pretrain autoregressive language models with retrieval? a comprehensive study. arXiv preprint arXiv:2304.06762." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.224, + 0.882, + 0.291 + ], + "angle": 0, + "content": "Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.3, + 0.882, + 0.366 + ], + "angle": 0, + "content": "Chaojun Xiao, Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Zhiyuan Liu, Maosong Sun, Yansong Feng, Xianpei Han, Zhen Hu, Heng Wang, et al. 2018. Cail2018: A large-scale legal dataset for judgment prediction. arXiv preprint arXiv:1807.02478." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.376, + 0.882, + 0.442 + ], + "angle": 0, + "content": "Chaojun Xiao, Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Zhiyuan Liu, Maosong Sun, Tianyang Zhang, Xianpei Han, Zhen Hu, Heng Wang, et al. 2019. Cail2019-scm: A dataset of similar case matching in legal domain. arXiv preprint arXiv:1911.08962." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.452, + 0.882, + 0.531 + ], + "angle": 0, + "content": "Haoxi Zhong, Yuzhong Wang, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, and Maosong Sun. 2020a. Iteratively questioning and answering for interpretable legal judgment prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 1250-1257." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.54, + 0.882, + 0.606 + ], + "angle": 0, + "content": "Haoxi Zhong, Chaojun Xiao, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, and Maosong Sun. 2020b. 
How does nlp benefit legal system: A summary of legal artificial intelligence. arXiv preprint arXiv:2004.12158." + }, + { + "type": "list", + "bbox": [ + 0.512, + 0.086, + 0.882, + 0.606 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.521, + 0.941 + ], + "angle": 0, + "content": "7346" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.115, + 0.085, + 0.237, + 0.1 + ], + "angle": 0, + "content": "A Appendix" + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.111, + 0.308, + 0.126 + ], + "angle": 0, + "content": "A.1 Prompt Templates" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.132, + 0.489, + 0.228 + ], + "angle": 0, + "content": "The prompt template is shown in Figure 10. The translation of the original Chinese prompt is displayed using orange text. The setting of zero-shot open questions use a longer instruction that appends \"Output the charge name directly\" to the instruction in Figure A.1." + }, + { + "type": "image", + "bbox": [ + 0.12, + 0.244, + 0.47, + 0.384 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.114, + 0.397, + 0.486, + 0.412 + ], + "angle": 0, + "content": "Figure 10: The prompt template in Chinese and English." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.44, + 0.419, + 0.455 + ], + "angle": 0, + "content": "A.2 Robust to Fixed Demonstrations" + }, + { + "type": "table", + "bbox": [ + 0.149, + 0.469, + 0.454, + 0.57 + ], + "angle": 0, + "content": "
| Model | 1shot | 2shot |
|---|---|---|
| GPT-4 | 49.59 / 48.84 | 50.69 |
| ChatGPT | 47.01 / 46.57 | 47.55 |
| Vicuna-13B | 22.74 / 29.38 | 28.37 |
| ChatGLM-6B | 22.39 / 25.14 | 21.36 |
| BLOOMZ-7B | 36.65 / 43.94 | 42.24 |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.114, + 0.581, + 0.486, + 0.609 + ], + "angle": 0, + "content": "Table 2: The classification accuracy scores with prompts consisting of fixed cases." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.629, + 0.486, + 0.709 + ], + "angle": 0, + "content": "We examine the effects of the two fixed cases mentioned in Section 4.3 in Table 2. We find that GPT-4 and ChatGPT are robust to the selection of the fixed demonstration in 1-shot setting, while Vicuna, ChatGLM and BLOOMZ are less robust." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.721, + 0.47, + 0.737 + ], + "angle": 0, + "content": "A.3 Comparison with Supervised Baselines" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.742, + 0.489, + 0.87 + ], + "angle": 0, + "content": "To understand the performance of supervised finetuning (SFT) baselines on LJP, we experiment on three models: BERT\\(^{12}\\), XLM-RoBERTa\\(^{13}\\) and DeBERTa\\(^{14}\\). These models are fine-tuned on two datasets of different sizes: the original CAIL dataset (~100k samples) and the sampled training set (1120 samples) that is used as retrieval corpus described in Section 3.2, denoted as CAIL_few." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.883, + 0.163 + ], + "angle": 0, + "content": "The SFT models are evaluated on the same evaluation dataset described in Section 3.2. The smaller training set aims to compare the few-shot performance of SFT baselines and LLMs in low data scenario." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.166, + 0.884, + 0.262 + ], + "angle": 0, + "content": "The results of SFT models are shown in Figure 3. Considering the highest accuracy of GPT-4 being \\(74.46\\%\\) (multi-choice, 4shot), GPT-4 can outperform supervised baselines in low data scenario. If there is abundant training data, supervised baselines are still better than GPT-4 by \\(15\\%\\)." 
+ }, + { + "type": "table", + "bbox": [ + 0.543, + 0.272, + 0.85, + 0.341 + ], + "angle": 0, + "content": "
| Model | CAIL | CAIL_few |
|---|---|---|
| BERT | 89.64 | 68.04 |
| XLM-RoBERTa | 88.75 | 66.43 |
| DeBERTa | 88.57 | 30.89 |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.508, + 0.351, + 0.881, + 0.38 + ], + "angle": 0, + "content": "Table 3: Prediction accuracy of SFT models fine-tuned on two training datasets of different sizes." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.407, + 0.688, + 0.421 + ], + "angle": 0, + "content": "A.4 Detailed Results" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.428, + 0.884, + 0.475 + ], + "angle": 0, + "content": "The specific values of performances displayed in Figure 2 are presented in Table 4. Besides, we also provide the performance of the F1 score in Table 5." + }, + { + "type": "page_footnote", + "bbox": [ + 0.133, + 0.878, + 0.275, + 0.891 + ], + "angle": 0, + "content": "12bert-base-chinese" + }, + { + "type": "page_footnote", + "bbox": [ + 0.134, + 0.892, + 0.266, + 0.904 + ], + "angle": 0, + "content": "13xIm-roberta-base" + }, + { + "type": "page_footnote", + "bbox": [ + 0.134, + 0.905, + 0.342, + 0.918 + ], + "angle": 0, + "content": "\\(^{14}\\)microsoft/mdeberta-v3-base" + }, + { + "type": "list", + "bbox": [ + 0.133, + 0.878, + 0.342, + 0.918 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.52, + 0.941 + ], + "angle": 0, + "content": "7347" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.129, + 0.217, + 0.87, + 0.335 + ], + "angle": 0, + "content": "
| Model | Open 0shot | Open 1shot | Open 2shot | Open 3shot | Open 4shot | Multi-choice 0shot | Multi-choice 1shot | Multi-choice 2shot | Multi-choice 3shot | Multi-choice 4shot |
|---|---|---|---|---|---|---|---|---|---|---|
| GPT-4 | 55.18 | 64.82 | 69.11 | 69.82 | 71.96 | 63.93 | 71.25 | 72.50 | 73.75 | 74.46 |
| ChatGPT | 46.61 | 60.00 | 62.86 | 64.82 | 66.96 | 61.61 | 64.46 | 66.96 | 70.36 | 67.14 |
| Vicuna-13B | 28.21 | 50.36 | 49.64 | 51.79 | 35.89 | 47.86 | 44.82 | 43.39 | 35.71 | 19.46 |
| ChatGLM-6B | 41.43 | 51.79 | 50.00 | 50.36 | 50.54 | 55.71 | 50.54 | 49.64 | 49.46 | 47.32 |
| BLOOMZ-7B | 49.82 | 54.82 | 52.68 | 52.50 | 51.25 | 53.39 | 31.96 | 31.07 | 27.32 | 26.61 |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.227, + 0.345, + 0.768, + 0.358 + ], + "angle": 0, + "content": "Table 4: The classification accuracy scores of all models under the four settings." + }, + { + "type": "table", + "bbox": [ + 0.129, + 0.638, + 0.87, + 0.755 + ], + "angle": 0, + "content": "
| Model | Open 0shot | Open 1shot | Open 2shot | Open 3shot | Open 4shot | Multi-choice 0shot | Multi-choice 1shot | Multi-choice 2shot | Multi-choice 3shot | Multi-choice 4shot |
|---|---|---|---|---|---|---|---|---|---|---|
| GPT-4 | 50.52 | 62.72 | 67.54 | 68.61 | 71.02 | 62.31 | 70.42 | 71.81 | 73.24 | 74.00 |
| ChatGPT | 43.14 | 58.42 | 61.86 | 64.40 | 66.16 | 60.67 | 63.51 | 66.85 | 69.59 | 66.62 |
| Vicuna-13B | 25.50 | 48.85 | 47.64 | 49.49 | 39.82 | 44.70 | 41.73 | 41.48 | 35.03 | 21.61 |
| ChatGLM-6B | 41.89 | 50.30 | 47.76 | 48.59 | 48.67 | 53.74 | 49.26 | 47.56 | 47.61 | 45.32 |
| BLOOMZ-7B | 46.90 | 53.28 | 51.06 | 50.90 | 49.26 | 50.68 | 29.25 | 27.92 | 25.27 | 23.37 |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.248, + 0.765, + 0.747, + 0.779 + ], + "angle": 0, + "content": "Table 5: The classification F1 scores of all models under the four settings." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.521, + 0.941 + ], + "angle": 0, + "content": "7348" + } + ] +] \ No newline at end of file diff --git a/2023/A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction/03295168-adb4-4f17-ac96-deb081e11468_origin.pdf b/2023/A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction/03295168-adb4-4f17-ac96-deb081e11468_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..21b761f5faa2716fb6a9ed32e4db1b292c9d71e8 --- /dev/null +++ b/2023/A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction/03295168-adb4-4f17-ac96-deb081e11468_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:58945123db33158d43b4788d091211618bef9b2089f572e5db62c3558bf83748 +size 508333 diff --git a/2023/A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction/full.md b/2023/A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction/full.md new file mode 100644 index 0000000000000000000000000000000000000000..16d3fd4fc7be96200f98368ad6884d02a18ac770 --- /dev/null +++ b/2023/A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction/full.md @@ -0,0 +1,325 @@ +# A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction + +Ruihao Shui + +National University of Singapore ruihaoshui@u.nus.edu + +Yixin Cao + +Singapore Management University yxcao@smu.edu.sg + +Wang Xiang* + +University of Science and Technology of China xiangwang1223@gmail.com + +Tat-Seng Chua + +National University of Singapore dcscts@nus.edu.sg + +# Abstract + +Large language models (LLMs) have demonstrated great potential for domain-specific applications, such as 
the law domain. However, recent disputes over GPT-4's law evaluation raise questions concerning their performance in real-world legal tasks. To systematically investigate their competency in the law, we design practical baseline solutions based on LLMs and test on the task of legal judgment prediction. In our solutions, LLMs can work alone to answer open questions or coordinate with an information retrieval (IR) system to learn from similar cases or solve simplified multi-choice questions. We show that similar cases and multi-choice options, namely label candidates, included in prompts can help LLMs recall domain knowledge that is critical for expertise legal reasoning. We additionally present an intriguing paradox wherein an IR system surpasses the performance of LLM+IR due to limited gains acquired by weaker LLMs from powerful IR systems. In such cases, the role of LLMs becomes redundant. Our evaluation pipeline can be easily extended into other tasks to facilitate evaluations in other domains. Code is available at https://github.com/srhthu/LM-CompEval-Legal + +# 1 Introduction + +Large language models have achieved great success in various Natural Language Processing (NLP) tasks (Brown et al., 2020; Touvron et al., 2023), while there are still some disputes over the potential for domain-specific applications (Martínez, 2023). Focusing on the law domain, the leading LLM, GPT-4 (OpenAI, 2023), was claimed to pass the Uniform Bar Exam (UBE) with a 90th percentile score. Although inspiring, however, this result was pointed out to be overestimated (Martínez, 2023). + +![](images/99ef283638db9ae2fce19004b145e404d3b53a5a36b8bac09db03b2bc2773c2c.jpg) +Figure 1: The task of Legal Judgment Prediction and the evaluation settings. Different colors refer to different charges. For similar cases, "T" refers to true similar cases with the same charges as the query cases, while "F" refers to false similar cases. 
For task settings, "ZS" is the abbreviation for zero-shot and "FS" for few-shot.

This raises an interesting question: how exactly do LLMs perform in various real-world legal tasks?

In this paper, we design practical baseline solutions based on LLMs and systematically investigate their competency in the law, to shed light on other domains as well. We attribute the main issues of the previous benchmark as follows. First, UBE is too general and not subject to any legal jurisdiction (Martínez, 2023). Second, UBE contains multi-choice questions and open-ended questions that require human experts to evaluate. To avoid human evaluation, some datasets (Hendrycks et al., 2020) replace open-ended questions with multi-choice questions. However, in real-world applications, there are not only multi-choice but also open questions. Using multi-choice questions only may not be comprehensive enough. Third, specifically in but not limited to common law (Shulayeva et al., 2017; Xiao et al., 2019), similar cases are always introduced as evidence to support expertise legal reasoning (Zhong et al., 2020b), which are not fully studied in the previous benchmark (Hendrycks et al., 2020).

For the first issue, we choose legal judgment prediction (LJP) (Xiao et al., 2018; Chalkidis et al., 2019; Zhong et al., 2020a) as the example task for investigation. It is a real-world problem to determine the charges committed by the defendants under a juridical system, as shown in Figure 1. LJP is typically formulated as a classification task to predict the most likely charge from a list of predefined charges. Then, for the second and third issues, we design four settings derived from two work scenarios of LLMs to cover open and multi-choice questions and the usage of similar cases. In the first scenario, LLMs work alone without explicit knowledge in prompts, assuming all domain knowledge is implicitly stored in parameters.
In the second scenario, LLMs coordinate with an information retrieval (IR) system that enriches prompts with similar demonstrations and label candidates to benefit expertise reasoning. Specifically, demonstrations consist of pairs of similar cases and their charges, which are retrieved by the IR system based on similarity of case facts. Labels of the retrieved cases can form label candidates, shown as circles of different colors in Figure 1, to hint LLM with label information and narrow down label space (Ma et al., 2023). + +The four evaluation settings in Figure 1 can be categorized based on the presence of two elements in prompts: demonstrations (similar cases) and label candidates. Demonstrations convert the setting from zero-shot to few-shot prompting, while label candidates simplify the task from open questions to multi-choice questions1. The first scenario corresponds to the first setting, where neither element is present, while the second scenario encompasses the remaining three settings. We evaluate five up-to-date LLMs of the close-source GPT-3 (Brown et al., 2020) family, ChatGPT and GPT-4 (OpenAI, 2023), and open-source LLMs including Vicuna (Chiang et al., 2023), ChatGLM (Du et al., 2022) and BLOOMZ (Muennighoff et al., 2022). The evaluation is conducted on a Chinese LJP dataset, namely CAIL (Xiao et al., 2018), which contains cases of 112 criminal law charges2. + +We highlight our key findings as follows: + +1. Similar cases and label candidates can help + +LLMs recall domain knowledge that is critical for expertise legal reasoning. + +2. Label candidates result in more consistent outputs, indicating LLMs gain greater confidence in their domain knowledge (Jiang et al., 2021). +3. Irrelevant demonstrations formed by fixed cases hardly improve performance. This excludes their effect on task illustration. +4. Paradox: An IR system can outperform LLM+IR since weaker LLMs acquire limited gains from informative documents retrieved by a powerful IR system. 
Thus, it is critical to adapt LLMs to generate with retrieved documents.
5. More similar cases introduce more knowledge and noise simultaneously, whose final outcome depends on LLMs.

The main contributions are summarized in three aspects:

- We investigate the law competency of LLMs on the task of legal judgment prediction.
- We propose practical baseline solutions for LLMs that tackle two scenarios: working alone or in coordination with an IR system.
- We evaluate five LLMs and conduct a comprehensive analysis to demystify their characteristics of expertise reasoning.

# 2 Baseline Method

The goal of legal judgment prediction is to determine the committed charges given case facts. To harness LLMs for LJP, we adopt in-context learning (Brown et al., 2020) and use LLMs to generate the charges conditioned on prompts (Section 2.1). To enhance LLMs, we incorporate label candidates and demonstrations consisting of similar cases into prompts, which are acquired by an IR system (Section 2.2). This yields four settings of baseline solutions, namely zero-shot open questions, few-shot open questions, zero-shot multi-choice questions, and few-shot multi-choice questions. The multi-choice settings employ label candidates while few-shot settings include demonstrations, as shown in Figure 1. Finally, we introduce how to simulate IR systems with different capabilities to understand their effects (Section 2.3).

# 2.1 LLM Prompting

Prompt Design. A prompt begins with an instruction to illustrate the task, followed by label candidates and task demonstrations in the form of input-output pairs. The templates of prompts are displayed in Appendix A.1.

Parsing. We adopt one automatic parsing function for all LLMs to map LLM outputs to predefined charge labels. No ad hoc heuristics are employed for a fair comparison. Specifically, we use the BM25 algorithm3 to measure text similarity between outputs and pre-defined charges and predict the most similar charges.
BM25 is robust and yields comparable performances to neural similarity methods like text2vec4 in our pilot experiments. + +Inference. Sampling is enabled during generation for consistent results, as inspired by Wang et al. (2022). Five outputs are sampled for each prompt with the temperature of 0.8. Their similarity scores of pre-defined labels are averaged. + +# 2.2 IR System for Knowledge Incorporation + +IR systems are utilized to retrieve similar cases, commonly referenced by lawyers and judges, to inform their judgments. In addition to providing demonstrations, these similar cases can also aid in generating potential labels by incorporating the labels from the top similar cases. By employing these smaller sets of predefined charges, namely label candidates, complex open questions can be simplified into multiple-choice questions. This approach is effective in enhancing LM prompting (Ma et al., 2023), as including hundreds of charges directly in prompts is impractical. + +Implementation of IR System. We use the BM25 algorithm to measure the semantic similarity between cases. Similar cases are retrieved from the training dataset. To guarantee that the demonstrations exemplify one of the multi-choice options, we exclude demonstrations with labels that are not among the candidate options5. + +# 2.3 Simulation of IR Systems + +To investigate the effects of IR capabilities, we simulate a series of IR systems of different capabilities as measured by Precision@1 $^{6}$ . Then the top retrieved cases are used as demonstrations. We consider cases with identical charges to the query cases as true similar cases and vice versa. + +3https://pypi.org/project/rank-bm25/ +4https://github.com/crownpku/text2vec +5This condition is not violated for the top four similar cases without filtering. +The accuracy of the top one retrieved case. + +Realistic Simulation. We prioritize the returning of true similar cases for easy query cases, rather than the returning in a random manner. 
The query difficulty is measured by the Precision@10 of the BM25 retriever described in Section 2.2. The motivation is that queries with shallow linguistic features are more likely to yield relevant retrieval results than complex or obscure queries. To simulate a specific value (e.g., a%) of Precision@1, the top a% easiest test cases are guaranteed to have a true similar case, while the rest are assigned false similar cases.

# 3 Experimental Setup

# 3.1 Models

Below is a concise introduction to the five LLMs to be evaluated.

GPT-4 (OpenAI, 2023) and ChatGPT are available from the OpenAI API; the versions gpt-4-0314 and gpt-3.5-turbo-0301 are used. For technical details, ChatGPT is claimed to be a sibling model to InstructGPT (Ouyang et al., 2022), which is trained to follow instructions and align to human preferences with the RLHF algorithm (Christiano et al., 2017).

Vicuna-13B (Chiang et al., 2023) is a LLaMA model (Touvron et al., 2023) fine-tuned on 70K public user-shared conversations with ChatGPT. It can be viewed as learning distilled knowledge (Hinton et al., 2015) from ChatGPT.

ChatGLM-6B7 is a dialog language model based on the GLM (Du et al., 2022) architecture that supports English and Chinese.

BLOOMZ (Muennighoff et al., 2022) is an instruction fine-tuned BLOOM (Scao et al., 2022), a multilingual language model. We use the bloomz-7b1-mt version that is tuned for multilingual prompts. Unlike BLOOMZ, Vicuna and ChatGLM are mainly fine-tuned on conversational data.

# 3.2 Dataset and Pre-processing

The Chinese LJP dataset, CAIL (Xiao et al., 2018), is used in our experiments. Each sample consists of the case facts and the committed charge as the label. As the original dataset is very large (~100K for training and ~20K for test), we randomly sample a balanced small test set from the original test set. Five cases are sampled for each charge, accounting
| Tokenizer | Median | <=500 | <=1000 |
|---|---|---|---|
| ChatGPT | 396.5 | 68.75 | 92.32 |
| Vicuna | 496.0 | 50.89 | 86.96 |
| ChatGLM | 206.5 | 91.07 | 98.57 |
| BLOOMZ | 210.5 | 90.54 | 98.93 |
+ +Table 1: Statistics of the number of tokens across tokenizers. The last two columns present the ratios of test samples with token counts below the specified values. + +for 560 test cases in total for 112 charges. Similarly, we also sample the training and validation sets with 10 cases per charge. The training set is used to retrieve similar cases (Section 2.3), while the validation set is used to determine the optimal $k$ of the kNN algorithm. + +Truncation. Since some cases have very long descriptions, we truncate the case facts of demonstrations to 500 tokens and those of test samples to 1000 tokens. It is worth noting that the text is tokenized by the tokenizer of each model before truncation for a fair comparison. Recently, Petrov et al. (2023) address the issue that a tokenizer can lead to different performances of different languages. This suggests that the performance on a particular language can also be influenced by tokenizers from various models with varying language encoding efficiency. + +Table 1 shows the statistics of the number of tokens processed by different tokenizers8. The most efficient tokenizers for Chinese are those of ChatGLM and BLOOMZ, indicated by the medians of token numbers. In contrast, the tokenizer of ChatGPT produces $2 \times$ tokens and that of Vicuna produces $2.5 \times$ tokens. The truncation length is proper to accommodate most samples. + +# 4 LLM vs. LLM with IR System + +We initially present the overall results, highlighting the importance of label candidates and similar cases, and conduct a comparative analysis of the models. Subsequently, we investigate the relationship between label candidates and self-consistency to unveil their actual effects on expertise reasoning. Additionally, we perform an ablation study by replacing similar cases with fixed cases as demonstrations to further understand their impact. 
+ +![](images/620fe7dc4baab808679f7c64609c21920fadcfb6307c9e362caf890ff9646183.jpg) +Figure 2: The macro comparison between the four settings. “+Label” refers to zero-shot multi-choice questions; “+Sim Case” refers to few-shot open questions and “+Label +Sim Case” refers to few-shot multi-choice questions. More than one points of a model in the last two settings refer to runs with different number of demonstrations. + +![](images/058621cf7b920696a5ded18ebf7ce22d573e13f2fbf3fb20ad1dab34e7e7a260.jpg) +Figure 3: Compare the models under each setting. Few-shot performances are averaged among 1-shot to 4-shot. + +![](images/927a75a027158091b5c7362bb8583d03d067384c89367a300592ad4e44d85ee6.jpg) + +# 4.1 Overall Results + +The macro comparison between the four settings is shown in Figure 2, where each point represents the performance of one specific run of one model. + +Significance of label candidates and similar cases. In comparison to the zero-shot open question setting where LLMs work alone, the inclusion of label candidates, similar cases, or both demonstrates noteworthy enhancements. This highlights the effectiveness of our baseline solutions that leverage IR systems to expand the capabilities of LLMs in legal domains. These findings align with previous research that has also recognized the significance of the two components (Ma et al., 2023; Liu et al., 2021). + +The effects of label candidates and similar cases differ slightly in terms of performance mean and + +variance. Label candidates contribute to a higher mean performance, while similar cases introduce greater variance. Examining the model performances in the third setting (+Sim Case) displayed in Figure 2, GPT-4 and ChatGPT exhibit more significant improvements from similar cases compared to their smaller counterparts. They also gain more benefit from similar cases than from label candidates. This observation can be attributed to the varying difficulty levels of knowledge utilization. 
While the knowledge within label candidates is readily accessible and straightforward, leveraging similar cases requires stronger language understanding and few-shot learning abilities. + +Furthermore, the coexistence of label candidates and similar cases further enhances the performance of GPT-4 and ChatGPT, but it diminishes the performance of Vicuna, ChatGLM, and BLOOMZ. This suggests that smaller LLMs may encounter challenges in effectively managing knowledge in multiple forms simultaneously, leading to confusion. + +Model comparison. The performances of the models under zero-shot and few-shot prompting is shown in Figure 3, where few-shot performances are averaged among 1-shot to 4-shot. + +The zero-shot setting emphasizes the ability to understand instructions. When only instructions are available, BLOOMZ performs better than ChatGPT, indicating a superior multilingual instruction following ability. This result is reasonable as BLOOMZ is the only smaller LLM that is fine-tuned on multilingual instructions. Once provided with explicit domain knowledge, ChatGPT outperforms all smaller LLMs. The case is the same for BLOOMZ and ChatGLM, where ChatGLM overtakes BLOOMZ with knowledge of label candidates. BLOOMZ performs worst when prompted with two forms of knowledge, indicating that BLOOMZ is not very robust to prompts. Among the three smaller LLMs, ChatGLM is the most robust to various forms of knowledge. + +The significant effects of label candidates and similar cases can be explained as they activate LLM's memory of relevant domain knowledge. This view can be supported by two pieces of evidence about the relationship between label candidates and self-consistency (Section 4.2) and the negligible effect of irrelevant cases as fixed demonstrations (Section 4.3). + +![](images/797c37324a6cae109c7d203b9697dd318c533ebb16e08e086e16c03012b985f5.jpg) +Figure 4: Changes of performance and self-consistency after adding label candidates. 
The change of each model is illustrated by an arrow pointing from the open question setting to the multi-choice setting. + +# 4.2 Label Candidates Enhance Self-consistency and Confidence + +To further understand the effect of label candidates, we propose a metric to measure the self-consistency of LLMs that is calculated as the number of the majority prediction. Consistent outputs indicate a high level of confidence in LLMs, which is often associated with a better grasp of knowledge (Jiang et al., 2021, 2023). + +The changes in performance and self-consistency after introducing label candidates are shown in Figure 4 as the arrows. We observe that the incorporation of label candidates leads to more consistent outputs (8 of 10 cases) and higher confidence in LLMs except zero-shot GPT-4 with a slight decrease and few-shot BLOOMZ. In the zero-shot setting, label candidates significantly boost LLM performances. We postulate that label candidates help by eliciting pre-stored domain knowledge with concise charge names. Besides, the self-consistency also correlates with model performances (7 of 10 cases). Such correlation is also observed in other tasks like question answering (Jiang et al., 2021). It is worth noting that label candidates decrease both self-consistency and performance of few-shot prompted BLOOMZ, which also aligns with the correlation. + +# 4.3 Domain Knowledge Is More Critical Than Task Illustration + +There is a possible argument that similar demonstrations can help LLMs understand instructions and tasks. To disentangle their effects on task illustration and provision of domain knowledge, we + +![](images/eea72bf54c5ffa53aceb9880908d9a5af00872456134b4818d6da649d2fe31a9.jpg) +Figure 5: The effects of fixed (irrelevant) and similar cases as demonstrations. Divided by the baseline setting of zero-shot open questions, the left part refers to fixed demonstrations with increasing numbers of demonstrations, while the right part refers to similar demonstrations. 
The shaded area represents the range of the standard deviation. + +experiment with irrelevant demonstrations fixed for all test samples. We manually select two common cases with frequent charges from the original dataset as the fixed demonstrations. The 1-shot performance is averaged over the two demonstrations. + +We compare the effects of fixed and similar demonstrations against the baseline setting of zero-shot open questions in Figure 5. The change in performance from the center to the left shows that fixed demonstrations hardly benefit LLMs and sometimes harm performance (e.g., ChatGLM). This indicates that LLMs can generally understand instructions and do not need generic demonstrations for task clarification, implying that the main challenge of expertise reasoning is recalling domain knowledge rather than understanding a specific task. + +We inspect the notable performance drop of ChatGLM caused by fixed demonstrations. We find that ChatGLM tends to analyze the cases of both the demonstrations and the test samples and then answer with both of their charges. Its wordy style seems to result from the dialogue corpus used for fine-tuning, in which an assistant LLM is expected to provide rich information. In contrast, similar cases seem to encourage more concise outputs that follow the format of the demonstrations. + +# 5 Paradox of Information Retrieval System + +The significance of similar demonstrations illustrated in Section 4.3 has motivated research on prompting-oriented IR systems (Rubin et al., 2021; Sun et al., 2023) that retrieve high-quality demonstrations. + +![](images/0474ae253fa7623cb6310b53e5d59e27aaf151b2cd9ad07d38f2f015a7f64a32.jpg)
Figure 6: The performance of ChatGPT coordinated with a series of simulated IR systems with varying capabilities as measured by Precision@1. The vertical blue line represents the threshold of IR capability at which IR systems overtake ChatGPT. The performance of ChatGPT in the real setting (1-shot open questions) is indicated by the red plus sign. + +However, we raise an intuitive question: do LLMs gain substantial improvement from IR systems compared to a kNN baseline that harnesses the same IR systems for classification? The question is inspired by our observation that the BM25 retriever achieves a Precision@1 of $48.03\%$ and a prediction accuracy of $57.68\%$ by majority vote over the top $k = 17$ retrieved similar cases. + +This observation suggests a paradoxical scenario wherein an IR system alone outperforms the combination of LLM and IR, in which the LLM takes the leading role and the IR system a supporting role. In such a scenario, the LLM becomes redundant because it fails to fully utilize the informative retrieved documents. + +To investigate the paradox, instead of experimenting with different IR systems, we manipulate the BM25 retriever to simulate a series of IR systems with different capabilities, measured by Precision@1 as described in Section 2.3. We take ChatGPT as a case study; its 1-shot performance under IR systems of varying Precision@1 is shown in Figure 6. + +Results Although the performance of ChatGPT enhanced by IR systems improves with IR capability, it eventually underperforms the IR system itself once the IR capability surpasses a certain threshold. In the ideal situation where true similar cases are always retrieved, ChatGPT is unable to attain $100\%$ accuracy and lags significantly behind the optimal IR system. According to Appendix A.4, none of the smaller LLMs is comparable to the BM25 retriever. + +![](images/172f2228b736de67a5247c124932299a26a9018979977cbb67db93e147fb597c.jpg)
Figure 7: Performance vs. the number of similar demonstrations of the five LLMs. + +Discussion The findings demonstrate that LLMs face challenges in effectively leveraging informative retrieved documents. 
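As a concrete reference point, the kNN baseline discussed above (majority vote over the labels of the top-$k$ retrieved similar cases, with $k = 17$ in our observation) can be sketched as follows. This is a minimal illustration: the token-overlap similarity and the toy corpus are stand-ins for the actual BM25 retriever and case documents.

```python
from collections import Counter

def knn_predict(query_tokens, corpus, k=17):
    """Predict a charge by majority vote over the labels of the top-k
    most similar corpus cases. Token overlap stands in for BM25 scoring."""
    ranked = sorted(
        corpus,
        key=lambda case: len(set(query_tokens) & set(case["tokens"])),
        reverse=True,
    )
    votes = Counter(case["label"] for case in ranked[:k])
    return votes.most_common(1)[0][0]

# Toy retrieval corpus of labeled cases (hypothetical tokens and charges).
corpus = (
    [{"tokens": ["knife", "wound", "victim"], "label": "assault"}] * 3
    + [{"tokens": ["wallet", "stolen", "pocket"], "label": "theft"}] * 2
)
print(knn_predict(["knife", "victim"], corpus, k=5))  # -> assault
```

Note that such a baseline needs no language model at all, which is what makes a sufficiently strong retriever competitive with an LLM+IR pipeline on classification tasks.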
This underscores the need for significant research efforts to enhance the synergy between auto-regressive language models and retrieval by conditioning model outputs more strongly on retrieved documents. Previous work has explored augmenting LLMs with retrieval at both the pre-training and fine-tuning stages (Borgeaud et al., 2022; Wang et al., 2023). Moreover, the marginal and inadequate improvement brought by retrieval indicates the limited legal reasoning ability of existing general-purpose LLMs. Future efforts are needed to enhance the domain-specific reasoning abilities of pre-trained foundation models. + +# 6 Ablation Study + +# 6.1 More Demonstrations Are Not Always Better + +The impact of the number of similar demonstrations $(n)$ is depicted in Figure 7. It is evident that GPT-4 and ChatGPT handle larger numbers of demonstrations proficiently, leading to enhanced performance, whereas Vicuna, ChatGLM and BLOOMZ experience varying degrees of performance degradation as the number increases. Notably, ChatGLM displays the least sensitivity to $n$. Furthermore, even ChatGPT's performance declines when $n$ is increased from three to four. The performance improvement from larger values of $n$ can be attributed to the increased recall of true similar cases; conversely, the decline in performance can be attributed to the noise introduced by more false similar cases. + +Performance variations. The changes in performance after including an additional demonstration are visualized as heat maps in Figure 8. For each model, the three heat maps stand for the variations from k-shot to $(\mathrm{k} + 1)$-shot, as denoted below each map. For each heat map, the two rows indicate the inclusion of a new demonstration with true (T) or false (F) similar cases, while the columns indicate the combinations of existing demonstrations. Take the second heat map as an example. 
The cell in the column (F, T) and the row (T) displays the performance variation between the 2-shot setting with (F, T) demonstrations and the 3-shot setting with (F, T, T) demonstrations. Purple represents performance improvement, while green represents performance decline. + +For ChatGPT and BLOOMZ, the second rows of their three heat maps are mainly purple, indicating significant enhancements from the inclusion of true similar cases. However, the first rows of BLOOMZ display a deeper green than those of ChatGPT, suggesting that BLOOMZ suffers a greater degree of performance decline from the inclusion of false similar cases. These findings indicate different sensitivities to false similar demonstrations. Powerful language models like GPT-4 and ChatGPT exhibit robustness to the noise in false similar cases, allowing them to remain focused on the relevant information in true similar cases. In contrast, weaker LLMs are susceptible to such noise. Overall, ChatGPT performs better when provided with more similar demonstrations, whereas BLOOMZ demonstrates the opposite, as shown in Figure 7. + +The conclusion is that increased numbers of demonstrations have both positive and negative implications for expertise reasoning. However, LLMs could potentially gain from additional demonstrations in tasks that require clear task illustration. + +# 6.2 The Impact of Absent Ground Truth Labels + +We manually incorporate ground-truth labels into the label candidates in cases where they are absent, which may occur due to the limited recall capability of the IR system described in Section 2.2. The test samples are categorized into two groups, namely "Easy" and "Hard", based on whether their ground-truth labels are retrieved by the IR system. The original performance of the two groups and the performance of the "Hard" group with prompts modified to include the ground-truth labels, namely "Hard+GT", are displayed in Figure 9. 
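The grouping just described can be sketched as follows; the sample fields (`gold`, `candidates`) are hypothetical names chosen only for illustration:

```python
def split_easy_hard(samples):
    """Partition test samples by whether the IR system retrieved the
    ground-truth charge among the label candidates."""
    easy = [s for s in samples if s["gold"] in s["candidates"]]
    hard = [s for s in samples if s["gold"] not in s["candidates"]]
    return easy, hard

def with_ground_truth(sample):
    """Build the "Hard+GT" variant: append the absent gold label to the
    candidate list before constructing the multi-choice prompt."""
    return {**sample, "candidates": sample["candidates"] + [sample["gold"]]}

samples = [
    {"gold": "theft", "candidates": ["theft", "robbery"]},  # an "Easy" sample
    {"gold": "fraud", "candidates": ["theft", "robbery"]},  # a "Hard" sample
]
easy, hard = split_easy_hard(samples)
print(with_ground_truth(hard[0])["candidates"])  # -> ['theft', 'robbery', 'fraud']
```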
+ +The performance gaps between the "Easy" and "Hard+GT" groups suggest that samples that are challenging for IR systems are also difficult for LLMs. However, this gap is insignificant for the powerful GPT-4, which perceives the two groups as equally challenging. + +![](images/fceac87db227ad2a57df6f7e774a27f1bdaa0fa61b7ab09823a4b90fdef37bc8.jpg)
Figure 8: Heat maps of performance variations resulting from the inclusion of an additional demonstration. "T" corresponds to demonstrations with true similar cases, while "F" represents those with false similar cases. Each row represents the newly included demonstration, while each column indicates the status of existing demonstrations. + +![](images/37506c50c53710f25935e164f14860b2a53239585ed1d5a3b8f8ac353fd38898.jpg)
Figure 9: The performance of "Easy" and "Hard" samples under the setting of zero-shot multi-choice questions. "Hard+GT" refers to the "Hard" group with the absent ground-truth labels added to the label candidates. + +The improvement of "Hard+GT" over "Hard" is notable for GPT-4, ChatGPT and ChatGLM but inconspicuous for Vicuna, which has inferior competency in the law. Considering the relatively small size of the "Hard" group (79/560), the absence of ground-truth labels does not have a significant impact, especially for weaker LLMs. + +# 6.3 Incorporation of Law Articles + +We examine the effect of incorporating into prompts the legal articles that explicitly define the charges. For each charge retrieved by the IR system$^{11}$, ChatGPT is required to determine whether the defendant is guilty of that particular charge by answering yes or no. We find that $94.46\%$ of the ground-truth charges are accurately detected, while only $27.31\%$ of the detected charges are correct. The high recall and low precision indicate a substantial gap between ChatGPT and legal experts in the ability to distinguish charges and make precise judgments. + +# 7 Discussion + +We compare the LLMs with supervised baselines. 
We fine-tune BERT (Devlin et al., 2018) on the same training set and achieve an accuracy of $68\%$, comparable to ChatGPT but lower than GPT-4. Since the LLMs are not fine-tuned on the specific LJP task, this result highlights the remarkable superiority of LLMs in acquiring significant knowledge and leveraging transfer learning (Raffel et al., 2020). + +However, we observe that BERT's performance improves to $89\%$ when trained on the original training set ( $\sim 10\mathrm{K}$ ). We find that certain knowledge is present in shallow features, which can be easily learned with supervision. These superficial features can result in biased supervised models. Fortunately, unsupervised pre-training objectives make LLMs more robust and less vulnerable to this issue. This depicts a promising future for NLP applications in various domains. + +# 8 Conclusion + +To address the deficiency in evaluating the competency of LLMs in the field of law, we focused on the task of legal judgment prediction and devised four settings to facilitate a thorough evaluation that encompasses both open and multiple-choice questions and incorporates similar cases to aid the decision-making process. + +The evaluation results revealed different behaviors between the prominent LLMs, namely GPT-4 and ChatGPT, and their smaller counterparts. Both GPT-4 and ChatGPT exhibited remarkable proficiency in leveraging domain knowledge provided in various formats. Among the smaller LLMs, ChatGLM displayed greater robustness, while BLOOMZ showcased superior zero-shot ability. + +We presented an intriguing paradox wherein LLMs could become redundant in the presence of a powerful IR system. When improving IR systems to benefit LLMs, it is crucial for researchers to acknowledge this paradoxical scenario and prevent a great disparity between LLMs and IR systems. 
+ +# Limitations + +One limitation of this paper is the use of the closed-source GPT-4 and ChatGPT, whose availability depends on the commercial company OpenAI. According to OpenAI, the ChatGPT and GPT-4 versions used in this paper, namely gpt-3.5-turbo-0301 and gpt-4-0314, will be deprecated and unavailable after September 13th, 2023. + +Another limitation pertains to the selection of LLMs. Due to the rapid emergence of new LLMs, we are unable to include all of them under the constraint of limited time. Instead of covering more models, we focus on designing comprehensive evaluation settings and conducting insightful analyses that can shed light on other domains. + +# Ethics Statement + +The task of legal judgment prediction is used to evaluate LLMs' competency in the law. The primary objective of this task is to assist judges and lawyers in comprehending lengthy legal documents by offering them a supplementary tool. It is important to note that this task does not seek to replace the roles of judges and lawyers, nor does it aim to determine the guilt or charges of defendants through machine learning algorithms. Additionally, there is research focused on interpreting LJP models, aiming to enhance the transparency of black-box models for improved utilization by legal practitioners. This paper utilizes a public and anonymized dataset to avoid the potential issue of personal information leakage. + +# Acknowledgements + +We thank all reviewers for their constructive comments. This research is supported by the NExT Research Center, the National Natural Science Foundation of China (9227010114) and the University Synergy Innovation Program of Anhui Province (GXXT-2022-040). + +# References + +Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. 2022. Improving language models by retrieving from trillions of tokens. 
In International conference on machine learning, pages 2206-2240. PMLR. +Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901. +Ilias Chalkidis, Ion Androutsopoulos, and Nikolaos Aletras. 2019. Neural legal judgment prediction in english. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4317-4323. +Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An opensource chatbot impressing gpt-4 with $90\%$ * chatgpt quality. +Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. +Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. 2022. Glm: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 320-335. +Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300. +Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. +Zhengbao Jiang, Jun Araki, Haibo Ding, and Graham Neubig. 2021. How can we know when language models know? 
on the calibration of language models for question answering. Transactions of the Association for Computational Linguistics, 9:962-977. +Zhengbao Jiang, Frank F Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Active retrieval augmented generation. arXiv preprint arXiv:2305.06983. + +Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2021. What makes good in-context examples for gpt-3? arXiv preprint arXiv:2101.06804. +Yubo Ma, Yixin Cao, YongChing Hong, and Aixin Sun. 2023. Large language model is not a good few-shot information extractor, but a good reranker for hard samples! arXiv preprint arXiv:2303.08559. +Eric Martínez. 2023. Re-evaluating gpt-4's bar exam performance. Available at SSRN 4441311. +Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, et al. 2022. Crosslingual generalization through multitask finetuning. arXiv preprint arXiv:2211.01786. +OpenAI. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774. +Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744. +Aleksandar Petrov, Emanuele La Malfa, Philip HS Torr, and Adel Bibi. 2023. Language model tokenizers introduce unfairness between languages. arXiv preprint arXiv:2305.15425. +Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485-5551. +Ohad Rubin, Jonathan Herzig, and Jonathan Berant. 2021. Learning to retrieve prompts for in-context learning. 
arXiv preprint arXiv:2112.08633. +Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Galle, et al. 2022. Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100. +Olga Shulayeva, Advaith Siddharthan, and Adam Wyner. 2017. Recognizing cited facts and principles in legal judgements. Artificial Intelligence and Law, 25(1):107-126. +Xiaofei Sun, Linfeng Dong, Xiaoya Li, Zhen Wan, Shuhe Wang, Tianwei Zhang, Jiwei Li, Fei Cheng, Lingjuan Lyu, Fei Wu, and Guoyin Wang. 2023. Pushing the limits of chatgpt on nlp tasks. +Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, + +Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. +Boxin Wang, Wei Ping, Peng Xu, Lawrence McAfee, Zihan Liu, Mohammad Shoeybi, Yi Dong, Oleksii Kuchaiev, Bo Li, Chaowei Xiao, et al. 2023. Shall we pretrain autoregressive language models with retrieval? a comprehensive study. arXiv preprint arXiv:2304.06762. +Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171. +Chaojun Xiao, Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Zhiyuan Liu, Maosong Sun, Yansong Feng, Xianpei Han, Zhen Hu, Heng Wang, et al. 2018. Cail2018: A large-scale legal dataset for judgment prediction. arXiv preprint arXiv:1807.02478. +Chaojun Xiao, Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Zhiyuan Liu, Maosong Sun, Tianyang Zhang, Xianpei Han, Zhen Hu, Heng Wang, et al. 2019. Cail2019-scm: A dataset of similar case matching in legal domain. arXiv preprint arXiv:1911.08962. +Haoxi Zhong, Yuzhong Wang, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, and Maosong Sun. 2020a. 
Iteratively questioning and answering for interpretable legal judgment prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 1250-1257. +Haoxi Zhong, Chaojun Xiao, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, and Maosong Sun. 2020b. How does nlp benefit legal system: A summary of legal artificial intelligence. arXiv preprint arXiv:2004.12158. + +# A Appendix + +# A.1 Prompt Templates + +The prompt template is shown in Figure 10. The translation of the original Chinese prompt is displayed in orange text. The setting of zero-shot open questions uses a longer instruction that appends "Output the charge name directly" to the instruction in Figure 10. + +![](images/2f571fc406aeb44a37b39fa8436dcaf4cb565f2f8dbdaee851cee92ed3b8a327.jpg)
Figure 10: The prompt template in Chinese and English. + +# A.2 Robust to Fixed Demonstrations + +
| Model | 1-shot | 2-shot |
| --- | --- | --- |
| GPT-4 | 49.59 / 48.84 | 50.69 |
| ChatGPT | 47.01 / 46.57 | 47.55 |
| Vicuna-13B | 22.74 / 29.38 | 28.37 |
| ChatGLM-6B | 22.39 / 25.14 | 21.36 |
| BLOOMZ-7B | 36.65 / 43.94 | 42.24 |

Table 2: The classification accuracy scores with prompts consisting of fixed cases. The two values in the 1-shot column correspond to the two fixed demonstration cases. + +We examine the effects of the two fixed cases mentioned in Section 4.3 in Table 2. We find that GPT-4 and ChatGPT are robust to the selection of the fixed demonstration in the 1-shot setting, while Vicuna, ChatGLM and BLOOMZ are less robust. + +# A.3 Comparison with Supervised Baselines + +To understand the performance of supervised fine-tuning (SFT) baselines on LJP, we experiment with three models: BERT $^{12}$ , XLM-RoBERTa $^{13}$ and DeBERTa $^{14}$ . These models are fine-tuned on two datasets of different sizes: the original CAIL dataset (~100k samples) and the sampled training set (1120 samples) used as the retrieval corpus described in Section 3.2, denoted as CAIL_few. + +The SFT models are evaluated on the same evaluation dataset described in Section 3.2. The smaller training set is intended to compare the few-shot performance of SFT baselines and LLMs in a low-data scenario. + +The results of the SFT models are shown in Table 3. Considering that the highest accuracy of GPT-4 is $74.46\%$ (multi-choice, 4-shot), GPT-4 can outperform the supervised baselines in the low-data scenario. With abundant training data, however, the supervised baselines still outperform GPT-4 by about $15\%$ . 
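As a rough analogue of such supervised baselines, the ease with which shallow surface features can be fit under supervision (cf. Section 7) can be illustrated with a toy bag-of-words perceptron. This is only a sketch with hypothetical training pairs, not the actual BERT fine-tuning setup:

```python
from collections import defaultdict

def train_bow_perceptron(examples, epochs=10):
    """Train a toy bag-of-words perceptron charge classifier.
    examples: list of (token_list, label) pairs."""
    weights = defaultdict(lambda: defaultdict(float))  # label -> token -> weight
    labels = sorted({label for _, label in examples})

    def score(tokens, label):
        return sum(weights[label][t] for t in tokens)

    for _ in range(epochs):
        for tokens, gold in examples:
            pred = max(labels, key=lambda lab: score(tokens, lab))
            if pred != gold:  # standard perceptron update on mistakes
                for t in tokens:
                    weights[gold][t] += 1.0
                    weights[pred][t] -= 1.0
    return lambda tokens: max(labels, key=lambda lab: score(tokens, lab))

# Hypothetical training pairs with superficial lexical cues.
train = [
    (["stole", "wallet"], "theft"),
    (["knife", "wound"], "assault"),
    (["stole", "phone"], "theft"),
    (["knife", "threat"], "assault"),
]
predict = train_bow_perceptron(train)
print(predict(["stole", "wallet"]))  # -> theft
```

A few lexical cues suffice for high training accuracy here, which mirrors how supervised models can latch onto superficial features rather than genuine legal reasoning.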
| Model | CAIL | CAIL_few |
| --- | --- | --- |
| BERT | 89.64 | 68.04 |
| XLM-RoBERTa | 88.75 | 66.43 |
| DeBERTa | 88.57 | 30.89 |
+ +Table 3: Prediction accuracy of the SFT models fine-tuned on two training datasets of different sizes. + +# A.4 Detailed Results + +The specific performance values displayed in Figure 2 are presented in Table 4. In addition, we provide the corresponding F1 scores in Table 5. +
| Model | Open 0-shot | Open 1-shot | Open 2-shot | Open 3-shot | Open 4-shot | MC 0-shot | MC 1-shot | MC 2-shot | MC 3-shot | MC 4-shot |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-4 | 55.18 | 64.82 | 69.11 | 69.82 | 71.96 | 63.93 | 71.25 | 72.50 | 73.75 | 74.46 |
| ChatGPT | 46.61 | 60.00 | 62.86 | 64.82 | 66.96 | 61.61 | 64.46 | 66.96 | 70.36 | 67.14 |
| Vicuna-13B | 28.21 | 50.36 | 49.64 | 51.79 | 35.89 | 47.86 | 44.82 | 43.39 | 35.71 | 19.46 |
| ChatGLM-6B | 41.43 | 51.79 | 50.00 | 50.36 | 50.54 | 55.71 | 50.54 | 49.64 | 49.46 | 47.32 |
| BLOOMZ-7B | 49.82 | 54.82 | 52.68 | 52.50 | 51.25 | 53.39 | 31.96 | 31.07 | 27.32 | 26.61 |

("Open" = open questions; "MC" = multiple-choice questions.)
+ +Table 4: The classification accuracy scores of all models under the four settings. + +
| Model | Open 0-shot | Open 1-shot | Open 2-shot | Open 3-shot | Open 4-shot | MC 0-shot | MC 1-shot | MC 2-shot | MC 3-shot | MC 4-shot |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-4 | 50.52 | 62.72 | 67.54 | 68.61 | 71.02 | 62.31 | 70.42 | 71.81 | 73.24 | 74.00 |
| ChatGPT | 43.14 | 58.42 | 61.86 | 64.40 | 66.16 | 60.67 | 63.51 | 66.85 | 69.59 | 66.62 |
| Vicuna-13B | 25.50 | 48.85 | 47.64 | 49.49 | 39.82 | 44.70 | 41.73 | 41.48 | 35.03 | 21.61 |
| ChatGLM-6B | 41.89 | 50.30 | 47.76 | 48.59 | 48.67 | 53.74 | 49.26 | 47.56 | 47.61 | 45.32 |
| BLOOMZ-7B | 46.90 | 53.28 | 51.06 | 50.90 | 49.26 | 50.68 | 29.25 | 27.92 | 25.27 | 23.37 |

("Open" = open questions; "MC" = multiple-choice questions.)
+ +Table 5: The classification F1 scores of all models under the four settings. \ No newline at end of file diff --git a/2023/A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction/images.zip b/2023/A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..48d8db77314774e86077a59c77423a2040467a78 --- /dev/null +++ b/2023/A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f5b02ddce7b7421910eed7373a62beed0e4a1f4b778d01d2aad3a5d91e1f99ba +size 483834 diff --git a/2023/A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction/layout.json b/2023/A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..ddc692511506304b239cb7b1455b103809f6b339 --- /dev/null +++ b/2023/A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction/layout.json @@ -0,0 +1,7869 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 112, + 67, + 483, + 100 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 112, + 67, + 483, + 100 + ], + "spans": [ + { + "bbox": [ + 112, + 67, + 483, + 100 + ], + "type": "text", + "content": "A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 165, + 106, + 232, + 118 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 165, + 106, + 232, + 118 + ], + "spans": [ + { + "bbox": [ + 165, + 106, + 232, + 118 + ], + "type": "text", + "content": "Ruihao Shui" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 118, + 121, + 279, + 147 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 118, + 121, + 279, + 147 + ], + "spans": [ + { + "bbox": [ + 
118, + 121, + 279, + 147 + ], + "type": "text", + "content": "National University of Singapore ruihaoshui@u.nus.edu" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 369, + 106, + 423, + 118 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 369, + 106, + 423, + 118 + ], + "spans": [ + { + "bbox": [ + 369, + 106, + 423, + 118 + ], + "type": "text", + "content": "Yixin Cao" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 311, + 120, + 480, + 148 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 120, + 480, + 148 + ], + "spans": [ + { + "bbox": [ + 311, + 120, + 480, + 148 + ], + "type": "text", + "content": "Singapore Management University yxcao@smu.edu.sg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 164, + 158, + 234, + 172 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 164, + 158, + 234, + 172 + ], + "spans": [ + { + "bbox": [ + 164, + 158, + 234, + 172 + ], + "type": "text", + "content": "Wang Xiang*" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 82, + 172, + 312, + 200 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 82, + 172, + 312, + 200 + ], + "spans": [ + { + "bbox": [ + 82, + 172, + 312, + 200 + ], + "type": "text", + "content": "University of Science and Technology of China xiangwang1223@gmail.com" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 357, + 158, + 436, + 172 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 357, + 158, + 436, + 172 + ], + "spans": [ + { + "bbox": [ + 357, + 158, + 436, + 172 + ], + "type": "text", + "content": "Tat-Seng Chua" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 315, + 173, + 477, + 200 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 173, + 477, + 200 + ], + "spans": [ + { + "bbox": [ + 315, + 173, + 477, + 200 + ], + "type": "text", + "content": "National University of Singapore dcscts@nus.edu.sg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 155, + 
212, + 203, + 225 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 155, + 212, + 203, + 225 + ], + "spans": [ + { + "bbox": [ + 155, + 212, + 203, + 225 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 84, + 238, + 274, + 561 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 238, + 274, + 561 + ], + "spans": [ + { + "bbox": [ + 84, + 238, + 274, + 561 + ], + "type": "text", + "content": "Large language models (LLMs) have demonstrated great potential for domain-specific applications, such as the law domain. However, recent disputes over GPT-4's law evaluation raise questions concerning their performance in real-world legal tasks. To systematically investigate their competency in the law, we design practical baseline solutions based on LLMs and test on the task of legal judgment prediction. In our solutions, LLMs can work alone to answer open questions or coordinate with an information retrieval (IR) system to learn from similar cases or solve simplified multi-choice questions. We show that similar cases and multi-choice options, namely label candidates, included in prompts can help LLMs recall domain knowledge that is critical for expertise legal reasoning. We additionally present an intriguing paradox wherein an IR system surpasses the performance of LLM+IR due to limited gains acquired by weaker LLMs from powerful IR systems. In such cases, the role of LLMs becomes redundant. Our evaluation pipeline can be easily extended into other tasks to facilitate evaluations in other domains. 
Code is available at https://github.com/srhthu/LM-CompEval-Legal" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 68, + 574, + 154, + 587 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 574, + 154, + 587 + ], + "spans": [ + { + "bbox": [ + 68, + 574, + 154, + 587 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 67, + 597, + 292, + 734 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 597, + 292, + 734 + ], + "spans": [ + { + "bbox": [ + 67, + 597, + 292, + 734 + ], + "type": "text", + "content": "Large language models have achieved great success in various Natural Language Processing (NLP) tasks (Brown et al., 2020; Touvron et al., 2023), while there are still some disputes over the potential for domain-specific applications (Martínez, 2023). Focusing on the law domain, the leading LLM, GPT-4 (OpenAI, 2023), was claimed to pass the Uniform Bar Exam (UBE) with a 90th percentile score. Although inspiring, however, this result was pointed out to be overestimated (Martínez, 2023)." + } + ] + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 305, + 211, + 526, + 377 + ], + "blocks": [ + { + "bbox": [ + 305, + 211, + 526, + 377 + ], + "lines": [ + { + "bbox": [ + 305, + 211, + 526, + 377 + ], + "spans": [ + { + "bbox": [ + 305, + 211, + 526, + 377 + ], + "type": "image", + "image_path": "99ef283638db9ae2fce19004b145e404d3b53a5a36b8bac09db03b2bc2773c2c.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 384, + 527, + 456 + ], + "lines": [ + { + "bbox": [ + 302, + 384, + 527, + 456 + ], + "spans": [ + { + "bbox": [ + 302, + 384, + 527, + 456 + ], + "type": "text", + "content": "Figure 1: The task of Legal Judgment Prediction and the evaluation settings. Different colors refer to different charges. 
For similar cases, \"T\" refers to true similar cases with the same charges as the query cases, while \"F\" refers to false similar cases. For task settings, \"ZS\" is the abbreviation for zero-shot and \"FS\" for few-shot." + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_caption" + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 486, + 525, + 513 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 486, + 525, + 513 + ], + "spans": [ + { + "bbox": [ + 302, + 486, + 525, + 513 + ], + "type": "text", + "content": "This raises an interesting question: How exactly LLMs perform in various real-world legal tasks?" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 301, + 516, + 526, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 516, + 526, + 773 + ], + "spans": [ + { + "bbox": [ + 301, + 516, + 526, + 773 + ], + "type": "text", + "content": "In this paper, we design practical baseline solutions based on LLMs and systematically investigate their competency in the law, to shed light on other domains as well. We attribute the main issues of the previous benchmark as follows. First, UBE is too general and not subject to any legal jurisdiction (Martínez, 2023). Second, UBE contains multi-choice questions and open-ended questions that require human experts to evaluate. To avoid human evaluation, some datasets (Hendrycks et al., 2020) replace open-ended questions with multi-choice questions. However, in real-world applications, there are not only multi-choice but also open questions. Using multi-choice questions only may not be comprehensive enough. 
Third, specifically in but not limited to common law (Shulayeva et al., 2017; Xiao et al., 2019), similar cases are always introduced as evidence to support expertise legal reasoning (Zhong et al., 2020b), which are not fully" + } + ] + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 67, + 740, + 290, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 740, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 740, + 290, + 772 + ], + "type": "text", + "content": "*Xiang Wang is also affiliated with Institute of Artificial Intelligence, Institute of Dataspace, Hefei Comprehensive National Science Center." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "text", + "content": "7337" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 129, + 795, + 464, + 819 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 129, + 795, + 464, + 819 + ], + "spans": [ + { + "bbox": [ + 129, + 795, + 464, + 819 + ], + "type": "text", + "content": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 7337-7348 December 6-10, 2023 ©2023 Association for Computational Linguistics" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 291, + 97 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 291, + 97 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 291, + 97 + ], + "type": "text", + "content": "studied in previous benchmark (Hendrycks et al., 2020)." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 99, + 291, + 449 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 99, + 291, + 449 + ], + "spans": [ + { + "bbox": [ + 67, + 99, + 291, + 449 + ], + "type": "text", + "content": "For the first issue, we choose legal judgment prediction (LJP) (Xiao et al., 2018; Chalkidis et al., 2019; Zhong et al., 2020a) as the example task for investigation. It is a real-world problem to determine the charges committed by the defendants under a juridical system, as shown in Figure 1. LJP is typically formulated as a classification task to predict the most possible one from a list of predefined charges. Then, for the second and third issues, we design four settings derived from two work scenarios of LLMs to cover open and multichoice questions and the usage of similar cases. In the first scenario, LLMs work alone without explicit knowledge in prompts, assuming all domain knowledge is implicitly stored in parameters. In the second scenario, LLMs coordinate with an information retrieval (IR) system that enriches prompts with similar demonstrations and label candidates to benefit expertise reasoning. Specifically, demonstrations consist of pairs of similar cases and their charges, which are retrieved by the IR system based on similarity of case facts. Labels of the retrieved cases can form label candidates, shown as circles of different colors in Figure 1, to hint LLM with label information and narrow down label space (Ma et al., 2023)." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 451, + 291, + 693 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 451, + 291, + 693 + ], + "spans": [ + { + "bbox": [ + 67, + 451, + 291, + 693 + ], + "type": "text", + "content": "The four evaluation settings in Figure 1 can be categorized based on the presence of two elements in prompts: demonstrations (similar cases) and label candidates. 
Demonstrations convert the setting from zero-shot to few-shot prompting, while label candidates simplify the task from open questions to multi-choice questions1. The first scenario corresponds to the first setting, where neither element is present, while the second scenario encompasses the remaining three settings. We evaluate five up-to-date LLMs of the closed-source GPT-3 (Brown et al., 2020) family, ChatGPT and GPT-4 (OpenAI, 2023), and open-source LLMs including Vicuna (Chiang et al., 2023), ChatGLM (Du et al., 2022) and BLOOMZ (Muennighoff et al., 2022). The evaluation is conducted on a Chinese LJP dataset, namely CAIL (Xiao et al., 2018), which contains cases of 112 criminal law charges2." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 78, + 695, + 261, + 708 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 78, + 695, + 261, + 708 + ], + "spans": [ + { + "bbox": [ + 78, + 695, + 261, + 708 + ], + "type": "text", + "content": "We highlight our key findings as follows:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 78, + 708, + 289, + 722 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 78, + 708, + 289, + 722 + ], + "spans": [ + { + "bbox": [ + 78, + 708, + 289, + 722 + ], + "type": "text", + "content": "1. Similar cases and label candidates can help" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 324, + 71, + 524, + 98 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 324, + 71, + 524, + 98 + ], + "spans": [ + { + "bbox": [ + 324, + 71, + 524, + 98 + ], + "type": "text", + "content": "LLMs recall domain knowledge that is critical for expertise legal reasoning." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 311, + 99, + 526, + 301 + ], + "type": "list", + "angle": 0, + "index": 10, + "blocks": [ + { + "bbox": [ + 312, + 99, + 525, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 99, + 525, + 138 + ], + "spans": [ + { + "bbox": [ + 312, + 99, + 525, + 138 + ], + "type": "text", + "content": "2. Label candidates result in more consistent outputs, indicating that LLMs gain greater confidence in their domain knowledge (Jiang et al., 2021)." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 312, + 139, + 526, + 178 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 139, + 526, + 178 + ], + "spans": [ + { + "bbox": [ + 312, + 139, + 526, + 178 + ], + "type": "text", + "content": "3. Irrelevant demonstrations formed by fixed cases hardly improve performance. This rules out task illustration as the source of the gains." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 311, + 179, + 525, + 259 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 179, + 525, + 259 + ], + "spans": [ + { + "bbox": [ + 311, + 179, + 525, + 259 + ], + "type": "text", + "content": "4. Paradox: An IR system can outperform LLM+IR since weaker LLMs acquire limited gains from informative documents retrieved by a powerful IR system. Thus, it is critical to adapt LLMs to generate with retrieved documents." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 311, + 260, + 525, + 301 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 260, + 525, + 301 + ], + "spans": [ + { + "bbox": [ + 311, + 260, + 525, + 301 + ], + "type": "text", + "content": "5. More similar cases introduce more knowledge and noise simultaneously, whose final outcome depends on LLMs." 
+ } + ] + } + ], + "index": 9 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 303, + 301, + 524, + 328 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 301, + 524, + 328 + ], + "spans": [ + { + "bbox": [ + 303, + 301, + 524, + 328 + ], + "type": "text", + "content": "The main contributions are summarized in three aspects:" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 316, + 335, + 525, + 460 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 316, + 335, + 524, + 361 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 335, + 524, + 361 + ], + "spans": [ + { + "bbox": [ + 316, + 335, + 524, + 361 + ], + "type": "text", + "content": "- We investigate the law competency of LLMs on the task of legal judgment prediction." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 316, + 370, + 524, + 411 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 370, + 524, + 411 + ], + "spans": [ + { + "bbox": [ + 316, + 370, + 524, + 411 + ], + "type": "text", + "content": "- We propose practical baseline solutions for LLMs that tackle two scenarios: working alone or in coordination with an IR system." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 316, + 419, + 525, + 460 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 419, + 525, + 460 + ], + "spans": [ + { + "bbox": [ + 316, + 419, + 525, + 460 + ], + "type": "text", + "content": "- We evaluate five LLMs and conduct comprehensive analysis to demystify their characteristics of expertise reasoning." 
+ } + ] + } + ], + "index": 14 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 302, + 470, + 411, + 481 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 470, + 411, + 481 + ], + "spans": [ + { + "bbox": [ + 302, + 470, + 411, + 481 + ], + "type": "text", + "content": "2 Baseline Method" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 302, + 490, + 526, + 719 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 490, + 526, + 719 + ], + "spans": [ + { + "bbox": [ + 302, + 490, + 526, + 719 + ], + "type": "text", + "content": "The goal of legal judgment prediction is to determine the committed charges given case facts. To harness LLMs for LJP, we adopt in-context learning (Brown et al., 2020) and use LLMs to generate the charges conditioned on prompts (Section 2.1). To enhance LLMs, we incorporate label candidates and demonstrations consisting of similar cases into prompts, which are acquired by an IR system (Section 2.2). This derives four settings of baseline solutions, namely zero-shot open questions, few-shot open questions, zero-shot multi-choice questions, and few-shot multi-choice questions. The multi-choice settings employ label candidates while few-shot settings include demonstrations, as shown in Figure 1. Finally, we introduce how to simulate IR systems with different capabilities to understand their effects (Section 2.3)." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 302, + 729, + 409, + 741 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 729, + 409, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 729, + 409, + 741 + ], + "type": "text", + "content": "2.1 LLM Prompting" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 302, + 746, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 746, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 746, + 525, + 772 + ], + "type": "text", + "content": "Prompt Design. 
A prompt begins with an instruction to illustrate the task followed by label" + } + ] + } + ], + "index": 19 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 67, + 729, + 290, + 760 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 729, + 290, + 760 + ], + "spans": [ + { + "bbox": [ + 67, + 729, + 290, + 760 + ], + "type": "text", + "content": "1It is not strict multi-choice questions. LLMs can generate correct answers even though ground-truth labels are absent in candidates." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 80, + 761, + 262, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 761, + 262, + 772 + ], + "spans": [ + { + "bbox": [ + 80, + 761, + 262, + 772 + ], + "type": "text", + "content": "After filtering less frequent (article, charge) pairs" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "text", + "content": "7338" + } + ] + } + ], + "index": 23 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 290, + 111 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 290, + 111 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 290, + 111 + ], + "type": "text", + "content": "candidates and task demonstrations in the form of input-output pairs. The templates of prompts are displayed in Appendix A.1." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 112, + 290, + 246 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 112, + 290, + 246 + ], + "spans": [ + { + "bbox": [ + 67, + 112, + 290, + 246 + ], + "type": "text", + "content": "Parsing. We adopt one automatic parsing function for all LLMs to map LLM outputs to predefined charge labels. 
No ad hoc heuristics are employed for a fair comparison. Specifically, we use the BM25 algorithm3 to measure text similarity between outputs and pre-defined charges and predict the most similar charges. BM25 is robust and yields comparable performances to neural similarity methods like text2vec4 in our pilot experiments." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 248, + 290, + 316 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 248, + 290, + 316 + ], + "spans": [ + { + "bbox": [ + 67, + 248, + 290, + 316 + ], + "type": "text", + "content": "Inference. Sampling is enabled during generation for consistent results, as inspired by Wang et al. (2022). Five outputs are sampled for each prompt with the temperature of 0.8. Their similarity scores of pre-defined labels are averaged." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 326, + 283, + 338 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 326, + 283, + 338 + ], + "spans": [ + { + "bbox": [ + 67, + 326, + 283, + 338 + ], + "type": "text", + "content": "2.2 IR System for Knowledge Incorporation" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 344, + 290, + 506 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 344, + 290, + 506 + ], + "spans": [ + { + "bbox": [ + 67, + 344, + 290, + 506 + ], + "type": "text", + "content": "IR systems are utilized to retrieve similar cases, commonly referenced by lawyers and judges, to inform their judgments. In addition to providing demonstrations, these similar cases can also aid in generating potential labels by incorporating the labels from the top similar cases. By employing these smaller sets of predefined charges, namely label candidates, complex open questions can be simplified into multiple-choice questions. This approach is effective in enhancing LM prompting (Ma et al., 2023), as including hundreds of charges directly in prompts is impractical." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 507, + 290, + 602 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 507, + 290, + 602 + ], + "spans": [ + { + "bbox": [ + 67, + 507, + 290, + 602 + ], + "type": "text", + "content": "Implementation of IR System. We use the BM25 algorithm to measure the semantic similarity between cases. Similar cases are retrieved from the training dataset. To guarantee that the demonstrations exemplify one of the multi-choice options, we exclude demonstrations with labels that are not among the candidate options5." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 613, + 214, + 625 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 613, + 214, + 625 + ], + "spans": [ + { + "bbox": [ + 67, + 613, + 214, + 625 + ], + "type": "text", + "content": "2.3 Simulation of IR Systems" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 631, + 290, + 711 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 631, + 290, + 711 + ], + "spans": [ + { + "bbox": [ + 67, + 631, + 290, + 711 + ], + "type": "text", + "content": "To investigate the effects of IR capabilities, we simulate a series of IR systems of different capabilities as measured by Precision@1" + }, + { + "bbox": [ + 67, + 631, + 290, + 711 + ], + "type": "inline_equation", + "content": "^{6}" + }, + { + "bbox": [ + 67, + 631, + 290, + 711 + ], + "type": "text", + "content": ". Then the top retrieved cases are used as demonstrations. We consider cases with identical charges to the query cases as true similar cases and vice versa." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 718, + 289, + 772 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 81, + 718, + 214, + 729 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 718, + 214, + 729 + ], + "spans": [ + { + "bbox": [ + 81, + 718, + 214, + 729 + ], + "type": "text", + "content": "3https://pypi.org/project/rank-bm25/" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 81, + 729, + 223, + 740 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 729, + 223, + 740 + ], + "spans": [ + { + "bbox": [ + 81, + 729, + 223, + 740 + ], + "type": "text", + "content": "4https://github.com/crownpku/text2vec" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 69, + 740, + 289, + 761 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 740, + 289, + 761 + ], + "spans": [ + { + "bbox": [ + 69, + 740, + 289, + 761 + ], + "type": "text", + "content": "5This condition is not violated for the top four similar cases without filtering." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 80, + 761, + 239, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 761, + 239, + 772 + ], + "spans": [ + { + "bbox": [ + 80, + 761, + 239, + 772 + ], + "type": "text", + "content": "The accuracy of the top one retrieved case." + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 301, + 71, + 526, + 232 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 71, + 526, + 232 + ], + "spans": [ + { + "bbox": [ + 301, + 71, + 526, + 232 + ], + "type": "text", + "content": "Realistic Simulation. We prioritize returning true similar cases for easy query cases, rather than assigning them at random. The query difficulty is measured by the Precision@10 of the BM25 retriever described in Section 2.2. 
The motivation is that queries with shallow linguistic features are more likely to get relevant retrieval results than complex or obscure queries. For a specific value (e.g., a%) of Precision@1 to be simulated, the top a% of easy test cases are assured to have a true similar case, while the rest are assigned false similar cases." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 245, + 426, + 259 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 245, + 426, + 259 + ], + "spans": [ + { + "bbox": [ + 302, + 245, + 426, + 259 + ], + "type": "text", + "content": "3 Experimental Setup" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 269, + 365, + 280 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 269, + 365, + 280 + ], + "spans": [ + { + "bbox": [ + 302, + 269, + 365, + 280 + ], + "type": "text", + "content": "3.1 Models" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 286, + 524, + 312 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 286, + 524, + 312 + ], + "spans": [ + { + "bbox": [ + 302, + 286, + 524, + 312 + ], + "type": "text", + "content": "Below is a concise introduction to the five LLMs to be evaluated." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 302, + 314, + 525, + 422 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 314, + 525, + 422 + ], + "spans": [ + { + "bbox": [ + 302, + 314, + 525, + 422 + ], + "type": "text", + "content": "GPT-4 (OpenAI, 2023) and ChatGPT are available from the OpenAI API, and the versions gpt-4-0314 and gpt-3.5-turbo-0301 are used. For technological details, ChatGPT is claimed to be a sibling model to InstructGPT (Ouyang et al., 2022) that is trained to follow instructions and align with human preferences with the RLHF algorithm (Christiano et al., 2017)." 
+ } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 302, + 423, + 524, + 490 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 423, + 524, + 490 + ], + "spans": [ + { + "bbox": [ + 302, + 423, + 524, + 490 + ], + "type": "text", + "content": "Vicuna-13B (Chiang et al., 2023) is a LLaMA model (Touvron et al., 2023) fine-tuned on 70K public user-shared conversations with ChatGPT. It can be viewed as learning distilled knowledge (Hinton et al., 2015) from ChatGPT." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 302, + 491, + 524, + 531 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 491, + 524, + 531 + ], + "spans": [ + { + "bbox": [ + 302, + 491, + 524, + 531 + ], + "type": "text", + "content": "ChatGLM-6B7 is a dialog language model based on the GLM (Du et al., 2022) architecture that supports English and Chinese." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 302, + 533, + 525, + 626 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 533, + 525, + 626 + ], + "spans": [ + { + "bbox": [ + 302, + 533, + 525, + 626 + ], + "type": "text", + "content": "BLOOMZ (Muennighoff et al., 2022) is an instruction fine-tuned BLOOM (Scao et al., 2022), a multilingual language model. We use the bloomz-7b1-mt version that is tuned for multilingual prompts. Unlike BLOOMZ, Vicuna and ChatGLM are mainly fine-tuned on conversational data." 
+ } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 302, + 639, + 458, + 651 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 639, + 458, + 651 + ], + "spans": [ + { + "bbox": [ + 302, + 639, + 458, + 651 + ], + "type": "text", + "content": "3.2 Dataset and Pre-processing" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 302, + 657, + 526, + 752 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 657, + 526, + 752 + ], + "spans": [ + { + "bbox": [ + 302, + 657, + 526, + 752 + ], + "type": "text", + "content": "The Chinese LJP dataset, CAIL (Xiao et al., 2018), is used in our experiments. Each sample consists of the case facts and the committed charge as the label. As the original dataset is very large (~100K for training and ~20K for test), we randomly sample a balanced small test set from the original test set. Five cases are sampled for each charge, accounting" + } + ] + } + ], + "index": 22 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 315, + 761, + 476, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 761, + 476, + 772 + ], + "spans": [ + { + "bbox": [ + 315, + 761, + 476, + 772 + ], + "type": "text", + "content": "7https://github.com/THUDM/ChatGLM-6B" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "text", + "content": "7339" + } + ] + } + ], + "index": 24 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 83, + 68, + 276, + 140 + ], + "blocks": [ + { + "bbox": [ + 83, + 68, + 276, + 140 + ], + "lines": [ + { + "bbox": [ + 83, + 68, + 276, + 140 + ], + "spans": [ + { + "bbox": [ + 83, + 68, + 276, + 140 + ], + "type": "table", + "html": "
<table><tr><td>Tokenizer</td><td>Median</td><td>&lt;=500</td><td>&lt;=1000</td></tr>
<tr><td>ChatGPT</td><td>396.5</td><td>68.75</td><td>92.32</td></tr>
<tr><td>Vicuna</td><td>496.0</td><td>50.89</td><td>86.96</td></tr>
<tr><td>ChatGLM</td><td>206.5</td><td>91.07</td><td>98.57</td></tr>
<tr><td>BLOOMZ</td><td>210.5</td><td>90.54</td><td>98.93</td></tr></table>
", + "image_path": "534e04af4b4c52ed8454eb1cad6294e2caec99b499fba08b07118ed2be61461d.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 148, + 291, + 185 + ], + "lines": [ + { + "bbox": [ + 67, + 148, + 291, + 185 + ], + "spans": [ + { + "bbox": [ + 67, + 148, + 291, + 185 + ], + "type": "text", + "content": "Table 1: Statistics of the number of tokens across tokenizers. The last two columns present the ratios of test samples with token counts below the specified values." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 67, + 209, + 290, + 290 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 209, + 290, + 290 + ], + "spans": [ + { + "bbox": [ + 67, + 209, + 290, + 290 + ], + "type": "text", + "content": "for 560 test cases in total for 112 charges. Similarly, we also sample the training and validation sets with 10 cases per charge. The training set is used to retrieve similar cases (Section 2.3), while the validation set is used to determine the optimal " + }, + { + "bbox": [ + 67, + 209, + 290, + 290 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 67, + 209, + 290, + 290 + ], + "type": "text", + "content": " of the kNN algorithm." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 292, + 291, + 454 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 292, + 291, + 454 + ], + "spans": [ + { + "bbox": [ + 69, + 292, + 291, + 454 + ], + "type": "text", + "content": "Truncation. Since some cases have very long descriptions, we truncate the case facts of demonstrations to 500 tokens and those of test samples to 1000 tokens. It is worth noting that the text is tokenized by the tokenizer of each model before truncation for a fair comparison. Recently, Petrov et al. (2023) address the issue that a tokenizer can lead to different performances of different languages. 
This suggests that the performance on a particular language can also be influenced by tokenizers from various models with varying language encoding efficiency." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 456, + 291, + 565 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 456, + 291, + 565 + ], + "spans": [ + { + "bbox": [ + 67, + 456, + 291, + 565 + ], + "type": "text", + "content": "Table 1 shows the statistics of the number of tokens processed by different tokenizers8. The most efficient tokenizers for Chinese are those of ChatGLM and BLOOMZ, indicated by the medians of token numbers. In contrast, the tokenizer of ChatGPT produces " + }, + { + "bbox": [ + 67, + 456, + 291, + 565 + ], + "type": "inline_equation", + "content": "2 \\times" + }, + { + "bbox": [ + 67, + 456, + 291, + 565 + ], + "type": "text", + "content": " tokens and that of Vicuna produces " + }, + { + "bbox": [ + 67, + 456, + 291, + 565 + ], + "type": "inline_equation", + "content": "2.5 \\times" + }, + { + "bbox": [ + 67, + 456, + 291, + 565 + ], + "type": "text", + "content": " tokens. The truncation length is proper to accommodate most samples." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 580, + 246, + 595 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 580, + 246, + 595 + ], + "spans": [ + { + "bbox": [ + 67, + 580, + 246, + 595 + ], + "type": "text", + "content": "4 LLM vs. LLM with IR System" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 605, + 291, + 728 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 605, + 291, + 728 + ], + "spans": [ + { + "bbox": [ + 67, + 605, + 291, + 728 + ], + "type": "text", + "content": "We initially present the overall results, highlighting the importance of label candidates and similar cases, and conduct a comparative analysis of the models. 
Subsequently, we investigate the relationship between label candidates and self-consistency to unveil their actual effects on expertise reasoning. Additionally, we perform an ablation study by replacing similar cases with fixed cases as demonstrations to further understand their impact." + } + ] + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 310, + 74, + 518, + 223 + ], + "blocks": [ + { + "bbox": [ + 310, + 74, + 518, + 223 + ], + "lines": [ + { + "bbox": [ + 310, + 74, + 518, + 223 + ], + "spans": [ + { + "bbox": [ + 310, + 74, + 518, + 223 + ], + "type": "image", + "image_path": "620fe7dc4baab808679f7c64609c21920fadcfb6307c9e362caf890ff9646183.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 236, + 526, + 320 + ], + "lines": [ + { + "bbox": [ + 302, + 236, + 526, + 320 + ], + "spans": [ + { + "bbox": [ + 302, + 236, + 526, + 320 + ], + "type": "text", + "content": "Figure 2: The macro comparison between the four settings. “+Label” refers to zero-shot multi-choice questions; “+Sim Case” refers to few-shot open questions and “+Label +Sim Case” refers to few-shot multi-choice questions. Multiple points for a model in the last two settings refer to runs with different numbers of demonstrations." 
+ } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 309, + 337, + 420, + 475 + ], + "blocks": [ + { + "bbox": [ + 309, + 337, + 420, + 475 + ], + "lines": [ + { + "bbox": [ + 309, + 337, + 420, + 475 + ], + "spans": [ + { + "bbox": [ + 309, + 337, + 420, + 475 + ], + "type": "image", + "image_path": "058621cf7b920696a5ded18ebf7ce22d573e13f2fbf3fb20ad1dab34e7e7a260.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 488, + 526, + 513 + ], + "lines": [ + { + "bbox": [ + 302, + 488, + 526, + 513 + ], + "spans": [ + { + "bbox": [ + 302, + 488, + 526, + 513 + ], + "type": "text", + "content": "Figure 3: Compare the models under each setting. Few-shot performances are averaged among 1-shot to 4-shot." + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 422, + 336, + 518, + 475 + ], + "blocks": [ + { + "bbox": [ + 422, + 336, + 518, + 475 + ], + "lines": [ + { + "bbox": [ + 422, + 336, + 518, + 475 + ], + "spans": [ + { + "bbox": [ + 422, + 336, + 518, + 475 + ], + "type": "image", + "image_path": "927a75a027158091b5c7362bb8583d03d067384c89367a300592ad4e44d85ee6.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 536, + 404, + 548 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 536, + 404, + 548 + ], + "spans": [ + { + "bbox": [ + 302, + 536, + 404, + 548 + ], + "type": "text", + "content": "4.1 Overall Results" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 555, + 525, + 595 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 555, + 525, + 595 + ], + "spans": [ + { + "bbox": [ + 302, + 555, + 525, + 595 + ], + "type": "text", + "content": "The macro comparison between the four settings is shown in Figure 2, where each 
point represents the performance of one specific run of one model." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 301, + 597, + 526, + 744 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 597, + 526, + 744 + ], + "spans": [ + { + "bbox": [ + 301, + 597, + 526, + 744 + ], + "type": "text", + "content": "Significance of label candidates and similar cases. In comparison to the zero-shot open question setting where LLMs work alone, the inclusion of label candidates, similar cases, or both demonstrates noteworthy enhancements. This highlights the effectiveness of our baseline solutions that leverage IR systems to expand the capabilities of LLMs in legal domains. These findings align with previous research that has also recognized the significance of the two components (Ma et al., 2023; Liu et al., 2021)." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 746, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 746, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 746, + 525, + 772 + ], + "type": "text", + "content": "The effects of label candidates and similar cases differ slightly in terms of performance mean and" + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 67, + 740, + 290, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 740, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 740, + 290, + 772 + ], + "type": "text", + "content": "GPT-4 and ChatGPT have the same results. 
Following OpenAI's guidance, we use the python package tiktoken for tokenization" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 286, + 780, + 310, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 310, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 310, + 791 + ], + "type": "text", + "content": "7340" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 293, + 262 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 293, + 262 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 293, + 262 + ], + "type": "text", + "content": "variance. Label candidates contribute to a higher mean performance, while similar cases introduce greater variance. Examining the model performances in the third setting (+Sim Case) displayed in Figure 2, GPT-4 and ChatGPT exhibit more significant improvements from similar cases compared to their smaller counterparts. They also gain more benefit from similar cases than from label candidates. This observation can be attributed to the varying difficulty levels of knowledge utilization. While the knowledge within label candidates is readily accessible and straightforward, leveraging similar cases requires stronger language understanding and few-shot learning abilities." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 267, + 291, + 376 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 267, + 291, + 376 + ], + "spans": [ + { + "bbox": [ + 67, + 267, + 291, + 376 + ], + "type": "text", + "content": "Furthermore, the coexistence of label candidates and similar cases further enhances the performance of GPT-4 and ChatGPT, but it diminishes the performance of Vicuna, ChatGLM, and BLOOMZ. This suggests that smaller LLMs may encounter challenges in effectively managing knowledge in multiple forms simultaneously, leading to confusion." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 381, + 291, + 437 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 381, + 291, + 437 + ], + "spans": [ + { + "bbox": [ + 67, + 381, + 291, + 437 + ], + "type": "text", + "content": "Model comparison. The performances of the models under zero-shot and few-shot prompting are shown in Figure 3, where few-shot performances are averaged over 1-shot to 4-shot." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 442, + 291, + 659 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 442, + 291, + 659 + ], + "spans": [ + { + "bbox": [ + 67, + 442, + 291, + 659 + ], + "type": "text", + "content": "The zero-shot setting emphasizes the ability to understand instructions. When only instructions are available, BLOOMZ performs better than ChatGPT, indicating a superior multilingual instruction-following ability. This result is reasonable as BLOOMZ is the only smaller LLM that is fine-tuned on multilingual instructions. Once provided with explicit domain knowledge, ChatGPT outperforms all smaller LLMs. The case is the same for BLOOMZ and ChatGLM, where ChatGLM overtakes BLOOMZ with knowledge of label candidates. BLOOMZ performs worst when prompted with two forms of knowledge, indicating that BLOOMZ is not very robust to prompts. Among the three smaller LLMs, ChatGLM is the most robust to various forms of knowledge." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 665, + 291, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 665, + 291, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 665, + 291, + 773 + ], + "type": "text", + "content": "The significant effects of label candidates and similar cases can be explained as they activate LLM's memory of relevant domain knowledge. 
This view is supported by two pieces of evidence: the relationship between label candidates and self-consistency (Section 4.2), and the negligible effect of irrelevant cases as fixed demonstrations (Section 4.3)." + } + ] + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 313, + 74, + 505, + 199 + ], + "blocks": [ + { + "bbox": [ + 313, + 74, + 505, + 199 + ], + "lines": [ + { + "bbox": [ + 313, + 74, + 505, + 199 + ], + "spans": [ + { + "bbox": [ + 313, + 74, + 505, + 199 + ], + "type": "image", + "image_path": "797c37324a6cae109c7d203b9697dd318c533ebb16e08e086e16c03012b985f5.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 204, + 526, + 253 + ], + "lines": [ + { + "bbox": [ + 302, + 204, + 526, + 253 + ], + "spans": [ + { + "bbox": [ + 302, + 204, + 526, + 253 + ], + "type": "text", + "content": "Figure 4: Changes in performance and self-consistency after adding label candidates. The change of each model is illustrated by an arrow pointing from the open question setting to the multi-choice setting." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 275, + 480, + 302 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 275, + 480, + 302 + ], + "spans": [ + { + "bbox": [ + 302, + 275, + 480, + 302 + ], + "type": "text", + "content": "4.2 Label Candidates Enhance Self-consistency and Confidence" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 301, + 306, + 526, + 400 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 306, + 526, + 400 + ], + "spans": [ + { + "bbox": [ + 301, + 306, + 526, + 400 + ], + "type": "text", + "content": "To further understand the effect of label candidates, we propose a metric to measure the self-consistency of LLMs, calculated as the count of the majority prediction among sampled outputs. 
Consistent outputs indicate a high level of confidence in LLMs, which is often associated with a better grasp of knowledge (Jiang et al., 2021, 2023)." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 402, + 526, + 645 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 402, + 526, + 645 + ], + "spans": [ + { + "bbox": [ + 302, + 402, + 526, + 645 + ], + "type": "text", + "content": "The changes in performance and self-consistency after introducing label candidates are shown in Figure 4 as the arrows. We observe that the incorporation of label candidates leads to more consistent outputs (8 of 10 cases) and higher confidence in LLMs, except for zero-shot GPT-4 (a slight decrease) and few-shot BLOOMZ. In the zero-shot setting, label candidates significantly boost LLM performances. We postulate that label candidates help by eliciting pre-stored domain knowledge with concise charge names. Moreover, the self-consistency also correlates with model performances (7 of 10 cases). Such a correlation is also observed in other tasks like question answering (Jiang et al., 2021). It is worth noting that label candidates decrease both self-consistency and performance of few-shot prompted BLOOMZ, which also aligns with the correlation." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 656, + 524, + 682 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 656, + 524, + 682 + ], + "spans": [ + { + "bbox": [ + 302, + 656, + 524, + 682 + ], + "type": "text", + "content": "4.3 Domain Knowledge Is More Critical Than Task Illustration" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 688, + 526, + 743 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 688, + 526, + 743 + ], + "spans": [ + { + "bbox": [ + 302, + 688, + 526, + 743 + ], + "type": "text", + "content": "There is a possible argument that similar demonstrations can help LLMs understand instructions and tasks. 
To disentangle their effects on task illustration and provision of domain knowledge, we" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 751, + 525, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 525, + 772 + ], + "type": "text", + "content": "For example, if the five sampled outputs are mapped to labels of (a,a,a,b,c), the consistency score is 3." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 286, + 780, + 308, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 308, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 308, + 791 + ], + "type": "text", + "content": "7341" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 73, + 73, + 283, + 191 + ], + "blocks": [ + { + "bbox": [ + 73, + 73, + 283, + 191 + ], + "lines": [ + { + "bbox": [ + 73, + 73, + 283, + 191 + ], + "spans": [ + { + "bbox": [ + 73, + 73, + 283, + 191 + ], + "type": "image", + "image_path": "eea72bf54c5ffa53aceb9880908d9a5af00872456134b4818d6da649d2fe31a9.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 206, + 291, + 289 + ], + "lines": [ + { + "bbox": [ + 67, + 206, + 291, + 289 + ], + "spans": [ + { + "bbox": [ + 67, + 206, + 291, + 289 + ], + "type": "text", + "content": "Figure 5: The effects of fixed (irrelevant) and similar cases as demonstrations. Divided by the baseline setting of zero-shot open questions, the left part refers to fixed demonstrations with increasing numbers of demonstrations, while the right part refers to similar demonstrations. The shadow area represents the range of standard deviation." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 313, + 290, + 380 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 313, + 290, + 380 + ], + "spans": [ + { + "bbox": [ + 67, + 313, + 290, + 380 + ], + "type": "text", + "content": "experiment with irrelevant demonstrations fixed for all test samples. We manually select two common cases with frequent charges in the original dataset as the fixed demonstrations. The 1-shot performance was averaged on the two demonstrations." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 383, + 291, + 531 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 383, + 291, + 531 + ], + "spans": [ + { + "bbox": [ + 67, + 383, + 291, + 531 + ], + "type": "text", + "content": "We compare the effects of fixed and similar demonstrations with the baseline setting of zero-shot open questions in Figure 5. The change of performance from center to left demonstrates that fixed demonstrations hardly benefit LLMs and sometimes harm the performance (e.g., ChatGLM). This indicates that LLMs can basically understand instructions and do not need general demonstrations for task clarification, implying that the main challenge of expertise reasoning is to recall domain knowledge instead of understanding a specific task." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 533, + 291, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 533, + 291, + 666 + ], + "spans": [ + { + "bbox": [ + 67, + 533, + 291, + 666 + ], + "type": "text", + "content": "We inspect the notable performance drop of ChatGLM resulting from fixed demonstrations. We find that ChatGLM tends to analyze the cases of both demonstrations and test samples and then answer with both of their charges. Its wordy style seems to result from the fine-tuning dialog corpus where an assistant LLM is supposed to provide rich information. 
In contrast, similar cases seem to encourage more concise outputs following the format of demonstrations." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 681, + 260, + 708 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 681, + 260, + 708 + ], + "spans": [ + { + "bbox": [ + 67, + 681, + 260, + 708 + ], + "type": "text", + "content": "5 Paradox of Information Retrieval System" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 719, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 719, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 719, + 291, + 772 + ], + "type": "text", + "content": "The significance of similar demonstrations illustrated in Section 4.3 has motivated research focusing on prompting-oriented IR systems (Rubin et al., 2021; Sun et al., 2023) to retrieve high qual" + } + ] + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 310, + 74, + 518, + 174 + ], + "blocks": [ + { + "bbox": [ + 310, + 74, + 518, + 174 + ], + "lines": [ + { + "bbox": [ + 310, + 74, + 518, + 174 + ], + "spans": [ + { + "bbox": [ + 310, + 74, + 518, + 174 + ], + "type": "image", + "image_path": "0474ae253fa7623cb6310b53e5d59e27aaf151b2cd9ad07d38f2f015a7f64a32.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 188, + 526, + 273 + ], + "lines": [ + { + "bbox": [ + 302, + 188, + 526, + 273 + ], + "spans": [ + { + "bbox": [ + 302, + 188, + 526, + 273 + ], + "type": "text", + "content": "Figure 6: The performance of ChatGPT coordinated with a series of simulated IR systems with varying capabilities as measured by Precision@1. The vertical blue line represents the threshold of IR capability at which IR systems overtake ChatGPT. The performance of ChatGPT in the real setting (1-shot open questions) is indicated by the red plus sign." 
+ } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "bbox": [ + 301, + 294, + 525, + 401 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 294, + 525, + 401 + ], + "spans": [ + { + "bbox": [ + 301, + 294, + 525, + 401 + ], + "type": "text", + "content": "ity demonstrations. However, we raise an intuitive question: Do LLMs gain substantial improvement from IR systems compared to the kNN baseline that harnesses IR systems for classification tasks? The question is inspired by our observation that the BM25 retriever achieves " + }, + { + "bbox": [ + 301, + 294, + 525, + 401 + ], + "type": "inline_equation", + "content": "48.03\\%" + }, + { + "bbox": [ + 301, + 294, + 525, + 401 + ], + "type": "text", + "content": " of Precision@1 and " + }, + { + "bbox": [ + 301, + 294, + 525, + 401 + ], + "type": "inline_equation", + "content": "57.68\\%" + }, + { + "bbox": [ + 301, + 294, + 525, + 401 + ], + "type": "text", + "content": " prediction accuracy by majority vote of top " + }, + { + "bbox": [ + 301, + 294, + 525, + 401 + ], + "type": "inline_equation", + "content": "k = 17" + }, + { + "bbox": [ + 301, + 294, + 525, + 401 + ], + "type": "text", + "content": " retrieved similar cases." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 301, + 402, + 525, + 495 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 402, + 525, + 495 + ], + "spans": [ + { + "bbox": [ + 301, + 402, + 525, + 495 + ], + "type": "text", + "content": "This observation suggests a paradoxical scenario wherein an IR system outperforms the combination of LLM and IR, with the LLM taking on the leading role and the IR serving as a supporting role. In such a scenario, the LLM becomes redundant due to its failure to fully utilize the informative retrieved documents." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 301, + 497, + 525, + 603 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 497, + 525, + 603 + ], + "spans": [ + { + "bbox": [ + 301, + 497, + 525, + 603 + ], + "type": "text", + "content": "To investigate the paradox, instead of experimenting with different IR systems, we manipulate the BM25 retriever to simulate a series of IR systems with different capabilities measured by Precision@1 as described by Section 2.3. We take a case study of ChatGPT, whose 1-shot performance under different IR systems (denoted as Precision@1) is shown in Figure 6." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 605, + 526, + 725 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 605, + 526, + 725 + ], + "spans": [ + { + "bbox": [ + 302, + 605, + 526, + 725 + ], + "type": "text", + "content": "Results Although the performance of ChatGPT enhanced by IR systems improves with IR capability, it will eventually underperform the IR system once the IR capability surpasses a certain threshold. In the ideal situation where true similar cases are always retrieved, ChatGPT is unable to attain " + }, + { + "bbox": [ + 302, + 605, + 526, + 725 + ], + "type": "inline_equation", + "content": "100\\%" + }, + { + "bbox": [ + 302, + 605, + 526, + 725 + ], + "type": "text", + "content": " accuracy and lags significantly behind the optimal IR system. According to Appendix A.4, all smaller LLMs are not comparable to the BM25 retriever." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 727, + 525, + 754 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 727, + 525, + 754 + ], + "spans": [ + { + "bbox": [ + 302, + 727, + 525, + 754 + ], + "type": "text", + "content": "Discussion The findings demonstrate that LLMs face challenges in effectively leveraging informa" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 313, + 760, + 498, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 760, + 498, + 772 + ], + "spans": [ + { + "bbox": [ + 313, + 760, + 498, + 772 + ], + "type": "text", + "content": "It is identical to the precision of " + }, + { + "bbox": [ + 313, + 760, + 498, + 772 + ], + "type": "inline_equation", + "content": "k\\mathrm{NN}" + }, + { + "bbox": [ + 313, + 760, + 498, + 772 + ], + "type": "text", + "content": " with " + }, + { + "bbox": [ + 313, + 760, + 498, + 772 + ], + "type": "inline_equation", + "content": "k = 1" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "text", + "content": "7342" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 73, + 73, + 286, + 191 + ], + "blocks": [ + { + "bbox": [ + 73, + 73, + 286, + 191 + ], + "lines": [ + { + "bbox": [ + 73, + 73, + 286, + 191 + ], + "spans": [ + { + "bbox": [ + 73, + 73, + 286, + 191 + ], + "type": "image", + "image_path": "172f2228b736de67a5247c124932299a26a9018979977cbb67db93e147fb597c.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 204, + 291, + 227 + ], + "lines": [ + { + "bbox": [ + 67, + 204, + 291, + 227 + ], + "spans": [ + { + "bbox": [ + 67, + 204, + 291, 
+ 227 + ], + "type": "text", + "content": "Figure 7: Performance vs. the number of similar demonstrations of the five LLMs." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 250, + 291, + 425 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 250, + 291, + 425 + ], + "spans": [ + { + "bbox": [ + 67, + 250, + 291, + 425 + ], + "type": "text", + "content": "tive retrieved documents. This underscores the need for significant research efforts to enhance the synergy between auto-regressive language models and retrieval by conditioning model outputs more on retrieved documents. Previous work has explored the augmentation of LLMs with retrieval at both the pre-training and fine-tuning stages (Borgeaud et al., 2022; Wang et al., 2023). Moreover, the marginal and inadequate improvement with retrieval indicates the limited legal reasoning ability of existing general LLMs. There is a need for future efforts to enhance domain-specific reasoning abilities of pre-trained foundation models." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 436, + 167, + 449 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 436, + 167, + 449 + ], + "spans": [ + { + "bbox": [ + 67, + 436, + 167, + 449 + ], + "type": "text", + "content": "6 Ablation Study" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 458, + 274, + 484 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 458, + 274, + 484 + ], + "spans": [ + { + "bbox": [ + 67, + 458, + 274, + 484 + ], + "type": "text", + "content": "6.1 More Demonstrations Are Not Always Better" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 489, + 291, + 690 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 489, + 291, + 690 + ], + "spans": [ + { + "bbox": [ + 67, + 489, + 291, + 690 + ], + "type": "text", + "content": "The impact of the number of similar demonstrations " + }, + { + "bbox": [ + 67, + 489, + 291, + 690 + ], + "type": "inline_equation", + "content": "(n)" + }, + { + "bbox": [ + 67, + 489, + 291, + 690 + ], + "type": "text", + "content": " is depicted in Figure 7. It is evident that GPT-4 and ChatGPT demonstrate proficiency in handling larger numbers of demonstrations, leading to enhanced performance, whereas Vicuna, ChatGLM and BLOOMZ experience varying degrees of performance degradation with increasing numbers. Notably, ChatGLM displays the least sensitivity to " + }, + { + "bbox": [ + 67, + 489, + 291, + 690 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 67, + 489, + 291, + 690 + ], + "type": "text", + "content": ". Furthermore, even ChatGPT's performance declines when " + }, + { + "bbox": [ + 67, + 489, + 291, + 690 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 67, + 489, + 291, + 690 + ], + "type": "text", + "content": " is increased from three to four. 
The performance improvement resulting from larger values of " + }, + { + "bbox": [ + 67, + 489, + 291, + 690 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 67, + 489, + 291, + 690 + ], + "type": "text", + "content": " can be attributed to the increased recall of true similar cases. Conversely, the decline in performance can be attributed to the noise introduced by more false similar cases." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 692, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 692, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 692, + 291, + 772 + ], + "type": "text", + "content": "Performance variations. The changes in performance after including an additional demonstration are visualized using heat maps in Figure 8. For each model, the three heat maps stand for the variations from k-shot to " + }, + { + "bbox": [ + 67, + 692, + 291, + 772 + ], + "type": "inline_equation", + "content": "(\\mathrm{k} + 1)" + }, + { + "bbox": [ + 67, + 692, + 291, + 772 + ], + "type": "text", + "content": " -shot, which are denoted below. For each heat map, the two rows indicate" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 71, + 526, + 192 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 192 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 192 + ], + "type": "text", + "content": "the inclusion of a new demonstration with true (T) or false (F) similar cases, while the columns indicate the combinations of existing demonstrations. Take the second heat map as an example. The cell in the column of (F, T) and the row of (T) displays the performance variation between 2-shot of (F, T) demonstrations and 3-shot of (F, T, T) demonstrations. Purple represents performance improvement, while green represents performance decline."
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 194, + 526, + 438 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 194, + 526, + 438 + ], + "spans": [ + { + "bbox": [ + 302, + 194, + 526, + 438 + ], + "type": "text", + "content": "For ChatGPT and BLOOMZ, the second rows of their three heat maps are mainly in purple, indicating significant enhancements resulting from the inclusion of true similar cases. However, the first rows of BLOOMZ display a deeper green color than those of ChatGPT, suggesting that BLOOMZ experiences a greater degree of performance decline caused by the inclusion of false similar cases. These findings indicate different sensitivities to false similar demonstrations. Powerful language models like GPT-4 and ChatGPT exhibit robustness to noise in false similar cases, allowing them to remain focused on relevant information in true similar cases. In contrast, weaker LLMs are susceptible to the influence of such noise. Overall, ChatGPT performs better when provided with more similar demonstrations, whereas BLOOMZ demonstrates the opposite, as shown in Figure 7." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 439, + 525, + 507 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 439, + 525, + 507 + ], + "spans": [ + { + "bbox": [ + 302, + 439, + 525, + 507 + ], + "type": "text", + "content": "The conclusion is that increased numbers of demonstrations have both positive and negative implications for expertise reasoning. However, LLMs could potentially gain from additional demonstrations in tasks that require clear task illustration."
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 521, + 502, + 547 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 521, + 502, + 547 + ], + "spans": [ + { + "bbox": [ + 302, + 521, + 502, + 547 + ], + "type": "text", + "content": "6.2 The Impact of Absent Ground Truth Labels" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 301, + 555, + 525, + 703 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 555, + 525, + 703 + ], + "spans": [ + { + "bbox": [ + 301, + 555, + 525, + 703 + ], + "type": "text", + "content": "We manually incorporate ground-truth labels into label candidates in cases where they are absent, which may occur due to the limited recall capability of the IR system described in Section 2.2. The test samples are categorized into two groups, namely \"Easy\" and \"Hard\", based on the retrieval of their ground truth labels by the IR system. The original performance of the two groups and the performance of the \"Hard\" group with modified prompts to include ground truth labels, namely \"Hard+GT\", are displayed in Figure 9." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 706, + 526, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 706, + 526, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 706, + 526, + 772 + ], + "type": "text", + "content": "The performance gaps between the \"Easy\" and \"Hard+GT\" groups suggest that challenging samples for IR systems are also difficult for LLMs. 
However, this gap is insignificant for the powerful GPT-4, which perceives them as equally challeng" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "text", + "content": "7343" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 107, + 71, + 496, + 191 + ], + "blocks": [ + { + "bbox": [ + 107, + 71, + 496, + 191 + ], + "lines": [ + { + "bbox": [ + 107, + 71, + 496, + 191 + ], + "spans": [ + { + "bbox": [ + 107, + 71, + 496, + 191 + ], + "type": "image", + "image_path": "fceac87db227ad2a57df6f7e774a27f1bdaa0fa61b7ab09823a4b90fdef37bc8.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 202, + 526, + 242 + ], + "lines": [ + { + "bbox": [ + 67, + 202, + 526, + 242 + ], + "spans": [ + { + "bbox": [ + 67, + 202, + 526, + 242 + ], + "type": "text", + "content": "Figure 8: Heat maps of performance variations resulting from the inclusion of an additional demonstration. \"T\" corresponds to demonstrations with true similar cases, while \"F\" represents those with false similar cases. Each row represents the included new demonstration, while each column indicates the status of existing demonstrations."
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 73, + 262, + 285, + 364 + ], + "blocks": [ + { + "bbox": [ + 73, + 262, + 285, + 364 + ], + "lines": [ + { + "bbox": [ + 73, + 262, + 285, + 364 + ], + "spans": [ + { + "bbox": [ + 73, + 262, + 285, + 364 + ], + "type": "image", + "image_path": "37506c50c53710f25935e164f14860b2a53239585ed1d5a3b8f8ac353fd38898.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 377, + 291, + 426 + ], + "lines": [ + { + "bbox": [ + 67, + 377, + 291, + 426 + ], + "spans": [ + { + "bbox": [ + 67, + 377, + 291, + 426 + ], + "type": "text", + "content": "Figure 9: The performance of \"Easy\" and \"Hard\" samples under the setting of zero-shot multi-choice questions. \"Hard+GT\" refers to the improvement from including the absent ground truth labels in the label candidates." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 451, + 291, + 547 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 451, + 291, + 547 + ], + "spans": [ + { + "bbox": [ + 67, + 451, + 291, + 547 + ], + "type": "text", + "content": "ing. The improvement of \"Hard+GT\" compared to \"Hard\" is notable in GPT-4, ChatGPT and ChatGLM but inconspicuous in Vicuna, which has inferior competency in the law. Considering the relatively small size of the \"Hard\" group (79/560), the absence of ground truth labels does not have a significant impact, especially for weaker LLMs."
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 562, + 235, + 575 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 562, + 235, + 575 + ], + "spans": [ + { + "bbox": [ + 67, + 562, + 235, + 575 + ], + "type": "text", + "content": "6.3 Incorporation of Law Articles" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 583, + 291, + 745 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 583, + 291, + 745 + ], + "spans": [ + { + "bbox": [ + 67, + 583, + 291, + 745 + ], + "type": "text", + "content": "We examine the effect of incorporating legal articles that explicitly define the charges into prompts. For each charge retrieved by the IR system11, ChatGPT is required to determine whether the defendant is guilty for the particular charge by answering with a yes or no. We find that " + }, + { + "bbox": [ + 67, + 583, + 291, + 745 + ], + "type": "inline_equation", + "content": "94.46\\%" + }, + { + "bbox": [ + 67, + 583, + 291, + 745 + ], + "type": "text", + "content": " of the ground truth charges are accurately detected, while only " + }, + { + "bbox": [ + 67, + 583, + 291, + 745 + ], + "type": "inline_equation", + "content": "27.31\\%" + }, + { + "bbox": [ + 67, + 583, + 291, + 745 + ], + "type": "text", + "content": " of the detected charges are correct. The high recall and low precision indicate a substantial difference between ChatGPT and legal experts in the ability to distinguish charges and make precise judgments." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 303, + 260, + 379, + 272 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 260, + 379, + 272 + ], + "spans": [ + { + "bbox": [ + 303, + 260, + 379, + 272 + ], + "type": "text", + "content": "7 Discussion" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 281, + 526, + 389 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 281, + 526, + 389 + ], + "spans": [ + { + "bbox": [ + 302, + 281, + 526, + 389 + ], + "type": "text", + "content": "We compare the LLMs with supervised baselines. We fine-tune BERT (Devlin et al., 2018) on the same training set and achieve a comparable accuracy of " + }, + { + "bbox": [ + 302, + 281, + 526, + 389 + ], + "type": "inline_equation", + "content": "68\\%" + }, + { + "bbox": [ + 302, + 281, + 526, + 389 + ], + "type": "text", + "content": " to ChatGPT but lower than GPT-4. Since LLMs are not fine-tuned on the specific LJP task, this result highlights the remarkable superiority of LLMs in acquiring significant knowledge and leveraging transfer learning Raffel et al. (2020)." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 390, + 525, + 523 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 390, + 525, + 523 + ], + "spans": [ + { + "bbox": [ + 302, + 390, + 525, + 523 + ], + "type": "text", + "content": "However, we observe that BERT's performance improves to " + }, + { + "bbox": [ + 302, + 390, + 525, + 523 + ], + "type": "inline_equation", + "content": "89\\%" + }, + { + "bbox": [ + 302, + 390, + 525, + 523 + ], + "type": "text", + "content": " when trained with the original training set (" + }, + { + "bbox": [ + 302, + 390, + 525, + 523 + ], + "type": "inline_equation", + "content": "\\sim 10\\mathrm{K}" + }, + { + "bbox": [ + 302, + 390, + 525, + 523 + ], + "type": "text", + "content": "). 
We find that certain knowledge is present in shallow features, which can be easily learned with supervision. These superficial features can result in biased supervised models. Fortunately, unsupervised pre-training objectives make LLMs more robust and less vulnerable to this issue. This depicts a promising future for NLP applications in various domains." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 535, + 381, + 548 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 535, + 381, + 548 + ], + "spans": [ + { + "bbox": [ + 302, + 535, + 381, + 548 + ], + "type": "text", + "content": "8 Conclusion" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 556, + 526, + 650 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 556, + 526, + 650 + ], + "spans": [ + { + "bbox": [ + 302, + 556, + 526, + 650 + ], + "type": "text", + "content": "To address the deficiency in evaluating the competency of LLMs in the field of law, we focused on the task of legal judgment prediction and devised four settings to facilitate a thorough evaluation that encompassed both open and multiple-choice questions and incorporated similar cases to aid in the decision-making process." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 651, + 527, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 651, + 527, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 651, + 527, + 772 + ], + "type": "text", + "content": "The evaluation results revealed different behaviors among the prominent LLMs, namely GPT-4 and ChatGPT, compared to their smaller counterparts. Both GPT-4 and ChatGPT exhibited remarkable proficiency in effectively leveraging domain knowledge in various formats. Among the smaller LLMs, ChatGLM displayed greater robustness, while BLOOMZ showcased superior zero-shot ability." 
+ } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 78, + 760, + 229, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 78, + 760, + 229, + 772 + ], + "spans": [ + { + "bbox": [ + 78, + 760, + 229, + 772 + ], + "type": "text", + "content": "we also include the ground truth charge" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "text", + "content": "7344" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 291, + 153 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 291, + 153 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 291, + 153 + ], + "type": "text", + "content": "We presented an intriguing paradox wherein LLMs could become redundant in the presence of a powerful IR system. When improving IR systems to benefit LLMs, it is crucial for researchers to acknowledge this paradoxical scenario and prevent great disparity between LLMs and IR systems." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 167, + 131, + 179 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 167, + 131, + 179 + ], + "spans": [ + { + "bbox": [ + 67, + 167, + 131, + 179 + ], + "type": "text", + "content": "Limitations" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 192, + 291, + 299 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 192, + 291, + 299 + ], + "spans": [ + { + "bbox": [ + 67, + 192, + 291, + 299 + ], + "type": "text", + "content": "One limitation of this paper is the use of the closed-source GPT-4 and ChatGPT whose availability depends on the commercial company OpenAI. 
According to OpenAI, the ChatGPT and GPT-4 versions used in this paper, namely gpt-3.5-turbo-0301 and gpt-4-0314, will be deprecated and not available after September 13th, 2023." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 301, + 291, + 396 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 301, + 291, + 396 + ], + "spans": [ + { + "bbox": [ + 67, + 301, + 291, + 396 + ], + "type": "text", + "content": "Another limitation pertains to the selection of LLMs. Due to the rapid emergence of new LLMs, we are not able to include all of them with the constraint of limited time. Instead of more models, we focus more on designing comprehensive evaluation settings and conducting insightful analyses to shed light on other domains." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 68, + 411, + 158, + 424 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 411, + 158, + 424 + ], + "spans": [ + { + "bbox": [ + 68, + 411, + 158, + 424 + ], + "type": "text", + "content": "Ethics Statement" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 436, + 291, + 640 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 436, + 291, + 640 + ], + "spans": [ + { + "bbox": [ + 67, + 436, + 291, + 640 + ], + "type": "text", + "content": "The task of legal judgment prediction is used to evaluate LLM's competency in the law. The primary objective of this task is to assist judges and lawyers in comprehending lengthy legal documents by offering them a supplementary tool. It is important to note that this task does not seek to replace the roles of judges and lawyers, nor does it aim to determine the guilt or charges of defendants through machine learning algorithms. Additionally, there is research focused on interpreting LJP models, aiming to enhance the transparency of black-box models for improved utilization by legal practitioners. 
The paper utilizes a public and anonymized dataset to exclude the potential issue of personal information leakage." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 68, + 654, + 170, + 667 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 654, + 170, + 667 + ], + "spans": [ + { + "bbox": [ + 68, + 654, + 170, + 667 + ], + "type": "text", + "content": "Acknowledgements" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 678, + 291, + 760 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 678, + 291, + 760 + ], + "spans": [ + { + "bbox": [ + 67, + 678, + 291, + 760 + ], + "type": "text", + "content": "We thank all reviewers for their constructive comments. This research is supported by NExT Research Center, the National Natural Science Foundation of China (9227010114) and the University Synergy Innovation Program of Anhui Province (GXXT-2022-040)." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 304, + 70, + 362, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 70, + 362, + 83 + ], + "spans": [ + { + "bbox": [ + 304, + 70, + 362, + 83 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 304, + 89, + 527, + 772 + ], + "type": "list", + "angle": 0, + "index": 20, + "blocks": [ + { + "bbox": [ + 304, + 89, + 527, + 168 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 89, + 527, + 168 + ], + "spans": [ + { + "bbox": [ + 304, + 89, + 527, + 168 + ], + "type": "text", + "content": "Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. 2022. Improving language models by retrieving from trillions of tokens. In International conference on machine learning, pages 2206-2240. PMLR." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 304, + 174, + 526, + 242 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 174, + 526, + 242 + ], + "spans": [ + { + "bbox": [ + 304, + 174, + 526, + 242 + ], + "type": "text", + "content": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 304, + 248, + 526, + 303 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 248, + 526, + 303 + ], + "spans": [ + { + "bbox": [ + 304, + 248, + 526, + 303 + ], + "type": "text", + "content": "Ilias Chalkidis, Ion Androutsopoulos, and Nikolaos Aletras. 2019. Neural legal judgment prediction in english. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4317-4323." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 304, + 310, + 526, + 378 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 310, + 526, + 378 + ], + "spans": [ + { + "bbox": [ + 304, + 310, + 526, + 378 + ], + "type": "text", + "content": "Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An opensource chatbot impressing gpt-4 with " + }, + { + "bbox": [ + 304, + 310, + 526, + 378 + ], + "type": "inline_equation", + "content": "90\\%" + }, + { + "bbox": [ + 304, + 310, + 526, + 378 + ], + "type": "text", + "content": " * chatgpt quality." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 384, + 526, + 430 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 384, + 526, + 430 + ], + "spans": [ + { + "bbox": [ + 304, + 384, + 526, + 430 + ], + "type": "text", + "content": "Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 436, + 526, + 481 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 436, + 526, + 481 + ], + "spans": [ + { + "bbox": [ + 304, + 436, + 526, + 481 + ], + "type": "text", + "content": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 487, + 526, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 487, + 526, + 555 + ], + "spans": [ + { + "bbox": [ + 304, + 487, + 526, + 555 + ], + "type": "text", + "content": "Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. 2022. Glm: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 320-335." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 561, + 526, + 607 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 561, + 526, + 607 + ], + "spans": [ + { + "bbox": [ + 304, + 561, + 526, + 607 + ], + "type": "text", + "content": "Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300." 
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 613, + 526, + 648 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 613, + 526, + 648 + ], + "spans": [ + { + "bbox": [ + 304, + 613, + 526, + 648 + ], + "type": "text", + "content": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 654, + 526, + 710 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 654, + 526, + 710 + ], + "spans": [ + { + "bbox": [ + 304, + 654, + 526, + 710 + ], + "type": "text", + "content": "Zhengbao Jiang, Jun Araki, Haibo Ding, and Graham Neubig. 2021. How can we know when language models know? on the calibration of language models for question answering. Transactions of the Association for Computational Linguistics, 9:962-977." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 716, + 526, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 716, + 526, + 772 + ], + "spans": [ + { + "bbox": [ + 304, + 716, + 526, + 772 + ], + "type": "text", + "content": "Zhengbao Jiang, Frank F Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Active retrieval augmented generation. arXiv preprint arXiv:2305.06983." 
+ } + ] + } + ], + "index": 19 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 792 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 792 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 792 + ], + "type": "text", + "content": "7345" + } + ] + } + ], + "index": 21 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 289, + 772 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 69, + 72, + 289, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 72, + 289, + 116 + ], + "spans": [ + { + "bbox": [ + 69, + 72, + 289, + 116 + ], + "type": "text", + "content": "Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2021. What makes good in-context examples for gpt-3? arXiv preprint arXiv:2101.06804." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 125, + 289, + 169 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 125, + 289, + 169 + ], + "spans": [ + { + "bbox": [ + 69, + 125, + 289, + 169 + ], + "type": "text", + "content": "Yubo Ma, Yixin Cao, YongChing Hong, and Aixin Sun. 2023. Large language model is not a good few-shot information extractor, but a good reranker for hard samples! arXiv preprint arXiv:2303.08559." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 178, + 289, + 200 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 178, + 289, + 200 + ], + "spans": [ + { + "bbox": [ + 69, + 178, + 289, + 200 + ], + "type": "text", + "content": "Eric Martínez. 2023. Re-evaluating gpt-4's bar exam performance. Available at SSRN 4441311." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 208, + 289, + 274 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 208, + 289, + 274 + ], + "spans": [ + { + "bbox": [ + 69, + 208, + 289, + 274 + ], + "type": "text", + "content": "Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, et al. 2022. Crosslingual generalization through multitask finetuning. arXiv preprint arXiv:2211.01786." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 283, + 289, + 305 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 283, + 289, + 305 + ], + "spans": [ + { + "bbox": [ + 69, + 283, + 289, + 305 + ], + "type": "text", + "content": "OpenAI. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 314, + 289, + 380 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 314, + 289, + 380 + ], + "spans": [ + { + "bbox": [ + 69, + 314, + 289, + 380 + ], + "type": "text", + "content": "Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 389, + 289, + 433 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 389, + 289, + 433 + ], + "spans": [ + { + "bbox": [ + 69, + 389, + 289, + 433 + ], + "type": "text", + "content": "Aleksandar Petrov, Emanuele La Malfa, Philip HS Torr, and Adel Bibi. 2023. Language model tokenizers introduce unfairness between languages. arXiv preprint arXiv:2305.15425." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 441, + 289, + 507 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 441, + 289, + 507 + ], + "spans": [ + { + "bbox": [ + 69, + 441, + 289, + 507 + ], + "type": "text", + "content": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485-5551." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 516, + 289, + 550 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 516, + 289, + 550 + ], + "spans": [ + { + "bbox": [ + 69, + 516, + 289, + 550 + ], + "type": "text", + "content": "Ohad Rubin, Jonathan Herzig, and Jonathan Berant. 2021. Learning to retrieve prompts for in-context learning. arXiv preprint arXiv:2112.08633." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 558, + 289, + 624 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 558, + 289, + 624 + ], + "spans": [ + { + "bbox": [ + 69, + 558, + 289, + 624 + ], + "type": "text", + "content": "Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Galle, et al. 2022. Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 69, + 633, + 289, + 677 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 633, + 289, + 677 + ], + "spans": [ + { + "bbox": [ + 69, + 633, + 289, + 677 + ], + "type": "text", + "content": "Olga Shulayeva, Advaith Siddharthan, and Adam Wyner. 2017. Recognizing cited facts and principles in legal judgements. Artificial Intelligence and Law, 25(1):107-126." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 69, + 686, + 289, + 731 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 686, + 289, + 731 + ], + "spans": [ + { + "bbox": [ + 69, + 686, + 289, + 731 + ], + "type": "text", + "content": "Xiaofei Sun, Linfeng Dong, Xiaoya Li, Zhen Wan, Shuhe Wang, Tianwei Zhang, Jiwei Li, Fei Cheng, Lingjuan Lyu, Fei Wu, and Guoyin Wang. 2023. Pushing the limits of chatgpt on nlp tasks." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 69, + 739, + 289, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 739, + 289, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 739, + 289, + 772 + ], + "type": "text", + "content": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro," + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 524, + 509 + ], + "type": "list", + "angle": 0, + "index": 21, + "blocks": [ + { + "bbox": [ + 314, + 72, + 524, + 105 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 72, + 524, + 105 + ], + "spans": [ + { + "bbox": [ + 314, + 72, + 524, + 105 + ], + "type": "text", + "content": "Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 114, + 524, + 179 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 114, + 524, + 179 + ], + "spans": [ + { + "bbox": [ + 304, + 114, + 524, + 179 + ], + "type": "text", + "content": "Boxin Wang, Wei Ping, Peng Xu, Lawrence McAfee, Zihan Liu, Mohammad Shoeybi, Yi Dong, Oleksii Kuchaiev, Bo Li, Chaowei Xiao, et al. 2023. Shall we pretrain autoregressive language models with retrieval? a comprehensive study. arXiv preprint arXiv:2304.06762." 
+ } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 188, + 524, + 244 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 188, + 524, + 244 + ], + "spans": [ + { + "bbox": [ + 304, + 188, + 524, + 244 + ], + "type": "text", + "content": "Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 252, + 524, + 307 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 252, + 524, + 307 + ], + "spans": [ + { + "bbox": [ + 304, + 252, + 524, + 307 + ], + "type": "text", + "content": "Chaojun Xiao, Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Zhiyuan Liu, Maosong Sun, Yansong Feng, Xianpei Han, Zhen Hu, Heng Wang, et al. 2018. Cail2018: A large-scale legal dataset for judgment prediction. arXiv preprint arXiv:1807.02478." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 316, + 524, + 371 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 316, + 524, + 371 + ], + "spans": [ + { + "bbox": [ + 304, + 316, + 524, + 371 + ], + "type": "text", + "content": "Chaojun Xiao, Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Zhiyuan Liu, Maosong Sun, Tianyang Zhang, Xianpei Han, Zhen Hu, Heng Wang, et al. 2019. Cail2019-scm: A dataset of similar case matching in legal domain. arXiv preprint arXiv:1911.08962." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 380, + 524, + 446 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 380, + 524, + 446 + ], + "spans": [ + { + "bbox": [ + 304, + 380, + 524, + 446 + ], + "type": "text", + "content": "Haoxi Zhong, Yuzhong Wang, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, and Maosong Sun. 2020a. Iteratively questioning and answering for interpretable legal judgment prediction. 
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 1250-1257." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 454, + 524, + 509 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 454, + 524, + 509 + ], + "spans": [ + { + "bbox": [ + 304, + 454, + 524, + 509 + ], + "type": "text", + "content": "Haoxi Zhong, Chaojun Xiao, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, and Maosong Sun. 2020b. How does nlp benefit legal system: A summary of legal artificial intelligence. arXiv preprint arXiv:2004.12158." + } + ] + } + ], + "index": 20 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "text", + "content": "7346" + } + ] + } + ], + "index": 22 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 68, + 71, + 141, + 84 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 71, + 141, + 84 + ], + "spans": [ + { + "bbox": [ + 68, + 71, + 141, + 84 + ], + "type": "text", + "content": "A Appendix" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 68, + 93, + 183, + 105 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 93, + 183, + 105 + ], + "spans": [ + { + "bbox": [ + 68, + 93, + 183, + 105 + ], + "type": "text", + "content": "A.1 Prompt Templates" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 111, + 290, + 191 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 111, + 290, + 191 + ], + "spans": [ + { + "bbox": [ + 67, + 111, + 290, + 191 + ], + "type": "text", + "content": "The prompt template is shown in Figure 10. The translation of the original Chinese prompt is displayed using orange text. 
The setting of zero-shot open questions use a longer instruction that appends \"Output the charge name directly\" to the instruction in Figure A.1." + } + ] + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 71, + 205, + 279, + 322 + ], + "blocks": [ + { + "bbox": [ + 71, + 205, + 279, + 322 + ], + "lines": [ + { + "bbox": [ + 71, + 205, + 279, + 322 + ], + "spans": [ + { + "bbox": [ + 71, + 205, + 279, + 322 + ], + "type": "image", + "image_path": "2f571fc406aeb44a37b39fa8436dcaf4cb565f2f8dbdaee851cee92ed3b8a327.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 333, + 289, + 346 + ], + "lines": [ + { + "bbox": [ + 67, + 333, + 289, + 346 + ], + "spans": [ + { + "bbox": [ + 67, + 333, + 289, + 346 + ], + "type": "text", + "content": "Figure 10: The prompt template in Chinese and English." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "bbox": [ + 68, + 370, + 249, + 382 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 370, + 249, + 382 + ], + "spans": [ + { + "bbox": [ + 68, + 370, + 249, + 382 + ], + "type": "text", + "content": "A.2 Robust to Fixed Demonstrations" + } + ] + } + ], + "index": 5 + }, + { + "type": "table", + "bbox": [ + 88, + 394, + 270, + 479 + ], + "blocks": [ + { + "bbox": [ + 88, + 394, + 270, + 479 + ], + "lines": [ + { + "bbox": [ + 88, + 394, + 270, + 479 + ], + "spans": [ + { + "bbox": [ + 88, + 394, + 270, + 479 + ], + "type": "table", + "html": "
Model1shot2shot
GPT-449.59 / 48.8450.69
ChatGPT47.01 / 46.5747.55
Vicuna-13B22.74 / 29.3828.37
ChatGLM-6B22.39 / 25.1421.36
BLOOMZ-7B36.65 / 43.9442.24
", + "image_path": "ac846278a1128c582af753f3fa49f81f2815f8dfaa13c2157b3810c6f4b2d04b.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_body" + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 528, + 289, + 596 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 528, + 289, + 596 + ], + "spans": [ + { + "bbox": [ + 67, + 528, + 289, + 596 + ], + "type": "text", + "content": "We examine the effects of the two fixed cases mentioned in Section 4.3 in Table 2. We find that GPT-4 and ChatGPT are robust to the selection of the fixed demonstration in 1-shot setting, while Vicuna, ChatGLM and BLOOMZ are less robust." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 68, + 606, + 279, + 619 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 606, + 279, + 619 + ], + "spans": [ + { + "bbox": [ + 68, + 606, + 279, + 619 + ], + "type": "text", + "content": "A.3 Comparison with Supervised Baselines" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 624, + 290, + 731 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 624, + 290, + 731 + ], + "spans": [ + { + "bbox": [ + 67, + 624, + 290, + 731 + ], + "type": "text", + "content": "To understand the performance of supervised finetuning (SFT) baselines on LJP, we experiment on three models: BERT" + }, + { + "bbox": [ + 67, + 624, + 290, + 731 + ], + "type": "inline_equation", + "content": "^{12}" + }, + { + "bbox": [ + 67, + 624, + 290, + 731 + ], + "type": "text", + "content": ", XLM-RoBERTa" + }, + { + "bbox": [ + 67, + 624, + 290, + 731 + ], + "type": "inline_equation", + "content": "^{13}" + }, + { + "bbox": [ + 67, + 624, + 290, + 731 + ], + "type": "text", + "content": " and DeBERTa" + }, + { + "bbox": [ + 67, + 624, + 290, + 731 + ], + "type": "inline_equation", + "content": "^{14}" + }, + { + "bbox": [ + 67, + 624, + 290, + 731 + ], + "type": "text", + "content": ". 
These models are fine-tuned on two datasets of different sizes: the original CAIL dataset (~100k samples) and the sampled training set (1120 samples) that is used as retrieval corpus described in Section 3.2, denoted as CAIL_few." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 71, + 525, + 137 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 525, + 137 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 525, + 137 + ], + "type": "text", + "content": "The SFT models are evaluated on the same evaluation dataset described in Section 3.2. The smaller training set aims to compare the few-shot performance of SFT baselines and LLMs in low data scenario." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 139, + 525, + 220 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 139, + 525, + 220 + ], + "spans": [ + { + "bbox": [ + 302, + 139, + 525, + 220 + ], + "type": "text", + "content": "The results of SFT models are shown in Figure 3. Considering the highest accuracy of GPT-4 being " + }, + { + "bbox": [ + 302, + 139, + 525, + 220 + ], + "type": "inline_equation", + "content": "74.46\\%" + }, + { + "bbox": [ + 302, + 139, + 525, + 220 + ], + "type": "text", + "content": " (multi-choice, 4shot), GPT-4 can outperform supervised baselines in low data scenario. If there is abundant training data, supervised baselines are still better than GPT-4 by " + }, + { + "bbox": [ + 302, + 139, + 525, + 220 + ], + "type": "inline_equation", + "content": "15\\%" + }, + { + "bbox": [ + 302, + 139, + 525, + 220 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 12 + }, + { + "type": "table", + "bbox": [ + 323, + 228, + 505, + 286 + ], + "blocks": [ + { + "bbox": [ + 67, + 488, + 289, + 512 + ], + "lines": [ + { + "bbox": [ + 67, + 488, + 289, + 512 + ], + "spans": [ + { + "bbox": [ + 67, + 488, + 289, + 512 + ], + "type": "text", + "content": "Table 2: The classification accuracy scores with prompts consisting of fixed cases." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 323, + 228, + 505, + 286 + ], + "lines": [ + { + "bbox": [ + 323, + 228, + 505, + 286 + ], + "spans": [ + { + "bbox": [ + 323, + 228, + 505, + 286 + ], + "type": "table", + "html": "
ModelCAILCAIL_few
BERT89.6468.04
XLM-RoBERTa88.7566.43
DeBERTa88.5730.89
", + "image_path": "9ff0ebb167c5e4bf5cbfeac94073da6a547845d2e4d07cd2fc8c9b9ce5e51518.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "table_body" + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 295, + 524, + 319 + ], + "lines": [ + { + "bbox": [ + 302, + 295, + 524, + 319 + ], + "spans": [ + { + "bbox": [ + 302, + 295, + 524, + 319 + ], + "type": "text", + "content": "Table 3: Prediction accuracy of SFT models fine-tuned on two training datasets of different sizes." + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 302, + 342, + 409, + 354 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 342, + 409, + 354 + ], + "spans": [ + { + "bbox": [ + 302, + 342, + 409, + 354 + ], + "type": "text", + "content": "A.4 Detailed Results" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 359, + 525, + 399 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 359, + 525, + 399 + ], + "spans": [ + { + "bbox": [ + 302, + 359, + 525, + 399 + ], + "type": "text", + "content": "The specific values of performances displayed in Figure 2 are presented in Table 4. Besides, we also provide the performance of the F1 score in Table 5." 
+ } + ] + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 79, + 738, + 163, + 749 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 738, + 163, + 749 + ], + "spans": [ + { + "bbox": [ + 79, + 738, + 163, + 749 + ], + "type": "text", + "content": "12bert-base-chinese" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 79, + 750, + 158, + 760 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 750, + 158, + 760 + ], + "spans": [ + { + "bbox": [ + 79, + 750, + 158, + 760 + ], + "type": "text", + "content": "13xIm-roberta-base" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 79, + 761, + 203, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 761, + 203, + 772 + ], + "spans": [ + { + "bbox": [ + 79, + 761, + 203, + 772 + ], + "type": "inline_equation", + "content": "^{14}" + }, + { + "bbox": [ + 79, + 761, + 203, + 772 + ], + "type": "text", + "content": "microsoft/mdeberta-v3-base" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "text", + "content": "7347" + } + ] + } + ], + "index": 21 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 76, + 182, + 517, + 281 + ], + "blocks": [ + { + "bbox": [ + 76, + 182, + 517, + 281 + ], + "lines": [ + { + "bbox": [ + 76, + 182, + 517, + 281 + ], + "spans": [ + { + "bbox": [ + 76, + 182, + 517, + 281 + ], + "type": "table", + "html": "
ModelOpen QuestionsMultiple-choice Questions
0shot1shot2shot3shot4shot0shot1shot2shot3shot4shot
GPT-455.1864.8269.1169.8271.9663.9371.2572.5073.7574.46
ChatGPT46.6160.0062.8664.8266.9661.6164.4666.9670.3667.14
Vicuna-13B28.2150.3649.6451.7935.8947.8644.8243.3935.7119.46
ChatGLM-6B41.4351.7950.0050.3650.5455.7150.5449.6449.4647.32
BLOOMZ-7B49.8254.8252.6852.5051.2553.3931.9631.0727.3226.61
", + "image_path": "f4de87165a1fedaa2c922df6fe9b182d5d6f862f35ff28c6fa9598eaeea95886.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 76, + 536, + 517, + 634 + ], + "blocks": [ + { + "bbox": [ + 135, + 290, + 456, + 301 + ], + "lines": [ + { + "bbox": [ + 135, + 290, + 456, + 301 + ], + "spans": [ + { + "bbox": [ + 135, + 290, + 456, + 301 + ], + "type": "text", + "content": "Table 4: The classification accuracy scores of all models under the four settings." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 76, + 536, + 517, + 634 + ], + "lines": [ + { + "bbox": [ + 76, + 536, + 517, + 634 + ], + "spans": [ + { + "bbox": [ + 76, + 536, + 517, + 634 + ], + "type": "table", + "html": "
ModelOpen QuestionsMultiple-choice Questions
0shot1shot2shot3shot4shot0shot1shot2shot3shot4shot
GPT-450.5262.7267.5468.6171.0262.3170.4271.8173.2474.00
ChatGPT43.1458.4261.8664.4066.1660.6763.5166.8569.5966.62
Vicuna-13B25.5048.8547.6449.4939.8244.7041.7341.4835.0321.61
ChatGLM-6B41.8950.3047.7648.5948.6753.7449.2647.5647.6145.32
BLOOMZ-7B46.9053.2851.0650.9049.2650.6829.2527.9225.2723.37
", + "image_path": "7dc56daa2ba608a95e02a93d3ad5bc4b3b63d157d6d482ad9dbe66ee92f49115.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 147, + 643, + 444, + 655 + ], + "lines": [ + { + "bbox": [ + 147, + 643, + 444, + 655 + ], + "spans": [ + { + "bbox": [ + 147, + 643, + 444, + 655 + ], + "type": "text", + "content": "Table 5: The classification F1 scores of all models under the four settings." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "text", + "content": "7348" + } + ] + } + ], + "index": 4 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 11 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2023/A Comprehensive Evaluation of Tool-Assisted Generation Strategies/6b3b9095-80bb-4832-9b1c-9b30dcb51c14_content_list.json b/2023/A Comprehensive Evaluation of Tool-Assisted Generation Strategies/6b3b9095-80bb-4832-9b1c-9b30dcb51c14_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..800901e705c14be314250a8697233f03639c7186 --- /dev/null +++ b/2023/A Comprehensive Evaluation of Tool-Assisted Generation Strategies/6b3b9095-80bb-4832-9b1c-9b30dcb51c14_content_list.json @@ -0,0 +1,2190 @@ +[ + { + "type": "text", + "text": "A Comprehensive Evaluation of Tool-Assisted Generation Strategies", + "text_level": 1, + "bbox": [ + 144, + 89, + 852, + 111 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Alon Jacovi $^{1*}$ Avi Caciularu $^{2}$ Jonathan Herzig $^{2}$", + "bbox": [ + 280, + 129, + 719, + 147 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Roee Aharoni² Bernd Bohnet³ Mor Geva³", + "bbox": [ + 299, + 152, + 702, + 168 + ], + 
"page_idx": 0 + }, + { + "type": "text", + "text": "1Bar Ilan University 2Google Research 3Google DeepMind alonjacovi@gmail.com", + "bbox": [ + 240, + 181, + 759, + 215 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 260, + 252, + 339, + 268 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "A growing area of research investigates augmenting language models with tools (e.g., search engines, calculators) to overcome their shortcomings (e.g., missing or incorrect knowledge, incorrect logical inferences). Various few-shot tool-usage strategies have been proposed. However, there is no systematic and fair comparison across different strategies, or between these strategies and strong baselines that do not leverage tools. We conduct an extensive empirical analysis, finding that (1) across various datasets, example difficulty levels, and models, strong no-tool baselines are competitive to tool-assisted strategies, implying that effectively using tools with in-context demonstrations is a difficult unsolved problem; (2) for knowledge-retrieval tasks, strategies that refine incorrect outputs with tools outperform strategies that retrieve relevant information ahead of or during generation; (3) tool-assisted strategies are expensive in the number of tokens they require to work—incurring additional costs by orders of magnitude—which does not translate into significant improvement in performance. 
Overall, our findings suggest that few-shot tool integration is still an open challenge, emphasizing the need for comprehensive evaluations of future strategies to accurately assess their benefits and costs.", + "bbox": [ + 144, + 282, + 460, + 693 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 114, + 708, + 258, + 722 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Augmenting language models (LMs) with tools has been proposed to overcome LMs' inherent weaknesses (Mialon et al., 2023; Qian et al., 2022), such as the lack of grounding to reliable or updated sources (Jiang et al., 2023), incoherent logical ability (Liu et al., 2022; Ling et al., 2023) and arithmetic ability (Gao et al., 2023b), among others. This is done through tool-assisted (TA) generation, where LMs are trained or instructed to use external tools, such as search engines over the web—e.g.,", + "bbox": [ + 112, + 734, + 489, + 896 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Google search (Gao et al., 2023a; Press et al., 2023; Nakano et al., 2022), Wikipedia search (Trivedi et al., 2022a), a calculator (Schick et al., 2023), or a python interpreter (Paranjape et al., 2023). Often, tool invocations are structured as Chain-of-Thought (CoT) long-form answers (Wei et al., 2023).", + "bbox": [ + 507, + 252, + 884, + 348 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Recent work proposed a variety of strategies for interfacing between the LM and the tool, such as through demonstrations of API calls (Paranjape et al., 2023) or using the tool to refine the model's output (Gao et al., 2023a)—see Figure 2 for an overview. But what are the advantages and tradeoffs of different TA strategies? For example, some strategies incur significantly higher computation costs than others with little to no improvement in performance. 
There is a gap in the literature on the evaluation of such strategies, in particular against strong baselines and against each other. Concretely, works that report empirical evaluations are often restricted to comparisons of a single proposed strategy against a limited selection of non-TA baselines, using a limited selection of LMs or even a single LM, or focus on evaluating various LMs with a specific TA strategy (Li et al., 2023). Additionally, comparisons often do not consider the increase in computation that each TA strategy requires, which vary significantly, and have a large effect on inference time or cost.", + "bbox": [ + 507, + 353, + 884, + 705 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "The above issues are only some of the pitfalls we observed in the literature, limiting the scope of current evaluations. In §3, we analyze the literature for common pitfalls and collect a set of guidelines towards a fair and reliable evaluation procedure specifically for TA strategies. Next (§4), we conduct a study which addresses all of the observed pitfalls, using GPT3, Flan-UL2 and Flan-PaLM, and complex reasoning benchmarks StrategyQA, MuSiQue, GSM8K, and DROP. 
We report a fair, systematic comparison of five few-shot TA strategies across multiple models and demonstrations, and all strategies use the same set of tools.", + "bbox": [ + 507, + 709, + 884, + 917 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "*Work done during an internship at Google Research.", + "bbox": [ + 141, + 904, + 470, + 917 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "13856", + "bbox": [ + 475, + 927, + 524, + 940 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 13856-13878 December 6-10, 2023 ©2023 Association for Computational Linguistics", + "bbox": [ + 210, + 945, + 786, + 972 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/1c5421e70ff7f119d3c54f2e5ed5b8a0634970e8cb91f3aab86bd620f7368751.jpg", + "image_caption": [ + "Figure 1: Illustration of tool-assistance strategies that invoke tools and insert their outputs into the prompt (a), and strategies that first generate some output, and only use tools to fix and refine it (b)." + ], + "image_footnote": [], + "bbox": [ + 126, + 84, + 297, + 299 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/a20317809cde42b83fa8c5607c78d8b4b0f21cf28bae768bb6671f7e6942dd2c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 302, + 83, + 475, + 299 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "We analyze the study results (§5) and arrive at surprising conclusions: (1) Non-TA baselines are stronger than initially reported. In most cases, TA strategies do not significantly or at all improve on non-TA strategies on popular Question Answering datasets. (2) For retrieval tools in knowledge tasks, TA strategies that fix model output after it is generated perform better than TA strategies that prompt the model to interface with the tool directly during generation. 
For calculator tools in calculation-intensive tasks, the relationship is not decisive. (3) TA strategies incur significantly higher computation costs than non-TA baselines by multiplicative factors, and there is no general correlation between computation cost and performance, with the exception that refinement strategies in retrieval settings are more costly than non-refinement strategies.", + "bbox": [ + 112, + 386, + 489, + 658 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In §6 we report a fine-grained analysis of the results. We investigate the effect of each example's difficulty (e.g., very large numbers, or very rare entities) on improvement from tool usage, and find that tools do not systematically improve model performance on harder examples, where they were expected to have the strongest improvement. Finally, based on an error analysis of failure cases, we find that the majority of mistakes follow incorrect tool invocations, rather than incorrect tool responses (in the case of the retrieval tool) or incorrect inferences based on correct tool usage.", + "bbox": [ + 112, + 661, + 489, + 852 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In conclusion, we conduct an extensive evaluation of few-shot TA strategies, finding that previous estimates of tool-usage performance are not representative. Overall, this suggests that few-shot tool", + "bbox": [ + 112, + 854, + 489, + 917 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "integration is still an open challenge. We call the community to evaluate future strategies systematically, while taking into account the significant costs that these strategies require in comparison to their benefits. Towards this, we provide a set of concrete guidelines for fair and reliable evaluation of TA strategies. 
Moreover, we release the handcrafted collection of 184 demonstrations used in our study (attached in the supplementary material).", + "bbox": [ + 507, + 84, + 884, + 229 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2 Tool-Assisted Language Models", + "text_level": 1, + "bbox": [ + 507, + 242, + 816, + 259 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "We describe existing few-shot strategies for augmenting LMs with tools and discuss related work.", + "bbox": [ + 507, + 268, + 882, + 300 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.1 Few-shot TA strategies", + "text_level": 1, + "bbox": [ + 507, + 313, + 734, + 328 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Strategies for tool usage can be broadly divided into two categories: (a) Using tools during generation and inserting the tools' outputs into the model's prompt (Figures 1a, 2a); (b) Using tools to refine the LM's output after generation (Figures 1b, 2b). Strategies can be further categorized into settings where the tool is heuristically called in a pipeline or called when the model generates pre-specified tool calls. Refer to Mialon et al. (2023) for a review of the literature on TA strategies and models.", + "bbox": [ + 507, + 334, + 884, + 494 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Among TA strategies of type (a): SelfAsk (Press et al., 2023) decomposes the task into subtasks as simpler questions, such that a tool can be called on each question. A related strategy is Demonstrate-Search-Predict (Khattab et al., 2023). Inline strategies such as Toolformer (Schick et al., 2023)1, ART (Paranjape et al., 2023), inter alia (Chen et al., 2022; Gao et al., 2023b; Lyu et al., 2023) demonstrate tool usage with pre-defined words or tokens and tool arguments, halt generation when those tokens and arguments are generated, invoke the tool, and insert its output into the prompt to resume generation. 
Interleaving Retrieval (Trivedi et al., 2022a) does not directly instruct the model to use tools, but calls the tool on each reasoning step, to provide the model with additional context for future steps. (Jiang et al., 2023) propose a similar strategy, opting to re-write each step after using it as a query. There are also strategies such as Decomposed Prompting (Khot et al., 2023) that are generalizations of the previous strategies.", + "bbox": [ + 507, + 495, + 884, + 833 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Among TA strategies of type (b): RARR (Gao et al., 2023a) involves a pipeline designed for knowledge-based tasks: verifying the relevance", + "bbox": [ + 507, + 834, + 882, + 882 + ], + "page_idx": 1 + }, + { + "type": "page_footnote", + "text": "1Schick et al. primarily discusses tool usage with training. We adapt only the few-shot strategy in our experiments.", + "bbox": [ + 507, + 892, + 882, + 917 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "13857", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/ad8b756c88daba3fe46a897f52a1a2033fba139188f1f2989272dd6bc4926d3f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 122, + 99, + 875, + 254 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/8cda9432777f2f391b342d52be331e08c439f215932cef30fc8c448fab0a1ca7.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 122, + 256, + 875, + 451 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/56b4cd0ce032c7e6ed5ff221971fa9d4a43294255f9ec8ab67388abf55e7f2d6.jpg", + "image_caption": [ + "Figure 2: Overview of the TA strategies implemented in this work. Blue text marks tool queries, tool responses are in turquoise cells, refinement is in orange cells and dashed arrows, and yellow cells are LM generations." 
+ ], + "image_footnote": [], + "bbox": [ + 122, + 453, + 875, + 588 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "and factuality of each claim by generating questions based on the claim, retrieving snippets that answer these questions, and checking if the answers match the information in the claim. If not, the claim is refined to match the snippets. Check & Fix, a method we introduce in this work, uses each CoT step as a search query, and checks whether the step is entailed by the retrieved snippets by prompting the model to classify this entailment. This strategy is similar to Jiang et al. (2023, contemporaneous work), which additionally uses low-confidence filtering but omits the entailment verification.", + "bbox": [ + 112, + 639, + 489, + 832 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.2 Related Work", + "text_level": 1, + "bbox": [ + 112, + 847, + 270, + 862 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Training LMs to use tools. While we are primarily concerned with few-shot tool assistance of LM generation, the literature also explores LMs which", + "bbox": [ + 112, + 871, + 489, + 919 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "are trained to use specific tools (Parisi et al., 2022; Hao et al., 2023; Patil et al., 2023). These methods are constrained to the tools seen during training, and require data (annotated, bootstrapped, or synthetically constructed) of tool demonstrations.", + "bbox": [ + 507, + 639, + 884, + 719 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Other tool-assisted neural networks. 
There is adjacent research on augmenting neural networks, in ways besides textual interfaces, with tools (e.g., Andor et al., 2019; Jacovi et al., 2019) or training differentiable subnetworks that heavily mimic tools (Neelakantan et al., 2017; Trask et al., 2018).", + "bbox": [ + 507, + 731, + 884, + 828 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3 Evaluation Pitfalls", + "text_level": 1, + "bbox": [ + 507, + 843, + 705, + 858 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "While there is a plethora of TA strategies (§2.1), no systematic comparison of these strategies has been conducted. Research that proposes TA strategies in", + "bbox": [ + 507, + 871, + 882, + 919 + ], + "page_idx": 2 + }, + { + "type": "header", + "text": "Who lived longer, Muhammad Ali or Alan Turing?", + "bbox": [ + 374, + 80, + 643, + 93 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "13858", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 2 + }, + { + "type": "table", + "img_path": "images/5d7729ae102ec068b6b348f158fca39595c8b1721908888c58a765ba8bc62418.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
PitfallRecommendation
(1)Coupling the TA strategy and the tool together.Comparisons of TA strategies should use the same tools across strategies.
(2)Forcing no-tool baselines to the framework of the TA strategy.The optimal way to solve the task without tools may be different from solving the task with tools: No-tool baselines should include multiple variants of both free-form and structured strategies, to ensure the TA strategies are not given an advantage.
(3)Using one model across all comparisons.Different models may behave differently when it comes to using tools effectively, based on their training data. Multiple models should be tested, if possible.
(4)Using one prompt and set of demonstrations across all comparisons.Multiple different sets of demonstrations should be used to get reliable estimates of few-shot performance.
(5)Not considering TA strategy costs.TA strategies can be efficient or inefficient with regards to the prompt tokens and generation tokens they require to work, with respect to no-tool baselines or with respect to each other. The differences can be significant (§5). Comparisons of TA strategies should factor the computation cost of the strategy, which we term as token efficiency.
", + "bbox": [ + 122, + 80, + 873, + 305 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Table 1: Summary of evaluation pitfalls of TA strategies (§3) and recommendations to mitigate them.", + "bbox": [ + 154, + 313, + 838, + 330 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "few-shot settings is often not focused on evaluating properties of those strategies, but other aspects of LM capabilities (Press et al., 2023; Gao et al., 2023a), usage in particular strict contexts (Paranjape et al., 2023), evaluating various LM models themselves with a particular strategy (Mialon et al., 2023), and so on.", + "bbox": [ + 112, + 354, + 487, + 464 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Below we collect observations from the literature that demonstrate the limited evaluation scope of TA strategies, in an effort to establish a set of criteria for future evaluations to be reliable and fair (a summary is provided in Table 1).", + "bbox": [ + 112, + 468, + 489, + 548 + ], + "page_idx": 3 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "(1) Coupling the TA strategy and the tool together. Comparisons may vary the tools and methods together (e.g., a TA strategy $A$ with a tool $A$ versus a TA strategy $B$ with a tool $B$ ).", + "(2) Forcing baselines to the framework of the TA strategy. Typical baselines to a given TA strategy are to apply that strategy while letting the model generate the tool's output instead of the tool, and using CoT prompting. However, the optimal way to solve the problem without tools may not be the same as the TA strategy in question. In this work, we implement three different baselines (§4) and find that there is no clear winner among two of them (we explore this empirically in §5).", + "(3) Using one model across all comparisons. Often, a single model is chosen to use as the underlying model for the TA strategy. 
This limits the insights from the evaluation to this model in particular, since conclusions may not carry over to other models. In this work, we find that the best-performing strategies vary significantly across different LMs (we explore this empirically in §5)." + ], + "bbox": [ + 112, + 552, + 489, + 917 + ], + "page_idx": 3 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "(4) Using one prompt and one set of demonstrations across all comparisons. Few-shot evaluation is known to be unreliable when using a single set of demonstrations as a single prompt (Perez et al., 2021). Furthermore, some prompts used in TA strategy evaluations—in particular, CoT demonstrations—appear so often on the internet that they are suspected to be part of the models' training data, further compromising their function (Jacovi et al., 2023).", + "(5) Not considering TA strategy costs. In many cases, the TA strategy requires significantly more compute than no-tool baselines, and different TA strategies also require different amounts of computation. Computation cost is not traditionally considered in comparisons." + ], + "bbox": [ + 507, + 354, + 884, + 617 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "4 Experimental Setup", + "text_level": 1, + "bbox": [ + 507, + 633, + 717, + 650 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Our goal is to conduct a fair and reliable comparison of TA strategies, without being influenced by properties of specific models, tools or prompts. To this end, we focus on few-shot tool usage, a popular TA scheme that allows flexibility around using new tools and adapting tools to specific tasks.", + "bbox": [ + 507, + 659, + 882, + 756 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In what follows, we describe our experimental setup. What guides this experimental setup is to perform a comprehensive, rigorous evaluation without the pitfalls of §3. 
Our evaluation covers 5 different TA strategies, 4 recent LMs, 4 complex reasoning datasets, 3 few-shot prompts, and 2 tools. For each TA strategy + dataset + model combination, we run three experiments with a different number of demonstrations. Overall, our evaluation includes an execution of 342 experiments, each of which", + "bbox": [ + 507, + 758, + 884, + 917 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "13859", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "generates 250 (GPT-3) or 500 (non-GPT-3) long-form answers. Additional implementation details are in Appendix A.", + "bbox": [ + 112, + 84, + 489, + 134 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Tool-assisted strategies. We evaluate the TA strategies shown in Figure 2: SelfAsk, Inline, Interleaving, C&F and RARR. We additionally include variants of SelfAsk and Inline where the model is separately called to summarize tool output in relevant context, as it can often be very long (SelfAskQA and InlineQA; see Appendix A for details). Finally, in the retrieval settings, we use Top-1 retrieval for all models, and additionally Top-5 retrieval for the Flan-PaLM-540B model (see \"Models\" below) to check whether additional retrieved information can improve performance despite the significantly longer input and processing cost.", + "bbox": [ + 112, + 141, + 489, + 350 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "For SelfAsk and RARR we use the original implementation provided by the methods' creators. We implement Interleaving (Trivedi et al., 2022a), as at the time of this research no implementation was available. Importantly, this implementation yields similar performance to that of existing approaches that combine CoT with retrieval from Wikipedia by He et al. (2022); Jiang et al. (2023) (see full results in Appendix B). Additionally, Jiang et al. 
(2023, Figure 4) implemented methods that apply retrieval and refinement over generated CoT that are similar to C&F and achieve similar performance to ours, as well (see Appendix B). For Inline, we are not aware of reports on few-shot performance of a similar strategy in the literature.", + "bbox": [ + 115, + 354, + 490, + 596 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Baseline strategies. We use no-tool versions of SelfAsk, Inline, and standard CoT prompting. The SelfAsk and Inline baselines simply involve giving the model the prompts used for the tool-based versions, while disabling tool calls (such that the model generates the output in-place of the tools). These are the baselines used by Press et al. (2023) and Schick et al. (2023) respectively.", + "bbox": [ + 112, + 604, + 489, + 734 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Datasets. We consider tasks that require complex reasoning, where models could potentially benefit from external tool usage. Specifically, we use StrategyQA (Geva et al., 2021) and MuSiQue (Trivedi et al., 2022b), which require reasoning about entity knowledge, and GSM8k (Cobbe et al., 2021) and DROP (Dua et al., 2019) that evaluate arithmetic reasoning. In DROP we select examples that have numerical answers. We randomly sample 500 examples from the development set of each dataset (with the exception of StrategyQA, whose", + "bbox": [ + 112, + 741, + 489, + 920 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "test set has 229 examples), and use it for performance evaluation of UL2, Flan-PaLM-540B and Flan-PaLM-62B. For GPT-3, we use a subset of 250 examples of that set, due to cost. We use standard evaluation measures for every dataset (F1 in the case of MuSiQue). We provide data examples in Appendix A.", + "bbox": [ + 507, + 84, + 885, + 198 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Models. 
We evaluate the methods across four LMs: Flan-UL2-20B (Tay et al., 2023), GPT-3 (text-davinci-003) (Brown et al., 2020), Flan-PaLM-540B and Flan-PaLM-62B (Chung et al., 2022). We omit GPT-3 experiments on RARR and Interleaving due to cost. Importantly, our focus is not to compare the performance of these models, but to use them as samples of different model instances and training schemes against which to compare different TA strategies.", + "bbox": [ + 507, + 212, + 885, + 375 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Tools. We strictly use the same tools across all strategies, to ensure a fair comparison: Google Search (Press et al., 2023; Schick et al., 2023; Lewis et al., 2021) for knowledge tasks, and a calculator (Schick et al., 2023; Qin et al., 2023) for the calculation tasks. RARR, SelfAsk and Interleaving are designed for retrieval settings only, while Inline and Check & Fix can be used in all settings. For the retrieval settings using Google Search and Flan-PaLM-540B, we test retrieval with both the top 1 and top 5 tool-retrieved snippets: The two formats are designed to cover both cases where a shorter tool output may prevent the model's answer from degenerating, and a longer tool output may help the model with more relevant information.", + "bbox": [ + 507, + 388, + 885, + 630 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Few-shot demonstrations. In order to overcome bias from using demonstrations from prior work that were likely seen during training (Jacovi et al., 2023), we re-annotate prompts for all TA strategies, datasets and tools. We randomly sample 8 examples from each dataset's training set, and annotate each example with demonstrations for each TA strategy. Some of the strategies call the model multiple times with different prompts (e.g., Check & Fix, RARR), which requires separate annotations. 
This effort results in a total of 184 annotated demonstrations, which we release as a resource for future works on TA generation. From each set of 8 demonstrations, we then construct three separate prompts—3-shot, 5-shot and 7-shot—randomly sampled from the original 8 demonstrations, to get a better estimation of few-shot performance.", + "bbox": [ + 507, + 646, + 885, + 920 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "13860", + "bbox": [ + 477, + 927, + 527, + 941 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/ad09b9fc8f1984f62245138a07a699d525eb34e52bdace5161196d53592a2e6a.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 115, + 84, + 875, + 198 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/a06af40183a6ac224b425d7bba84e8385dc98a63a1760df3a658f1b03fc2b068.jpg", + "image_caption": [ + "Figure 3: A comparison of evaluation scores across two areas ( $\\S 5$ ): (a) No-tool baselines vs. TA strategies; (b) Tool usage via refinement of generated text vs. tool usage during generation, where the generated text contains tool arguments is conditioned on tool outputs. The dark line marks the confidence interval among samples." + ], + "image_footnote": [], + "bbox": [ + 115, + 202, + 875, + 309 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "5 Comparative Results", + "text_level": 1, + "bbox": [ + 112, + 379, + 329, + 395 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Organization of the results. Due to the", + "text_level": 1, + "bbox": [ + 112, + 404, + 415, + 419 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Tool vs. no tool. 
Previous work proposing TA strategies found that using such strategies consistently improves performance in comparison to no-tool baselines (Press et al., 2023; Jiang et al., 2023; Trivedi et al., 2022a, inter alia).", + "bbox": [ + 112, + 420, + 487, + 499 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Figure 3 shows that the TA strategies do not improve performance over the no-tool baselines in our selection of datasets. The figure shows results against the average of the different few-shot scores, though we observe similar trends when using the maximum of scores as well. Full results are in Appendix B. Similarly to us, Gao et al. (2023a, §6.2) found that StrategyQA performance slightly decreased with tools in RARR compared to no-tool baselines for PaLM-540B (Chowdhery et al., 2022), and Jiang et al. (2023, §6.2) found that performance decreased on StrategyQA in two settings comparable to our implementations of Interleaving and Check & Fix with GPT-3.", + "bbox": [ + 112, + 500, + 489, + 724 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We conclude that for the settings in this work, the no-tool baselines are stronger than initially expected based on the literature. More research is required to investigate whether this relationship holds in other contexts, though we note that the datasets and models used in our experiments are common in TA research (Mialon et al., 2023).", + "bbox": [ + 112, + 726, + 487, + 838 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Additionally, our experiments provide empirical justification for Recommendations (2) and (3) in §3. First, we find that the CoT and Inline baselines outperform each other at a roughly equal rate, and neither emerges as a clear winner. 
This shows", + "bbox": [ + 112, + 839, + 489, + 917 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/b1069e3adec691b1a3429c547844e3c6ae619081af760ba7dfcedd0fb3161e12.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Model | Dataset | Best strategy
GPT-3 | StrategyQA | Baseline-Inline
GPT-3 | DROP | Baseline-Inline
GPT-3 | GSM8K | Check & Fix
GPT-3 | MuSiQue | Inline
Flan-PaLM-540B | StrategyQA | Baseline-CoT
Flan-PaLM-540B | DROP | Baseline-Inline
Flan-PaLM-540B | GSM8K | Baseline-Inline
Flan-PaLM-540B | MuSiQue | RARR-Top5
Flan-UL2-20B | StrategyQA | Baseline-Inline
Flan-UL2-20B | DROP | Baseline-Inline
Flan-UL2-20B | GSM8K | Inline
Flan-UL2-20B | MuSiQue | Baseline-CoT
Flan-PaLM-62B | StrategyQA | Baseline-CoT
Flan-PaLM-62B | DROP | Baseline-CoT
Flan-PaLM-62B | GSM8K | Inline
Flan-PaLM-62B | MuSiQue | Check & Fix
", + "bbox": [ + 529, + 376, + 863, + 636 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Table 2: For each combination of dataset and model, we derive the best-performing strategy on the average score across the few-shot prompts. Notably, the best-performing strategy varies across different models, datasets or prompts, which means that it is necessary to evaluate over all axes to get a better estimation of general performance.", + "bbox": [ + 507, + 645, + 882, + 746 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "that different baselines obtain different results, and so, relying on only a single baseline in evaluation does not necessarily provide a good estimation for no-tool performance (recommendation (2)). Also, the best-performing strategies vary significantly across models, which highlights the importance of using multiple models for evaluation (recommendation (3))—for illustration, we report the highest-performing strategies in each setting in Table 2, to", + "bbox": [ + 507, + 774, + 884, + 919 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "13861", + "bbox": [ + 477, + 927, + 522, + 940 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/4f8aebf138ab26d7d2873e49d6d58b19e29cfc71354db056804c3f4f786266c9.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
TA strategy | Prompt tokens (canonical) | Prompt tokens (empirical)
 | | GPT-3 | Retrieval | GPT-3 | Calculator
Baseline | n | 353 | 353 | 1418 | 801
SelfAsk | t(n + k(t+1)/2) | 2281 | 1399 | - | -
SelfAskQA | t(2n + k) | 3589 | 2736 | - | -
Inline | t(n + k(t+1)/2) | 1793 | 1775 | 3453 | 1083
InlineQA | t(2n + k) | 3375 | 3672 | - | -
Check & Fix | t(2n + k) | 3839 | 3547 | 7548 | 3647
RARR | 3n(t+1) | 4729 | - | -
Interleaving | t(n + k(t+1)/2) | 3221 | - | -
", + "bbox": [ + 213, + 80, + 783, + 271 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/874550e24f4d8b69d2d2da186f71d66a3453552cdcbcbbd9444b3b3bbb363d22.jpg", + "table_caption": [ + "Table 3: Average number of prompt tokens per strategy (5-shot), with $n$ as the CoT prompt length, $t$ as the number of tool calls, $k$ as the tool's output length. Flan-PaLM-540B has a shorter context window than GPT-3, which limits prompt length. The canonical formula for RARR favorably assumes a single verification question." + ], + "table_footnote": [], + "table_body": "
TA strategy | Answer tokens (canonical) | Answer tokens (empirical)
 | | GPT-3 | Retrieval | GPT-3 | Calculator
Baseline | m | 44 | 42 | 58 | 88
SelfAsk | m | 20 | 72 | - | -
SelfAskQA | 2m | 59 | 64 | - | -
Inline | m | 103 | 248 | 62 | 102
InlineQA | 2m | 114 | 256 | - | -
Check & Fix | 2m | 89 | 177 | 75 | 177
RARR | 3m | 181 | - | -
Interleaving | m | 72 | - | -
", + "bbox": [ + 213, + 335, + 783, + 524 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Table 4: Average number of answer tokens across the 5-shot experiments, for each strategy. The RARR formula assumes a single verification question per step.", + "bbox": [ + 112, + 533, + 882, + 564 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "show that the overall conclusion can be distorted by choosing a particular model or strategy. Extended details are in Appendix B.1.", + "bbox": [ + 112, + 576, + 487, + 625 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Tool use during generation vs. post-generation refinement. In Figure 3 we compare the strategies that use tools during generation against the strategies that first generate an answer, and then use tools to improve the answer. For retrieval tasks, refinement clearly outperforms non-refinement strategies, but the same does not apply to the calculation tasks. We conjecture that planning calculations ahead of time during generation is more aligned with LM pretraining data, which is based on internet text, than planning retrieval queries in similar contexts.", + "bbox": [ + 112, + 634, + 489, + 812 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Token efficiency. TA strategies are typically evaluated in terms of task performance and properties such as factuality and logical correctness. We argue that computational cost is another important factor to consider. Specifically, we propose to evaluate token efficiency, that is, the number of prompt tokens", + "bbox": [ + 112, + 822, + 489, + 919 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "and generated tokens, which have a direct effect on the cost of the TA strategy. Notably, the cost of a TA strategy depends on various variables, including model size, GPU type, caching optimizations, vocabulary size, beam search size, and so on. 
However, token counts can serve as a reasonably generic proxy for comparing the cost of different TA strategies, as other factors are roughly equal across strategies, as long as the same models and tools are used. We consider prompt tokens and generated tokens separately, as they often have different consequences on cost. $^2$", + "bbox": [ + 507, + 576, + 884, + 769 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Tables 3 and 4 show both canonical and empirical comparisons across TA strategies with regard to token efficiency. The canonical comparison is a function of the relevant variables in the \"canonical\" setting where the model was expected to answer", + "bbox": [ + 507, + 771, + 882, + 853 + ], + "page_idx": 6 + }, + { + "type": "page_footnote", + "text": "2Depending on model architecture and the number of times the same prompt is reused, prompt processing cost can be optimized, whereas the token generation cost varies with other factors such as vocabulary size.", + "bbox": [ + 507, + 869, + 882, + 917 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "13862", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "the question perfectly, and use the tool perfectly as intended. Across all TA strategy experiments, we found no general correlation between token efficiency and performance. Concretely: (1) All TA strategies are more expensive than the no-tool baselines by orders of magnitude, while not yielding an improvement that justifies this extra cost. Empirically, using tools in each case can incur extra costs by a factor of $5x$ to $10x$ for prompt processing, and $2x$ to $5x$ for generation. (2) The refinement strategies are more expensive than the no-refinement strategies. 
So while refinement improves performance on retrieval tasks, it does so at a higher cost.", + "bbox": [ + 112, + 84, + 489, + 294 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6 Analytical Results", + "text_level": 1, + "bbox": [ + 112, + 305, + 307, + 322 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We discuss further analyses of our results, finding that (a) our observations generally hold across different levels of example difficulty, and (b) most prediction errors of tool-augmented LMs stem from incorrect inputs to the tool and bad outputs from it, and not from a lack of tool usage.", + "bbox": [ + 112, + 330, + 489, + 428 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6.1 Example Difficulty", + "text_level": 1, + "bbox": [ + 112, + 438, + 309, + 454 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "It has been shown that LMs have difficulty solving problems involving long-tail entities (Kandpal et al., 2022; Mallen et al., 2022) and complex mathematical reasoning challenges (Mishra et al., 2022; Imani et al., 2023). Accordingly, we ablate the results from §5 along the following axes of example difficulty, in order to understand how tools can affect performance on difficult examples. We provide an overview of the trends here, and extended results are available in Appendix B.", + "bbox": [ + 112, + 458, + 489, + 619 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Measures of difficulty. We investigate the effectiveness of tool usage across varying levels of example difficulty, which we approximate along two axes: (A) Long-tail entities (retrieval): Following Mallen et al. (2022), we extract the entities from the question and associated gold answers in StrategyQA and MuSiQue, and use the corresponding entity Wikipedia page views as a measure of popularity. 
(B) Large numbers (calculation): We segment the examples in the calculation tasks based on the range of the median and largest number in the example (question and gold solution in GSM8K, or question and context paragraph in DROP).", + "bbox": [ + 112, + 624, + 489, + 834 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Results. Performance across increasing levels of entity popularity and computation complexity, with different LMs and TA strategies, is shown in Figure 4a and Figure 4b, respectively. We find that performance uniformly decreases for harder ex", + "bbox": [ + 112, + 838, + 489, + 920 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "amples in the retrieval setting for all models, but in the calculation setting, this only manifests for Flan-UL2-20B (implying that the larger models are more robust to the numerical ranges in GSM8K and DROP). Overall, in all cases tool use does not improve upon the baselines even when controlling for the harder cases where tools are expected to be more useful. This conclusion is aligned with our error analysis in §6.3, which shows that the common errors stem more from incorrect tool arguments than from correct tool arguments followed by incorrect inferences based on them. Flan-UL2 with a calculator is an exception, where tool use indeed helps, though more so on the easier examples, likely due to a higher rate of correct arguments to the calculator.", + "bbox": [ + 505, + 84, + 884, + 326 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6.2 Tool Usage Statistics", + "text_level": 1, + "bbox": [ + 507, + 336, + 717, + 351 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "A possible explanation for the similar performance of no-tool baselines could be a lack of tool usage. To check this, we aggregate usage over the different TA strategies, and find that the models indeed use tools in the majority of the cases; $70\% - 80\%$ in SelfAsk, and $>90\%$ in others (see Appendix B). 
We also investigate usage across other axes, such as models and number of demonstrations, and find similar trends. However, the datasets and tasks we investigate are designed to benefit from the tools in all cases, which shows that few-shot demonstrations are not always sufficient for inducing tool use in models. In particular, the SelfAsk strategies receive the lowest tool use, being the strategies that use natural language to query whether to use the tool (the answer begins with "Are follow up questions needed here:", to which the model answers "No" in the cases where the tool is not used).", + "bbox": [ + 505, + 357, + 884, + 646 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6.3 Error Analysis", + "text_level": 1, + "bbox": [ + 507, + 657, + 672, + 673 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We sampled 50 instances for which an error was made by the TA models, randomly across the 5-shot experiments, and categorized them into three categories: (A) Incorrect tool input; (B) incorrect tool output; (C) incorrect model inferences based on correct tool usage. Error B applies only to the retrieval settings, where the retrieval tool (Google Search in our case) retrieved a wrong or irrelevant snippet. The errors were distributed approximately as $60\%$ (A), $10\%$ (B), and $30\%$ (C) in the retrieval setting, and $80\%$ (A) and $20\%$ (C) in the calculation setting. Li et al. 
(2023) reported an error analysis for tool assistance in customer-assistance dialogue settings, with similar conclusions regarding error A, although errors B and C do not apply in their", + "bbox": [ + 505, + 677, + 884, + 920 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "13863", + "bbox": [ + 477, + 927, + 524, + 941 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/7039a5cdcf52dcaa696b43188a45a62ab3501ce48d2f6c8a6d93a2e5da6c93b3.jpg", + "image_caption": [ + "Figure 4: We analyze performance of the strategies across two areas (no-tool baselines vs. TA strategies), conditioned on example difficulty as defined by the existence of rare or common entities in the retrieval settings (via percentile of page views) and small or large numbers in the calculation settings (via percentile of numeric range). In (a), lower page views imply higher difficulty, and in (b), larger numbers imply higher difficulty." + ], + "image_footnote": [], + "bbox": [ + 119, + 84, + 873, + 334 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "context, and other error types manifest instead.", + "bbox": [ + 112, + 418, + 463, + 432 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Our results suggest that the majority of errors are not due to incorrect tool responses (i.e., issues with Google Search as the choice of retriever), but are driven more by invoking tools incorrectly to begin with than by invoking them correctly and then composing the solution incorrectly.", + "bbox": [ + 112, + 435, + 487, + 532 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "7 Conclusions and Takeaways", + "text_level": 1, + "bbox": [ + 112, + 549, + 391, + 565 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "We conduct a comprehensive assessment of few-shot tool augmentation strategies for LMs, covering hundreds of experiments with multiple LMs, datasets, and tools. 
Our experiments show that current tool-usage integration approaches are presently a false promise; prompting strategies that do not use tools typically obtain similar task performance, without the high cost of tool execution. Controlling for example difficulty, where tools are expected to provide the most benefit, does not explain the relative strength of the no-tool baselines. Instead, the primary errors we observe are related to incorrect usage of the tools to begin with (i.e., generating incorrect arguments to the tool).", + "bbox": [ + 112, + 579, + 489, + 804 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Our findings call for more robust evaluation of future TA strategies, primarily in more practical settings where models are not expected to leverage inherent abilities to solve tasks. To this end, our work provides concrete evaluation guidelines, such as employing stronger baselines and factoring in computation costs.", + "bbox": [ + 112, + 806, + 489, + 917 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Limitations", + "text_level": 1, + "bbox": [ + 509, + 416, + 613, + 432 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "While our study aims to provide a comprehensive evaluation of TA strategies, there are some limitations. First, recent work (Dodge et al., 2021; Magar and Schwartz, 2022; OpenAI, 2023) suggests that examples from public datasets, like those used in our evaluation, may have leaked to the training data of recent LMs. Such contamination can introduce biases to the evaluation, such as lack of need for external tools. We are not aware of alternatives without this issue at the time of this writing.", + "bbox": [ + 507, + 447, + 882, + 608 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Second, due to the high cost of executing large LMs in an exhaustive evaluation, we ran only a single experiment for each combination of TA strategy, model, dataset, and number of demonstrations. 
However, given the sensitivity of models to the demonstrations (Perez et al., 2021), future work should extend this evaluation to use multiple sets of demonstrations for each such combination.", + "bbox": [ + 507, + 611, + 882, + 739 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Last, while our findings show that non-tool models often perform on par with existing TA strategies, our setting favors tool usage. For example, our tasks only require a single type of tool such that the model does not need to choose between multiple tools. Future work that investigates when and how tools can improve performance should consider more realistic evaluation settings, for example, by considering tasks where the model may need to use multiple types of tools together, or tasks where tools may sometimes give unhelpful answers.", + "bbox": [ + 507, + 741, + 884, + 917 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "13864", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 115, + 84, + 213, + 98 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Daniel Andor, Luheng He, Kenton Lee, and Emily Pitler. 2019. Giving BERT a calculator: Finding operations and arguments with reading comprehension. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5947-5952, Hong Kong, China. Association for Computational Linguistics.", + "Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. 
Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. CoRR, abs/2005.14165.", + "Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. 2022. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks.", + "Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways.", + "Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. 
Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le," + ], + "bbox": [ + 115, + 107, + 489, + 919 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "and Jason Wei. 2022. Scaling instruction-finetuned language models.", + "Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. CoRR, abs/2110.14168.", + "Jesse Dodge, Maarten Sap, Ana Marasovic, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, and Matt Gardner. 2021. Documenting large webtext corpora: A case study on the colossal clean crawled corpus. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1286-1305, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.", + "Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. Drop: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In North American Chapter of the Association for Computational Linguistics.", + "Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Y. Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, and Kelvin Guu. 2023a. Rarr: Researching and revising what language models say, using language models.", + "Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023b. Pal: Program-aided language models.", + "Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies. Transactions of the Association for Computational Linguistics (TACL).", + "Shibo Hao, Tianyang Liu, Zhen Wang, and Zhiting Hu. 2023. 
Toolkengpt: Augmenting frozen language models with massive tools via tool embeddings.", + "Hangfeng He, Hongming Zhang, and Dan Roth. 2022. Rethinking with retrieval: Faithful large language model inference. arXiv preprint arXiv:2301.00303.", + "Shima Imani, Liang Du, and Harsh Shrivastava. 2023. Mathprompter: Mathematical reasoning using large language models. arXiv preprint arXiv:2303.05398.", + "Alon Jacovi, Avi Caciularu, Omer Goldman, and Yoav Goldberg. 2023. Stop uploading test data in plain text: Practical strategies for mitigating data contamination by evaluation benchmarks.", + "Alon Jacovi, Guy Hadash, Einat Kermany, Boaz Carmeli, Ofer Lavi, George Kour, and Jonathan Berant. 2019. Neural network gradient-based learning" + ], + "bbox": [ + 510, + 85, + 884, + 919 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "13865", + "bbox": [ + 477, + 928, + 524, + 940 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "of black-box function interfaces. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.", + "Zhengbao Jiang, Frank F. Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Active retrieval augmented generation.", + "Nikhil Kandpal, Haikang Deng, Adam Roberts, Eric Wallace, and Colin Raffel. 2022. Large language models struggle to learn long-tail knowledge. arXiv preprint arXiv:2211.08411.", + "Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, and Matei Zaharia. 2023. Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive nlp.", + "Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, and Ashish Sabharwal. 2023. 
Decomposed prompting: A modular approach for solving complex tasks.", + "Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2021. Retrieval-augmented generation for knowledge-intensive nlp tasks.", + "Minghao Li, Feifan Song, Bowen Yu, Haiyang Yu, Zhoujun Li, Fei Huang, and Yongbin Li. 2023. Api-bank: A benchmark for tool-augmented llms.", + "Zhan Ling, Yunhao Fang, Xuanlin Li, Zhiao Huang, Mingu Lee, Roland Memisevic, and Hao Su. 2023. Deductive verification of chain-of-thought reasoning.", + "Ruibo Liu, Jason Wei, Shixiang Shane Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, and Andrew M. Dai. 2022. Mind's eye: Grounded language model reasoning through simulation.", + "Qing Lyu, Shreya Havaldar, Adam Stein, Li Zhang, Delip Rao, Eric Wong, Marianna Apidianaki, and Chris Callison-Burch. 2023. Faithful chain-of-thought reasoning.", + "Inbal Magar and Roy Schwartz. 2022. Data contamination: From memorization to exploitation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 157-165, Dublin, Ireland. Association for Computational Linguistics.", + "Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Hannaneh Hajishirzi, and Daniel Khashabi. 2022. When not to trust language models: Investigating effectiveness and limitations of parametric and non-parametric memories. arXiv preprint arXiv:2212.10511." + ], + "bbox": [ + 115, + 85, + 487, + 917 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Grégoire Mialon, Roberto Dessi, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, and Thomas Scialom. 2023. 
Augmented language models: a survey.", + "Swaroop Mishra, Arindam Mitra, Neeraj Varshney, Bhavdeep Sachdeva, Peter Clark, Chitta Baral, and Ashwin Kalyan. 2022. NumGLUE: A suite of fundamental yet challenging mathematical reasoning tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3505-3523, Dublin, Ireland. Association for Computational Linguistics.", + "Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, and John Schulman. 2022. Webgpt: Browser-assisted question-answering with human feedback.", + "Arvind Neelakantan, Quoc V. Le, Martin Abadi, Andrew McCallum, and Dario Amodei. 2017. Learning a natural language interface with neural programmer. In International Conference on Learning Representations.", + "Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernandez Abrego, Ji Ma, Vincent Zhao, Yi Luan, Keith Hall, Ming-Wei Chang, and Yinfei Yang. 2022. Large dual encoders are generalizable retrievers. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9844–9855, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.", + "OpenAI. 2023. Gpt-4 technical report.", + "Bhargavi Paranjape, Scott Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, and Marco Tulio Ribeiro. 2023. Art: Automatic multi-step reasoning and tool-use for large language models.", + "Aaron Parisi, Yao Zhao, and Noah Fiedel. 2022. Talm: Tool augmented language models.", + "Shishir G. Patil, Tianjun Zhang, Xin Wang, and Joseph E. Gonzalez. 2023. Gorilla: Large language model connected with massive apis.", + "Ethan Perez, Douwe Kiela, and Kyunghyun Cho. 2021. 
True few-shot learning with language models.", + "Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, and Mike Lewis. 2023. Measuring and narrowing the compositionality gap in language models.", + "Jing Qian, Hong Wang, Zekun Li, Shiyang Li, and Xifeng Yan. 2022. Limitations of language models in arithmetic and symbolic induction." + ], + "bbox": [ + 510, + 85, + 882, + 917 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "13866", + "bbox": [ + 477, + 928, + 524, + 940 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, Yi Ren Fung, Yusheng Su, Huadong Wang, Cheng Qian, Runchu Tian, Kunlun Zhu, Shihao Liang, Xingyu Shen, Bokai Xu, Zhen Zhang, Yining Ye, Bowen Li, Ziwei Tang, Jing Yi, Yuzhang Zhu, Zhenning Dai, Lan Yan, Xin Cong, Yaxi Lu, Weilin Zhao, Yuxiang Huang, Junxi Yan, Xu Han, Xian Sun, Dahai Li, Jason Phang, Cheng Yang, Tongshuang Wu, Heng Ji, Zhiyuan Liu, and Maosong Sun. 2023. Tool learning with foundation models.", + "Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to use tools.", + "Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Jason Wei, Xuezhi Wang, Hyung Won Chung, Siamak Shakeri, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Denny Zhou, Neil Houlsby, and Donald Metzler. 2023. UL2: Unifying language learning paradigms.", + "Andrew Trask, Felix Hill, Scott Reed, Jack Rae, Chris Dyer, and Phil Blunsom. 2018. Neural arithmetic logic units.", + "Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022a. Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions.", + "Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022b. 
MuSiQue: Multi-hop questions via single-hop question composition. Transactions of the Association for Computational Linguistics.", + "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elicits reasoning in large language models." + ], + "bbox": [ + 115, + 85, + 489, + 645 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "13867", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "A Implementation Details", + "text_level": 1, + "bbox": [ + 114, + 84, + 356, + 99 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "A.1 Tool-Assisted Strategies.", + "text_level": 1, + "bbox": [ + 114, + 111, + 354, + 126 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "General Details. In all cases, if the tool invocation fails (e.g., with an ill-formatted calculation, or a null response from Google Search), the model is used to generate the tool's output instead. For all retrieval settings using Google Search, we test both Top-1 and Top-5 retrieval: The two formats are designed to cover both cases where a shorter tool output may prevent the model's answer from degenerating, and a longer tool output may help the model with more relevant information. Illustrative examples of the data are available in Table 5.", + "bbox": [ + 112, + 133, + 489, + 309 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "SelfAsk and SelfAskQA. SelfAsk involves decomposing each question into a series of simpler sub-questions, and calling the tool directly for each sub-question. The tool's output is inserted into the prompt as an intermediate answer. When the model generates a step that begins with the string \"So the answer is:,\" it is expected to generate an answer that builds on the previous intermediate answers which were tool outputs. 
In this work, we use Google Search as the tool, as in the original work by Press et al. (2023).", + "bbox": [ + 110, + 321, + 489, + 495 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Our SelfAsk implementation reuses the original implementation by Press et al. (2023). Since SelfAsk is designed specifically for knowledge-based QA, we only evaluate this strategy for the knowledge tasks MuSiQue and StrategyQA.", + "bbox": [ + 112, + 499, + 489, + 579 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "The SelfAskQA variant involves calling the model for each pair of sub-question and retrieved snippet that (hopefully) contains its answer. This method of recursively calling the model with a different prompt as if it were another tool is a technique proposed by Khot et al. (2023). We collect all sub-questions from the SelfAsk prompts in order to construct QA prompts (using the tool to retrieve supporting snippets). The model is called with the QA prompts in order to answer each sub-question based on its snippet. The SelfAskQA variant in essence summarizes each Google Search snippet, which can be as long as a paragraph, into a short answer to the given sub-question, effectively simplifying and shortening the overall answer.", + "bbox": [ + 112, + 581, + 489, + 820 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Among the two SelfAsk implementations, neither decisively outperforms the other: SelfAskQA outperforms SelfAsk for GPT-3 and Flan-PaLM-62B on both MuSiQue and StrategyQA, but for Flan-PaLM-540B and Flan-UL2-20B the relationship flips.", + "bbox": [ + 112, + 822, + 489, + 917 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Inline and InlineQA. The Inline strategy format largely mimics the Toolformer format by Schick et al. (2023), but can also be cast into the ART framework by Paranjape et al. (2023) or the Decomposed Prompting framework by Khot et al. (2023). 
In general, the strategy simply calls for generating the tool call in a predefined format—in our case, square brackets and the tool name. The tool is invoked with the arguments generated by the model inside the brackets, and the tool's output is inserted into the prompt. Our implementation is based on the inference code implemented by Schick et al. (2023), although notably, we focus on few-shot usage, and do not perform the tool-usage pretraining step that largely concerns the referenced work.", + "bbox": [ + 507, + 84, + 884, + 325 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "We implement two variants: Inline, which uses a tool called \"Search\" that appends the retrieved snippet or calculation output directly into the prompt, and InlineQA, which uses a tool called \"QA\" which calls the model with a separate prompt in order to summarize the retrieved snippet into a concise answer, identically to the aforementioned SelfAskQA variant. As with the SelfAsk and SelfAskQA variants, among Inline and InlineQA in the knowledge-based tasks, neither consistently outperforms the other.", + "bbox": [ + 507, + 326, + 884, + 502 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Interleaving. The Interleaving Retrieval strategy (Trivedi et al., 2022a) proposes to use each reasoning step by the model in its CoT answer as a query to a retrieval model. The retrieved snippet is then added to the prompt in order to provide additional information to the model. The structure for each demonstration becomes: (1) All retrieved documents thus far; (2) The question; (3) The generated answer thus far (see Trivedi et al., 2022a for details). In this way, the tool is used heuristically without explicit demonstrations from the model, but the generation of the answer at each CoT step is still conditioned on tool usage based on the previous steps.", + "bbox": [ + 507, + 517, + 882, + 743 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Check & Fix. 
We propose this strategy as a more lightweight variant of refinement based on tools in comparison to RARR, and it is comparable to contemporaneously proposed (Jiang et al., 2023): After each CoT step, the step is checked for accuracy using a tool, and if found inaccurate, a new fixed step is generated to replace it.", + "bbox": [ + 507, + 757, + 882, + 869 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "In the retrieval setting, each step is verified and fixed by prompting the model to classify whether the step is contradicted by the retrieved paragraphs,", + "bbox": [ + 507, + 871, + 882, + 917 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "13868", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 12 + }, + { + "type": "table", + "img_path": "images/b946b0f1e03b0d2abe3348f065673ba189de3b2058b279813692f75960394dbc.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
ModelDatasetStrategyExampleModel Answer
GPT-3StrategyQABaseline-CoTQuestion: Was a woman Prime Minister directly before or after Stanley Baldwin? Answer: No.Stanley Baldwin was Prime Minister of the United Kingdom from 1923 to 1929. The woman Prime Minister directly before him was Margaret Thatcher, who served from 1979 to 1990. The woman Prime Minister directly after him was Theresa May, who served from 2016 to 2019. So the answer is no.
Flan-UL2MuSiQueSelfAskQuestion: What administrative territorial entity contains the place where KPRM is licensed to broadcast to? Answer: Hubbard County.Are follow up questions needed here: Yes. What is the place where KPRM is licensed to broadcast to? Intermediate answer: KPRM is licensed to broadcast to the city of Houston. Follow up: What administrative territorial entity contains the city of Houston? Intermediate answer: The city of Houston is in Texas state. Follow up: What is the administrative territorial entity that contains Texas state? Intermediate answer: Texas state is in the United States. So the final answer is: The United States
Flan-PaLM-62BDROPCheck & FixContext: [omitted paragraph about the 2000 census] Question: How many households and families reside in the county according to the 2000 census? Answer: 2648426484 households and 12068 families reside in the county. There were 40543 people, 15416 households, and 11068 families residing in the county. Thus, 26484 + 12068 = 38552. So the final answer is: 38552.
Flan-PaLM-540BGSM8KInlineQuestion: How many girls are in the school if 40% of a school population is made up of 240 boys? Answer: 360If 40% of the school population is made up of 240 boys, then 1% is made up of [Calculator(240 / 40) -> 6.0] 6 boys. Therefore, the total school population is made up of [Calculator(6 * 100) -> 600] 600 students. If the school population is made up of 600 students, and 40% of them are boys, then the remaining 60% are girls, a total of [ Calculator(60 / 100) * 600 -> 360] 360 girls. So the final answer is: 360.
", + "bbox": [ + 122, + 80, + 878, + 407 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Table 5: Illustrative examples of various datasets, strategies and model outputs. The answers from the Interleaving, Check & Fix and RARR models are of the same format as the CoT baseline.", + "bbox": [ + 112, + 417, + 882, + 445 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "and if so, to generate the fixed step based on demonstrations. In the calculation setting, each step is first heuristically checked for whether it contains a calculation, and if so, the calculation is inserted into the calculator tool, and the model is prompted to verify whether the tool output is consistent with the calculation in the text. If this is incorrect, the model generates the fixed step. In both cases, the answer generation continues where the fixed step completely replaces the original incorrect step.", + "bbox": [ + 112, + 470, + 487, + 633 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "RARR. RARR (Retrofit Attribution using Research and Revision, Gao et al., 2023a) was proposed as a post processing method for refining any text, including LM chain-of-thought outputs. This is done via automatically finding attribution for each claim in the text, and post-editing the output to fix unsupported content while preserving the original output as much as possible. Our RARR implementation reuses the original implementation by Gao et al. (2023a).", + "bbox": [ + 112, + 645, + 487, + 804 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "The RARR process involves the following steps, with each considered as a separate tool:", + "bbox": [ + 112, + 807, + 487, + 839 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "1. Question Generation: First, they generate a series of questions that cover various aspects of a passage, referred to as passage x. 
The questions generated aim to verify and attribute", + "bbox": [ + 129, + 854, + 487, + 919 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "information from the passage. This is done via prompting the LM with few-shot examples.", + "bbox": [ + 544, + 470, + 880, + 502 + ], + "page_idx": 13 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "2. Evidence Retrieval: For each generated question, the Google Search tool is utilized to retrieve the top- $k$ passages that are related to the question. In this work, we evaluate both Top-1 and Top-5.", + "3. Evidence Ranking: The retrieved evidences are next ranked using a query-document relevance model scorer. Unlike the original RARR implementation (Gao et al., 2023a), which uses the GTR retrieval model (Ni et al., 2022), we instead implement the scorer via few-shot LM prompting, as suggested by the authors. The output of this stage is thus the top-1 ranked evidence.", + "4. Agreement Phase: Given a triplet of a text, question, and an evidence, this phase determines whether both the text and the question imply the same answer to the question. This is implemented via few-shot LM prompting using a chain-of-thought style prompt.", + "5. Editing Phase: If the previous Agreement Phase outputs disagreement between the text and the evidence, the (text, question, evidence) triplet is fed to a model that outputs a revised" + ], + "bbox": [ + 522, + 511, + 882, + 917 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "13869", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 13 + }, + { + "type": "table", + "img_path": "images/055d637244534fe5b256bba514b353da7eaf90c3a9ab23736fd5fd801ffdec3b.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
ModelDatasetBest baseline
GPT-3StrategyQAInline
GPT-3DROPInline
GPT-3GSM8KCoT
GPT-3MuSiQueInline
Flan-UL2-20BStrategyQAInline
Flan-UL2-20BDROPInline
Flan-UL2-20BGSM8KCoT
Flan-UL2-20BMuSiQueCoT
Flan-PaLM-540BStrategyQACoT
Flan-PaLM-540BDROPInline
Flan-PaLM-540BGSM8KInline
Flan-PaLM-540BMuSiQueCoT
Flan-PaLM-62BStrategyQACoT
Flan-PaLM-62BDROPCoT
Flan-PaLM-62BGSM8KInline
Flan-PaLM-62BMuSiQueCoT
", + "bbox": [ + 139, + 80, + 460, + 361 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "version of the text, considering the discrepancy between the previous text and the evidence. This is implemented via few-shot LM prompting using a similar chain-of-thought style prompt from the previous stage (see Gao et al., 2023a for the exact prompting template). The agreement and editing phases run iteratively until there are no needed revisions, detected in the Agreement Phase.", + "bbox": [ + 149, + 467, + 489, + 612 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "A.2 Baselines", + "text_level": 1, + "bbox": [ + 114, + 623, + 238, + 636 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Chain-of-Thought. The CoT baseline is the standard baseline proposed by Wei et al. (2023) and implemented as a baseline by Press et al. (2023); Paranjape et al. (2023), inter alia. Often, the demonstrations used for this baseline are those originally published by Wei et al. (2023). In this work we annotate a new sample of examples with CoT answers for the purpose of a better estimation of CoT few-shot performance, and release our annotations.", + "bbox": [ + 112, + 643, + 489, + 788 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Self-Ask. The Self-Ask baseline uses the Self-Ask tool demonstrations, but does not invoke the tool after each \"Follow up:\" call, and instead generates the entire answer. This is the original no-tool baseline in Press et al. (2023).", + "bbox": [ + 112, + 797, + 489, + 878 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Inline. 
The Inline baseline uses the Inline tool demonstrations, but does not invoke the tool after", + "bbox": [ + 112, + 887, + 487, + 917 + ], + "page_idx": 14 + }, + { + "type": "table", + "img_path": "images/0925aa44a4efd33b4180ed8edcbb477c5e4e83bc090aaf1ae25529fb13dab0a8.jpg", + "table_caption": [ + "Table 6: For each combination of dataset and model, we derive the best-performing baseline on the average score across the few-shot experiments. There is no clear winner: Two of the baselines achieve the best score in $50\\%$ of cases." + ], + "table_footnote": [], + "table_body": "
ModelUsage (%)
Flan-PaLM-540B70.9
Flan-PaLM-62B80.6
Flan-UL2-20B82.6
GPT-395.1
", + "bbox": [ + 588, + 80, + 803, + 175 + ], + "page_idx": 14 + }, + { + "type": "table", + "img_path": "images/7736a5dafe824533877271c1cd1c4eac67b9a1641ba3b940dc7940f16c000c9e.jpg", + "table_caption": [ + "Table 7: Note that RARR and Interleaving are guaranteed to use tools so they are omitted." + ], + "table_footnote": [], + "table_body": "
StrategyUsage (%)
Check & Fix92.9
SelfAsk80.4
SelfAskQA72.8
Inline99.9
InlineQA96.1
", + "bbox": [ + 601, + 227, + 791, + 338 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Table 8: Overview of average rate of tool usage across experiments. Note that RARR and Interleaving are guaranteed to use tools.", + "bbox": [ + 507, + 348, + 882, + 390 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "each tool call, and instead generates the entire answer. This is the original no-tool baseline in Schick et al. (2023).", + "bbox": [ + 507, + 417, + 882, + 464 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "B Extended Results", + "text_level": 1, + "bbox": [ + 507, + 479, + 697, + 493 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "We provide the full results for our experiments (described in §4) in §B.1, and further analysis of TA strategy performance and tool usage in §B.2.", + "bbox": [ + 507, + 505, + 882, + 554 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "B.1 Full Experiment Results", + "text_level": 1, + "bbox": [ + 507, + 565, + 749, + 580 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Tables 9, 10 detail our experiment results. Tables 11, 12, 13, 14 detail average and max aggregations over the few-shot prompts. As mentioned, we sample 500 examples for Flan-PaLM-62B , FlanPaLM-540B and Flan-UL2-20B experiments, and 250 for GPT-3 experiments, with the exception of StrategyQA whose test set has 229 examples.", + "bbox": [ + 507, + 586, + 882, + 697 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "For DROP and MuSiQue, we report the F1 measures using the evaluation scripts provided by Dua et al. (2019); Trivedi et al. (2022b) respectively. For GSM8K, we normalize the numerical answers and measure exact-match. 
For StrategyQA, we normalize the answers (for capitalization, prefix and suffix punctuation, and so on) and measure exact-match to \"yes\" and \"no\".", + "bbox": [ + 507, + 699, + 882, + 827 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Best-performing strategies and baselines in each setting. In Tables 2, 6 we show the best-performing baseline and best-performing general strategy for each setting of model and dataset, among the average scores across the three few-shot", + "bbox": [ + 507, + 839, + 882, + 917 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "13870", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "experiments. For strategies in general (Table 2), we see that the winning strategies vary significantly for different models, which supports Guideline (3) in Table 1.", + "bbox": [ + 112, + 84, + 489, + 145 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "The distribution among the baselines is split $50\\% - 50\\%$ among CoT and Inline. When considering each few-shot experiment separately (i.e., not taking the average), the distribution is $60.0\\%$ , $37.5\\%$ , and $2\\%$ for Baseline-CoT, Baseline-Inline and Baseline-SelfAsk respectively for which baseline achieves the best-performing score. This supports Guideline (2) in Table 1.", + "bbox": [ + 112, + 149, + 489, + 275 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "B.2 Analysis", + "text_level": 1, + "bbox": [ + 112, + 288, + 231, + 303 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Example Difficulty. Figures 5, 6 show extended results for the example difficulty analyses in §6. Here we consider the median of each difficulty metric—i.e., the difficulty across all entities or numbers in the example—rather than the minimum or maximum, as well as the ablation of refinement strategies against no-refinement strategies. 
We additionally checked for two alternative axes: operation complexity (addition and subtraction as “easy” examples, and multiplication and division as “hard” examples) and entity popularity measured by incoming links rather than by page views. The trends we observe in the main paper hold in all of these cases.", + "bbox": [ + 112, + 309, + 489, + 517 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Tool Usage. Tables 7, 8 show aggregate tool usage percentages over multiple axes. Overall, few-shot demonstrations induce tool usage in the majority of cases, though not completely so (i.e., below $100\\%$ ).", + "bbox": [ + 112, + 527, + 489, + 606 + ], + "page_idx": 15 + }, + { + "type": "page_number", + "text": "13871", + "bbox": [ + 477, + 927, + 522, + 940 + ], + "page_idx": 15 + }, + { + "type": "image", + "img_path": "images/ac4a0dde76cc7d60efb1b1a72b7167e5d83cc236108f7ff0a0d445f57c449897.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 144, + 263, + 848, + 369 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/9e9aa15aaaedd0de974d1aef476063f9aed9e6102bea1d72aeaac09fbf896ab9.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 142, + 375, + 848, + 479 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/d891c4106bc180293d8705b06b931c249a0fa7197a273df43947839153b0b755.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 142, + 485, + 848, + 583 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/90fceda5634230bcbc932418e38619436b8dcecefc9b958c8161556d30ba5cfa.jpg", + "image_caption": [ + "Figure 5: An extension of Table 3 with results for both the average across few-shot experiments (a-b) and the maximum across few-shot experiments (c-d)—i.e., the maximum between 3-shot, 5-shot and 7-shot for each experiment setting." 
+ ], + "image_footnote": [], + "bbox": [ + 142, + 588, + 848, + 688 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "13872", + "bbox": [ + 477, + 927, + 526, + 941 + ], + "page_idx": 16 + }, + { + "type": "table", + "img_path": "images/757db40d7275d21575c568699949beaf95b45cb6610312417691092f485cdf1b.jpg", + "table_caption": [], + "table_footnote": [ + "Table 9: Results for the knowledge-retrieval tasks of MuSiQue and StrategyQA. MuSiQue scores are F1 scores. Missing cells, such as \"Interleaving\" with Flan-UL2-20B, are experiments where the model failed to converge to an answer." + ], + "table_body": "
StrategyModelMuSiQueStrategyQA
3-shot5-shot7-shot3-shot5-shot7-shot
RARRFlan-PaLM-540B34.8635.0934.1480.3581.2280.79
RARRFlan-UL2-20B13.4012.0112.9855.9040.1742.79
RARRFlan-PaLM-62B23.6023.4224.0775.9877.7377.73
Baseline-CoTFlan-PaLM-540B33.0733.3633.8079.9184.2882.10
Baseline-CoTFlan-UL2-20B15.1416.5016.1067.2571.6272.05
Baseline-CoTGPT-327.3729.3130.2570.7471.6271.62
Baseline-CoTFlan-PaLM-62B23.6023.4224.2775.9879.0480.35
Baseline-SelfAskFlan-PaLM-540B25.8025.3424.3176.8673.3675.55
Baseline-SelfAskFlan-UL2-20B11.4011.5211.5234.0648.4753.71
Baseline-SelfAskGPT-327.9828.1329.8072.0574.2473.36
Baseline-SelfAskFlan-PaLM-62B5.289.525.4358.9575.9874.24
Baseline-InlineFlan-PaLM-540B30.3930.7131.1971.6279.9172.49
Baseline-InlineFlan-UL2-20B13.6613.339.7472.0568.5671.18
Baseline-InlineGPT-329.1130.3328.1570.3175.9878.60
Baseline-InlineFlan-PaLM-62B23.4222.6921.8675.1173.3675.55
SelfAskFlan-PaLM-540B20.0223.1423.2671.6271.1873.80
SelfAskFlan-UL2-20B11.867.687.4149.7825.7623.14
SelfAskGPT-324.3824.1522.3364.1967.2565.94
SelfAskFlan-PaLM-62B13.7914.8012.6867.2567.6966.38
SelfAskQAFlan-PaLM-540B21.0821.9222.9171.6269.4373.80
SelfAskQAFlan-UL2-20B8.535.352.3047.1617.0311.79
SelfAskQAGPT-332.7431.3030.3465.5067.6970.31
SelfAskQAFlan-PaLM-62B15.4217.4914.5167.2568.1269.00
InlineQAFlan-PaLM-540B31.8632.7832.1070.3172.9373.36
InlineQAFlan-UL2-20B18.0717.941.5671.1870.3156.77
InlineQAGPT-334.9036.6531.3270.3172.0570.31
InlineQAFlan-PaLM-62B12.5211.6510.5561.1463.3261.57
Check & FixFlan-PaLM-540B30.7333.1733.4880.3580.7978.17
Check & FixFlan-UL2-20B10.9011.7713.5252.4060.7069.87
Check & FixGPT-329.6632.9532.2672.0573.8070.74
Check & FixFlan-PaLM-62B25.2126.3926.4775.5571.1876.42
InlineFlan-PaLM-540B18.9724.4222.6174.2474.2475.11
InlineFlan-UL2-20B14.7014.9314.7848.4752.8444.98
InlineGPT-328.8531.0333.5470.3169.4368.56
InlineFlan-PaLM-62B9.959.4513.3254.5968.5670.31
InterleavingFlan-PaLM-540B23.7121.2920.5176.8678.6075.98
InterleavingFlan-PaLM-62B23.4323.7124.4274.6771.6274.24
RARR-Top5Flan-PaLM-540B36.1235.4035.4480.3579.9179.91
SelfAskQA-Top5Flan-PaLM-540B19.7521.6021.9969.8770.3172.05
Inline-Top5Flan-PaLM-540B32.6734.5331.6965.5077.7372.93
Check & Fix-Top5Flan-PaLM-540B31.7432.6833.8778.6081.6681.22
", + "bbox": [ + 157, + 93, + 842, + 851 + ], + "page_idx": 17 + }, + { + "type": "page_number", + "text": "13873", + "bbox": [ + 478, + 928, + 522, + 940 + ], + "page_idx": 17 + }, + { + "type": "table", + "img_path": "images/1cb6a22edfd66c97427a1f7b6ba78ff98a6a07655d0e11308cc5e5cb0c48a9f4.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
StrategyModelDROPGSM8K
3-shot5-shot7-shot3-shot5-shot7-shot
Baseline-CoTFlan-PaLM-540B77.275.074.267.470.870.8
Baseline-CoTFlan-UL2-20B7.227.226.2
Baseline-CoTGPT-357.655.655.658.858.058.4
Baseline-CoTFlan-PaLM-62B65.663.659.247.446.247.4
Baseline-InlineFlan-PaLM-540B77.875.674.469.872.671.2
Baseline-InlineFlan-UL2-20B3.65.63.6
Baseline-InlineGPT-357.666.059.651.654.053.2
Baseline-InlineFlan-PaLM-62B59.064.059.248.847.848.0
InlineFlan-PaLM-540B76.275.274.461.461.870.6
InlineFlan-UL2-20B26.626.226.0
InlineGPT-356.866.045.250.852.452.8
InlineFlan-PaLM-62B57.064.057.848.847.848.2
Check & FixFlan-PaLM-540B76.073.645.068.470.470.2
Check & FixFlan-UL2-20B23.225.823.2
Check & FixGPT-354.854.454.856.058.461.6
Check & FixFlan-PaLM-62B65.063.644.246.844.046.6
", + "bbox": [ + 157, + 85, + 840, + 428 + ], + "page_idx": 18 + }, + { + "type": "table", + "img_path": "images/312edb6df25d872382dc3f317cb783a660696c70e4aed1c28b3932aeafa168fc.jpg", + "table_caption": [ + "Table 10: Results for the calculator settings of DROP and GSM8K. We omit Flan-UL2-20B results on DROP, as the model could not converge to solve the task with our prompts, likely since each example in this task is very long." + ], + "table_footnote": [], + "table_body": "
StrategyAggregationModelMuSiQueStrategyQA
Baseline-CoTMaxGPT-330.271.6
Baseline-CoTAverageGPT-329.071.3
Baseline-CoTMaxFlan-UL2-20B16.572.1
Baseline-CoTAverageFlan-UL2-20B15.970.3
Baseline-CoTMaxFlan-PaLM-62B24.380.3
Baseline-CoTAverageFlan-PaLM-62B23.878.5
Baseline-CoTMaxFlan-PaLM-540B33.884.3
Baseline-CoTAverageFlan-PaLM-540B33.482.1
Baseline-SelfAskMaxGPT-329.874.2
Baseline-SelfAskAverageGPT-328.673.2
Baseline-SelfAskMaxFlan-UL2-20B11.553.7
Baseline-SelfAskAverageFlan-UL2-20B11.545.4
Baseline-SelfAskMaxFlan-PaLM-62B9.576.0
Baseline-SelfAskAverageFlan-PaLM-62B6.769.7
Baseline-SelfAskMaxFlan-PaLM-540B25.876.9
Baseline-SelfAskAverageFlan-PaLM-540B25.175.3
Baseline-InlineMaxGPT-330.378.6
Baseline-InlineAverageGPT-329.275.0
Baseline-InlineMaxFlan-UL2-20B13.772.1
Baseline-InlineAverageFlan-UL2-20B12.270.6
Baseline-InlineMaxFlan-PaLM-62B23.475.5
Baseline-InlineAverageFlan-PaLM-62B22.774.7
Baseline-InlineMaxFlan-PaLM-540B31.279.9
Baseline-InlineAverageFlan-PaLM-540B30.874.7
", + "bbox": [ + 231, + 483, + 768, + 885 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Table 11: Aggregations by few-shot prompt of the results in Table 9 (basiines).", + "bbox": [ + 226, + 895, + 769, + 910 + ], + "page_idx": 18 + }, + { + "type": "page_number", + "text": "13874", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 18 + }, + { + "type": "image", + "img_path": "images/3bea2fe03cccd890f0ff782f744e84a91fdfbf876666e0691569d43890f62e6b.jpg", + "image_caption": [ + "Figure 6: An extension of Table 4. (a-b) refer to taking the minimum of entity page views to ablate examples that have rare entities, and maximum of numbers to ablate examples with large numbers. (c-e) take the median in both cases, and (f) shows the results when comparing TA strategies between refinement and non-refinement types." + ], + "image_footnote": [], + "bbox": [ + 149, + 128, + 843, + 824 + ], + "page_idx": 19 + }, + { + "type": "page_number", + "text": "13875", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 19 + }, + { + "type": "table", + "img_path": "images/f250cf129b18f01d2e3ed98880c61ce7682534398a8988b8e50e58f536b2511e.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
StrategyAggregationModelMuSiQueStrategyQA
InterleavingMaxFlan-PaLM-62B24.474.7
InterleavingAverageFlan-PaLM-62B23.973.9
InterleavingMaxFlan-PaLM-540B23.778.2
InterleavingAverageFlan-PaLM-540B21.877.0
RARRMaxFlan-UL2-20B13.455.9
RARRAverageFlan-UL2-20B12.846.3
RARRMaxFlan-PaLM-62B24.177.7
RARRAverageFlan-PaLM-62B23.777.1
RARRMaxFlan-PaLM-540B35.181.2
RARRAverageFlan-PaLM-540B34.780.6
RARR-Top5MaxFlan-PaLM-540B36.180.3
RARR-Top5AverageFlan-PaLM-540B35.780.1
Check & FixMaxGPT-332.973.8
Check & FixAverageGPT-331.672.2
Check & FixMaxFlan-UL2-20B13.569.9
Check & FixAverageFlan-UL2-20B12.161.0
Check & FixMaxFlan-PaLM-62B26.576.4
Check & FixAverageFlan-PaLM-62B26.074.4
Check & FixMaxFlan-PaLM-540B33.580.8
Check & FixAverageFlan-PaLM-540B32.379.6
Check & Fix-Top5MaxFlan-PaLM-540B33.981.7
Check & Fix-Top5AverageFlan-PaLM-540B32.880.5
", + "bbox": [ + 233, + 303, + 766, + 667 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Table 12: Aggregations by few-shot prompt of the results in Table 9 (TA strategies).", + "bbox": [ + 213, + 677, + 779, + 693 + ], + "page_idx": 20 + }, + { + "type": "page_number", + "text": "13876", + "bbox": [ + 477, + 928, + 524, + 940 + ], + "page_idx": 20 + }, + { + "type": "table", + "img_path": "images/1c64a02b1a185df962b9b3e7638b6449b334d051949fba3f12899e13e37e14cd.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
StrategyAggregationModelMuSiQueStrategyQA
SelfAskMaxGPT-324.467.2
SelfAskAverageGPT-323.665.8
SelfAskMaxFlan-UL2-20B11.949.8
SelfAskAverageFlan-UL2-20B9.032.9
SelfAskMaxFlan-PaLM-62B14.867.7
SelfAskAverageFlan-PaLM-62B13.867.1
SelfAskAverageFlan-PaLM-540B22.372.2
SelfAskMaxFlan-PaLM-540B23.474.2
SelfAskQAMaxGPT-332.770.3
SelfAskQAAverageGPT-331.567.8
SelfAskQAMaxFlan-UL2-20B8.547.2
SelfAskQAAverageFlan-UL2-20B5.425.3
SelfAskQAMaxFlan-PaLM-62B17.569.0
SelfAskQAAverageFlan-PaLM-62B15.868.1
SelfAskQAMaxFlan-PaLM-540B22.875.1
SelfAskQAAverageFlan-PaLM-540B21.971.9
SelfAskQA-Top5MaxFlan-PaLM-540B22.072.1
SelfAskQA-Top5AverageFlan-PaLM-540B21.170.7
InlineQAMaxGPT-336.772.1
InlineQAAverageGPT-334.370.9
InlineQAMaxFlan-UL2-20B18.171.2
InlineQAAverageFlan-UL2-20B12.566.1
InlineQAMaxFlan-PaLM-62B12.563.3
InlineQAAverageFlan-PaLM-62B11.662.0
InlineQAMaxFlan-PaLM-540B32.473.4
InlineQAAverageFlan-PaLM-540B32.172.2
InlineMaxGPT-333.570.3
InlineAverageGPT-331.169.4
InlineMaxFlan-UL2-20B14.952.8
InlineAverageFlan-UL2-20B14.848.8
InlineMaxFlan-PaLM-62B13.370.3
InlineAverageFlan-PaLM-62B10.964.5
InlineMaxFlan-PaLM-540B24.374.7
InlineAverageFlan-PaLM-540B22.074.2
InlineQA-Top5MaxFlan-PaLM-540B34.577.7
InlineQA-Top5AverageFlan-PaLM-540B33.072.1
", + "bbox": [ + 196, + 151, + 805, + 820 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Table 13: Aggregations by few-shot prompt of the results in Table 9 (TA strategies).", + "bbox": [ + 213, + 829, + 779, + 844 + ], + "page_idx": 21 + }, + { + "type": "page_number", + "text": "13877", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 21 + }, + { + "type": "table", + "img_path": "images/0fa32e1c134f1cd3d2d3f844e902ee3bfb3faa979ff62b8eb735b3636532e218.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
StrategyAggregationModelDROPGSM8K
Baseline-CoTMaxGPT-357.658.8
Baseline-CoTAverageGPT-356.358.4
Baseline-CoTMaxFlan-UL2-20B27.2
Baseline-CoTAverageFlan-UL2-20B20.2
Baseline-CoTMaxFlan-PaLM-62B65.647.4
Baseline-CoTAverageFlan-PaLM-62B62.847.0
Baseline-CoTMaxFlan-PaLM-540B77.270.8
Baseline-CoTAverageFlan-PaLM-540B75.569.7
Baseline-InlineMaxGPT-366.054.0
Baseline-InlineAverageGPT-361.152.9
Baseline-InlineMaxFlan-UL2-20B9.25.6
Baseline-InlineAverageFlan-UL2-20B4.24.3
Baseline-InlineMaxFlan-PaLM-62B64.048.8
Baseline-InlineAverageFlan-PaLM-62B60.748.2
Baseline-InlineMaxFlan-PaLM-540B77.872.6
Baseline-InlineAverageFlan-PaLM-540B75.971.2
Check & FixMaxGPT-354.861.6
Check & FixAverageGPT-354.758.7
Check & FixMaxFlan-UL2-20B25.8
Check & FixAverageFlan-UL2-20B24.1
Check & FixMaxFlan-PaLM-62B65.046.8
Check & FixAverageFlan-PaLM-62B57.645.8
Check & FixMaxFlan-PaLM-540B76.070.4
Check & FixAverageFlan-PaLM-540B64.969.7
InlineMaxGPT-366.052.8
InlineAverageGPT-356.052.0
InlineMaxFlan-UL2-20B26.6
InlineAverageFlan-UL2-20B26.3
InlineMaxFlan-PaLM-62B64.048.8
InlineAverageFlan-PaLM-62B59.648.3
InlineMaxFlan-PaLM-540B76.270.8
InlineAverageFlan-PaLM-540B75.364.5
", + "bbox": [ + 233, + 193, + 764, + 777 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Table 14: Aggregations by few-shot prompt of the results in Table 10.", + "bbox": [ + 260, + 788, + 734, + 803 + ], + "page_idx": 22 + }, + { + "type": "page_number", + "text": "13878", + "bbox": [ + 477, + 928, + 524, + 940 + ], + "page_idx": 22 + } +] \ No newline at end of file diff --git a/2023/A Comprehensive Evaluation of Tool-Assisted Generation Strategies/6b3b9095-80bb-4832-9b1c-9b30dcb51c14_model.json b/2023/A Comprehensive Evaluation of Tool-Assisted Generation Strategies/6b3b9095-80bb-4832-9b1c-9b30dcb51c14_model.json new file mode 100644 index 0000000000000000000000000000000000000000..b8ac329811e4cff2be44ddbd07256b42eea2791e --- /dev/null +++ b/2023/A Comprehensive Evaluation of Tool-Assisted Generation Strategies/6b3b9095-80bb-4832-9b1c-9b30dcb51c14_model.json @@ -0,0 +1,2798 @@ +[ + [ + { + "type": "title", + "bbox": [ + 0.146, + 0.09, + 0.853, + 0.112 + ], + "angle": 0, + "content": "A Comprehensive Evaluation of Tool-Assisted Generation Strategies" + }, + { + "type": "text", + "bbox": [ + 0.281, + 0.13, + 0.721, + 0.148 + ], + "angle": 0, + "content": "Alon Jacovi\\(^{1*}\\) Avi Caciularu\\(^{2}\\) Jonathan Herzig\\(^{2}\\)" + }, + { + "type": "text", + "bbox": [ + 0.3, + 0.153, + 0.703, + 0.17 + ], + "angle": 0, + "content": "Roee Aharoni² Bernd Bohnet³ Mor Geva³" + }, + { + "type": "text", + "bbox": [ + 0.241, + 0.182, + 0.761, + 0.216 + ], + "angle": 0, + "content": "1Bar Ilan University 2Google Research 3Google DeepMind alonjacovi@gmail.com" + }, + { + "type": "title", + "bbox": [ + 0.261, + 0.253, + 0.341, + 0.269 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.145, + 0.283, + 0.461, + 0.694 + ], + "angle": 0, + "content": "A growing area of research investigates augmenting language models with tools (e.g., search engines, calculators) to overcome their shortcomings (e.g., missing or incorrect knowledge, 
incorrect logical inferences). Various few-shot tool-usage strategies have been proposed. However, there is no systematic and fair comparison across different strategies, or between these strategies and strong baselines that do not leverage tools. We conduct an extensive empirical analysis, finding that (1) across various datasets, example difficulty levels, and models, strong no-tool baselines are competitive to tool-assisted strategies, implying that effectively using tools with in-context demonstrations is a difficult unsolved problem; (2) for knowledge-retrieval tasks, strategies that refine incorrect outputs with tools outperform strategies that retrieve relevant information ahead of or during generation; (3) tool-assisted strategies are expensive in the number of tokens they require to work—incurring additional costs by orders of magnitude—which does not translate into significant improvement in performance. Overall, our findings suggest that few-shot tool integration is still an open challenge, emphasizing the need for comprehensive evaluations of future strategies to accurately assess their benefits and costs." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.709, + 0.26, + 0.724 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.736, + 0.49, + 0.897 + ], + "angle": 0, + "content": "Augmenting language models (LMs) with tools has been proposed to overcome LMs' inherent weaknesses (Mialon et al., 2023; Qian et al., 2022), such as the lack of grounding to reliable or updated sources (Jiang et al., 2023), incoherent logical ability (Liu et al., 2022; Ling et al., 2023) and arithmetic ability (Gao et al., 2023b), among others. 
This is done through tool-assisted (TA) generation, where LMs are trained or instructed to use external tools, such as search engines over the web—e.g.," + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.253, + 0.885, + 0.349 + ], + "angle": 0, + "content": "Google search (Gao et al., 2023a; Press et al., 2023; Nakano et al., 2022), Wikipedia search (Trivedi et al., 2022a), a calculator (Schick et al., 2023), or a python interpreter (Paranjape et al., 2023). Often, tool invocations are structured as Chain-of-Thought (CoT) long-form answers (Wei et al., 2023)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.354, + 0.885, + 0.706 + ], + "angle": 0, + "content": "Recent work proposed a variety of strategies for interfacing between the LM and the tool, such as through demonstrations of API calls (Paranjape et al., 2023) or using the tool to refine the model's output (Gao et al., 2023a)—see Figure 2 for an overview. But what are the advantages and tradeoffs of different TA strategies? For example, some strategies incur significantly higher computation costs than others with little to no improvement in performance. There is a gap in the literature on the evaluation of such strategies, in particular against strong baselines and against each other. Concretely, works that report empirical evaluations are often restricted to comparisons of a single proposed strategy against a limited selection of non-TA baselines, using a limited selection of LMs or even a single LM, or focus on evaluating various LMs with a specific TA strategy (Li et al., 2023). Additionally, comparisons often do not consider the increase in computation that each TA strategy requires, which vary significantly, and have a large effect on inference time or cost." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.71, + 0.885, + 0.919 + ], + "angle": 0, + "content": "The above issues are only some of the pitfalls we observed in the literature, limiting the scope of current evaluations. 
In §3, we analyze the literature for common pitfalls and collect a set of guidelines towards a fair and reliable evaluation procedure specifically for TA strategies. Next (§4), we conduct a study which addresses all of the observed pitfalls, using GPT-3, Flan-UL2 and Flan-PaLM, and the complex reasoning benchmarks StrategyQA, MuSiQue, GSM8K, and DROP. We report a fair, systematic comparison of five few-shot TA strategies across multiple models and demonstrations, where all strategies use the same set of tools." + }, + { + "type": "page_footnote", + "bbox": [ + 0.142, + 0.905, + 0.471, + 0.919 + ], + "angle": 0, + "content": "*Work done during an internship at Google Research." + }, + { + "type": "page_number", + "bbox": [ + 0.477, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "13856" + }, + { + "type": "footer", + "bbox": [ + 0.211, + 0.946, + 0.788, + 0.973 + ], + "angle": 0, + "content": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 13856-13878 December 6-10, 2023 ©2023 Association for Computational Linguistics" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.127, + 0.085, + 0.299, + 0.3 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.303, + 0.084, + 0.477, + 0.3 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.114, + 0.314, + 0.49, + 0.371 + ], + "angle": 0, + "content": "Figure 1: Illustration of tool-assistance strategies that invoke tools and insert their outputs into the prompt (a), and strategies that first generate some output, and only use tools to fix and refine it (b)." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.387, + 0.49, + 0.659 + ], + "angle": 0, + "content": "We analyze the study results (§5) and arrive at surprising conclusions: (1) Non-TA baselines are stronger than initially reported. In most cases, TA strategies do not improve, significantly or at all, on non-TA strategies on popular Question Answering datasets. 
(2) For retrieval tools in knowledge tasks, TA strategies that fix model output after it is generated perform better than TA strategies that prompt the model to interface with the tool directly during generation. For calculator tools in calculation-intensive tasks, the relationship is not decisive. (3) TA strategies incur significantly higher computation costs than non-TA baselines by multiplicative factors, and there is no general correlation between computation cost and performance, with the exception that refinement strategies in retrieval settings are more costly than non-refinement strategies." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.662, + 0.49, + 0.853 + ], + "angle": 0, + "content": "In §6 we report a fine-grained analysis of the results. We investigate the effect of each example's difficulty—e.g., very large numbers, or very rare entities) on improvement from tool usage, and find that tools do not systematically improve model performance on harder examples, where they were expected to have the strongest improvement. Finally, based on an error analysis of failure cases, we find that the majority of mistakes follow incorrect tool invocations, rather than incorrect tool responses (in the case of the retrieval tool) or incorrect inferences based on correct tool usage." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.856, + 0.49, + 0.919 + ], + "angle": 0, + "content": "In conclusion, we conduct an extensive evaluation of few-shot TA strategies, finding that previous estimates of tool-usage performance is not representative. Overall, this suggests that few-shot tool" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.885, + 0.23 + ], + "angle": 0, + "content": "integration is still an open challenge. We call the community to evaluate future strategies systematically, while taking into account the significant costs that these strategies require in comparison to their benefits. 
Towards this, we provide a set of concrete guidelines for fair and reliable evaluation of TA strategies. Moreover, we release the handcrafted collection of 184 demonstrations used in our study (attached in the supplementary material)." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.243, + 0.818, + 0.26 + ], + "angle": 0, + "content": "2 Tool-Assisted Language Models" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.269, + 0.884, + 0.301 + ], + "angle": 0, + "content": "We describe existing few-shot strategies for augmenting LMs with tools and discuss related work." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.314, + 0.736, + 0.329 + ], + "angle": 0, + "content": "2.1 Few-shot TA strategies" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.335, + 0.885, + 0.495 + ], + "angle": 0, + "content": "Strategies for tool usage can be broadly divided into two categories: (a) Using tools during generation and inserting the tools' outputs into the model's prompt (Figures 1a, 2a); (b) Using tools to refine the LM's output after generation (Figures 1b, 2b). Strategies can be further categorized into settings where the tool is heuristically called in a pipeline or called when the model generates pre-specified tool calls. Refer to Mialon et al. (2023) for a review of the literature on TA strategies and models." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.497, + 0.885, + 0.834 + ], + "angle": 0, + "content": "Among TA strategies of type (a): SelfAsk (Press et al., 2023) decomposes the task into subtasks as simpler questions, such that a tool can be called on each question. A related strategy is Demonstrate-Search-Predict (Khattab et al., 2023). 
Inline strategies such as Toolformer (Schick et al., 2023)1, ART (Paranjape et al., 2023), inter alia (Chen et al., 2022; Gao et al., 2023b; Lyu et al., 2023) demonstrate tool usage with pre-defined words or tokens and tool arguments, halt generation when those tokens and arguments are generated, invoke the tool, and insert its output into the prompt to resume generation. Interleaving Retrieval (Trivedi et al., 2022a) does not directly instruct the model to use tools, but calls the tool on each reasoning step, to provide the model with additional context for future steps. Jiang et al. (2023) propose a similar strategy, opting to re-write each step after using it as a query. There are also strategies such as Decomposed Prompting (Khot et al., 2023) that are generalizations of the previous strategies." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.835, + 0.884, + 0.883 + ], + "angle": 0, + "content": "Among TA strategies of type (b): RARR (Gao et al., 2023a) involves a pipeline designed for knowledge-based tasks: verifying the relevance" + }, + { + "type": "page_footnote", + "bbox": [ + 0.509, + 0.893, + 0.884, + 0.919 + ], + "angle": 0, + "content": "1Schick et al. primarily discusses tool usage with training. We adapt only the few-shot strategy in our experiments." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "13857" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.376, + 0.082, + 0.645, + 0.094 + ], + "angle": 0, + "content": "Who lived longer, Muhammad Ali or Alan Turing?"
+ }, + { + "type": "image", + "bbox": [ + 0.123, + 0.1, + 0.877, + 0.255 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.123, + 0.258, + 0.877, + 0.452 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.123, + 0.454, + 0.877, + 0.589 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.597, + 0.883, + 0.627 + ], + "angle": 0, + "content": "Figure 2: Overview of the TA strategies implemented in this work. Blue text marks tool queries, tool responses are in turquoise cells, refinement is in orange cells and dashed arrows, and yellow cells are LM generations." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.64, + 0.49, + 0.833 + ], + "angle": 0, + "content": "and factuality of each claim by generating questions based on the claim, retrieving snippets that answer these questions, and checking if the answers match the information in the claim. If not, the claim is refined to match the snippets. Check & Fix, a method we introduce in this work, uses each CoT step as a search query, and checks whether the step is entailed by the retrieved snippets by prompting the model to classify this entailment. This strategy is similar to Jiang et al. (2023, contemporaneous work), which additionally uses low-confidence filtering but omits the entailment verification." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.848, + 0.271, + 0.863 + ], + "angle": 0, + "content": "2.2 Related Work" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.872, + 0.49, + 0.92 + ], + "angle": 0, + "content": "Training LMs to use tools. While we are primarily concerned with few-shot tool assistance of LM generation, the literature also explores LMs which" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.64, + 0.885, + 0.72 + ], + "angle": 0, + "content": "are trained to use specific tools (Parisi et al., 2022; Hao et al., 2023; Patil et al., 2023). 
These methods are constrained to the tools seen during training, and require data (annotated, bootstrapped, or synthetically constructed) of tool demonstrations." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.732, + 0.885, + 0.829 + ], + "angle": 0, + "content": "Other tool-assisted neural networks. There is adjacent research on augmenting neural networks, in ways besides textual interfaces, with tools (e.g., Andor et al., 2019; Jacovi et al., 2019) or training differentiable subnetworks that heavily mimic tools (Neelakantan et al., 2017; Trask et al., 2018)." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.844, + 0.707, + 0.859 + ], + "angle": 0, + "content": "3 Evaluation Pitfalls" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.872, + 0.884, + 0.92 + ], + "angle": 0, + "content": "While there is a plethora of TA strategies (§2.1), no systematic comparison of these strategies has been conducted. Research that proposes TA strategies in" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "13858" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.123, + 0.082, + 0.875, + 0.306 + ], + "angle": 0, + "content": "
Pitfall | Recommendation
(1) Coupling the TA strategy and the tool together. | Comparisons of TA strategies should use the same tools across strategies.
(2) Forcing no-tool baselines to the framework of the TA strategy. | The optimal way to solve the task without tools may be different from solving the task with tools: No-tool baselines should include multiple variants of both free-form and structured strategies, to ensure the TA strategies are not given an advantage.
(3) Using one model across all comparisons. | Different models may behave differently when it comes to using tools effectively, based on their training data. Multiple models should be tested, if possible.
(4) Using one prompt and set of demonstrations across all comparisons. | Multiple different sets of demonstrations should be used to get reliable estimates of few-shot performance.
(5) Not considering TA strategy costs. | TA strategies can be efficient or inefficient with regards to the prompt tokens and generation tokens they require to work, with respect to no-tool baselines or with respect to each other. The differences can be significant (§5). Comparisons of TA strategies should factor the computation cost of the strategy, which we term as token efficiency.
" + }, + { + "type": "table_caption", + "bbox": [ + 0.156, + 0.315, + 0.84, + 0.331 + ], + "angle": 0, + "content": "Table 1: Summary of evaluation pitfalls of TA strategies (§3) and recommendations to mitigate them." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.355, + 0.489, + 0.466 + ], + "angle": 0, + "content": "few-shot settings is often not focused on evaluating properties of those strategies, but other aspects of LM capabilities (Press et al., 2023; Gao et al., 2023a), usage in particular strict contexts (Paranjape et al., 2023), evaluating various LM models themselves with a particular strategy (Mialon et al., 2023), and so on." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.469, + 0.49, + 0.549 + ], + "angle": 0, + "content": "Below we collect observations from the literature that demonstrate the limited evaluation scope of TA strategies, in an effort to establish a set of criteria for future evaluations to be reliable and fair (a summary is provided in Table 1)." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.554, + 0.49, + 0.618 + ], + "angle": 0, + "content": "(1) Coupling the TA strategy and the tool together. Comparisons may vary the tools and methods together (e.g., a TA strategy \\(A\\) with a tool \\(A\\) versus a TA strategy \\(B\\) with a tool \\(B\\))." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.624, + 0.489, + 0.786 + ], + "angle": 0, + "content": "(2) Forcing baselines to the framework of the TA strategy. Typical baselines to a given TA strategy are to apply that strategy while letting the model generate the tool's output instead of the tool, and using CoT prompting. However, the optimal way to solve the problem without tools may not be the same as the TA strategy in question. In this work, we implement three different baselines (§4) and find that there is no clear winner among two of them (we explore this empirically in §5)." 
+ }, + { + "type": "text", + "bbox": [ + 0.113, + 0.791, + 0.49, + 0.919 + ], + "angle": 0, + "content": "(3) Using one model across all comparisons. Often, a single model is chosen to use as the underlying model for the TA strategy. This limits the insights from the evaluation to this model in particular, since conclusions may not carry over to other models. In this work, we find that the best-performing strategies vary significantly across different LMs (we explore this empirically in §5)." + }, + { + "type": "list", + "bbox": [ + 0.113, + 0.554, + 0.49, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.355, + 0.885, + 0.516 + ], + "angle": 0, + "content": "(4) Using one prompt and one set of demonstrations across all comparisons. Few-shot evaluation is known to be unreliable when using a single set of demonstrations as a single prompt (Perez et al., 2021). Furthermore, some prompts used in TA strategy evaluations—in particular, CoT demonstrations—appear so often on the internet that they are suspected to be part of the models' training data, further compromising their function (Jacovi et al., 2023)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.522, + 0.885, + 0.618 + ], + "angle": 0, + "content": "(5) Not considering TA strategy costs. In many cases, the TA strategy requires significantly more compute than no-tool baselines, and different TA strategies also require different amounts of computation. Computation cost is not traditionally considered in comparisons." 
+ }, + { + "type": "list", + "bbox": [ + 0.508, + 0.355, + 0.885, + 0.618 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.634, + 0.719, + 0.651 + ], + "angle": 0, + "content": "4 Experimental Setup" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.661, + 0.884, + 0.757 + ], + "angle": 0, + "content": "Our goal is to conduct a fair and reliable comparison of TA strategies, without being influenced by properties of specific models, tools or prompts. To this end, we focus on few-shot tool usage, a popular TA scheme that allows flexibility around using new tools and adapting tools to specific tasks." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.759, + 0.885, + 0.919 + ], + "angle": 0, + "content": "In what follows, we describe our experimental setup. What guides this experimental setup is to perform a comprehensive, rigorous evaluation without the pitfalls of §3. Our evaluation covers 5 different TA strategies, 4 recent LMs, 4 complex reasoning datasets, 3 few-shot prompts, and 2 tools. For each TA strategy + dataset + model combination, we run three experiments with a different number of demonstrations. Overall, our evaluation includes an execution of 342 experiments, each of which" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "13859" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.114, + 0.085, + 0.49, + 0.135 + ], + "angle": 0, + "content": "generates 250 (GPT-3) or 500 (non-GPT-3) long-form answers. Additional implementation details are in Appendix A." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.142, + 0.49, + 0.351 + ], + "angle": 0, + "content": "Tool-assisted strategies. We evaluate the TA strategies shown in Figure 2: SelfAsk, Inline, Interleaving, C&F and RARR. 
We additionally include variants of SelfAsk and Inline where the model is separately called to summarize tool output in relevant context, as it can often be very long (SelfAskQA and InlineQA; see Appendix A for details). Finally, in the retrieval settings, we use Top-1 retrieval for all models, and additionally Top-5 retrieval for the Flan-PaLM-540B model (see \"Models\" below) to check whether additional retrieved information can improve performance despite the significantly longer input and processing cost." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.355, + 0.492, + 0.598 + ], + "angle": 0, + "content": "For SelfAsk and RARR we use the original implementation provided by the methods' creators. We implement Interleaving (Trivedi et al., 2022a), as at the time of this research no implementation was available. Importantly, this implementation yields similar performance to that of existing approaches that combine CoT with retrieval from Wikipedia by He et al. (2022); Jiang et al. (2023) (see full results in Appendix B). Additionally, Jiang et al. (2023, Figure 4) implemented methods that apply retrieval and refinement over generated CoT that are similar to C&F and achieve similar performance to ours, as well (see Appendix B). For Inline, we are not aware of reports on few-shot performance of a similar strategy in the literature." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.605, + 0.49, + 0.735 + ], + "angle": 0, + "content": "Baseline strategies. We use no-tool versions of SelfAsk, Inline, and standard CoT prompting. The SelfAsk and Inline baselines simply involve giving the model the prompts used for the tool-based versions, while disabling tool calls (such that the model generates the output in-place of the tools). These are the baselines used by Press et al. (2023) and Schick et al. (2023) respectively." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.743, + 0.49, + 0.921 + ], + "angle": 0, + "content": "Datasets. 
We consider tasks that require complex reasoning, where models could potentially benefit from external tool usage. Specifically, we use StrategyQA (Geva et al., 2021) and MuSiQue (Trivedi et al., 2022b), which require reasoning about entity knowledge, and GSM8k (Cobbe et al., 2021) and DROP (Dua et al., 2019) that evaluate arithmetic reasoning. In DROP we select examples that have numerical answers. We randomly sample 500 examples from the development set of each dataset (with the exception of StrategyQA, whose" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.887, + 0.199 + ], + "angle": 0, + "content": "test set has 229 examples), and use it for performance evaluation of UL2, Flan-PaLM-540B and Flan-PaLM-62B. For GPT-3, we use a subset of 250 examples of that set, due to cost. We use standard evaluation measures for every dataset (F1 in the case of MuSiQue). We provide data examples in Appendix A." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.213, + 0.887, + 0.376 + ], + "angle": 0, + "content": "Models. We evaluate the methods across four LMs: Flan-UL2-20B (Tay et al., 2023), GPT-3 (text-davinci-003) (Brown et al., 2020), Flan-PaLM-540B and Flan-PaLM-62B (Chung et al., 2022). We omit GPT-3 experiments on RARR and Interleaving due to cost. Importantly, our focus is not in comparing performance of these models, but to use them as samples of different model instances and training schemes against which to compare different TA strategies." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.389, + 0.886, + 0.631 + ], + "angle": 0, + "content": "Tools. We strictly use the same tools across all strategies, to ensure a fair comparison: Google Search (Press et al., 2023; Schick et al., 2023; Lewis et al., 2021) for knowledge tasks, and a calculator (Schick et al., 2023; Qin et al., 2023) for the calculation tasks. RARR, SelfAsk and Interleaving are designed for retrieval settings only, while Inline and Check & Fix can be used in all settings. 
For the retrieval settings using Google Search and Flan-PaLM-540B, we test retrieval with both the top 1 and top 5 tool-retrieved snippets: The two formats are designed to cover both cases where a shorter tool output may prevent the model's answer from degenerating, and a longer tool output may help the model with more relevant information." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.647, + 0.886, + 0.921 + ], + "angle": 0, + "content": "Few-shot demonstrations. In order to overcome bias from using demonstrations from prior work that were likely seen during training (Jacovi et al., 2023), we re-annotate prompts for all TA strategies, datasets and tools. We randomly sample 8 examples from each dataset's training set, and annotate each example with demonstrations for each TA strategy. Some of the strategies call the model multiple times with different prompts (e.g., Check & Fix, RARR), which requires separate annotations. This effort results in a total of 184 annotated demonstrations, which we release as a resource for future works on TA generation. From each set of 8 demonstrations, we then construct three separate prompts—3-shot, 5-shot and 7-shot—randomly sampled from the original 8 demonstrations, to get a better estimation of few-shot performance." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.528, + 0.942 + ], + "angle": 0, + "content": "13860" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.117, + 0.085, + 0.877, + 0.199 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.117, + 0.203, + 0.877, + 0.31 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.112, + 0.323, + 0.884, + 0.368 + ], + "angle": 0, + "content": "Figure 3: A comparison of evaluation scores across two areas (\\(\\S 5\\)): (a) No-tool baselines vs. TA strategies; (b) Tool usage via refinement of generated text vs. 
tool usage during generation, where the generated text contains tool arguments and is conditioned on tool outputs. The dark line marks the confidence interval among samples." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.38, + 0.331, + 0.396 + ], + "angle": 0, + "content": "5 Comparative Results" + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.405, + 0.416, + 0.42 + ], + "angle": 0, + "content": "Organization of the results. Due to the" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.421, + 0.489, + 0.5 + ], + "angle": 0, + "content": "Tool vs. no tool. Previous work that proposes TA strategies found that using such strategies consistently improves performance in comparison to no-tool baselines (Press et al., 2023; Jiang et al., 2023; Trivedi et al., 2022a, inter alia)." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.501, + 0.49, + 0.725 + ], + "angle": 0, + "content": "Figure 3 shows that the TA strategies do not improve performance over the no-tool baselines in our selection of datasets. The figure shows results against the average of the different few-shot scores, though we observe similar trends when using the maximum of scores as well. Full results are in Appendix B. Similarly to us, Gao et al. (2023a, §6.2) found that StrategyQA performance slightly decreased with tools in RARR compared to no-tool baselines for PaLM-540B (Chowdhery et al., 2022), and Jiang et al. (2023, §6.2) found that performance decreased on StrategyQA in two settings comparable to our implementations of Interleaving and Check & Fix with GPT-3." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.727, + 0.489, + 0.839 + ], + "angle": 0, + "content": "We conclude that for the settings in this work, the no-tool baselines are stronger than initially expected based on the literature. 
More research is required to investigate whether this relationship holds in other contexts, though we note that the datasets and models used in our experiments are common in TA research (Mialon et al., 2023)." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.84, + 0.49, + 0.919 + ], + "angle": 0, + "content": "Additionally, our experiments provide empirical justification for Recommendations (2) and (3) in §3. First, we find that the CoT and Inline baselines outperform each other at a roughly equal rate, and neither emerges as a clear winner. This shows" + }, + { + "type": "table", + "bbox": [ + 0.53, + 0.378, + 0.865, + 0.637 + ], + "angle": 0, + "content": "
Model | Dataset | Best strategy
GPT-3 | StrategyQA | Baseline-Inline
GPT-3 | DROP | Baseline-Inline
GPT-3 | GSM8K | Check & Fix
GPT-3 | MuSiQue | Inline
Flan-PaLM-540B | StrategyQA | Baseline-CoT
Flan-PaLM-540B | DROP | Baseline-Inline
Flan-PaLM-540B | GSM8K | Baseline-Inline
Flan-PaLM-540B | MuSiQue | RARR-Top5
Flan-UL2-20B | StrategyQA | Baseline-Inline
Flan-UL2-20B | DROP | Baseline-Inline
Flan-UL2-20B | GSM8K | Inline
Flan-UL2-20B | MuSiQue | Baseline-CoT
Flan-PaLM-62B | StrategyQA | Baseline-CoT
Flan-PaLM-62B | DROP | Baseline-CoT
Flan-PaLM-62B | GSM8K | Inline
Flan-PaLM-62B | MuSiQue | Check & Fix
" + }, + { + "type": "table_caption", + "bbox": [ + 0.508, + 0.646, + 0.884, + 0.747 + ], + "angle": 0, + "content": "Table 2: For each combination of dataset and model, we derive the best-performing strategy on the average score across the few-shot prompts. Notably, the best-performing strategy varies across different models, datasets or prompts, which means that it is necessary to evaluate over all axes to get a better estimation of general performance." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.775, + 0.885, + 0.92 + ], + "angle": 0, + "content": "that different baselines obtain different results, and so, relying on only a single baseline in evaluation does not necessarily provide a good estimation for no-tool performance (recommendation (2)). Also, the best-performing strategies vary significantly across models, which highlights the importance of using multiple models for evaluation (recommendation (3))—for illustration, we report the highest-performing strategies in each setting in Table 2, to" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.524, + 0.941 + ], + "angle": 0, + "content": "13861" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.215, + 0.082, + 0.784, + 0.272 + ], + "angle": 0, + "content": "
TA strategy | Prompt tokens (canonical) | Prompt tokens (empirical)
 |  | GPT-3 | Retrieval | GPT-3 | Calculator
Baseline | n | 353 | 353 | 1418 | 801
SelfAsk | t(n+kt+1/2) | 2281 | 1399 | - | -
SelfAskQA | t(2n+k) | 3589 | 2736 | - | -
Inline | t(n+kt+1/2) | 1793 | 1775 | 3453 | 1083
InlineQA | t(2n+k) | 3375 | 3672 | - | -
Check & fix | t(2n+k) | 3839 | 3547 | 7548 | 3647
RARR | 3n(t+1) | 4729 | - | -
Interleaving | t(n+kt+1/2) | 3221 | - | -
" + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.28, + 0.883, + 0.325 + ], + "angle": 0, + "content": "Table 3: Average number of prompt tokens per strategy (5-shot), with \\( n \\) as the CoT prompt length, \\( t \\) as the number of tool calls, \\( k \\) as the tool's output length. Flan-PaLM-540B has a shorter context window than GPT-3, which limits prompt length. The canonical formula for RARR favorably assumes a single verification question." + }, + { + "type": "table", + "bbox": [ + 0.215, + 0.336, + 0.784, + 0.525 + ], + "angle": 0, + "content": "
TA strategy | Answer tokens (canonical) | Answer tokens (empirical)
 |  | GPT-3 | Retrieval | GPT-3 | Calculator
Baseline | m | 44 | 42 | 58 | 88
SelfAsk | m | 20 | 72 | - | -
SelfAskQA | 2m | 59 | 64 | - | -
Inline | m | 103 | 248 | 62 | 102
InlineQA | 2m | 114 | 256 | - | -
Check & fix | 2m | 89 | 177 | 75 | 177
RARR | 3m | 181 | - | -
Interleaving | m | 72 | - | -
" + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.535, + 0.884, + 0.565 + ], + "angle": 0, + "content": "Table 4: Average number of answer tokens across the 5-shot experiments, for each strategy. The RARR formula assumes a single verification question per step." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.577, + 0.489, + 0.626 + ], + "angle": 0, + "content": "show that the overall conclusion can be distorted by choosing a particular model or strategy. Extended details are in Appendix B.1." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.636, + 0.49, + 0.813 + ], + "angle": 0, + "content": "Tool use during generation vs. post-generation refinement. In Figure 3 we compare the strategies that use tools during generation against the strategies that first generate an answer, and then use tools to improve the answer. For retrieval tasks, refinement clearly outperforms non-refinement strategies, but the same does not apply to the calculation tasks. We conjecture that planning calculations ahead of time during generation is more aligned with LM pretraining data, based on internet text, than planning retrieval queries in similar contexts." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.823, + 0.49, + 0.92 + ], + "angle": 0, + "content": "Token efficiency. TA strategies are typically evaluated in terms of task performance and properties such as factuality and logic correctness. We argue that computational cost is another important factor to consider. Specifically, we propose to evaluate token efficiency, that is, the amount of prompt tokens" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.577, + 0.885, + 0.77 + ], + "angle": 0, + "content": "and generated tokens, which have a direct effect on the cost of the TA strategy. Notably, the cost of a TA strategy depends on various variables, including model size, GPU type, caching optimizations, vocabulary size, beam search size, and so on. 
However, token counts can serve as a plausibly generic proxy for the purpose of comparing the cost of different TA strategies, as other factors are roughly equal across strategies, as long as the same models and tools are used. We consider prompt tokens and generated tokens separately, as they often have different consequences on cost.\\(^2\\)" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.772, + 0.884, + 0.854 + ], + "angle": 0, + "content": "Tables 3, 4 show both canonical and empirical comparisons across TA strategies with regards to token efficiency. The canonical comparison is a function of the relevant variables in the \"canonical\" setting where the model was expected to answer" + }, + { + "type": "page_footnote", + "bbox": [ + 0.508, + 0.87, + 0.883, + 0.919 + ], + "angle": 0, + "content": "2Depending on model architecture and quantity of times reusing the same prompt, prompt processing cost can be optimized, whereas the token generation cost varies with other factors such as vocabulary size." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "13862" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.49, + 0.295 + ], + "angle": 0, + "content": "the question perfectly, and use the tool perfectly as intended. Across all TA strategy experiments, we found no general correlation between token efficiency and performance. Concretely: (1) All TA strategies are significantly more expensive than the no-tool baselines by orders of magnitude, while not incurring an improvement worthy of this extra cost. Empirically, using tools in each case can incur extra costs by a factor of \\(5x\\) to \\(10x\\) for prompt processing, and \\(2x\\) to \\(5x\\) for generation. (2) The refinement strategies are more expensive than the no-refinement strategies. So while they improve performance for retrieval tasks, it comes at a cost." 
+ }, + { + "type": "title", + "bbox": [ + 0.114, + 0.306, + 0.308, + 0.323 + ], + "angle": 0, + "content": "6 Analytical Results" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.331, + 0.49, + 0.429 + ], + "angle": 0, + "content": "We discuss further analyses of our results, finding that (a) our observations generally hold across different levels of example difficulty, and (b) most prediction errors of tool-augmented LMs stem from incorrect inputs to the tool and bad outputs from it, and not from a lack of tool usage." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.439, + 0.31, + 0.455 + ], + "angle": 0, + "content": "6.1 Example Difficulty" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.459, + 0.49, + 0.62 + ], + "angle": 0, + "content": "It has been shown that LMs have difficulty solving problems involving long-tail entities (Kandpal et al., 2022; Mallen et al., 2022) and complex mathematical reasoning challenges (Mishra et al., 2022; Imani et al., 2023). Accordingly, we ablate the results from §5 along the following axes of example difficulty, in order to understand how tools can affect performance on difficult examples. We provide an overview of the trends here, and extended results are available in Appendix B." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.625, + 0.49, + 0.835 + ], + "angle": 0, + "content": "Measures of difficulty. We investigate the effectiveness of tool-usage across varying levels of example difficulty, which we approximate in two axes: (A) Long-tail entities (retrieval): Following Mallen et al. (2022), we extract the entities from the question and associated gold answers in StrategyQA and MuSiQue, and use the corresponding entity Wikipedia page views as a measure of popularity. (B) Large numbers (calculation): We segment the examples in the calculation tasks based on the range of the median and largest number in the example (question and gold solution in GSM8k, or question and context paragraph in DROP)."
+ }, + { + "type": "text", + "bbox": [ + 0.113, + 0.839, + 0.49, + 0.921 + ], + "angle": 0, + "content": "Results. Performance across increasing levels of entity popularity and computation complexity, with different LMs and TA strategies, is shown in Figure 4a and Figure 4b, respectively. We find that performance uniformly decreases for harder" + }, + { + "type": "text", + "bbox": [ + 0.506, + 0.085, + 0.885, + 0.327 + ], + "angle": 0, + "content": "examples in the retrieval setting for all models, but in the calculation setting, this only manifests for Flan-UL2-20B (implying that the larger models are more robust to the numerical ranges in GSM8K and DROP). Overall, in all cases tool use does not improve upon the baselines even when controlling for the harder cases where tools are expected to be more useful. This conclusion is aligned with our error analysis in §6.3, which shows that the common errors stem from incorrect tool arguments more than from correct tool arguments followed by incorrect inferences based on them. Flan-UL2 with a calculator is an exception, where tool use indeed helps, though more so on the easier examples, likely due to a higher rate of correct arguments to the calculator." + }, + { + "type": "title", + "bbox": [ + 0.508, + 0.337, + 0.719, + 0.353 + ], + "angle": 0, + "content": "6.2 Tool Usage Statistics" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.358, + 0.885, + 0.648 + ], + "angle": 0, + "content": "A possible explanation for the similar performance of the no-tool baselines could be a lack of tool usage. To check this, we aggregate usage over the different TA strategies, and find that the models indeed use tools in the majority of cases; \\(70\\% - 80\\%\\) in SelfAsk, and \\(>90\\%\\) in the others (see Appendix B). We also investigate usage across other axes, such as models and number of demonstrations, and find similar trends.
However, the datasets and tasks we investigate are designed to benefit from the tools in all cases, which shows that few-shot demonstrations are not always sufficient to induce tool use in models. In particular, the SelfAsk strategies exhibit the lowest tool use, being the strategies that use natural language to query whether to use the tool (the answer begins with \"Are follow up questions needed here:\", to which the model answers \"No\" in the cases where the tool is not used)." + }, + { + "type": "title", + "bbox": [ + 0.508, + 0.658, + 0.673, + 0.674 + ], + "angle": 0, + "content": "6.3 Error Analysis" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.678, + 0.885, + 0.921 + ], + "angle": 0, + "content": "We sampled 50 instances for which an error was made by the TA models, randomly across the 5-shot experiments, and sorted them into three categories: (A) incorrect tool input; (B) incorrect tool output; (C) incorrect model inferences based on correct tool usage. Error B applies only to the retrieval settings, where the retrieval tool (Google Search in our case) retrieved a wrong or irrelevant snippet. The errors were distributed at approximately \\(60\\%\\) (A), \\(10\\%\\) (B), and \\(30\\%\\) (C) in the retrieval setting, and \\(80\\%\\) (A) and \\(20\\%\\) (C) in the calculation setting. Li et al. (2023) reported an error analysis for tool assistance in dialogue customer-assistance settings, with similar conclusions regarding error A, although errors B and C do not apply in their" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.942 + ], + "angle": 0, + "content": "13863" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.121, + 0.085, + 0.874, + 0.335 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.112, + 0.348, + 0.884, + 0.406 + ], + "angle": 0, + "content": "Figure 4: We analyze performance of the strategies across two groups (no-tool baselines vs.
TA strategies), conditioned on example difficulty as defined by the existence of rare or common entities in the retrieval settings (via percentile of page views) and small or large numbers in the calculation settings (via percentile of numeric range). In (a), lower page views imply higher difficulty, and in (b), larger numbers imply higher difficulty." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.419, + 0.465, + 0.433 + ], + "angle": 0, + "content": "context, and other error types manifest instead." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.436, + 0.488, + 0.533 + ], + "angle": 0, + "content": "Our results suggest that the majority of errors are not due to incorrect tool responses (i.e., issues with Google Search as the choice of retriever); they are influenced more by incorrectly invoking the tools to begin with than by invoking them correctly but composing the solution incorrectly." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.55, + 0.392, + 0.567 + ], + "angle": 0, + "content": "7 Conclusions and Takeaways" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.58, + 0.49, + 0.805 + ], + "angle": 0, + "content": "We conduct a comprehensive assessment of few-shot tool augmentation strategies for LMs, covering hundreds of experiments with multiple LMs, datasets, and tools. Our experiments show that current tool-usage integration approaches are a false promise; prompting strategies that do not use tools typically obtain similar task performance, without the high cost of tool execution. Controlling for example difficulty, where tools are expected to provide the most benefit, does not explain away the relative strength of the no-tool baselines. Instead, the primary errors we observe are related to incorrect usage of the tools to begin with (i.e., generating incorrect arguments to the tool)."
+ }, + { + "type": "text", + "bbox": [ + 0.113, + 0.807, + 0.49, + 0.919 + ], + "angle": 0, + "content": "Our findings call for more robust evaluation of future TA strategies, primarily in more practical settings where models cannot be expected to solve tasks with their inherent abilities alone. To this end, our work provides concrete evaluation guidelines, such as employing stronger baselines and factoring in computation costs." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.417, + 0.615, + 0.433 + ], + "angle": 0, + "content": "Limitations" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.448, + 0.883, + 0.609 + ], + "angle": 0, + "content": "While our study aims to provide a comprehensive evaluation of TA strategies, there are some limitations. First, recent work (Dodge et al., 2021; Magar and Schwartz, 2022; OpenAI, 2023) suggests that examples from public datasets, like those used in our evaluation, may have leaked into the training data of recent LMs. Such contamination can bias the evaluation, for example by reducing the need for external tools. We are not aware of alternatives without this issue at the time of writing." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.612, + 0.884, + 0.74 + ], + "angle": 0, + "content": "Second, due to the high cost of executing large LMs in an exhaustive evaluation, we ran only a single experiment for each combination of TA strategy, model, dataset, and number of demonstrations. However, given the sensitivity of models to the demonstrations (Perez et al., 2021), future work should extend this evaluation to use multiple sets of demonstrations for each such combination." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.743, + 0.885, + 0.919 + ], + "angle": 0, + "content": "Last, while our findings show that non-tool models often perform on par with existing TA strategies, our setting favors tool usage.
For example, our tasks only require a single type of tool such that the model does not need to choose between multiple tools. Future work that investigates when and how tools can improve performance should consider more realistic evaluation settings, for example, by considering tasks where the model may need to use multiple types of tools together, or tasks where tools may sometimes give unhelpful answers." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "13864" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.116, + 0.085, + 0.214, + 0.099 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.116, + 0.108, + 0.49, + 0.228 + ], + "angle": 0, + "content": "Daniel Andor, Luheng He, Kenton Lee, and Emily Pitler. 2019. Giving BERT a calculator: Finding operations and arguments with reading comprehension. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5947-5952, Hong Kong, China. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.116, + 0.239, + 0.49, + 0.398 + ], + "angle": 0, + "content": "Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. CoRR, abs/2005.14165." + }, + { + "type": "ref_text", + "bbox": [ + 0.116, + 0.409, + 0.49, + 0.463 + ], + "angle": 0, + "content": "Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. 
Cohen. 2022. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks." + }, + { + "type": "ref_text", + "bbox": [ + 0.116, + 0.474, + 0.49, + 0.777 + ], + "angle": 0, + "content": "Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways." + }, + { + "type": "ref_text", + "bbox": [ + 0.116, + 0.787, + 0.49, + 0.92 + ], + "angle": 0, + "content": "Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. 
Le," + }, + { + "type": "list", + "bbox": [ + 0.116, + 0.108, + 0.49, + 0.92 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.529, + 0.086, + 0.882, + 0.113 + ], + "angle": 0, + "content": "and Jason Wei. 2022. Scaling instruction-finetuned language models." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.122, + 0.885, + 0.201 + ], + "angle": 0, + "content": "Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. CoRR, abs/2110.14168." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.211, + 0.885, + 0.33 + ], + "angle": 0, + "content": "Jesse Dodge, Maarten Sap, Ana Marasovic, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, and Matt Gardner. 2021. Documenting large webtext corpora: A case study on the colossal clean crawled corpus. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1286-1305, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.339, + 0.885, + 0.419 + ], + "angle": 0, + "content": "Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. Drop: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In North American Chapter of the Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.428, + 0.885, + 0.506 + ], + "angle": 0, + "content": "Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Y. Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, and Kelvin Guu. 2023a. Rarr: Researching and revising what language models say, using language models." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.516, + 0.885, + 0.569 + ], + "angle": 0, + "content": "Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023b. Pal: Program-aided language models." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.578, + 0.885, + 0.658 + ], + "angle": 0, + "content": "Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies. Transactions of the Association for Computational Linguistics (TACL)." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.667, + 0.885, + 0.709 + ], + "angle": 0, + "content": "Shibo Hao, Tianyang Liu, Zhen Wang, and Zhiting Hu. 2023. Toolkengpt: Augmenting frozen language models with massive tools via tool embeddings." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.717, + 0.885, + 0.758 + ], + "angle": 0, + "content": "Hangfeng He, Hongming Zhang, and Dan Roth. 2022. Rethinking with retrieval: Faithful large language model inference. arXiv preprint arXiv:2301.00303." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.766, + 0.885, + 0.808 + ], + "angle": 0, + "content": "Shima Imani, Liang Du, and Harsh Shrivastava. 2023. Mathprompter: Mathematical reasoning using large language models. arXiv preprint arXiv:2303.05398." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.816, + 0.885, + 0.87 + ], + "angle": 0, + "content": "Alon Jacovi, Avi Caciularu, Omer Goldman, and Yoav Goldberg. 2023. Stop uploading test data in plain text: Practical strategies for mitigating data contamination by evaluation benchmarks." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.878, + 0.885, + 0.92 + ], + "angle": 0, + "content": "Alon Jacovi, Guy Hadash, Einat Kermany, Boaz Carmeli, Ofer Lavi, George Kour, and Jonathan Berant. 2019. 
Neural network gradient-based learning" + }, + { + "type": "list", + "bbox": [ + 0.512, + 0.086, + 0.885, + 0.92 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.525, + 0.941 + ], + "angle": 0, + "content": "13865" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.135, + 0.086, + 0.489, + 0.139 + ], + "angle": 0, + "content": "of black-box function interfaces. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.151, + 0.487, + 0.204 + ], + "angle": 0, + "content": "Zhengbao Jiang, Frank F. Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Active retrieval augmented generation." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.216, + 0.486, + 0.269 + ], + "angle": 0, + "content": "Nikhil Kandpal, Haikang Deng, Adam Roberts, Eric Wallace, and Colin Raffel. 2022. Large language models struggle to learn long-tail knowledge. arXiv preprint arXiv:2211.08411." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.281, + 0.487, + 0.347 + ], + "angle": 0, + "content": "Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, and Matei Zaharia. 2023. Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive nlp." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.359, + 0.487, + 0.412 + ], + "angle": 0, + "content": "Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, and Ashish Sabharwal. 2023. Decomposed prompting: A modular approach for solving complex tasks." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.424, + 0.487, + 0.503 + ], + "angle": 0, + "content": "Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2021. Retrieval-augmented generation for knowledge-intensive nlp tasks." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.515, + 0.487, + 0.555 + ], + "angle": 0, + "content": "Minghao Li, Feifan Song, Bowen Yu, Haiyang Yu, Zhoujun Li, Fei Huang, and Yongbin Li. 2023. Api-bank: A benchmark for tool-augmented llms." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.567, + 0.487, + 0.607 + ], + "angle": 0, + "content": "Zhan Ling, Yunhao Fang, Xuanlin Li, Zhiao Huang, Mingu Lee, Roland Memisevic, and Hao Su. 2023. Deductive verification of chain-of-thought reasoning." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.619, + 0.487, + 0.672 + ], + "angle": 0, + "content": "Ruibo Liu, Jason Wei, Shixiang Shane Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, and Andrew M. Dai. 2022. Mind's eye: Grounded language model reasoning through simulation." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.684, + 0.487, + 0.736 + ], + "angle": 0, + "content": "Qing Lyu, Shreya Havaldar, Adam Stein, Li Zhang, Delip Rao, Eric Wong, Marianna Apidianaki, and Chris Callison-Burch. 2023. Faithful chain-of-thought reasoning." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.749, + 0.487, + 0.827 + ], + "angle": 0, + "content": "Inbal Magar and Roy Schwartz. 2022. Data contamination: From memorization to exploitation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 157-165, Dublin, Ireland. Association for Computational Linguistics." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.84, + 0.487, + 0.918 + ], + "angle": 0, + "content": "Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Hannaneh Hajishirzi, and Daniel Khashabi. 2022. When not to trust language models: Investigating effectiveness and limitations of parametric and non-parametric memories. arXiv preprint arXiv:2212.10511." + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.489, + 0.918 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.086, + 0.883, + 0.165 + ], + "angle": 0, + "content": "Grégoire Mialon, Roberto Dessi, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, and Thomas Scialom. 2023. Augmented language models: a survey." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.176, + 0.883, + 0.281 + ], + "angle": 0, + "content": "Swaroop Mishra, Arindam Mitra, Neeraj Varshney, Bhavdeep Sachdeva, Peter Clark, Chitta Baral, and Ashwin Kalyan. 2022. NumGLUE: A suite of fundamental yet challenging mathematical reasoning tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3505-3523, Dublin, Ireland. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.292, + 0.883, + 0.384 + ], + "angle": 0, + "content": "Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, and John Schulman. 2022. Webgpt: Browser-assisted question-answering with human feedback." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.395, + 0.883, + 0.459 + ], + "angle": 0, + "content": "Arvind Neelakantan, Quoc V. Le, Martin Abadi, Andrew McCallum, and Dario Amodei. 
2017. Learning a natural language interface with neural programmer. In International Conference on Learning Representations." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.471, + 0.883, + 0.576 + ], + "angle": 0, + "content": "Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernandez Abrego, Ji Ma, Vincent Zhao, Yi Luan, Keith Hall, Ming-Wei Chang, and Yinfei Yang. 2022. Large dual encoders are generalizable retrievers. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9844–9855, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.587, + 0.772, + 0.602 + ], + "angle": 0, + "content": "OpenAI. 2023. Gpt-4 technical report." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.612, + 0.883, + 0.677 + ], + "angle": 0, + "content": "Bhargavi Paranjape, Scott Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, and Marco Tulio Ribeiro. 2023. Art: Automatic multi-step reasoning and tool-use for large language models." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.689, + 0.883, + 0.716 + ], + "angle": 0, + "content": "Aaron Parisi, Yao Zhao, and Noah Fiedel. 2022. Talm: Tool augmented language models." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.727, + 0.883, + 0.766 + ], + "angle": 0, + "content": "Shishir G. Patil, Tianjun Zhang, Xin Wang, and Joseph E. Gonzalez. 2023. Gorilla: Large language model connected with massive apis." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.777, + 0.883, + 0.804 + ], + "angle": 0, + "content": "Ethan Perez, Douwe Kiela, and Kyunghyun Cho. 2021. True few-shot learning with language models." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.815, + 0.883, + 0.866 + ], + "angle": 0, + "content": "Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, and Mike Lewis. 2023. Measuring and narrowing the compositionality gap in language models." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.878, + 0.883, + 0.918 + ], + "angle": 0, + "content": "Jing Qian, Hong Wang, Zekun Li, Shiyang Li, and Xifeng Yan. 2022. Limitations of language models in arithmetic and symbolic induction." + }, + { + "type": "list", + "bbox": [ + 0.512, + 0.086, + 0.883, + 0.918 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.526, + 0.941 + ], + "angle": 0, + "content": "13866" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.118, + 0.086, + 0.49, + 0.242 + ], + "angle": 0, + "content": "Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, Yi Ren Fung, Yusheng Su, Huadong Wang, Cheng Qian, Runchu Tian, Kunlun Zhu, Shihao Liang, Xingyu Shen, Bokai Xu, Zhen Zhang, Yining Ye, Bowen Li, Ziwei Tang, Jing Yi, Yuzhang Zhu, Zhenning Dai, Lan Yan, Xin Cong, Yaxi Lu, Weilin Zhao, Yuxiang Huang, Junxi Yan, Xu Han, Xian Sun, Dahai Li, Jason Phang, Cheng Yang, Tongshuang Wu, Heng Ji, Zhiyuan Liu, and Maosong Sun. 2023. Tool learning with foundation models." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.253, + 0.489, + 0.306 + ], + "angle": 0, + "content": "Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to use tools." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.315, + 0.489, + 0.395 + ], + "angle": 0, + "content": "Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Jason Wei, Xuezhi Wang, Hyung Won Chung, Siamak Shakeri, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Denny Zhou, Neil Houlsby, and Donald Metzler. 2023. UL2: Unifying language learning paradigms." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.404, + 0.487, + 0.444 + ], + "angle": 0, + "content": "Andrew Trask, Felix Hill, Scott Reed, Jack Rae, Chris Dyer, and Phil Blunsom. 
2018. Neural arithmetic logic units." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.454, + 0.489, + 0.507 + ], + "angle": 0, + "content": "Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022a. Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.517, + 0.489, + 0.583 + ], + "angle": 0, + "content": "Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022b. MuSiQue: Multi-hop questions via single-hop question composition. Transactions of the Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.593, + 0.489, + 0.646 + ], + "angle": 0, + "content": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elicits reasoning in large language models." + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.49, + 0.646 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.525, + 0.941 + ], + "angle": 0, + "content": "13867" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.115, + 0.085, + 0.357, + 0.1 + ], + "angle": 0, + "content": "A Implementation Details" + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.112, + 0.356, + 0.127 + ], + "angle": 0, + "content": "A.1 Tool-Assisted Strategies." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.134, + 0.49, + 0.31 + ], + "angle": 0, + "content": "General Details. In all cases, if the tool invocation fails (e.g., with an ill-formatted calculation, or a null response from Google Search), the model is used to generate the tool's output instead. 
For all retrieval settings using Google Search, we test both Top-1 and Top-5 retrieval: the two formats are designed to cover both the case where a shorter tool output may prevent the model's answer from degenerating, and the case where a longer tool output may help the model with more relevant information. Illustrative examples of the data are available in Table 5." + }, + { + "type": "text", + "bbox": [ + 0.112, + 0.322, + 0.49, + 0.497 + ], + "angle": 0, + "content": "SelfAsk and SelfAskQA. SelfAsk involves decomposing each question into a series of simpler sub-questions, and calling the tool directly for each sub-question. The tool's output is inserted into the prompt as an intermediate answer. When the model generates a step that begins with the string \"So the answer is:\", it is expected to generate an answer that builds on the previous intermediate answers, which were tool outputs. In this work, we use Google Search as the tool, as in the original work by Press et al. (2023)." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.5, + 0.49, + 0.58 + ], + "angle": 0, + "content": "Our SelfAsk implementation reuses the original implementation by Press et al. (2023). Since SelfAsk is designed specifically for knowledge-based QA, we only evaluate this strategy on the knowledge tasks MuSiQue and StrategyQA." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.582, + 0.49, + 0.821 + ], + "angle": 0, + "content": "The SelfAskQA variant involves calling the model for each pair of sub-question and retrieved snippet that (hopefully) contains its answer. This method of recursively calling the model with a different prompt, as if it were another tool, is a technique proposed by Khot et al. (2023). We collect all sub-questions from the SelfAsk prompts in order to construct QA prompts (using the tool to retrieve supporting snippets). The model is called with the QA prompts in order to answer each sub-question based on its snippet.
The SelfAskQA variant in essence summarizes each Google Search snippet, which can be as long as a paragraph, into a short answer to the given sub-question, effectively simplifying and shortening the overall answer." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.823, + 0.49, + 0.919 + ], + "angle": 0, + "content": "Among the two SelfAsk implementations, neither decisively outperforms the other: SelfAskQA outperforms SelfAsk for GPT-3 and Flan-PaLM-62B on both MuSiQue and StrategyQA, but for Flan-PaLM-540B and Flan-UL2-20B the relationship flips." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.885, + 0.326 + ], + "angle": 0, + "content": "Inline and InlineQA. The Inline strategy format largely mimics the Toolformer format by Schick et al. (2023), but can also be cast into the ART framework by Paranjape et al. (2023) or the Decomposed Prompting framework by Khot et al. (2023). In general, the strategy simply calls for generating the tool call in a predefined format, in our case square brackets and the tool name. The tool is invoked with the arguments generated by the model inside the brackets, and the tool's output is inserted into the prompt. Our implementation is based on the inference code of Schick et al. (2023), although notably, we focus on few-shot usage and do not perform the tool-usage pretraining step that is the main focus of the referenced work." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.328, + 0.885, + 0.504 + ], + "angle": 0, + "content": "We implement two variants: Inline, which uses a tool called \"Search\" that appends the retrieved snippet or calculation output directly into the prompt, and InlineQA, which uses a tool called \"QA\" that calls the model with a separate prompt in order to summarize the retrieved snippet into a concise answer, identically to the aforementioned SelfAskQA variant.
As with the SelfAsk and SelfAskQA variants, among Inline and InlineQA on the knowledge-based tasks, neither consistently outperforms the other." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.518, + 0.884, + 0.744 + ], + "angle": 0, + "content": "Interleaving. The Interleaving Retrieval strategy (Trivedi et al., 2022a) proposes to use each reasoning step by the model in its CoT answer as a query to a retrieval model. The retrieved snippet is then added to the prompt in order to provide additional information to the model. The structure for each demonstration becomes: (1) all retrieved documents thus far; (2) the question; (3) the generated answer thus far (see Trivedi et al., 2022a for details). In this way, the tool is used heuristically without explicit demonstrations from the model, but the generation of the answer at each CoT step is still conditioned on tool usage in the previous steps." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.758, + 0.884, + 0.87 + ], + "angle": 0, + "content": "Check & Fix. We propose this strategy as a more lightweight variant of tool-based refinement than RARR, comparable to the contemporaneously proposed method of Jiang et al. (2023): after each CoT step, the step is checked for accuracy using a tool, and if found inaccurate, a new fixed step is generated to replace it." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.872, + 0.884, + 0.919 + ], + "angle": 0, + "content": "In the retrieval setting, each step is verified and fixed by prompting the model to classify whether the step is contradicted by the retrieved paragraphs," + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "13868" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.123, + 0.082, + 0.88, + 0.409 + ], + "angle": 0, + "content": "
Model | Dataset | Strategy | Example | Model Answer
GPT-3 | StrategyQA | Baseline-CoT | Question: Was a woman Prime Minister directly before or after Stanley Baldwin? Answer: No. | Stanley Baldwin was Prime Minister of the United Kingdom from 1923 to 1929. The woman Prime Minister directly before him was Margaret Thatcher, who served from 1979 to 1990. The woman Prime Minister directly after him was Theresa May, who served from 2016 to 2019. So the answer is no.
Flan-UL2 | MuSiQue | SelfAsk | Question: What administrative territorial entity contains the place where KPRM is licensed to broadcast to? Answer: Hubbard County. | Are follow up questions needed here: Yes. What is the place where KPRM is licensed to broadcast to? Intermediate answer: KPRM is licensed to broadcast to the city of Houston. Follow up: What administrative territorial entity contains the city of Houston? Intermediate answer: The city of Houston is in Texas state. Follow up: What is the administrative territorial entity that contains Texas state? Intermediate answer: Texas state is in the United States. So the final answer is: The United States
Flan-PaLM-62B | DROP | Check & Fix | Context: [omitted paragraph about the 2000 census] Question: How many households and families reside in the county according to the 2000 census? Answer: 26484 | 26484 households and 12068 families reside in the county. There were 40543 people, 15416 households, and 11068 families residing in the county. Thus, 26484 + 12068 = 38552. So the final answer is: 38552.
Flan-PaLM-540B | GSM8K | Inline | Question: How many girls are in the school if 40% of a school population is made up of 240 boys? Answer: 360 | If 40% of the school population is made up of 240 boys, then 1% is made up of [Calculator(240 / 40) -> 6.0] 6 boys. Therefore, the total school population is made up of [Calculator(6 * 100) -> 600] 600 students. If the school population is made up of 600 students, and 40% of them are boys, then the remaining 60% are girls, a total of [ Calculator(60 / 100) * 600 -> 360] 360 girls. So the final answer is: 360.
" + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.418, + 0.884, + 0.447 + ], + "angle": 0, + "content": "Table 5: Illustrative examples of various datasets, strategies and model outputs. The answers from the Interleaving, Check & Fix and RARR models are of the same format as the CoT baseline." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.472, + 0.489, + 0.634 + ], + "angle": 0, + "content": "and if so, to generate the fixed step based on demonstrations. In the calculation setting, each step is first heuristically checked for whether it contains a calculation, and if so, the calculation is inserted into the calculator tool, and the model is prompted to verify whether the tool output is consistent with the calculation in the text. If this is incorrect, the model generates the fixed step. In both cases, the answer generation continues where the fixed step completely replaces the original incorrect step." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.646, + 0.489, + 0.806 + ], + "angle": 0, + "content": "RARR. RARR (Retrofit Attribution using Research and Revision, Gao et al., 2023a) was proposed as a post processing method for refining any text, including LM chain-of-thought outputs. This is done via automatically finding attribution for each claim in the text, and post-editing the output to fix unsupported content while preserving the original output as much as possible. Our RARR implementation reuses the original implementation by Gao et al. (2023a)." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.808, + 0.489, + 0.84 + ], + "angle": 0, + "content": "The RARR process involves the following steps, with each considered as a separate tool:" + }, + { + "type": "text", + "bbox": [ + 0.13, + 0.856, + 0.488, + 0.92 + ], + "angle": 0, + "content": "1. Question Generation: First, they generate a series of questions that cover various aspects of a passage, referred to as passage x. 
The questions generated aim to verify and attribute" + }, + { + "type": "text", + "bbox": [ + 0.545, + 0.472, + 0.882, + 0.504 + ], + "angle": 0, + "content": "information from the passage. This is done via prompting the LM with few-shot examples." + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.512, + 0.884, + 0.592 + ], + "angle": 0, + "content": "2. Evidence Retrieval: For each generated question, the Google Search tool is utilized to retrieve the top-\\(k\\) passages that are related to the question. In this work, we evaluate both Top-1 and Top-5." + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.6, + 0.882, + 0.743 + ], + "angle": 0, + "content": "3. Evidence Ranking: The retrieved evidences are next ranked using a query-document relevance model scorer. Unlike the original RARR implementation (Gao et al., 2023a), which uses the GTR retrieval model (Ni et al., 2022), we instead implement the scorer via few-shot LM prompting, as suggested by the authors. The output of this stage is thus the top-1 ranked evidence." + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.752, + 0.882, + 0.848 + ], + "angle": 0, + "content": "4. Agreement Phase: Given a triplet of a text, question, and an evidence, this phase determines whether both the text and the question imply the same answer to the question. This is implemented via few-shot LM prompting using a chain-of-thought style prompt." + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.856, + 0.882, + 0.919 + ], + "angle": 0, + "content": "5. 
Editing Phase: If the previous Agreement Phase outputs disagreement between the text and the evidence, the (text, question, evidence) triplet is fed to a model that outputs a revised" + }, + { + "type": "list", + "bbox": [ + 0.524, + 0.512, + 0.884, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "13869" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.14, + 0.082, + 0.462, + 0.362 + ], + "angle": 0, + "content": "
Model | Dataset | Best baseline
GPT-3 | StrategyQA | Inline
GPT-3 | DROP | Inline
GPT-3 | GSM8K | CoT
GPT-3 | MuSiQue | Inline
Flan-UL2-20B | StrategyQA | Inline
Flan-UL2-20B | DROP | Inline
Flan-UL2-20B | GSM8K | CoT
Flan-UL2-20B | MuSiQue | CoT
Flan-PaLM-540B | StrategyQA | CoT
Flan-PaLM-540B | DROP | Inline
Flan-PaLM-540B | GSM8K | Inline
Flan-PaLM-540B | MuSiQue | CoT
Flan-PaLM-62B | StrategyQA | CoT
Flan-PaLM-62B | DROP | CoT
Flan-PaLM-62B | GSM8K | Inline
Flan-PaLM-62B | MuSiQue | CoT
" + }, + { + "type": "table_caption", + "bbox": [ + 0.114, + 0.371, + 0.49, + 0.442 + ], + "angle": 0, + "content": "Table 6: For each combination of dataset and model, we derive the best-performing baseline on the average score across the few-shot experiments. There is no clear winner: Two of the baselines achieve the best score in \\(50\\%\\) of cases." + }, + { + "type": "text", + "bbox": [ + 0.15, + 0.468, + 0.49, + 0.613 + ], + "angle": 0, + "content": "version of the text, considering the discrepancy between the previous text and the evidence. This is implemented via few-shot LM prompting using a similar chain-of-thought style prompt from the previous stage (see Gao et al., 2023a for the exact prompting template). The agreement and editing phases run iteratively until there are no needed revisions, detected in the Agreement Phase." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.624, + 0.239, + 0.637 + ], + "angle": 0, + "content": "A.2 Baselines" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.644, + 0.49, + 0.789 + ], + "angle": 0, + "content": "Chain-of-Thought. The CoT baseline is the standard baseline proposed by Wei et al. (2023) and implemented as a baseline by Press et al. (2023); Paranjape et al. (2023), inter alia. Often, the demonstrations used for this baseline are those originally published by Wei et al. (2023). In this work we annotate a new sample of examples with CoT answers for the purpose of a better estimation of CoT few-shot performance, and release our annotations." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.798, + 0.49, + 0.879 + ], + "angle": 0, + "content": "Self-Ask. The Self-Ask baseline uses the Self-Ask tool demonstrations, but does not invoke the tool after each \"Follow up:\" call, and instead generates the entire answer. This is the original no-tool baseline in Press et al. (2023)." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.888, + 0.489, + 0.919 + ], + "angle": 0, + "content": "Inline. 
The Inline baseline uses the Inline tool demonstrations, but does not invoke the tool after" + }, + { + "type": "table", + "bbox": [ + 0.589, + 0.082, + 0.805, + 0.177 + ], + "angle": 0, + "content": "
Model | Usage (%)
Flan-PaLM-540B | 70.9
Flan-PaLM-62B | 80.6
Flan-UL2-20B | 82.6
GPT-3 | 95.1
" + }, + { + "type": "table_caption", + "bbox": [ + 0.509, + 0.186, + 0.884, + 0.215 + ], + "angle": 0, + "content": "Table 7: Note that RARR and Interleaving are guaranteed to use tools so they are omitted." + }, + { + "type": "table", + "bbox": [ + 0.603, + 0.228, + 0.792, + 0.34 + ], + "angle": 0, + "content": "
Strategy | Usage (%)
Check & Fix | 92.9
SelfAsk | 80.4
SelfAskQA | 72.8
Inline | 99.9
InlineQA | 96.1
" + }, + { + "type": "table_caption", + "bbox": [ + 0.508, + 0.349, + 0.884, + 0.391 + ], + "angle": 0, + "content": "Table 8: Overview of average rate of tool usage across experiments. Note that RARR and Interleaving are guaranteed to use tools." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.418, + 0.884, + 0.466 + ], + "angle": 0, + "content": "each tool call, and instead generates the entire answer. This is the original no-tool baseline in Schick et al. (2023)." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.48, + 0.698, + 0.494 + ], + "angle": 0, + "content": "B Extended Results" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.506, + 0.884, + 0.555 + ], + "angle": 0, + "content": "We provide the full results for our experiments (described in §4) in §B.1, and further analysis of TA strategy performance and tool usage in §B.2." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.566, + 0.75, + 0.581 + ], + "angle": 0, + "content": "B.1 Full Experiment Results" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.587, + 0.884, + 0.699 + ], + "angle": 0, + "content": "Tables 9, 10 detail our experiment results. Tables 11, 12, 13, 14 detail average and max aggregations over the few-shot prompts. As mentioned, we sample 500 examples for Flan-PaLM-62B , FlanPaLM-540B and Flan-UL2-20B experiments, and 250 for GPT-3 experiments, with the exception of StrategyQA whose test set has 229 examples." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.7, + 0.884, + 0.828 + ], + "angle": 0, + "content": "For DROP and MuSiQue, we report the F1 measures using the evaluation scripts provided by Dua et al. (2019); Trivedi et al. (2022b) respectively. For GSM8K, we normalize the numerical answers and measure exact-match. For StrategyQA, we normalize the answers (for capitalization, prefix and suffix punctuation, and so on) and measure exact-match to \"yes\" and \"no\"." 
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.84, + 0.884, + 0.919 + ], + "angle": 0, + "content": "Best-performing strategies and baselines in each setting. In Tables 2, 6 we show the best-performing baseline and best-performing general strategy for each setting of model and dataset, among the average scores across the three few-shot" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "13870" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.49, + 0.146 + ], + "angle": 0, + "content": "experiments. For strategies in general (Table 2), we see that the winning strategies vary significantly for different models, which supports Guideline (3) in Table 1." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.15, + 0.49, + 0.277 + ], + "angle": 0, + "content": "The distribution among the baselines is split \\(50\\% - 50\\%\\) among CoT and Inline. When considering each few-shot experiment separately (i.e., not taking the average), the distribution is \\(60.0\\%\\), \\(37.5\\%\\), and \\(2\\%\\) for Baseline-CoT, Baseline-Inline and Baseline-SelfAsk respectively for which baseline achieves the best-performing score. This supports Guideline (2) in Table 1." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.289, + 0.232, + 0.304 + ], + "angle": 0, + "content": "B.2 Analysis" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.31, + 0.49, + 0.518 + ], + "angle": 0, + "content": "Example Difficulty. Figures 5, 6 show extended results for the example difficulty analyses in §6. Here we consider the median of each difficulty metric—i.e., the difficulty across all entities or numbers in the example—rather than the minimum or maximum, as well as the ablation of refinement strategies against no-refinement strategies. 
We additionally checked for two alternative axes: operation complexity (addition and subtraction as “easy” examples, and multiplication and division as “hard” examples) and popularity measured by links rather than by views. The trends we observe in the main paper hold in all of these cases." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.528, + 0.49, + 0.607 + ], + "angle": 0, + "content": "Tool Usage. Tables 7, 8 show aggregate tool usage percentages over multiple axes. Overall, few-shot demonstrations induce tool usage in the majority of cases, though not completely so (i.e., below \(100\%\))." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.524, + 0.941 + ], + "angle": 0, + "content": "13871" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.145, + 0.265, + 0.85, + 0.37 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.144, + 0.376, + 0.85, + 0.48 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.144, + 0.486, + 0.85, + 0.584 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.144, + 0.589, + 0.85, + 0.689 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.703, + 0.886, + 0.748 + ], + "angle": 0, + "content": "Figure 5: An extension of Table 3 with results for both the average across few-shot experiments (a-b) and the maximum across few-shot experiments (c-d)—i.e., the maximum between 3-shot, 5-shot and 7-shot for each experiment setting." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.527, + 0.942 + ], + "angle": 0, + "content": "13872" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.159, + 0.094, + 0.843, + 0.852 + ], + "angle": 0, + "content": "
Strategy | Model | MuSiQue 3-shot | MuSiQue 5-shot | MuSiQue 7-shot | StrategyQA 3-shot | StrategyQA 5-shot | StrategyQA 7-shot
RARR | Flan-PaLM-540B | 34.86 | 35.09 | 34.14 | 80.35 | 81.22 | 80.79
RARR | Flan-UL2-20B | 13.40 | 12.01 | 12.98 | 55.90 | 40.17 | 42.79
RARR | Flan-PaLM-62B | 23.60 | 23.42 | 24.07 | 75.98 | 77.73 | 77.73
Baseline-CoT | Flan-PaLM-540B | 33.07 | 33.36 | 33.80 | 79.91 | 84.28 | 82.10
Baseline-CoT | Flan-UL2-20B | 15.14 | 16.50 | 16.10 | 67.25 | 71.62 | 72.05
Baseline-CoT | GPT-3 | 27.37 | 29.31 | 30.25 | 70.74 | 71.62 | 71.62
Baseline-CoT | Flan-PaLM-62B | 23.60 | 23.42 | 24.27 | 75.98 | 79.04 | 80.35
Baseline-SelfAsk | Flan-PaLM-540B | 25.80 | 25.34 | 24.31 | 76.86 | 73.36 | 75.55
Baseline-SelfAsk | Flan-UL2-20B | 11.40 | 11.52 | 11.52 | 34.06 | 48.47 | 53.71
Baseline-SelfAsk | GPT-3 | 27.98 | 28.13 | 29.80 | 72.05 | 74.24 | 73.36
Baseline-SelfAsk | Flan-PaLM-62B | 5.28 | 9.52 | 5.43 | 58.95 | 75.98 | 74.24
Baseline-Inline | Flan-PaLM-540B | 30.39 | 30.71 | 31.19 | 71.62 | 79.91 | 72.49
Baseline-Inline | Flan-UL2-20B | 13.66 | 13.33 | 9.74 | 72.05 | 68.56 | 71.18
Baseline-Inline | GPT-3 | 29.11 | 30.33 | 28.15 | 70.31 | 75.98 | 78.60
Baseline-Inline | Flan-PaLM-62B | 23.42 | 22.69 | 21.86 | 75.11 | 73.36 | 75.55
SelfAsk | Flan-PaLM-540B | 20.02 | 23.14 | 23.26 | 71.62 | 71.18 | 73.80
SelfAsk | Flan-UL2-20B | 11.86 | 7.68 | 7.41 | 49.78 | 25.76 | 23.14
SelfAsk | GPT-3 | 24.38 | 24.15 | 22.33 | 64.19 | 67.25 | 65.94
SelfAsk | Flan-PaLM-62B | 13.79 | 14.80 | 12.68 | 67.25 | 67.69 | 66.38
SelfAskQA | Flan-PaLM-540B | 21.08 | 21.92 | 22.91 | 71.62 | 69.43 | 73.80
SelfAskQA | Flan-UL2-20B | 8.53 | 5.35 | 2.30 | 47.16 | 17.03 | 11.79
SelfAskQA | GPT-3 | 32.74 | 31.30 | 30.34 | 65.50 | 67.69 | 70.31
SelfAskQA | Flan-PaLM-62B | 15.42 | 17.49 | 14.51 | 67.25 | 68.12 | 69.00
InlineQA | Flan-PaLM-540B | 31.86 | 32.78 | 32.10 | 70.31 | 72.93 | 73.36
InlineQA | Flan-UL2-20B | 18.07 | 17.94 | 1.56 | 71.18 | 70.31 | 56.77
InlineQA | GPT-3 | 34.90 | 36.65 | 31.32 | 70.31 | 72.05 | 70.31
InlineQA | Flan-PaLM-62B | 12.52 | 11.65 | 10.55 | 61.14 | 63.32 | 61.57
Check & Fix | Flan-PaLM-540B | 30.73 | 33.17 | 33.48 | 80.35 | 80.79 | 78.17
Check & Fix | Flan-UL2-20B | 10.90 | 11.77 | 13.52 | 52.40 | 60.70 | 69.87
Check & Fix | GPT-3 | 29.66 | 32.95 | 32.26 | 72.05 | 73.80 | 70.74
Check & Fix | Flan-PaLM-62B | 25.21 | 26.39 | 26.47 | 75.55 | 71.18 | 76.42
Inline | Flan-PaLM-540B | 18.97 | 24.42 | 22.61 | 74.24 | 74.24 | 75.11
Inline | Flan-UL2-20B | 14.70 | 14.93 | 14.78 | 48.47 | 52.84 | 44.98
Inline | GPT-3 | 28.85 | 31.03 | 33.54 | 70.31 | 69.43 | 68.56
Inline | Flan-PaLM-62B | 9.95 | 9.45 | 13.32 | 54.59 | 68.56 | 70.31
Interleaving | Flan-PaLM-540B | 23.71 | 21.29 | 20.51 | 76.86 | 78.60 | 75.98
Interleaving | Flan-PaLM-62B | 23.43 | 23.71 | 24.42 | 74.67 | 71.62 | 74.24
RARR-Top5 | Flan-PaLM-540B | 36.12 | 35.40 | 35.44 | 80.35 | 79.91 | 79.91
SelfAskQA-Top5 | Flan-PaLM-540B | 19.75 | 21.60 | 21.99 | 69.87 | 70.31 | 72.05
Inline-Top5 | Flan-PaLM-540B | 32.67 | 34.53 | 31.69 | 65.50 | 77.73 | 72.93
Check & Fix-Top5 | Flan-PaLM-540B | 31.74 | 32.68 | 33.87 | 78.60 | 81.66 | 81.22
" + }, + { + "type": "table_footnote", + "bbox": [ + 0.117, + 0.865, + 0.882, + 0.905 + ], + "angle": 0, + "content": "Table 9: Results for the knowledge-retrieval tasks of MuSiQue and StrategyQA. MuSiQue scores are F1 scores. Missing cells, such as \"Interleaving\" with Flan-UL2-20B, are experiments where the model failed to converge to an answer." + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.929, + 0.524, + 0.941 + ], + "angle": 0, + "content": "13873" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.159, + 0.086, + 0.842, + 0.429 + ], + "angle": 0, + "content": "
Strategy | Model | DROP 3-shot | DROP 5-shot | DROP 7-shot | GSM8K 3-shot | GSM8K 5-shot | GSM8K 7-shot
Baseline-CoT | Flan-PaLM-540B | 77.2 | 75.0 | 74.2 | 67.4 | 70.8 | 70.8
Baseline-CoT | Flan-UL2-20B | n/a | n/a | n/a | 7.2 | 27.2 | 26.2
Baseline-CoT | GPT-3 | 57.6 | 55.6 | 55.6 | 58.8 | 58.0 | 58.4
Baseline-CoT | Flan-PaLM-62B | 65.6 | 63.6 | 59.2 | 47.4 | 46.2 | 47.4
Baseline-Inline | Flan-PaLM-540B | 77.8 | 75.6 | 74.4 | 69.8 | 72.6 | 71.2
Baseline-Inline | Flan-UL2-20B | n/a | n/a | n/a | 3.6 | 5.6 | 3.6
Baseline-Inline | GPT-3 | 57.6 | 66.0 | 59.6 | 51.6 | 54.0 | 53.2
Baseline-Inline | Flan-PaLM-62B | 59.0 | 64.0 | 59.2 | 48.8 | 47.8 | 48.0
Inline | Flan-PaLM-540B | 76.2 | 75.2 | 74.4 | 61.4 | 61.8 | 70.6
Inline | Flan-UL2-20B | n/a | n/a | n/a | 26.6 | 26.2 | 26.0
Inline | GPT-3 | 56.8 | 66.0 | 45.2 | 50.8 | 52.4 | 52.8
Inline | Flan-PaLM-62B | 57.0 | 64.0 | 57.8 | 48.8 | 47.8 | 48.2
Check & Fix | Flan-PaLM-540B | 76.0 | 73.6 | 45.0 | 68.4 | 70.4 | 70.2
Check & Fix | Flan-UL2-20B | n/a | n/a | n/a | 23.2 | 25.8 | 23.2
Check & Fix | GPT-3 | 54.8 | 54.4 | 54.8 | 56.0 | 58.4 | 61.6
Check & Fix | Flan-PaLM-62B | 65.0 | 63.6 | 44.2 | 46.8 | 44.0 | 46.6
" + }, + { + "type": "table_caption", + "bbox": [ + 0.114, + 0.439, + 0.884, + 0.469 + ], + "angle": 0, + "content": "Table 10: Results for the calculator settings of DROP and GSM8K. We omit Flan-UL2-20B results on DROP, as the model could not converge to solve the task with our prompts, likely since each example in this task is very long." + }, + { + "type": "table", + "bbox": [ + 0.233, + 0.485, + 0.769, + 0.886 + ], + "angle": 0, + "content": "
Strategy | Aggregation | Model | MuSiQue | StrategyQA
Baseline-CoT | Max | GPT-3 | 30.2 | 71.6
Baseline-CoT | Average | GPT-3 | 29.0 | 71.3
Baseline-CoT | Max | Flan-UL2-20B | 16.5 | 72.1
Baseline-CoT | Average | Flan-UL2-20B | 15.9 | 70.3
Baseline-CoT | Max | Flan-PaLM-62B | 24.3 | 80.3
Baseline-CoT | Average | Flan-PaLM-62B | 23.8 | 78.5
Baseline-CoT | Max | Flan-PaLM-540B | 33.8 | 84.3
Baseline-CoT | Average | Flan-PaLM-540B | 33.4 | 82.1
Baseline-SelfAsk | Max | GPT-3 | 29.8 | 74.2
Baseline-SelfAsk | Average | GPT-3 | 28.6 | 73.2
Baseline-SelfAsk | Max | Flan-UL2-20B | 11.5 | 53.7
Baseline-SelfAsk | Average | Flan-UL2-20B | 11.5 | 45.4
Baseline-SelfAsk | Max | Flan-PaLM-62B | 9.5 | 76.0
Baseline-SelfAsk | Average | Flan-PaLM-62B | 6.7 | 69.7
Baseline-SelfAsk | Max | Flan-PaLM-540B | 25.8 | 76.9
Baseline-SelfAsk | Average | Flan-PaLM-540B | 25.1 | 75.3
Baseline-Inline | Max | GPT-3 | 30.3 | 78.6
Baseline-Inline | Average | GPT-3 | 29.2 | 75.0
Baseline-Inline | Max | Flan-UL2-20B | 13.7 | 72.1
Baseline-Inline | Average | Flan-UL2-20B | 12.2 | 70.6
Baseline-Inline | Max | Flan-PaLM-62B | 23.4 | 75.5
Baseline-Inline | Average | Flan-PaLM-62B | 22.7 | 74.7
Baseline-Inline | Max | Flan-PaLM-540B | 31.2 | 79.9
Baseline-Inline | Average | Flan-PaLM-540B | 30.8 | 74.7
" + }, + { + "type": "table_caption", + "bbox": [ + 0.227, + 0.896, + 0.77, + 0.912 + ], + "angle": 0, + "content": "Table 11: Aggregations by few-shot prompt of the results in Table 9 (basiines)." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "13874" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.151, + 0.129, + 0.844, + 0.825 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.838, + 0.885, + 0.882 + ], + "angle": 0, + "content": "Figure 6: An extension of Table 4. (a-b) refer to taking the minimum of entity page views to ablate examples that have rare entities, and maximum of numbers to ablate examples with large numbers. (c-e) take the median in both cases, and (f) shows the results when comparing TA strategies between refinement and non-refinement types." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.525, + 0.941 + ], + "angle": 0, + "content": "13875" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.234, + 0.304, + 0.768, + 0.668 + ], + "angle": 0, + "content": "
Strategy | Aggregation | Model | MuSiQue | StrategyQA
Interleaving | Max | Flan-PaLM-62B | 24.4 | 74.7
Interleaving | Average | Flan-PaLM-62B | 23.9 | 73.9
Interleaving | Max | Flan-PaLM-540B | 23.7 | 78.2
Interleaving | Average | Flan-PaLM-540B | 21.8 | 77.0
RARR | Max | Flan-UL2-20B | 13.4 | 55.9
RARR | Average | Flan-UL2-20B | 12.8 | 46.3
RARR | Max | Flan-PaLM-62B | 24.1 | 77.7
RARR | Average | Flan-PaLM-62B | 23.7 | 77.1
RARR | Max | Flan-PaLM-540B | 35.1 | 81.2
RARR | Average | Flan-PaLM-540B | 34.7 | 80.6
RARR-Top5 | Max | Flan-PaLM-540B | 36.1 | 80.3
RARR-Top5 | Average | Flan-PaLM-540B | 35.7 | 80.1
Check & Fix | Max | GPT-3 | 32.9 | 73.8
Check & Fix | Average | GPT-3 | 31.6 | 72.2
Check & Fix | Max | Flan-UL2-20B | 13.5 | 69.9
Check & Fix | Average | Flan-UL2-20B | 12.1 | 61.0
Check & Fix | Max | Flan-PaLM-62B | 26.5 | 76.4
Check & Fix | Average | Flan-PaLM-62B | 26.0 | 74.4
Check & Fix | Max | Flan-PaLM-540B | 33.5 | 80.8
Check & Fix | Average | Flan-PaLM-540B | 32.3 | 79.6
Check & Fix-Top5 | Max | Flan-PaLM-540B | 33.9 | 81.7
Check & Fix-Top5 | Average | Flan-PaLM-540B | 32.8 | 80.5
" + }, + { + "type": "table_caption", + "bbox": [ + 0.214, + 0.678, + 0.781, + 0.694 + ], + "angle": 0, + "content": "Table 12: Aggregations by few-shot prompt of the results in Table 9 (TA strategies)." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.525, + 0.941 + ], + "angle": 0, + "content": "13876" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.198, + 0.152, + 0.806, + 0.821 + ], + "angle": 0, + "content": "
Strategy | Aggregation | Model | MuSiQue | StrategyQA
SelfAsk | Max | GPT-3 | 24.4 | 67.2
SelfAsk | Average | GPT-3 | 23.6 | 65.8
SelfAsk | Max | Flan-UL2-20B | 11.9 | 49.8
SelfAsk | Average | Flan-UL2-20B | 9.0 | 32.9
SelfAsk | Max | Flan-PaLM-62B | 14.8 | 67.7
SelfAsk | Average | Flan-PaLM-62B | 13.8 | 67.1
SelfAsk | Average | Flan-PaLM-540B | 22.3 | 72.2
SelfAsk | Max | Flan-PaLM-540B | 23.4 | 74.2
SelfAskQA | Max | GPT-3 | 32.7 | 70.3
SelfAskQA | Average | GPT-3 | 31.5 | 67.8
SelfAskQA | Max | Flan-UL2-20B | 8.5 | 47.2
SelfAskQA | Average | Flan-UL2-20B | 5.4 | 25.3
SelfAskQA | Max | Flan-PaLM-62B | 17.5 | 69.0
SelfAskQA | Average | Flan-PaLM-62B | 15.8 | 68.1
SelfAskQA | Max | Flan-PaLM-540B | 22.8 | 75.1
SelfAskQA | Average | Flan-PaLM-540B | 21.9 | 71.9
SelfAskQA-Top5 | Max | Flan-PaLM-540B | 22.0 | 72.1
SelfAskQA-Top5 | Average | Flan-PaLM-540B | 21.1 | 70.7
InlineQA | Max | GPT-3 | 36.7 | 72.1
InlineQA | Average | GPT-3 | 34.3 | 70.9
InlineQA | Max | Flan-UL2-20B | 18.1 | 71.2
InlineQA | Average | Flan-UL2-20B | 12.5 | 66.1
InlineQA | Max | Flan-PaLM-62B | 12.5 | 63.3
InlineQA | Average | Flan-PaLM-62B | 11.6 | 62.0
InlineQA | Max | Flan-PaLM-540B | 32.4 | 73.4
InlineQA | Average | Flan-PaLM-540B | 32.1 | 72.2
Inline | Max | GPT-3 | 33.5 | 70.3
Inline | Average | GPT-3 | 31.1 | 69.4
Inline | Max | Flan-UL2-20B | 14.9 | 52.8
Inline | Average | Flan-UL2-20B | 14.8 | 48.8
Inline | Max | Flan-PaLM-62B | 13.3 | 70.3
Inline | Average | Flan-PaLM-62B | 10.9 | 64.5
Inline | Max | Flan-PaLM-540B | 24.3 | 74.7
Inline | Average | Flan-PaLM-540B | 22.0 | 74.2
InlineQA-Top5 | Max | Flan-PaLM-540B | 34.5 | 77.7
InlineQA-Top5 | Average | Flan-PaLM-540B | 33.0 | 72.1
" + }, + { + "type": "table_caption", + "bbox": [ + 0.214, + 0.83, + 0.781, + 0.845 + ], + "angle": 0, + "content": "Table 13: Aggregations by few-shot prompt of the results in Table 9 (TA strategies)." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.525, + 0.941 + ], + "angle": 0, + "content": "13877" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.234, + 0.194, + 0.766, + 0.778 + ], + "angle": 0, + "content": "
Strategy | Aggregation | Model | DROP | GSM8K
Baseline-CoT | Max | GPT-3 | 57.6 | 58.8
Baseline-CoT | Average | GPT-3 | 56.3 | 58.4
Baseline-CoT | Max | Flan-UL2-20B | n/a | 27.2
Baseline-CoT | Average | Flan-UL2-20B | n/a | 20.2
Baseline-CoT | Max | Flan-PaLM-62B | 65.6 | 47.4
Baseline-CoT | Average | Flan-PaLM-62B | 62.8 | 47.0
Baseline-CoT | Max | Flan-PaLM-540B | 77.2 | 70.8
Baseline-CoT | Average | Flan-PaLM-540B | 75.5 | 69.7
Baseline-Inline | Max | GPT-3 | 66.0 | 54.0
Baseline-Inline | Average | GPT-3 | 61.1 | 52.9
Baseline-Inline | Max | Flan-UL2-20B | 9.2 | 5.6
Baseline-Inline | Average | Flan-UL2-20B | 4.2 | 4.3
Baseline-Inline | Max | Flan-PaLM-62B | 64.0 | 48.8
Baseline-Inline | Average | Flan-PaLM-62B | 60.7 | 48.2
Baseline-Inline | Max | Flan-PaLM-540B | 77.8 | 72.6
Baseline-Inline | Average | Flan-PaLM-540B | 75.9 | 71.2
Check & Fix | Max | GPT-3 | 54.8 | 61.6
Check & Fix | Average | GPT-3 | 54.7 | 58.7
Check & Fix | Max | Flan-UL2-20B | n/a | 25.8
Check & Fix | Average | Flan-UL2-20B | n/a | 24.1
Check & Fix | Max | Flan-PaLM-62B | 65.0 | 46.8
Check & Fix | Average | Flan-PaLM-62B | 57.6 | 45.8
Check & Fix | Max | Flan-PaLM-540B | 76.0 | 70.4
Check & Fix | Average | Flan-PaLM-540B | 64.9 | 69.7
Inline | Max | GPT-3 | 66.0 | 52.8
Inline | Average | GPT-3 | 56.0 | 52.0
Inline | Max | Flan-UL2-20B | n/a | 26.6
Inline | Average | Flan-UL2-20B | n/a | 26.3
Inline | Max | Flan-PaLM-62B | 64.0 | 48.8
Inline | Average | Flan-PaLM-62B | 59.6 | 48.3
Inline | Max | Flan-PaLM-540B | 76.2 | 70.8
Inline | Average | Flan-PaLM-540B | 75.3 | 64.5
" + }, + { + "type": "table_caption", + "bbox": [ + 0.262, + 0.789, + 0.735, + 0.804 + ], + "angle": 0, + "content": "Table 14: Aggregations by few-shot prompt of the results in Table 10." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.525, + 0.941 + ], + "angle": 0, + "content": "13878" + } + ] +] \ No newline at end of file diff --git a/2023/A Comprehensive Evaluation of Tool-Assisted Generation Strategies/6b3b9095-80bb-4832-9b1c-9b30dcb51c14_origin.pdf b/2023/A Comprehensive Evaluation of Tool-Assisted Generation Strategies/6b3b9095-80bb-4832-9b1c-9b30dcb51c14_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0bed6add9416960711c46090a70756b8184f069d --- /dev/null +++ b/2023/A Comprehensive Evaluation of Tool-Assisted Generation Strategies/6b3b9095-80bb-4832-9b1c-9b30dcb51c14_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61c3e70a06cb04117a78b57ef026207d61c18754cfb714268ad7014278a7f1dd +size 817302 diff --git a/2023/A Comprehensive Evaluation of Tool-Assisted Generation Strategies/full.md b/2023/A Comprehensive Evaluation of Tool-Assisted Generation Strategies/full.md new file mode 100644 index 0000000000000000000000000000000000000000..3e4982d52079d6077d05071bb9c688fd46a95c50 --- /dev/null +++ b/2023/A Comprehensive Evaluation of Tool-Assisted Generation Strategies/full.md @@ -0,0 +1,372 @@ +# A Comprehensive Evaluation of Tool-Assisted Generation Strategies + +Alon Jacovi $^{1*}$ Avi Caciularu $^{2}$ Jonathan Herzig $^{2}$ + +Roee Aharoni² Bernd Bohnet³ Mor Geva³ + +1Bar Ilan University 2Google Research 3Google DeepMind alonjacovi@gmail.com + +# Abstract + +A growing area of research investigates augmenting language models with tools (e.g., search engines, calculators) to overcome their shortcomings (e.g., missing or incorrect knowledge, incorrect logical inferences). Various few-shot tool-usage strategies have been proposed. 
However, there is no systematic and fair comparison across different strategies, or between these strategies and strong baselines that do not leverage tools. We conduct an extensive empirical analysis, finding that (1) across various datasets, example difficulty levels, and models, strong no-tool baselines are competitive to tool-assisted strategies, implying that effectively using tools with in-context demonstrations is a difficult unsolved problem; (2) for knowledge-retrieval tasks, strategies that refine incorrect outputs with tools outperform strategies that retrieve relevant information ahead of or during generation; (3) tool-assisted strategies are expensive in the number of tokens they require to work—incurring additional costs by orders of magnitude—which does not translate into significant improvement in performance. Overall, our findings suggest that few-shot tool integration is still an open challenge, emphasizing the need for comprehensive evaluations of future strategies to accurately assess their benefits and costs. + +# 1 Introduction + +Augmenting language models (LMs) with tools has been proposed to overcome LMs' inherent weaknesses (Mialon et al., 2023; Qian et al., 2022), such as the lack of grounding to reliable or updated sources (Jiang et al., 2023), incoherent logical ability (Liu et al., 2022; Ling et al., 2023) and arithmetic ability (Gao et al., 2023b), among others. This is done through tool-assisted (TA) generation, where LMs are trained or instructed to use external tools, such as search engines over the web—e.g., + +Google search (Gao et al., 2023a; Press et al., 2023; Nakano et al., 2022), Wikipedia search (Trivedi et al., 2022a), a calculator (Schick et al., 2023), or a python interpreter (Paranjape et al., 2023). Often, tool invocations are structured as Chain-of-Thought (CoT) long-form answers (Wei et al., 2023). 
+ 

Recent work proposed a variety of strategies for interfacing between the LM and the tool, such as through demonstrations of API calls (Paranjape et al., 2023) or using the tool to refine the model's output (Gao et al., 2023a)—see Figure 2 for an overview. But what are the advantages and tradeoffs of different TA strategies? For example, some strategies incur significantly higher computation costs than others with little to no improvement in performance. There is a gap in the literature on the evaluation of such strategies, in particular against strong baselines and against each other. Concretely, works that report empirical evaluations are often restricted to comparisons of a single proposed strategy against a limited selection of non-TA baselines, using a limited selection of LMs or even a single LM, or focus on evaluating various LMs with a specific TA strategy (Li et al., 2023). Additionally, comparisons often do not consider the increase in computation that each TA strategy requires, which varies significantly and has a large effect on inference time or cost.

The above issues are only some of the pitfalls we observed in the literature, limiting the scope of current evaluations. In §3, we analyze the literature for common pitfalls and collect a set of guidelines towards a fair and reliable evaluation procedure specifically for TA strategies. Next (§4), we conduct a study which addresses all of the observed pitfalls, using GPT-3, Flan-UL2 and Flan-PaLM, and the complex reasoning benchmarks StrategyQA, MuSiQue, GSM8K, and DROP. We report a fair, systematic comparison of five few-shot TA strategies across multiple models and demonstrations, and all strategies use the same set of tools. 
+ 

![](images/1c5421e70ff7f119d3c54f2e5ed5b8a0634970e8cb91f3aab86bd620f7368751.jpg)
Figure 1: Illustration of tool-assistance strategies that invoke tools and insert their outputs into the prompt (a), and strategies that first generate some output, and only use tools to fix and refine it (b).

![](images/a20317809cde42b83fa8c5607c78d8b4b0f21cf28bae768bb6671f7e6942dd2c.jpg)

We analyze the study results (§5) and arrive at surprising conclusions: (1) Non-TA baselines are stronger than initially reported. In most cases, TA strategies do not significantly or at all improve on non-TA strategies on popular Question Answering datasets. (2) For retrieval tools in knowledge tasks, TA strategies that fix model output after it is generated perform better than TA strategies that prompt the model to interface with the tool directly during generation. For calculator tools in calculation-intensive tasks, the relationship is not decisive. (3) TA strategies incur significantly higher computation costs than non-TA baselines by multiplicative factors, and there is no general correlation between computation cost and performance, with the exception that refinement strategies in retrieval settings are more costly than non-refinement strategies.

In §6 we report a fine-grained analysis of the results. We investigate the effect of each example's difficulty (e.g., very large numbers, or very rare entities) on improvement from tool usage, and find that tools do not systematically improve model performance on harder examples, where they were expected to have the strongest improvement. Finally, based on an error analysis of failure cases, we find that the majority of mistakes follow incorrect tool invocations, rather than incorrect tool responses (in the case of the retrieval tool) or incorrect inferences based on correct tool usage. 

In conclusion, we conduct an extensive evaluation of few-shot TA strategies, finding that previous estimates of tool-usage performance are not representative. Overall, this suggests that few-shot tool integration is still an open challenge. We call on the community to evaluate future strategies systematically, while taking into account the significant costs that these strategies require in comparison to their benefits. Towards this, we provide a set of concrete guidelines for fair and reliable evaluation of TA strategies. Moreover, we release the handcrafted collection of 184 demonstrations used in our study (attached in the supplementary material).

# 2 Tool-Assisted Language Models

We describe existing few-shot strategies for augmenting LMs with tools and discuss related work.

# 2.1 Few-shot TA strategies

Strategies for tool usage can be broadly divided into two categories: (a) using tools during generation and inserting the tools' outputs into the model's prompt (Figures 1a, 2a); (b) using tools to refine the LM's output after generation (Figures 1b, 2b). Strategies can be further categorized into settings where the tool is heuristically called in a pipeline or called when the model generates pre-specified tool calls. Refer to Mialon et al. (2023) for a review of the literature on TA strategies and models.

Among TA strategies of type (a): SelfAsk (Press et al., 2023) decomposes the task into subtasks as simpler questions, such that a tool can be called on each question. A related strategy is Demonstrate-Search-Predict (Khattab et al., 2023). Inline strategies such as Toolformer (Schick et al., 2023)¹, ART (Paranjape et al., 2023), inter alia (Chen et al., 2022; Gao et al., 2023b; Lyu et al., 2023) demonstrate tool usage with pre-defined words or tokens and tool arguments, halt generation when those tokens and arguments are generated, invoke the tool, and insert its output into the prompt to resume generation.
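The inline scheme described above amounts to a generate-halt-invoke-resume loop. The following sketch is our own illustration of that control flow, not code from any cited work; `lm_generate`, the `[Search(...)]` call syntax, and the `->` output marker are hypothetical placeholders:

```python
import re

# Hypothetical tool-call syntax, e.g. "[Search(capital of France)]".
CALL_PATTERN = re.compile(r"\[(\w+)\((.*?)\)\]")

def inline_tool_loop(lm_generate, tools, prompt, max_calls=5):
    """Sketch of an inline TA strategy: generate until the model emits a
    pre-specified tool-call marker, halt, invoke the tool, insert its
    output into the prompt, and resume generation."""
    text = prompt
    for _ in range(max_calls):
        # Assumed LM API: returns a continuation, stopping right after
        # the closing "]" of a tool call (stop sequence included).
        continuation = lm_generate(text, stop_after="]")
        text += continuation
        match = CALL_PATTERN.search(continuation)
        if match is None:
            return text  # no tool call generated: the answer is complete
        name, args = match.group(1), match.group(2)
        # Invoke the external tool and splice its output into the prompt,
        # so all subsequent decoding is conditioned on it.
        text += f" -> {tools[name](args)}\n"
    return text
```

A type (b) strategy would instead run a similar loop over an already-generated answer, using the tool output to revise steps rather than to continue generation.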
Interleaving Retrieval (Trivedi et al., 2022a) does not directly instruct the model to use tools, but calls the tool on each reasoning step, to provide the model with additional context for future steps. Jiang et al. (2023) propose a similar strategy, opting to re-write each step after using it as a query. There are also strategies such as Decomposed Prompting (Khot et al., 2023) that are generalizations of the previous strategies.

Among TA strategies of type (b): RARR (Gao et al., 2023a) involves a pipeline designed for knowledge-based tasks: verifying the relevance and factuality of each claim by generating questions based on the claim, retrieving snippets that answer these questions, and checking if the answers match the information in the claim. If not, the claim is refined to match the snippets. Check & Fix, a method we introduce in this work, uses each CoT step as a search query, and checks whether the step is entailed by the retrieved snippets by prompting the model to classify this entailment. This strategy is similar to Jiang et al. (2023, contemporaneous work), which additionally uses low-confidence filtering but omits the entailment verification.

![](images/ad8b756c88daba3fe46a897f52a1a2033fba139188f1f2989272dd6bc4926d3f.jpg)

![](images/8cda9432777f2f391b342d52be331e08c439f215932cef30fc8c448fab0a1ca7.jpg)

![](images/56b4cd0ce032c7e6ed5ff221971fa9d4a43294255f9ec8ab67388abf55e7f2d6.jpg)
Figure 2: Overview of the TA strategies implemented in this work. Blue text marks tool queries, tool responses are in turquoise cells, refinement is in orange cells and dashed arrows, and yellow cells are LM generations.

# 2.2 Related Work

Training LMs to use tools. While we are primarily concerned with few-shot tool assistance of LM generation, the literature also explores LMs which are trained to use specific tools (Parisi et al., 2022; Hao et al., 2023; Patil et al., 2023).
These methods are constrained to the tools seen during training, and require tool demonstration data (annotated, bootstrapped, or synthetically constructed).

Other tool-assisted neural networks. There is adjacent research on augmenting neural networks with tools through interfaces other than text (e.g., Andor et al., 2019; Jacovi et al., 2019), or on training differentiable subnetworks that heavily mimic tools (Neelakantan et al., 2017; Trask et al., 2018).

# 3 Evaluation Pitfalls

While there is a plethora of TA strategies (§2.1), no systematic comparison of these strategies has been conducted. Research that proposes TA strategies in

| | Pitfall | Recommendation |
|---|---|---|
| (1) | Coupling the TA strategy and the tool together. | Comparisons of TA strategies should use the same tools across strategies. |
| (2) | Forcing no-tool baselines to the framework of the TA strategy. | The optimal way to solve the task without tools may be different from solving the task with tools: no-tool baselines should include multiple variants of both free-form and structured strategies, to ensure the TA strategies are not given an advantage. |
| (3) | Using one model across all comparisons. | Different models may behave differently when it comes to using tools effectively, based on their training data. Multiple models should be tested, if possible. |
| (4) | Using one prompt and set of demonstrations across all comparisons. | Multiple different sets of demonstrations should be used to get reliable estimates of few-shot performance. |
| (5) | Not considering TA strategy costs. | TA strategies can be efficient or inefficient with regard to the prompt tokens and generation tokens they require, with respect to no-tool baselines or with respect to each other. The differences can be significant (§5). Comparisons of TA strategies should factor in the computation cost of the strategy, which we term *token efficiency*. |

Table 1: Summary of evaluation pitfalls of TA strategies (§3) and recommendations to mitigate them.

few-shot settings is often not focused on evaluating properties of those strategies, but on other aspects of LM capabilities (Press et al., 2023; Gao et al., 2023a), usage in particular strict contexts (Paranjape et al., 2023), evaluating various LMs themselves with a particular strategy (Mialon et al., 2023), and so on.

Below we collect observations from the literature that demonstrate the limited evaluation scope of TA strategies, in an effort to establish a set of criteria for future evaluations to be reliable and fair (a summary is provided in Table 1).

(1) Coupling the TA strategy and the tool together. Comparisons may vary the tools and methods together (e.g., a TA strategy $A$ with a tool $A$ versus a TA strategy $B$ with a tool $B$).

(2) Forcing baselines to the framework of the TA strategy. Typical baselines for a given TA strategy are to apply that strategy while letting the model generate the tool's output in place of the tool, and to use CoT prompting. However, the optimal way to solve the problem without tools may not be the same as the TA strategy in question. In this work, we implement three different baselines (§4) and find no clear winner between two of them (we explore this empirically in §5).

(3) Using one model across all comparisons. Often, a single model is chosen as the underlying model for the TA strategy. This limits the insights from the evaluation to this model in particular, since conclusions may not carry over to other models. In this work, we find that the best-performing strategies vary significantly across different LMs (we explore this empirically in §5).

(4) Using one prompt and one set of demonstrations across all comparisons. Few-shot evaluation is known to be unreliable when using a single set of demonstrations as a single prompt (Perez et al., 2021).
Furthermore, some prompts used in TA strategy evaluations—in particular, CoT demonstrations—appear so often on the internet that they are suspected to be part of the models' training data, further compromising their function (Jacovi et al., 2023).

(5) Not considering TA strategy costs. In many cases, the TA strategy requires significantly more compute than no-tool baselines, and different TA strategies also require different amounts of computation. Computation cost is not traditionally considered in comparisons.

# 4 Experimental Setup

Our goal is to conduct a fair and reliable comparison of TA strategies, without being influenced by properties of specific models, tools or prompts. To this end, we focus on few-shot tool usage, a popular TA scheme that allows flexibility around using new tools and adapting tools to specific tasks.

In what follows, we describe our experimental setup, which is designed to enable a comprehensive, rigorous evaluation that avoids the pitfalls of §3. Our evaluation covers 5 different TA strategies, 4 recent LMs, 4 complex reasoning datasets, 3 few-shot prompts, and 2 tools. For each TA strategy + dataset + model combination, we run three experiments with a different number of demonstrations. Overall, our evaluation comprises 342 experiments, each of which generates 250 (GPT-3) or 500 (non-GPT-3) long-form answers. Additional implementation details are in Appendix A.

Tool-assisted strategies. We evaluate the TA strategies shown in Figure 2: SelfAsk, Inline, Interleaving, C&F and RARR. We additionally include variants of SelfAsk and Inline where the model is separately called to summarize tool output in relevant context, as it can often be very long (SelfAskQA and InlineQA; see Appendix A for details).
Finally, in the retrieval settings, we use Top-1 retrieval for all models, and additionally Top-5 retrieval for the Flan-PaLM-540B model (see "Models" below) to check whether additional retrieved information can improve performance despite the significantly longer input and processing cost. + +For SelfAsk and RARR we use the original implementation provided by the methods' creators. We implement Interleaving (Trivedi et al., 2022a), as at the time of this research no implementation was available. Importantly, this implementation yields similar performance to that of existing approaches that combine CoT with retrieval from Wikipedia by He et al. (2022); Jiang et al. (2023) (see full results in Appendix B). Additionally, Jiang et al. (2023, Figure 4) implemented methods that apply retrieval and refinement over generated CoT that are similar to C&F and achieve similar performance to ours, as well (see Appendix B). For Inline, we are not aware of reports on few-shot performance of a similar strategy in the literature. + +Baseline strategies. We use no-tool versions of SelfAsk, Inline, and standard CoT prompting. The SelfAsk and Inline baselines simply involve giving the model the prompts used for the tool-based versions, while disabling tool calls (such that the model generates the output in-place of the tools). These are the baselines used by Press et al. (2023) and Schick et al. (2023) respectively. + +Datasets. We consider tasks that require complex reasoning, where models could potentially benefit from external tool usage. Specifically, we use StrategyQA (Geva et al., 2021) and MuSiQue (Trivedi et al., 2022b), which require reasoning about entity knowledge, and GSM8k (Cobbe et al., 2021) and DROP (Dua et al., 2019) that evaluate arithmetic reasoning. In DROP we select examples that have numerical answers. 
We randomly sample 500 examples from the development set of each dataset (with the exception of StrategyQA, whose + +test set has 229 examples), and use it for performance evaluation of UL2, Flan-PaLM-540B and Flan-PaLM-62B. For GPT-3, we use a subset of 250 examples of that set, due to cost. We use standard evaluation measures for every dataset (F1 in the case of MuSiQue). We provide data examples in Appendix A. + +Models. We evaluate the methods across four LMs: Flan-UL2-20B (Tay et al., 2023), GPT-3 (text-davinci-003) (Brown et al., 2020), Flan-PaLM-540B and Flan-PaLM-62B (Chung et al., 2022). We omit GPT-3 experiments on RARR and Interleaving due to cost. Importantly, our focus is not in comparing performance of these models, but to use them as samples of different model instances and training schemes against which to compare different TA strategies. + +Tools. We strictly use the same tools across all strategies, to ensure a fair comparison: Google Search (Press et al., 2023; Schick et al., 2023; Lewis et al., 2021) for knowledge tasks, and a calculator (Schick et al., 2023; Qin et al., 2023) for the calculation tasks. RARR, SelfAsk and Interleaving are designed for retrieval settings only, while Inline and Check & Fix can be used in all settings. For the retrieval settings using Google Search and Flan-PaLM-540B, we test retrieval with both the top 1 and top 5 tool-retrieved snippets: The two formats are designed to cover both cases where a shorter tool output may prevent the model's answer from degenerating, and a longer tool output may help the model with more relevant information. + +Few-shot demonstrations. In order to overcome bias from using demonstrations from prior work that were likely seen during training (Jacovi et al., 2023), we re-announce prompts for all TA strategies, datasets and tools. We randomly sample 8 examples from each dataset's training set, and annotate each example with demonstrations for each TA strategy. 
Some of the strategies call the model multiple times with different prompts (e.g., Check & Fix, RARR), which requires separate annotations. This effort results in a total of 184 annotated demonstrations, which we release as a resource for future works on TA generation. From each set of 8 demonstrations, we then construct three separate prompts—3-shot, 5-shot and 7-shot—randomly sampled from the original 8 demonstrations, to get a better estimation of few-shot performance.

![](images/ad09b9fc8f1984f62245138a07a699d525eb34e52bdace5161196d53592a2e6a.jpg)

![](images/a06af40183a6ac224b425d7bba84e8385dc98a63a1760df3a658f1b03fc2b068.jpg)
Figure 3: A comparison of evaluation scores across two areas (§5): (a) No-tool baselines vs. TA strategies; (b) Tool usage via refinement of generated text vs. tool usage during generation, where the generated text contains tool arguments and is conditioned on tool outputs. The dark line marks the confidence interval among samples.

# 5 Comparative Results

Tool vs. no tool. Previous work proposing TA strategies found that using such strategies consistently improves performance in comparison to no-tool baselines (Press et al., 2023; Jiang et al., 2023; Trivedi et al., 2022a, inter alia).

Figure 3 shows that the TA strategies do not improve performance over the no-tool baselines in our selection of datasets. The figure shows results against the average of the different few-shot scores, though we observe similar trends when using the maximum of scores as well. Full results are in Appendix B. Similarly to us, Gao et al. (2023a, §6.2) found that StrategyQA performance slightly decreased with tools in RARR compared to no-tool baselines for PaLM-540B (Chowdhery et al., 2022), and Jiang et al. (2023, §6.2) found that performance decreased on StrategyQA in two settings comparable to our implementations of Interleaving and Check & Fix with GPT-3.

We conclude that for the settings in this work, the no-tool baselines are stronger than initially expected based on the literature. More research is required to investigate whether this relationship holds in other contexts, though we note that the datasets and models used in our experiments are common in TA research (Mialon et al., 2023).

Additionally, our experiments provide empirical justification for Recommendations (2) and (3) in §3. First, we find that the CoT and Inline baselines outperform each other at a roughly equal rate, and neither emerges as a clear winner. This shows

| Model | Dataset | Best strategy |
|---|---|---|
| GPT-3 | StrategyQA | Baseline-Inline |
| GPT-3 | DROP | Baseline-Inline |
| GPT-3 | GSM8K | Check & Fix |
| GPT-3 | MuSiQue | Inline |
| Flan-PaLM-540B | StrategyQA | Baseline-CoT |
| Flan-PaLM-540B | DROP | Baseline-Inline |
| Flan-PaLM-540B | GSM8K | Baseline-Inline |
| Flan-PaLM-540B | MuSiQue | RARR-Top5 |
| Flan-UL2-20B | StrategyQA | Baseline-Inline |
| Flan-UL2-20B | DROP | Baseline-Inline |
| Flan-UL2-20B | GSM8K | Inline |
| Flan-UL2-20B | MuSiQue | Baseline-CoT |
| Flan-PaLM-62B | StrategyQA | Baseline-CoT |
| Flan-PaLM-62B | DROP | Baseline-CoT |
| Flan-PaLM-62B | GSM8K | Inline |
| Flan-PaLM-62B | MuSiQue | Check & Fix |

Table 2: For each combination of dataset and model, we derive the best-performing strategy based on the average score across the few-shot prompts. Notably, the best-performing strategy varies across different models, datasets or prompts, which means that it is necessary to evaluate over all axes to get a better estimation of general performance.

that different baselines obtain different results, and so relying on only a single baseline in evaluation does not necessarily provide a good estimation of no-tool performance (recommendation (2)). Also, the best-performing strategies vary significantly across models, which highlights the importance of using multiple models for evaluation (recommendation (3))—for illustration, we report the highest-performing strategies in each setting in Table 2, to

| TA strategy | Prompt tokens (canonical) | Retrieval, GPT-3 | Retrieval, Flan-PaLM-540B | Calculator, GPT-3 | Calculator, Flan-PaLM-540B |
|---|---|---|---|---|---|
| Baseline | $n$ | 353 | 353 | 1418 | 801 |
| SelfAsk | $t(n+k(t+1)/2)$ | 2281 | 1399 | -- | -- |
| SelfAskQA | $t(2n+k)$ | 3589 | 2736 | -- | -- |
| Inline | $t(n+k(t+1)/2)$ | 1793 | 1775 | 3453 | 1083 |
| InlineQA | $t(2n+k)$ | 3375 | 3672 | -- | -- |
| Check & Fix | $t(2n+k)$ | 3839 | 3547 | 7548 | 3647 |
| RARR | $3n(t+1)$ | -- | 4729 | -- | -- |
| Interleaving | $t(n+k(t+1)/2)$ | -- | 3221 | -- | -- |

Table 3: Average number of prompt tokens per strategy (5-shot), with $n$ as the CoT prompt length, $t$ as the number of tool calls, and $k$ as the tool's output length. Flan-PaLM-540B has a shorter context window than GPT-3, which limits prompt length. The canonical formula for RARR favorably assumes a single verification question.

| TA strategy | Answer tokens (canonical) | Retrieval, GPT-3 | Retrieval, Flan-PaLM-540B | Calculator, GPT-3 | Calculator, Flan-PaLM-540B |
|---|---|---|---|---|---|
| Baseline | $m$ | 44 | 42 | 58 | 88 |
| SelfAsk | $m$ | 20 | 72 | -- | -- |
| SelfAskQA | $2m$ | 59 | 64 | -- | -- |
| Inline | $m$ | 103 | 248 | 62 | 102 |
| InlineQA | $2m$ | 114 | 256 | -- | -- |
| Check & Fix | $2m$ | 89 | 177 | 75 | 177 |
| RARR | $3m$ | -- | 181 | -- | -- |
| Interleaving | $m$ | -- | 72 | -- | -- |

Table 4: Average number of answer tokens across the 5-shot experiments, for each strategy. The RARR formula assumes a single verification question per step.

show that the overall conclusion can be distorted by choosing a particular model or strategy. Extended details are in Appendix B.1.

Tool use during generation vs. post-generation refinement. In Figure 3 we compare the strategies that use tools during generation against the strategies that first generate an answer, and then use tools to improve the answer. For retrieval tasks, refinement clearly outperforms non-refinement strategies, but the same does not apply to the calculation tasks. We conjecture that planning calculations ahead of time during generation is more aligned with LM pretraining data, which is based on internet text, than planning retrieval queries in similar contexts.

Token efficiency. TA strategies are typically evaluated in terms of task performance and properties such as factuality and logical correctness. We argue that computational cost is another important factor to consider. Specifically, we propose to evaluate token efficiency, that is, the number of prompt tokens and generated tokens, which have a direct effect on the cost of the TA strategy. Notably, the cost of a TA strategy depends on various variables, including model size, GPU type, caching optimizations, vocabulary size, beam search size, and so on. However, token counts can serve as a plausibly generic proxy for the purpose of comparing the cost of different TA strategies, as other factors are roughly equal across strategies, as long as the same models and tools are used. We consider prompt tokens and generated tokens separately, as they often have different consequences on cost.$^2$

Tables 3 and 4 show both canonical and empirical comparisons across TA strategies with regard to token efficiency.
The canonical comparison is a function of the relevant variables in the "canonical" setting where the model is expected to answer the question perfectly, and to use the tool perfectly as intended. Across all TA strategy experiments, we found no general correlation between token efficiency and performance. Concretely: (1) All TA strategies are significantly more expensive than the no-tool baselines by multiplicative factors, while not incurring an improvement worthy of this extra cost. Empirically, using tools can incur extra costs by a factor of $5x$ to $10x$ for prompt processing, and $2x$ to $5x$ for generation. (2) The refinement strategies are more expensive than the non-refinement strategies. So while they improve performance for retrieval tasks, this comes at a cost.

# 6 Analytical Results

We discuss further analyses of our results, finding that (a) our observations generally hold across different levels of example difficulty, and (b) most prediction errors of tool-augmented LMs stem from incorrect inputs to the tool and bad outputs from it, not from a lack of tool usage.

# 6.1 Example Difficulty

It has been shown that LMs have difficulty solving problems involving long-tail entities (Kandpal et al., 2022; Mallen et al., 2022) and complex mathematical reasoning challenges (Mishra et al., 2022; Imani et al., 2023). Accordingly, we ablate the results from §5 along the following axes of example difficulty, in order to understand how tools affect performance on difficult examples. We provide an overview of the trends here; extended results are available in Appendix B.

Measures of difficulty. We investigate the effectiveness of tool usage across varying levels of example difficulty, which we approximate along two axes: (A) Long-tail entities (retrieval): Following Mallen et al.
(2022), we extract the entities from the question and associated gold answers in StrategyQA and MuSiQue, and use the corresponding entity Wikipedia page views as a measure of popularity. (B) Large numbers (calculation): We segment the examples in the calculation tasks based on the range of the median and largest number in the example (question and gold solution in GSM8k, or question and context paragraph in DROP).

Results. Performance across increasing levels of entity popularity and computation complexity, with different LMs and TA strategies, is shown in Figure 4a and Figure 4b, respectively. We find that performance uniformly decreases for harder examples in the retrieval setting for all models, but in the calculation setting, this only manifests for Flan-UL2-20B (implying that the larger models are more robust to the numerical ranges in GSM8K and DROP). Overall, in all cases tool use does not improve upon the baselines even when controlling for the harder cases where tools are expected to be more useful. This conclusion is aligned with our error analysis in §6.3, which shows that the common errors stem more often from incorrect tool arguments than from correct tool arguments followed by incorrect inferences. Flan-UL2 with a calculator is an exception, where tool use indeed helps, though more so on the easier examples, likely due to a higher rate of correct arguments to the calculator.

# 6.2 Tool Usage Statistics

A possible explanation for the similar performance of no-tool baselines could be a lack of tool usage. To check this, we aggregate usage over the different TA strategies, and find that the models indeed use tools in the majority of the cases; $70\% - 80\%$ in SelfAsk, and $>90\%$ in others (see Appendix B). We also investigate usage across other axes, such as models and number of demonstrations, and find similar trends.
However, the datasets and tasks we investigate are designed to benefit from the tools in all cases, which shows that few-shot demonstrations are not always sufficient to induce tool use in models. In particular, the SelfAsk strategies exhibit the lowest tool use, being the strategies that use natural language to query whether to use the tool (the answer begins with "Are follow up questions needed here:", to which the model answers "No" in the cases where the tool is not used).

# 6.3 Error Analysis

We sampled 50 instances for which an error was made by the TA models, randomly across the 5-shot experiments, and categorized them into three categories: (A) incorrect tool input; (B) incorrect tool output; (C) incorrect model inferences based on correct tool usage. Error B applies only to the retrieval settings, where the retrieval tool (Google Search in our case) retrieved a wrong or irrelevant snippet. The errors were distributed approximately as $60\%$ (A), $10\%$ (B), and $30\%$ (C) in the retrieval setting, and $80\%$ (A) and $20\%$ (C) in the calculation setting. Li et al. (2023) reported an error analysis for tool-assistance in dialogue customer assistance settings, with similar conclusions regarding error A, although errors B and C do not apply in their context, and other error types manifest instead.

![](images/7039a5cdcf52dcaa696b43188a45a62ab3501ce48d2f6c8a6d93a2e5da6c93b3.jpg)
Figure 4: We analyze performance of the strategies across two areas (no-tool baselines vs. TA strategies), conditioned on example difficulty as defined by the existence of rare or common entities in the retrieval settings (via percentile of page views) and small or large numbers in the calculation settings (via percentile of numeric range). In (a), lower page views imply higher difficulty, and in (b), larger numbers imply higher difficulty.

Our results suggest that the majority of errors are not due to incorrect tool responses (i.e., issues with Google Search as a choice of retriever); they are driven more by incorrect tool invocations to begin with than by correct invocations followed by incorrectly composed solutions.

# 7 Conclusions and Takeaways

We conduct a comprehensive assessment of few-shot tool augmentation strategies for LMs, covering hundreds of experiments with multiple LMs, datasets, and tools. Our experiments show that current tool-usage integration approaches are presently a false promise; prompting strategies that do not use tools typically obtain similar task performance, without the high cost of tool execution. Controlling for example difficulty, where tools are expected to provide the most benefit, does not explain the relative strength of the no-tool baselines. Instead, the primary errors we observe are related to incorrect usage of the tools to begin with (i.e., generating incorrect arguments to the tool).

Our findings call for more robust evaluation of future TA strategies, primarily in more practical settings where models are not expected to leverage inherent abilities to solve tasks. To this end, our work provides concrete evaluation guidelines, such as employing stronger baselines and factoring in computation costs.

# Limitations

While our study aims to provide a comprehensive evaluation of TA strategies, there are some limitations. First, recent work (Dodge et al., 2021; Magar and Schwartz, 2022; OpenAI, 2023) suggests that examples from public datasets, like those used in our evaluation, may have leaked into the training data of recent LMs. Such contamination can bias the evaluation, for example by removing the need for external tools. We are not aware of alternatives without this issue at the time of this writing.
+ +Second, due to the high cost of executing large LMs in an exhaustive evaluation, we ran only a single experiment for each combination of TA strategy, model, dataset, and number of demonstrations. However, given the sensitivity of models to the demonstrations (Perez et al., 2021), future work should extend this evaluation to use multiple sets of demonstrations for each such combination. + +Last, while our findings show that non-tool models often perform on par with existing TA strategies, our setting favors tool usage. For example, our tasks only require a single type of tool such that the model does not need to choose between multiple tools. Future work that investigates when and how tools can improve performance should consider more realistic evaluation settings, for example, by considering tasks where the model may need to use multiple types of tools together, or tasks where tools may sometimes give unhelpful answers. + +# References + +Daniel Andor, Luheng He, Kenton Lee, and Emily Pitler. 2019. Giving BERT a calculator: Finding operations and arguments with reading comprehension. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5947-5952, Hong Kong, China. Association for Computational Linguistics. +Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. CoRR, abs/2005.14165. +Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. 2022. 
Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. +Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways. +Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, + +and Jason Wei. 2022. Scaling instruction-finetuned language models. +Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. CoRR, abs/2110.14168. 
+Jesse Dodge, Maarten Sap, Ana Marasovic, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, and Matt Gardner. 2021. Documenting large webtext corpora: A case study on the colossal clean crawled corpus. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1286-1305, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. +Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. Drop: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In North American Chapter of the Association for Computational Linguistics. +Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Y. Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, and Kelvin Guu. 2023a. Rarr: Researching and revising what language models say, using language models. +Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023b. Pal: Program-aided language models. +Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies. Transactions of the Association for Computational Linguistics (TACL). +Shibo Hao, Tianyang Liu, Zhen Wang, and Zhiting Hu. 2023. Toolkengpt: Augmenting frozen language models with massive tools via tool embeddings. +Hangfeng He, Hongming Zhang, and Dan Roth. 2022. Rethinking with retrieval: Faithful large language model inference. arXiv preprint arXiv:2301.00303. +Shima Imani, Liang Du, and Harsh Shrivastava. 2023. Mathprompter: Mathematical reasoning using large language models. arXiv preprint arXiv:2303.05398. +Alon Jacovi, Avi Caciularu, Omer Goldman, and Yoav Goldberg. 2023. Stop uploading test data in plain text: Practical strategies for mitigating data contamination by evaluation benchmarks. 
Alon Jacovi, Guy Hadash, Einat Kermany, Boaz Carmeli, Ofer Lavi, George Kour, and Jonathan Berant. 2019. Neural network gradient-based learning of black-box function interfaces. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.

Zhengbao Jiang, Frank F. Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Active retrieval augmented generation.

Nikhil Kandpal, Haikang Deng, Adam Roberts, Eric Wallace, and Colin Raffel. 2022. Large language models struggle to learn long-tail knowledge. arXiv preprint arXiv:2211.08411.

Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, and Matei Zaharia. 2023. Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive NLP.

Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, and Ashish Sabharwal. 2023. Decomposed prompting: A modular approach for solving complex tasks.

Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2021. Retrieval-augmented generation for knowledge-intensive NLP tasks.

Minghao Li, Feifan Song, Bowen Yu, Haiyang Yu, Zhoujun Li, Fei Huang, and Yongbin Li. 2023. API-Bank: A benchmark for tool-augmented LLMs.

Zhan Ling, Yunhao Fang, Xuanlin Li, Zhiao Huang, Mingu Lee, Roland Memisevic, and Hao Su. 2023. Deductive verification of chain-of-thought reasoning.

Ruibo Liu, Jason Wei, Shixiang Shane Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, and Andrew M. Dai. 2022. Mind's eye: Grounded language model reasoning through simulation.

Qing Lyu, Shreya Havaldar, Adam Stein, Li Zhang, Delip Rao, Eric Wong, Marianna Apidianaki, and Chris Callison-Burch. 2023. Faithful chain-of-thought reasoning.

Inbal Magar and Roy Schwartz. 2022.
Data contamination: From memorization to exploitation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 157-165, Dublin, Ireland. Association for Computational Linguistics.

Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Hannaneh Hajishirzi, and Daniel Khashabi. 2022. When not to trust language models: Investigating effectiveness and limitations of parametric and non-parametric memories. arXiv preprint arXiv:2212.10511.

Grégoire Mialon, Roberto Dessi, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, and Thomas Scialom. 2023. Augmented language models: a survey.

Swaroop Mishra, Arindam Mitra, Neeraj Varshney, Bhavdeep Sachdeva, Peter Clark, Chitta Baral, and Ashwin Kalyan. 2022. NumGLUE: A suite of fundamental yet challenging mathematical reasoning tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3505-3523, Dublin, Ireland. Association for Computational Linguistics.

Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, and John Schulman. 2022. WebGPT: Browser-assisted question-answering with human feedback.

Arvind Neelakantan, Quoc V. Le, Martin Abadi, Andrew McCallum, and Dario Amodei. 2017. Learning a natural language interface with neural programmer. In International Conference on Learning Representations.

Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernandez Abrego, Ji Ma, Vincent Zhao, Yi Luan, Keith Hall, Ming-Wei Chang, and Yinfei Yang. 2022. Large dual encoders are generalizable retrievers.
In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9844-9855, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

OpenAI. 2023. GPT-4 technical report.

Bhargavi Paranjape, Scott Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, and Marco Tulio Ribeiro. 2023. ART: Automatic multi-step reasoning and tool-use for large language models.

Aaron Parisi, Yao Zhao, and Noah Fiedel. 2022. TALM: Tool augmented language models.

Shishir G. Patil, Tianjun Zhang, Xin Wang, and Joseph E. Gonzalez. 2023. Gorilla: Large language model connected with massive APIs.

Ethan Perez, Douwe Kiela, and Kyunghyun Cho. 2021. True few-shot learning with language models.

Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, and Mike Lewis. 2023. Measuring and narrowing the compositionality gap in language models.

Jing Qian, Hong Wang, Zekun Li, Shiyang Li, and Xifeng Yan. 2022. Limitations of language models in arithmetic and symbolic induction.

Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, Yi Ren Fung, Yusheng Su, Huadong Wang, Cheng Qian, Runchu Tian, Kunlun Zhu, Shihao Liang, Xingyu Shen, Bokai Xu, Zhen Zhang, Yining Ye, Bowen Li, Ziwei Tang, Jing Yi, Yuzhang Zhu, Zhenning Dai, Lan Yan, Xin Cong, Yaxi Lu, Weilin Zhao, Yuxiang Huang, Junxi Yan, Xu Han, Xian Sun, Dahai Li, Jason Phang, Cheng Yang, Tongshuang Wu, Heng Ji, Zhiyuan Liu, and Maosong Sun. 2023. Tool learning with foundation models.

Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to use tools.

Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Jason Wei, Xuezhi Wang, Hyung Won Chung, Siamak Shakeri, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Denny Zhou, Neil Houlsby, and Donald Metzler. 2023.
UL2: Unifying language learning paradigms.

Andrew Trask, Felix Hill, Scott Reed, Jack Rae, Chris Dyer, and Phil Blunsom. 2018. Neural arithmetic logic units.

Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022a. Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions.

Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022b. MuSiQue: Multi-hop questions via single-hop question composition. Transactions of the Association for Computational Linguistics.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elicits reasoning in large language models.

# A Implementation Details

# A.1 Tool-Assisted Strategies

General Details. In all cases, if the tool invocation fails (e.g., with an ill-formatted calculation, or a null response from Google Search), the model is instead used to generate the tool's output. For all retrieval settings using Google Search, we test both Top-1 and Top-5 retrieval: the two formats are designed to cover both cases where a shorter tool output may prevent the model's answer from degenerating, and where a longer tool output may help the model with more relevant information. Illustrative examples of the data are available in Table 5.

SelfAsk and SelfAskQA. SelfAsk involves decomposing each question into a series of simpler sub-questions, calling the tool directly for each sub-question. The tool's output is inserted into the prompt as an intermediate answer. When the model generates a step that begins with the string "So the answer is:", it is expected to generate an answer that builds on the previous intermediate answers, which were tool outputs. In this work, we use Google Search as the tool, as in the original work by Press et al. (2023).

Our SelfAsk implementation reuses the original implementation by Press et al. (2023).
Since Self-Ask is designed specifically for knowledge-based QA, we only evaluate this strategy on the knowledge tasks MuSiQue and StrategyQA.

The SelfAskQA variant involves calling the model for each pair of sub-question and retrieved snippet that (hopefully) contains its answer. This method of recursively calling the model with a different prompt, as if it were another tool, is a technique proposed by Khot et al. (2023). We collect all sub-questions from the SelfAsk prompts in order to construct QA prompts (using the tool to retrieve supporting snippets). The model is called with the QA prompts in order to answer each sub-question based on its snippet. In essence, the SelfAskQA variant summarizes each Google Search snippet, which can be as long as a paragraph, into a short answer to the given sub-question, effectively simplifying and shortening the overall answer.

Between the two SelfAsk implementations, neither decisively outperforms the other: SelfAskQA outperforms SelfAsk for GPT-3 and Flan-PaLM-62B on both MuSiQue and StrategyQA, but for Flan-PaLM-540B and Flan-UL2-20B the relationship flips.

Inline and InlineQA. The Inline strategy format largely mimics the Toolformer format by Schick et al. (2023), but can also be cast into the ART framework by Paranjape et al. (2023) or the Decomposed Prompting framework by Khot et al. (2023). In general, the strategy simply calls for generating the tool call in a predefined format—in our case, square brackets and the tool name. The tool is invoked with the arguments generated by the model inside the brackets, and the tool's output is inserted into the prompt. Our implementation is based on the inference code implemented by Schick et al. (2023), although notably, we focus on few-shot usage, and do not perform the tool-usage pretraining step that is the main focus of the referenced work.
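To make the bracketed tool-call format concrete, the following is a minimal sketch of the parsing it implies. This is our own illustration with a toy calculator tool, not the actual inference code of Schick et al. (2023): the generated text is scanned for calls such as `[Calculator(240 / 40)]`, each call is executed, and the tool's output is spliced back in `-> output` form.

```python
import re

# Matches the opening of an inline tool call such as "[Calculator(240 / 40)".
TOOL_CALL = re.compile(r"\[(?P<tool>\w+)\((?P<args>[^)]*)\)")

def run_calculator(expression: str) -> str:
    """Toy calculator tool; a real system would sandbox arithmetic evaluation."""
    try:
        return str(eval(expression, {"__builtins__": {}}, {}))
    except Exception:
        return ""  # tool failure: the LM would then generate the output itself

def resolve_tool_calls(text: str) -> str:
    """Rewrite each '[Tool(args)' span as '[Tool(args) -> output'."""
    def _replace(match: re.Match) -> str:
        tool, args = match.group("tool"), match.group("args")
        output = run_calculator(args) if tool == "Calculator" else ""
        return f"[{tool}({args}) -> {output}"
    return TOOL_CALL.sub(_replace, text)

step = "then 1% is made up of [Calculator(240 / 40)] boys."
print(resolve_tool_calls(step))
# "then 1% is made up of [Calculator(240 / 40) -> 6.0] boys."
```

A real implementation would instead interrupt decoding at the tool call and resume generation conditioned on the inserted tool output, falling back to model-generated output on tool failure as described above.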
We implement two variants: Inline, which uses a tool called "Search" that appends the retrieved snippet or calculation output directly into the prompt, and InlineQA, which uses a tool called "QA" that calls the model with a separate prompt in order to summarize the retrieved snippet into a concise answer, identically to the aforementioned SelfAskQA variant. As with SelfAsk and SelfAskQA, neither Inline nor InlineQA consistently outperforms the other on the knowledge-based tasks.

Interleaving. The Interleaving Retrieval strategy (Trivedi et al., 2022a) proposes to use each reasoning step generated by the model in its CoT answer as a query to a retrieval model. The retrieved snippet is then added to the prompt in order to provide additional information to the model. The structure for each demonstration becomes: (1) all retrieved documents thus far; (2) the question; (3) the generated answer thus far (see Trivedi et al., 2022a for details). In this way, the tool is used heuristically without explicit demonstrations from the model, but the generation of the answer at each CoT step is still conditioned on tool usage based on the previous steps.

Check & Fix. We propose this strategy as a more lightweight variant of tool-based refinement in comparison to RARR, comparable to the contemporaneously proposed method of Jiang et al. (2023): after each CoT step, the step is checked for accuracy using a tool, and if found inaccurate, a new fixed step is generated to replace it.

In the retrieval setting, each step is verified and fixed by prompting the model to classify whether the step is contradicted by the retrieved paragraphs,
| Model | Dataset | Strategy | Example | Model Answer |
| --- | --- | --- | --- | --- |
| GPT-3 | StrategyQA | Baseline-CoT | Question: Was a woman Prime Minister directly before or after Stanley Baldwin? Answer: No. | Stanley Baldwin was Prime Minister of the United Kingdom from 1923 to 1929. The woman Prime Minister directly before him was Margaret Thatcher, who served from 1979 to 1990. The woman Prime Minister directly after him was Theresa May, who served from 2016 to 2019. So the answer is no. |
| Flan-UL2 | MuSiQue | SelfAsk | Question: What administrative territorial entity contains the place where KPRM is licensed to broadcast to? Answer: Hubbard County. | Are follow up questions needed here: Yes. What is the place where KPRM is licensed to broadcast to? Intermediate answer: KPRM is licensed to broadcast to the city of Houston. Follow up: What administrative territorial entity contains the city of Houston? Intermediate answer: The city of Houston is in Texas state. Follow up: What is the administrative territorial entity that contains Texas state? Intermediate answer: Texas state is in the United States. So the final answer is: The United States |
| Flan-PaLM-62B | DROP | Check & Fix | Context: [omitted paragraph about the 2000 census] Question: How many households and families reside in the county according to the 2000 census? Answer: 26484 | 26484 households and 12068 families reside in the county. There were 40543 people, 15416 households, and 11068 families residing in the county. Thus, 26484 + 12068 = 38552. So the final answer is: 38552. |
| Flan-PaLM-540B | GSM8K | Inline | Question: How many girls are in the school if 40% of a school population is made up of 240 boys? Answer: 360 | If 40% of the school population is made up of 240 boys, then 1% is made up of [Calculator(240 / 40) -> 6.0] 6 boys. Therefore, the total school population is made up of [Calculator(6 * 100) -> 600] 600 students. If the school population is made up of 600 students, and 40% of them are boys, then the remaining 60% are girls, a total of [Calculator(60 / 100) * 600 -> 360] 360 girls. So the final answer is: 360. |
Table 5: Illustrative examples of various datasets, strategies and model outputs. The answers from the Interleaving, Check & Fix and RARR models are of the same format as the CoT baseline.

and if so, to generate the fixed step based on demonstrations. In the calculation setting, each step is first heuristically checked for whether it contains a calculation; if so, the calculation is inserted into the calculator tool, and the model is prompted to verify whether the tool output is consistent with the calculation in the text. If it is not, the model generates the fixed step. In both cases, answer generation continues with the fixed step completely replacing the original incorrect step.

RARR. RARR (Retrofit Attribution using Research and Revision; Gao et al., 2023a) was proposed as a post-processing method for refining any text, including LM chain-of-thought outputs. This is done by automatically finding attribution for each claim in the text, and post-editing the output to fix unsupported content while preserving the original output as much as possible. Our RARR implementation reuses the original implementation by Gao et al. (2023a).

The RARR process involves the following steps, with each considered as a separate tool:

1. Question Generation: First, a series of questions is generated that covers various aspects of a passage x. The generated questions aim to verify and attribute information from the passage. This is done via prompting the LM with few-shot examples.

2. Evidence Retrieval: For each generated question, the Google Search tool is used to retrieve the top-$k$ passages related to the question. In this work, we evaluate both Top-1 and Top-5.

3. Evidence Ranking: The retrieved evidence passages are next ranked using a query-document relevance model scorer.
Unlike the original RARR implementation (Gao et al., 2023a), which uses the GTR retrieval model (Ni et al., 2022), we instead implement the scorer via few-shot LM prompting, as suggested by the authors. The output of this stage is the top-1 ranked evidence.

4. Agreement Phase: Given a triplet of a text, a question, and an evidence passage, this phase determines whether the text and the evidence imply the same answer to the question. This is implemented via few-shot LM prompting using a chain-of-thought style prompt.

5. Editing Phase: If the previous Agreement Phase outputs disagreement between the text and the evidence, the (text, question, evidence) triplet is fed to a model that outputs a revised version of the text, considering the discrepancy between the previous text and the evidence. This is implemented via few-shot LM prompting using a chain-of-thought style prompt similar to that of the previous stage (see Gao et al., 2023a for the exact prompting template).

The agreement and editing phases run iteratively until no further revisions are flagged in the Agreement Phase.

| Model | Dataset | Best baseline |
| --- | --- | --- |
| GPT-3 | StrategyQA | Inline |
| GPT-3 | DROP | Inline |
| GPT-3 | GSM8K | CoT |
| GPT-3 | MuSiQue | Inline |
| Flan-UL2-20B | StrategyQA | Inline |
| Flan-UL2-20B | DROP | Inline |
| Flan-UL2-20B | GSM8K | CoT |
| Flan-UL2-20B | MuSiQue | CoT |
| Flan-PaLM-540B | StrategyQA | CoT |
| Flan-PaLM-540B | DROP | Inline |
| Flan-PaLM-540B | GSM8K | Inline |
| Flan-PaLM-540B | MuSiQue | CoT |
| Flan-PaLM-62B | StrategyQA | CoT |
| Flan-PaLM-62B | DROP | CoT |
| Flan-PaLM-62B | GSM8K | Inline |
| Flan-PaLM-62B | MuSiQue | CoT |

Table 6: For each combination of dataset and model, the best-performing baseline on the average score across the few-shot experiments. There is no clear winner: two of the baselines each achieve the best score in $50\%$ of cases.

# A.2 Baselines

Chain-of-Thought. The CoT baseline is the standard baseline proposed by Wei et al. (2023) and implemented as a baseline by Press et al. (2023); Paranjape et al. (2023), inter alia. Often, the demonstrations used for this baseline are those originally published by Wei et al. (2023). In this work we annotate a new sample of examples with CoT answers for a better estimate of CoT few-shot performance, and release our annotations.

Self-Ask. The Self-Ask baseline uses the Self-Ask tool demonstrations, but does not invoke the tool after each "Follow up:" call, and instead generates the entire answer. This is the original no-tool baseline in Press et al. (2023).

Inline. The Inline baseline uses the Inline tool demonstrations, but does not invoke the tool after
| Model | Usage (%) |
| --- | --- |
| Flan-PaLM-540B | 70.9 |
| Flan-PaLM-62B | 80.6 |
| Flan-UL2-20B | 82.6 |
| GPT-3 | 95.1 |

Table 7: Average rate of tool usage per model, across experiments. Note that RARR and Interleaving are guaranteed to use tools, so they are omitted.
| Strategy | Usage (%) |
| --- | --- |
| Check & Fix | 92.9 |
| SelfAsk | 80.4 |
| SelfAskQA | 72.8 |
| Inline | 99.9 |
| InlineQA | 96.1 |
Table 8: Average rate of tool usage per strategy, across experiments. Note that RARR and Interleaving are guaranteed to use tools.

each tool call, and instead generates the entire answer. This is the original no-tool baseline in Schick et al. (2023).

# B Extended Results

We provide the full results for our experiments (described in §4) in §B.1, and further analysis of TA strategy performance and tool usage in §B.2.

# B.1 Full Experiment Results

Tables 9 and 10 detail our experiment results. Tables 11, 12, 13 and 14 detail average and max aggregations over the few-shot prompts. As mentioned, we sample 500 examples for the Flan-PaLM-62B, Flan-PaLM-540B and Flan-UL2-20B experiments, and 250 for the GPT-3 experiments, with the exception of StrategyQA, whose test set has 229 examples.

For DROP and MuSiQue, we report F1 using the evaluation scripts provided by Dua et al. (2019) and Trivedi et al. (2022b), respectively. For GSM8K, we normalize the numerical answers and measure exact-match. For StrategyQA, we normalize the answers (for capitalization, prefix and suffix punctuation, and so on) and measure exact-match to "yes" and "no".

Best-performing strategies and baselines in each setting. In Tables 2 and 6 we show the best-performing baseline and the best-performing general strategy for each setting of model and dataset, based on the average scores across the three few-shot experiments. For strategies in general (Table 2), we see that the winning strategies vary significantly across models, which supports Guideline (3) in Table 1.

The distribution among the baselines is split $50\%-50\%$ between CoT and Inline. When considering each few-shot experiment separately (i.e., not taking the average), Baseline-CoT, Baseline-Inline and Baseline-SelfAsk achieve the best score in $60.0\%$, $37.5\%$ and $2\%$ of cases, respectively. This supports Guideline (2) in Table 1.

# B.2 Analysis

Example Difficulty.
Figures 5 and 6 show extended results for the example difficulty analyses in §6. Here we consider the median of each difficulty metric—i.e., the difficulty across all entities or numbers in the example—rather than the minimum or maximum, as well as the ablation of refinement strategies against no-refinement strategies. We additionally checked two alternative axes: operation complexity (addition and subtraction as "easy" examples, and multiplication and division as "hard" examples), and entity popularity measured by link count rather than page views. The trends we observe in the main paper hold in all of these cases.

Tool Usage. Tables 7 and 8 show aggregate tool usage percentages over multiple axes. Overall, few-shot demonstrations induce tool usage in the majority of cases, though not all (i.e., below $100\%$).

![](images/ac4a0dde76cc7d60efb1b1a72b7167e5d83cc236108f7ff0a0d445f57c449897.jpg)

![](images/9e9aa15aaaedd0de974d1aef476063f9aed9e6102bea1d72aeaac09fbf896ab9.jpg)

![](images/d891c4106bc180293d8705b06b931c249a0fa7197a273df43947839153b0b755.jpg)

![](images/90fceda5634230bcbc932418e38619436b8dcecefc9b958c8161556d30ba5cfa.jpg)
Figure 5: An extension of Table 3 with results for both the average across few-shot experiments (a-b) and the maximum across few-shot experiments (c-d)—i.e., the maximum between 3-shot, 5-shot and 7-shot for each experiment setting.
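To make the answer normalization used for the exact-match metrics in §B.1 concrete, here is a minimal sketch. This is our own approximation, not the actual evaluation script: StrategyQA answers are stripped of capitalization and surrounding punctuation before matching "yes"/"no", and GSM8K answers are normalized as numbers.

```python
import string

def normalize_yes_no(answer: str) -> str:
    """Lowercase and strip surrounding whitespace and punctuation,
    so e.g. 'Yes.' and ' yes' both map to 'yes'."""
    return answer.strip().strip(string.punctuation + " ").lower()

def normalize_number(answer: str) -> str:
    """Normalize a numeric answer so e.g. '360.0', ' 360', and '360' match."""
    cleaned = answer.strip().replace(",", "")
    try:
        value = float(cleaned)
    except ValueError:
        return cleaned.lower()  # not a number; fall back to string matching
    return str(int(value)) if value == int(value) else str(value)

# StrategyQA-style exact match to "yes"/"no":
print(normalize_yes_no("Yes."))   # "yes"
# GSM8K-style numeric exact match:
print(normalize_number("360.0"))  # "360"
```

Normalizing both the gold and predicted answers before comparison ensures that superficial differences such as "Yes." versus "yes", or "360.0" versus "360", are not counted as errors.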
| Strategy | Model | MuSiQue 3-shot | MuSiQue 5-shot | MuSiQue 7-shot | StrategyQA 3-shot | StrategyQA 5-shot | StrategyQA 7-shot |
| --- | --- | --- | --- | --- | --- | --- | --- |
| RARR | Flan-PaLM-540B | 34.86 | 35.09 | 34.14 | 80.35 | 81.22 | 80.79 |
| RARR | Flan-UL2-20B | 13.40 | 12.01 | 12.98 | 55.90 | 40.17 | 42.79 |
| RARR | Flan-PaLM-62B | 23.60 | 23.42 | 24.07 | 75.98 | 77.73 | 77.73 |
| Baseline-CoT | Flan-PaLM-540B | 33.07 | 33.36 | 33.80 | 79.91 | 84.28 | 82.10 |
| Baseline-CoT | Flan-UL2-20B | 15.14 | 16.50 | 16.10 | 67.25 | 71.62 | 72.05 |
| Baseline-CoT | GPT-3 | 27.37 | 29.31 | 30.25 | 70.74 | 71.62 | 71.62 |
| Baseline-CoT | Flan-PaLM-62B | 23.60 | 23.42 | 24.27 | 75.98 | 79.04 | 80.35 |
| Baseline-SelfAsk | Flan-PaLM-540B | 25.80 | 25.34 | 24.31 | 76.86 | 73.36 | 75.55 |
| Baseline-SelfAsk | Flan-UL2-20B | 11.40 | 11.52 | 11.52 | 34.06 | 48.47 | 53.71 |
| Baseline-SelfAsk | GPT-3 | 27.98 | 28.13 | 29.80 | 72.05 | 74.24 | 73.36 |
| Baseline-SelfAsk | Flan-PaLM-62B | 5.28 | 9.52 | 5.43 | 58.95 | 75.98 | 74.24 |
| Baseline-Inline | Flan-PaLM-540B | 30.39 | 30.71 | 31.19 | 71.62 | 79.91 | 72.49 |
| Baseline-Inline | Flan-UL2-20B | 13.66 | 13.33 | 9.74 | 72.05 | 68.56 | 71.18 |
| Baseline-Inline | GPT-3 | 29.11 | 30.33 | 28.15 | 70.31 | 75.98 | 78.60 |
| Baseline-Inline | Flan-PaLM-62B | 23.42 | 22.69 | 21.86 | 75.11 | 73.36 | 75.55 |
| SelfAsk | Flan-PaLM-540B | 20.02 | 23.14 | 23.26 | 71.62 | 71.18 | 73.80 |
| SelfAsk | Flan-UL2-20B | 11.86 | 7.68 | 7.41 | 49.78 | 25.76 | 23.14 |
| SelfAsk | GPT-3 | 24.38 | 24.15 | 22.33 | 64.19 | 67.25 | 65.94 |
| SelfAsk | Flan-PaLM-62B | 13.79 | 14.80 | 12.68 | 67.25 | 67.69 | 66.38 |
| SelfAskQA | Flan-PaLM-540B | 21.08 | 21.92 | 22.91 | 71.62 | 69.43 | 73.80 |
| SelfAskQA | Flan-UL2-20B | 8.53 | 5.35 | 2.30 | 47.16 | 17.03 | 11.79 |
| SelfAskQA | GPT-3 | 32.74 | 31.30 | 30.34 | 65.50 | 67.69 | 70.31 |
| SelfAskQA | Flan-PaLM-62B | 15.42 | 17.49 | 14.51 | 67.25 | 68.12 | 69.00 |
| InlineQA | Flan-PaLM-540B | 31.86 | 32.78 | 32.10 | 70.31 | 72.93 | 73.36 |
| InlineQA | Flan-UL2-20B | 18.07 | 17.94 | 1.56 | 71.18 | 70.31 | 56.77 |
| InlineQA | GPT-3 | 34.90 | 36.65 | 31.32 | 70.31 | 72.05 | 70.31 |
| InlineQA | Flan-PaLM-62B | 12.52 | 11.65 | 10.55 | 61.14 | 63.32 | 61.57 |
| Check & Fix | Flan-PaLM-540B | 30.73 | 33.17 | 33.48 | 80.35 | 80.79 | 78.17 |
| Check & Fix | Flan-UL2-20B | 10.90 | 11.77 | 13.52 | 52.40 | 60.70 | 69.87 |
| Check & Fix | GPT-3 | 29.66 | 32.95 | 32.26 | 72.05 | 73.80 | 70.74 |
| Check & Fix | Flan-PaLM-62B | 25.21 | 26.39 | 26.47 | 75.55 | 71.18 | 76.42 |
| Inline | Flan-PaLM-540B | 18.97 | 24.42 | 22.61 | 74.24 | 74.24 | 75.11 |
| Inline | Flan-UL2-20B | 14.70 | 14.93 | 14.78 | 48.47 | 52.84 | 44.98 |
| Inline | GPT-3 | 28.85 | 31.03 | 33.54 | 70.31 | 69.43 | 68.56 |
| Inline | Flan-PaLM-62B | 9.95 | 9.45 | 13.32 | 54.59 | 68.56 | 70.31 |
| Interleaving | Flan-PaLM-540B | 23.71 | 21.29 | 20.51 | 76.86 | 78.60 | 75.98 |
| Interleaving | Flan-PaLM-62B | 23.43 | 23.71 | 24.42 | 74.67 | 71.62 | 74.24 |
| RARR-Top5 | Flan-PaLM-540B | 36.12 | 35.40 | 35.44 | 80.35 | 79.91 | 79.91 |
| SelfAskQA-Top5 | Flan-PaLM-540B | 19.75 | 21.60 | 21.99 | 69.87 | 70.31 | 72.05 |
| Inline-Top5 | Flan-PaLM-540B | 32.67 | 34.53 | 31.69 | 65.50 | 77.73 | 72.93 |
| Check & Fix-Top5 | Flan-PaLM-540B | 31.74 | 32.68 | 33.87 | 78.60 | 81.66 | 81.22 |
Table 9: Results for the knowledge-retrieval tasks of MuSiQue and StrategyQA. MuSiQue scores are F1 scores. Missing entries, such as "Interleaving" with Flan-UL2-20B, are experiments where the model failed to converge to an answer.
| Strategy | Model | DROP 3-shot | DROP 5-shot | DROP 7-shot | GSM8K 3-shot | GSM8K 5-shot | GSM8K 7-shot |
| --- | --- | --- | --- | --- | --- | --- |
| Baseline-CoT | Flan-PaLM-540B | 77.2 | 75.0 | 74.2 | 67.4 | 70.8 | 70.8 |
| Baseline-CoT | Flan-UL2-20B | — | — | — | 7.2 | 27.2 | 26.2 |
| Baseline-CoT | GPT-3 | 57.6 | 55.6 | 55.6 | 58.8 | 58.0 | 58.4 |
| Baseline-CoT | Flan-PaLM-62B | 65.6 | 63.6 | 59.2 | 47.4 | 46.2 | 47.4 |
| Baseline-Inline | Flan-PaLM-540B | 77.8 | 75.6 | 74.4 | 69.8 | 72.6 | 71.2 |
| Baseline-Inline | Flan-UL2-20B | — | — | — | 3.6 | 5.6 | 3.6 |
| Baseline-Inline | GPT-3 | 57.6 | 66.0 | 59.6 | 51.6 | 54.0 | 53.2 |
| Baseline-Inline | Flan-PaLM-62B | 59.0 | 64.0 | 59.2 | 48.8 | 47.8 | 48.0 |
| Inline | Flan-PaLM-540B | 76.2 | 75.2 | 74.4 | 61.4 | 61.8 | 70.6 |
| Inline | Flan-UL2-20B | — | — | — | 26.6 | 26.2 | 26.0 |
| Inline | GPT-3 | 56.8 | 66.0 | 45.2 | 50.8 | 52.4 | 52.8 |
| Inline | Flan-PaLM-62B | 57.0 | 64.0 | 57.8 | 48.8 | 47.8 | 48.2 |
| Check & Fix | Flan-PaLM-540B | 76.0 | 73.6 | 45.0 | 68.4 | 70.4 | 70.2 |
| Check & Fix | Flan-UL2-20B | — | — | — | 23.2 | 25.8 | 23.2 |
| Check & Fix | GPT-3 | 54.8 | 54.4 | 54.8 | 56.0 | 58.4 | 61.6 |
| Check & Fix | Flan-PaLM-62B | 65.0 | 63.6 | 44.2 | 46.8 | 44.0 | 46.6 |
Table 10: Results for the calculator settings of DROP and GSM8K. We omit Flan-UL2-20B results on DROP, as the model could not converge to solve the task with our prompts, likely because each example in this task is very long.
| Strategy | Aggregation | Model | MuSiQue | StrategyQA |
| --- | --- | --- | --- | --- |
| Baseline-CoT | Max | GPT-3 | 30.2 | 71.6 |
| Baseline-CoT | Average | GPT-3 | 29.0 | 71.3 |
| Baseline-CoT | Max | Flan-UL2-20B | 16.5 | 72.1 |
| Baseline-CoT | Average | Flan-UL2-20B | 15.9 | 70.3 |
| Baseline-CoT | Max | Flan-PaLM-62B | 24.3 | 80.3 |
| Baseline-CoT | Average | Flan-PaLM-62B | 23.8 | 78.5 |
| Baseline-CoT | Max | Flan-PaLM-540B | 33.8 | 84.3 |
| Baseline-CoT | Average | Flan-PaLM-540B | 33.4 | 82.1 |
| Baseline-SelfAsk | Max | GPT-3 | 29.8 | 74.2 |
| Baseline-SelfAsk | Average | GPT-3 | 28.6 | 73.2 |
| Baseline-SelfAsk | Max | Flan-UL2-20B | 11.5 | 53.7 |
| Baseline-SelfAsk | Average | Flan-UL2-20B | 11.5 | 45.4 |
| Baseline-SelfAsk | Max | Flan-PaLM-62B | 9.5 | 76.0 |
| Baseline-SelfAsk | Average | Flan-PaLM-62B | 6.7 | 69.7 |
| Baseline-SelfAsk | Max | Flan-PaLM-540B | 25.8 | 76.9 |
| Baseline-SelfAsk | Average | Flan-PaLM-540B | 25.1 | 75.3 |
| Baseline-Inline | Max | GPT-3 | 30.3 | 78.6 |
| Baseline-Inline | Average | GPT-3 | 29.2 | 75.0 |
| Baseline-Inline | Max | Flan-UL2-20B | 13.7 | 72.1 |
| Baseline-Inline | Average | Flan-UL2-20B | 12.2 | 70.6 |
| Baseline-Inline | Max | Flan-PaLM-62B | 23.4 | 75.5 |
| Baseline-Inline | Average | Flan-PaLM-62B | 22.7 | 74.7 |
| Baseline-Inline | Max | Flan-PaLM-540B | 31.2 | 79.9 |
| Baseline-Inline | Average | Flan-PaLM-540B | 30.8 | 74.7 |
Table 11: Aggregations by few-shot prompt of the results in Table 9 (baselines).

![](images/3bea2fe03cccd890f0ff782f744e84a91fdfbf876666e0691569d43890f62e6b.jpg)
Figure 6: An extension of Table 4. (a-b) take the minimum of entity page views to ablate examples that have rare entities, and the maximum of numbers to ablate examples with large numbers. (c-e) take the median in both cases, and (f) shows the results when comparing TA strategies between refinement and non-refinement types.
| Strategy | Aggregation | Model | MuSiQue | StrategyQA |
| --- | --- | --- | --- | --- |
| Interleaving | Max | Flan-PaLM-62B | 24.4 | 74.7 |
| Interleaving | Average | Flan-PaLM-62B | 23.9 | 73.9 |
| Interleaving | Max | Flan-PaLM-540B | 23.7 | 78.2 |
| Interleaving | Average | Flan-PaLM-540B | 21.8 | 77.0 |
| RARR | Max | Flan-UL2-20B | 13.4 | 55.9 |
| RARR | Average | Flan-UL2-20B | 12.8 | 46.3 |
| RARR | Max | Flan-PaLM-62B | 24.1 | 77.7 |
| RARR | Average | Flan-PaLM-62B | 23.7 | 77.1 |
| RARR | Max | Flan-PaLM-540B | 35.1 | 81.2 |
| RARR | Average | Flan-PaLM-540B | 34.7 | 80.6 |
| RARR-Top5 | Max | Flan-PaLM-540B | 36.1 | 80.3 |
| RARR-Top5 | Average | Flan-PaLM-540B | 35.7 | 80.1 |
| Check & Fix | Max | GPT-3 | 32.9 | 73.8 |
| Check & Fix | Average | GPT-3 | 31.6 | 72.2 |
| Check & Fix | Max | Flan-UL2-20B | 13.5 | 69.9 |
| Check & Fix | Average | Flan-UL2-20B | 12.1 | 61.0 |
| Check & Fix | Max | Flan-PaLM-62B | 26.5 | 76.4 |
| Check & Fix | Average | Flan-PaLM-62B | 26.0 | 74.4 |
| Check & Fix | Max | Flan-PaLM-540B | 33.5 | 80.8 |
| Check & Fix | Average | Flan-PaLM-540B | 32.3 | 79.6 |
| Check & Fix-Top5 | Max | Flan-PaLM-540B | 33.9 | 81.7 |
| Check & Fix-Top5 | Average | Flan-PaLM-540B | 32.8 | 80.5 |
Table 12: Aggregations by few-shot prompt of the results in Table 9 (TA strategies).
| Strategy | Aggregation | Model | MuSiQue | StrategyQA |
| --- | --- | --- | --- | --- |
| SelfAsk | Max | GPT-3 | 24.4 | 67.2 |
| SelfAsk | Average | GPT-3 | 23.6 | 65.8 |
| SelfAsk | Max | Flan-UL2-20B | 11.9 | 49.8 |
| SelfAsk | Average | Flan-UL2-20B | 9.0 | 32.9 |
| SelfAsk | Max | Flan-PaLM-62B | 14.8 | 67.7 |
| SelfAsk | Average | Flan-PaLM-62B | 13.8 | 67.1 |
| SelfAsk | Max | Flan-PaLM-540B | 23.4 | 74.2 |
| SelfAsk | Average | Flan-PaLM-540B | 22.3 | 72.2 |
| SelfAskQA | Max | GPT-3 | 32.7 | 70.3 |
| SelfAskQA | Average | GPT-3 | 31.5 | 67.8 |
| SelfAskQA | Max | Flan-UL2-20B | 8.5 | 47.2 |
| SelfAskQA | Average | Flan-UL2-20B | 5.4 | 25.3 |
| SelfAskQA | Max | Flan-PaLM-62B | 17.5 | 69.0 |
| SelfAskQA | Average | Flan-PaLM-62B | 15.8 | 68.1 |
| SelfAskQA | Max | Flan-PaLM-540B | 22.8 | 75.1 |
| SelfAskQA | Average | Flan-PaLM-540B | 21.9 | 71.9 |
| SelfAskQA-Top5 | Max | Flan-PaLM-540B | 22.0 | 72.1 |
| SelfAskQA-Top5 | Average | Flan-PaLM-540B | 21.1 | 70.7 |
| InlineQA | Max | GPT-3 | 36.7 | 72.1 |
| InlineQA | Average | GPT-3 | 34.3 | 70.9 |
| InlineQA | Max | Flan-UL2-20B | 18.1 | 71.2 |
| InlineQA | Average | Flan-UL2-20B | 12.5 | 66.1 |
| InlineQA | Max | Flan-PaLM-62B | 12.5 | 63.3 |
| InlineQA | Average | Flan-PaLM-62B | 11.6 | 62.0 |
| InlineQA | Max | Flan-PaLM-540B | 32.4 | 73.4 |
| InlineQA | Average | Flan-PaLM-540B | 32.1 | 72.2 |
| Inline | Max | GPT-3 | 33.5 | 70.3 |
| Inline | Average | GPT-3 | 31.1 | 69.4 |
| Inline | Max | Flan-UL2-20B | 14.9 | 52.8 |
| Inline | Average | Flan-UL2-20B | 14.8 | 48.8 |
| Inline | Max | Flan-PaLM-62B | 13.3 | 70.3 |
| Inline | Average | Flan-PaLM-62B | 10.9 | 64.5 |
| Inline | Max | Flan-PaLM-540B | 24.3 | 74.7 |
| Inline | Average | Flan-PaLM-540B | 22.0 | 74.2 |
| InlineQA-Top5 | Max | Flan-PaLM-540B | 34.5 | 77.7 |
| InlineQA-Top5 | Average | Flan-PaLM-540B | 33.0 | 72.1 |
Table 13: Aggregations by few-shot prompt of the results in Table 9 (TA strategies).
| Strategy | Aggregation | Model | DROP | GSM8K |
| --- | --- | --- | --- | --- |
| Baseline-CoT | Max | GPT-3 | 57.6 | 58.8 |
| Baseline-CoT | Average | GPT-3 | 56.3 | 58.4 |
| Baseline-CoT | Max | Flan-UL2-20B | — | 27.2 |
| Baseline-CoT | Average | Flan-UL2-20B | — | 20.2 |
| Baseline-CoT | Max | Flan-PaLM-62B | 65.6 | 47.4 |
| Baseline-CoT | Average | Flan-PaLM-62B | 62.8 | 47.0 |
| Baseline-CoT | Max | Flan-PaLM-540B | 77.2 | 70.8 |
| Baseline-CoT | Average | Flan-PaLM-540B | 75.5 | 69.7 |
| Baseline-Inline | Max | GPT-3 | 66.0 | 54.0 |
| Baseline-Inline | Average | GPT-3 | 61.1 | 52.9 |
| Baseline-Inline | Max | Flan-UL2-20B | 9.2 | 5.6 |
| Baseline-Inline | Average | Flan-UL2-20B | 4.2 | 4.3 |
| Baseline-Inline | Max | Flan-PaLM-62B | 64.0 | 48.8 |
| Baseline-Inline | Average | Flan-PaLM-62B | 60.7 | 48.2 |
| Baseline-Inline | Max | Flan-PaLM-540B | 77.8 | 72.6 |
| Baseline-Inline | Average | Flan-PaLM-540B | 75.9 | 71.2 |
| Check & Fix | Max | GPT-3 | 54.8 | 61.6 |
| Check & Fix | Average | GPT-3 | 54.7 | 58.7 |
| Check & Fix | Max | Flan-UL2-20B | — | 25.8 |
| Check & Fix | Average | Flan-UL2-20B | — | 24.1 |
| Check & Fix | Max | Flan-PaLM-62B | 65.0 | 46.8 |
| Check & Fix | Average | Flan-PaLM-62B | 57.6 | 45.8 |
| Check & Fix | Max | Flan-PaLM-540B | 76.0 | 70.4 |
| Check & Fix | Average | Flan-PaLM-540B | 64.9 | 69.7 |
| Inline | Max | GPT-3 | 66.0 | 52.8 |
| Inline | Average | GPT-3 | 56.0 | 52.0 |
| Inline | Max | Flan-UL2-20B | — | 26.6 |
| Inline | Average | Flan-UL2-20B | — | 26.3 |
| Inline | Max | Flan-PaLM-62B | 64.0 | 48.8 |
| Inline | Average | Flan-PaLM-62B | 59.6 | 48.3 |
| Inline | Max | Flan-PaLM-540B | 76.2 | 70.8 |
| Inline | Average | Flan-PaLM-540B | 75.3 | 64.5 |
+ +Table 14: Aggregations by few-shot prompt of the results in Table 10. \ No newline at end of file diff --git a/2023/A Comprehensive Evaluation of Tool-Assisted Generation Strategies/images.zip b/2023/A Comprehensive Evaluation of Tool-Assisted Generation Strategies/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..e5704168146a55abd11f30f31be65b5498761f89 --- /dev/null +++ b/2023/A Comprehensive Evaluation of Tool-Assisted Generation Strategies/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:028a2c5e51bf6ca432e82ffd5da634424e31aaa6bc2e2f5086d6954e61f19d55 +size 2228952 diff --git a/2023/A Comprehensive Evaluation of Tool-Assisted Generation Strategies/layout.json b/2023/A Comprehensive Evaluation of Tool-Assisted Generation Strategies/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..fb20bbedeae83a69bb5e34f56085f3718456f021 --- /dev/null +++ b/2023/A Comprehensive Evaluation of Tool-Assisted Generation Strategies/layout.json @@ -0,0 +1,9275 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 86, + 75, + 507, + 94 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 75, + 507, + 94 + ], + "spans": [ + { + "bbox": [ + 86, + 75, + 507, + 94 + ], + "type": "text", + "content": "A Comprehensive Evaluation of Tool-Assisted Generation Strategies" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 167, + 109, + 428, + 124 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 167, + 109, + 428, + 124 + ], + "spans": [ + { + "bbox": [ + 167, + 109, + 428, + 124 + ], + "type": "text", + "content": "Alon Jacovi" + }, + { + "bbox": [ + 167, + 109, + 428, + 124 + ], + "type": "inline_equation", + "content": "^{1*}" + }, + { + "bbox": [ + 167, + 109, + 428, + 124 + ], + "type": "text", + "content": " Avi Caciularu" + }, + { + "bbox": [ + 167, + 109, + 428, + 124 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ 
+ 167, + 109, + 428, + 124 + ], + "type": "text", + "content": " Jonathan Herzig" + }, + { + "bbox": [ + 167, + 109, + 428, + 124 + ], + "type": "inline_equation", + "content": "^{2}" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 178, + 128, + 418, + 142 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 178, + 128, + 418, + 142 + ], + "spans": [ + { + "bbox": [ + 178, + 128, + 418, + 142 + ], + "type": "text", + "content": "Roee Aharoni² Bernd Bohnet³ Mor Geva³" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 143, + 153, + 452, + 181 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 143, + 153, + 452, + 181 + ], + "spans": [ + { + "bbox": [ + 143, + 153, + 452, + 181 + ], + "type": "text", + "content": "1Bar Ilan University 2Google Research 3Google DeepMind alonjacovi@gmail.com" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 155, + 212, + 202, + 226 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 155, + 212, + 202, + 226 + ], + "spans": [ + { + "bbox": [ + 155, + 212, + 202, + 226 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 86, + 238, + 274, + 583 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 238, + 274, + 583 + ], + "spans": [ + { + "bbox": [ + 86, + 238, + 274, + 583 + ], + "type": "text", + "content": "A growing area of research investigates augmenting language models with tools (e.g., search engines, calculators) to overcome their shortcomings (e.g., missing or incorrect knowledge, incorrect logical inferences). Various few-shot tool-usage strategies have been proposed. However, there is no systematic and fair comparison across different strategies, or between these strategies and strong baselines that do not leverage tools. 
We conduct an extensive empirical analysis, finding that (1) across various datasets, example difficulty levels, and models, strong no-tool baselines are competitive to tool-assisted strategies, implying that effectively using tools with in-context demonstrations is a difficult unsolved problem; (2) for knowledge-retrieval tasks, strategies that refine incorrect outputs with tools outperform strategies that retrieve relevant information ahead of or during generation; (3) tool-assisted strategies are expensive in the number of tokens they require to work—incurring additional costs by orders of magnitude—which does not translate into significant improvement in performance. Overall, our findings suggest that few-shot tool integration is still an open challenge, emphasizing the need for comprehensive evaluations of future strategies to accurately assess their benefits and costs." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 68, + 596, + 154, + 608 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 596, + 154, + 608 + ], + "spans": [ + { + "bbox": [ + 68, + 596, + 154, + 608 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 618, + 291, + 754 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 618, + 291, + 754 + ], + "spans": [ + { + "bbox": [ + 67, + 618, + 291, + 754 + ], + "type": "text", + "content": "Augmenting language models (LMs) with tools has been proposed to overcome LMs' inherent weaknesses (Mialon et al., 2023; Qian et al., 2022), such as the lack of grounding to reliable or updated sources (Jiang et al., 2023), incoherent logical ability (Liu et al., 2022; Ling et al., 2023) and arithmetic ability (Gao et al., 2023b), among others. 
This is done through tool-assisted (TA) generation, where LMs are trained or instructed to use external tools, such as search engines over the web—e.g.," + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 212, + 526, + 293 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 212, + 526, + 293 + ], + "spans": [ + { + "bbox": [ + 302, + 212, + 526, + 293 + ], + "type": "text", + "content": "Google search (Gao et al., 2023a; Press et al., 2023; Nakano et al., 2022), Wikipedia search (Trivedi et al., 2022a), a calculator (Schick et al., 2023), or a python interpreter (Paranjape et al., 2023). Often, tool invocations are structured as Chain-of-Thought (CoT) long-form answers (Wei et al., 2023)." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 297, + 526, + 593 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 297, + 526, + 593 + ], + "spans": [ + { + "bbox": [ + 302, + 297, + 526, + 593 + ], + "type": "text", + "content": "Recent work proposed a variety of strategies for interfacing between the LM and the tool, such as through demonstrations of API calls (Paranjape et al., 2023) or using the tool to refine the model's output (Gao et al., 2023a)—see Figure 2 for an overview. But what are the advantages and tradeoffs of different TA strategies? For example, some strategies incur significantly higher computation costs than others with little to no improvement in performance. There is a gap in the literature on the evaluation of such strategies, in particular against strong baselines and against each other. Concretely, works that report empirical evaluations are often restricted to comparisons of a single proposed strategy against a limited selection of non-TA baselines, using a limited selection of LMs or even a single LM, or focus on evaluating various LMs with a specific TA strategy (Li et al., 2023). 
Additionally, comparisons often do not consider the increase in computation that each TA strategy requires, which vary significantly, and have a large effect on inference time or cost." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 597, + 526, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 597, + 526, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 597, + 526, + 772 + ], + "type": "text", + "content": "The above issues are only some of the pitfalls we observed in the literature, limiting the scope of current evaluations. In §3, we analyze the literature for common pitfalls and collect a set of guidelines towards a fair and reliable evaluation procedure specifically for TA strategies. Next (§4), we conduct a study which addresses all of the observed pitfalls, using GPT3, Flan-UL2 and Flan-PaLM, and complex reasoning benchmarks StrategyQA, MuSiQue, GSM8K, and DROP. We report a fair, systematic comparison of five few-shot TA strategies across multiple models and demonstrations, and all strategies use the same set of tools." + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 84, + 761, + 280, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 761, + 280, + 772 + ], + "spans": [ + { + "bbox": [ + 84, + 761, + 280, + 772 + ], + "type": "text", + "content": "*Work done during an internship at Google Research." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 283, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 283, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 283, + 780, + 312, + 791 + ], + "type": "text", + "content": "13856" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 125, + 795, + 468, + 818 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 125, + 795, + 468, + 818 + ], + "spans": [ + { + "bbox": [ + 125, + 795, + 468, + 818 + ], + "type": "text", + "content": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 13856-13878 December 6-10, 2023 ©2023 Association for Computational Linguistics" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 75, + 71, + 177, + 252 + ], + "blocks": [ + { + "bbox": [ + 75, + 71, + 177, + 252 + ], + "lines": [ + { + "bbox": [ + 75, + 71, + 177, + 252 + ], + "spans": [ + { + "bbox": [ + 75, + 71, + 177, + 252 + ], + "type": "image", + "image_path": "1c5421e70ff7f119d3c54f2e5ed5b8a0634970e8cb91f3aab86bd620f7368751.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 264, + 291, + 312 + ], + "lines": [ + { + "bbox": [ + 67, + 264, + 291, + 312 + ], + "spans": [ + { + "bbox": [ + 67, + 264, + 291, + 312 + ], + "type": "text", + "content": "Figure 1: Illustration of tool-assistance strategies that invoke tools and insert their outputs into the prompt (a), and strategies that first generate some output, and only use tools to fix and refine it (b)." 
+ } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 180, + 70, + 283, + 252 + ], + "blocks": [ + { + "bbox": [ + 180, + 70, + 283, + 252 + ], + "lines": [ + { + "bbox": [ + 180, + 70, + 283, + 252 + ], + "spans": [ + { + "bbox": [ + 180, + 70, + 283, + 252 + ], + "type": "image", + "image_path": "a20317809cde42b83fa8c5607c78d8b4b0f21cf28bae768bb6671f7e6942dd2c.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 325, + 291, + 554 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 325, + 291, + 554 + ], + "spans": [ + { + "bbox": [ + 67, + 325, + 291, + 554 + ], + "type": "text", + "content": "We analyze the study results (§5) and arrive at surprising conclusions: (1) Non-TA baselines are stronger than initially reported. In most cases, TA strategies do not significantly or at all improve on non-TA strategies on popular Question Answering datasets. (2) For retrieval tools in knowledge tasks, TA strategies that fix model output after it is generated perform better than TA strategies that prompt the model to interface with the tool directly during generation. For calculator tools in calculation-intensive tasks, the relationship is not decisive. (3) TA strategies incur significantly higher computation costs than non-TA baselines by multiplicative factors, and there is no general correlation between computation cost and performance, with the exception that refinement strategies in retrieval settings are more costly than non-refinement strategies." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 556, + 291, + 717 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 556, + 291, + 717 + ], + "spans": [ + { + "bbox": [ + 67, + 556, + 291, + 717 + ], + "type": "text", + "content": "In §6 we report a fine-grained analysis of the results. 
We investigate the effect of each example's difficulty (e.g., very large numbers, or very rare entities) on improvement from tool usage, and find that tools do not systematically improve model performance on harder examples, where they were expected to have the strongest improvement. Finally, based on an error analysis of failure cases, we find that the majority of mistakes follow incorrect tool invocations, rather than incorrect tool responses (in the case of the retrieval tool) or incorrect inferences based on correct tool usage." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 719, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 719, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 719, + 291, + 772 + ], + "type": "text", + "content": "In conclusion, we conduct an extensive evaluation of few-shot TA strategies, finding that previous estimates of tool-usage performance are not representative. Overall, this suggests that few-shot tool" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 71, + 526, + 193 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 193 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 193 + ], + "type": "text", + "content": "integration is still an open challenge. We call the community to evaluate future strategies systematically, while taking into account the significant costs that these strategies require in comparison to their benefits. Towards this, we provide a set of concrete guidelines for fair and reliable evaluation of TA strategies. Moreover, we release the handcrafted collection of 184 demonstrations used in our study (attached in the supplementary material)."
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 204, + 486, + 218 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 204, + 486, + 218 + ], + "spans": [ + { + "bbox": [ + 302, + 204, + 486, + 218 + ], + "type": "text", + "content": "2 Tool-Assisted Language Models" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 226, + 525, + 253 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 226, + 525, + 253 + ], + "spans": [ + { + "bbox": [ + 302, + 226, + 525, + 253 + ], + "type": "text", + "content": "We describe existing few-shot strategies for augmenting LMs with tools and discuss related work." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 264, + 437, + 276 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 264, + 437, + 276 + ], + "spans": [ + { + "bbox": [ + 302, + 264, + 437, + 276 + ], + "type": "text", + "content": "2.1 Few-shot TA strategies" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 281, + 526, + 416 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 281, + 526, + 416 + ], + "spans": [ + { + "bbox": [ + 302, + 281, + 526, + 416 + ], + "type": "text", + "content": "Strategies for tool usage can be broadly divided into two categories: (a) Using tools during generation and inserting the tools' outputs into the model's prompt (Figures 1a, 2a); (b) Using tools to refine the LM's output after generation (Figures 1b, 2b). Strategies can be further categorized into settings where the tool is heuristically called in a pipeline or called when the model generates pre-specified tool calls. Refer to Mialon et al. (2023) for a review of the literature on TA strategies and models."
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 417, + 526, + 701 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 417, + 526, + 701 + ], + "spans": [ + { + "bbox": [ + 302, + 417, + 526, + 701 + ], + "type": "text", + "content": "Among TA strategies of type (a): SelfAsk (Press et al., 2023) decomposes the task into subtasks as simpler questions, such that a tool can be called on each question. A related strategy is Demonstrate-Search-Predict (Khattab et al., 2023). Inline strategies such as Toolformer (Schick et al., 2023)1, ART (Paranjape et al., 2023), inter alia (Chen et al., 2022; Gao et al., 2023b; Lyu et al., 2023) demonstrate tool usage with pre-defined words or tokens and tool arguments, halt generation when those tokens and arguments are generated, invoke the tool, and insert its output into the prompt to resume generation. Interleaving Retrieval (Trivedi et al., 2022a) does not directly instruct the model to use tools, but calls the tool on each reasoning step, to provide the model with additional context for future steps. Jiang et al. (2023) propose a similar strategy, opting to re-write each step after using it as a query. There are also strategies such as Decomposed Prompting (Khot et al., 2023) that are generalizations of the previous strategies."
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 702, + 525, + 742 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 702, + 525, + 742 + ], + "spans": [ + { + "bbox": [ + 302, + 702, + 525, + 742 + ], + "type": "text", + "content": "Among TA strategies of type (b): RARR (Gao et al., 2023a) involves a pipeline designed for knowledge-based tasks: verifying the relevance" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 751, + 525, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 525, + 772 + ], + "type": "text", + "content": "1Schick et al. primarily discusses tool usage with training. We adapt only the few-shot strategy in our experiments." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "13857" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 73, + 84, + 521, + 214 + ], + "blocks": [ + { + "bbox": [ + 73, + 84, + 521, + 214 + ], + "lines": [ + { + "bbox": [ + 73, + 84, + 521, + 214 + ], + "spans": [ + { + "bbox": [ + 73, + 84, + 521, + 214 + ], + "type": "image", + "image_path": "ad8b756c88daba3fe46a897f52a1a2033fba139188f1f2989272dd6bc4926d3f.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 73, + 216, + 521, + 380 + ], + "blocks": [ + { + "bbox": [ + 73, + 216, + 521, + 380 + ], + "lines": [ + { + "bbox": [ + 73, + 216, + 521, + 380 + ], + "spans": [ + { + "bbox": [ + 73, + 216, + 521, + 380 + ], + "type": "image", + "image_path": 
"8cda9432777f2f391b342d52be331e08c439f215932cef30fc8c448fab0a1ca7.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 73, + 381, + 521, + 495 + ], + "blocks": [ + { + "bbox": [ + 73, + 381, + 521, + 495 + ], + "lines": [ + { + "bbox": [ + 73, + 381, + 521, + 495 + ], + "spans": [ + { + "bbox": [ + 73, + 381, + 521, + 495 + ], + "type": "image", + "image_path": "56b4cd0ce032c7e6ed5ff221971fa9d4a43294255f9ec8ab67388abf55e7f2d6.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 502, + 525, + 527 + ], + "lines": [ + { + "bbox": [ + 67, + 502, + 525, + 527 + ], + "spans": [ + { + "bbox": [ + 67, + 502, + 525, + 527 + ], + "type": "text", + "content": "Figure 2: Overview of the TA strategies implemented in this work. Blue text marks tool queries, tool responses are in turquoise cells, refinement is in orange cells and dashed arrows, and yellow cells are LM generations." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 538, + 291, + 700 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 538, + 291, + 700 + ], + "spans": [ + { + "bbox": [ + 67, + 538, + 291, + 700 + ], + "type": "text", + "content": "and factuality of each claim by generating questions based on the claim, retrieving snippets that answer these questions, and checking if the answers match the information in the claim. If not, the claim is refined to match the snippets. Check & Fix, a method we introduce in this work, uses each CoT step as a search query, and checks whether the step is entailed by the retrieved snippets by prompting the model to classify this entailment. This strategy is similar to Jiang et al. (2023, contemporaneous work), which additionally uses low-confidence filtering but omits the entailment verification." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 713, + 161, + 725 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 713, + 161, + 725 + ], + "spans": [ + { + "bbox": [ + 67, + 713, + 161, + 725 + ], + "type": "text", + "content": "2.2 Related Work" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 733, + 291, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 733, + 291, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 733, + 291, + 773 + ], + "type": "text", + "content": "Training LMs to use tools. While we are primarily concerned with few-shot tool assistance of LM generation, the literature also explores LMs which" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 538, + 526, + 605 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 538, + 526, + 605 + ], + "spans": [ + { + "bbox": [ + 302, + 538, + 526, + 605 + ], + "type": "text", + "content": "are trained to use specific tools (Parisi et al., 2022; Hao et al., 2023; Patil et al., 2023). These methods are constrained to the tools seen during training, and require data (annotated, bootstrapped, or synthetically constructed) of tool demonstrations." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 615, + 526, + 697 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 615, + 526, + 697 + ], + "spans": [ + { + "bbox": [ + 302, + 615, + 526, + 697 + ], + "type": "text", + "content": "Other tool-assisted neural networks. There is adjacent research on augmenting neural networks, in ways besides textual interfaces, with tools (e.g., Andor et al., 2019; Jacovi et al., 2019) or training differentiable subnetworks that heavily mimic tools (Neelakantan et al., 2017; Trask et al., 2018)." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 709, + 420, + 722 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 709, + 420, + 722 + ], + "spans": [ + { + "bbox": [ + 302, + 709, + 420, + 722 + ], + "type": "text", + "content": "3 Evaluation Pitfalls" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 733, + 525, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 733, + 525, + 773 + ], + "spans": [ + { + "bbox": [ + 302, + 733, + 525, + 773 + ], + "type": "text", + "content": "While there is a plethora of TA strategies (§2.1), no systematic comparison of these strategies has been conducted. Research that proposes TA strategies in" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 223, + 68, + 383, + 79 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 68, + 383, + 79 + ], + "spans": [ + { + "bbox": [ + 223, + 68, + 383, + 79 + ], + "type": "text", + "content": "Who lived longer, Muhammad Ali or Alan Turing?" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "13858" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 73, + 68, + 520, + 257 + ], + "blocks": [ + { + "bbox": [ + 73, + 68, + 520, + 257 + ], + "lines": [ + { + "bbox": [ + 73, + 68, + 520, + 257 + ], + "spans": [ + { + "bbox": [ + 73, + 68, + 520, + 257 + ], + "type": "table", + "html": "
PitfallRecommendation
(1)Coupling the TA strategy and the tool together.Comparisons of TA strategies should use the same tools across strategies.
(2)Forcing no-tool baselines to the framework of the TA strategy.The optimal way to solve the task without tools may be different from solving the task with tools: No-tool baselines should include multiple variants of both free-form and structured strategies, to ensure the TA strategies are not given an advantage.
(3)Using one model across all comparisons.Different models may behave differently when it comes to using tools effectively, based on their training data. Multiple models should be tested, if possible.
(4)Using one prompt and set of demonstrations across all comparisons.Multiple different sets of demonstrations should be used to get reliable estimates of few-shot performance.
(5)Not considering TA strategy costs.TA strategies can be efficient or inefficient with regards to the prompt tokens and generation tokens they require to work, with respect to no-tool baselines or with respect to each other. The differences can be significant (§5). Comparisons of TA strategies should factor the computation cost of the strategy, which we term as token efficiency.
", + "image_path": "5d7729ae102ec068b6b348f158fca39595c8b1721908888c58a765ba8bc62418.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 92, + 264, + 499, + 278 + ], + "lines": [ + { + "bbox": [ + 92, + 264, + 499, + 278 + ], + "spans": [ + { + "bbox": [ + 92, + 264, + 499, + 278 + ], + "type": "text", + "content": "Table 1: Summary of evaluation pitfalls of TA strategies (§3) and recommendations to mitigate them." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 67, + 298, + 290, + 391 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 298, + 290, + 391 + ], + "spans": [ + { + "bbox": [ + 67, + 298, + 290, + 391 + ], + "type": "text", + "content": "few-shot settings is often not focused on evaluating properties of those strategies, but other aspects of LM capabilities (Press et al., 2023; Gao et al., 2023a), usage in particular strict contexts (Paranjape et al., 2023), evaluating various LM models themselves with a particular strategy (Mialon et al., 2023), and so on." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 394, + 291, + 461 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 394, + 291, + 461 + ], + "spans": [ + { + "bbox": [ + 67, + 394, + 291, + 461 + ], + "type": "text", + "content": "Below we collect observations from the literature that demonstrate the limited evaluation scope of TA strategies, in an effort to establish a set of criteria for future evaluations to be reliable and fair (a summary is provided in Table 1)." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 465, + 291, + 772 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 67, + 465, + 291, + 519 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 465, + 291, + 519 + ], + "spans": [ + { + "bbox": [ + 67, + 465, + 291, + 519 + ], + "type": "text", + "content": "(1) Coupling the TA strategy and the tool together. Comparisons may vary the tools and methods together (e.g., a TA strategy " + }, + { + "bbox": [ + 67, + 465, + 291, + 519 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 67, + 465, + 291, + 519 + ], + "type": "text", + "content": " with a tool " + }, + { + "bbox": [ + 67, + 465, + 291, + 519 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 67, + 465, + 291, + 519 + ], + "type": "text", + "content": " versus a TA strategy " + }, + { + "bbox": [ + 67, + 465, + 291, + 519 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 67, + 465, + 291, + 519 + ], + "type": "text", + "content": " with a tool " + }, + { + "bbox": [ + 67, + 465, + 291, + 519 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 67, + 465, + 291, + 519 + ], + "type": "text", + "content": ")." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 524, + 290, + 661 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 524, + 290, + 661 + ], + "spans": [ + { + "bbox": [ + 67, + 524, + 290, + 661 + ], + "type": "text", + "content": "(2) Forcing baselines to the framework of the TA strategy. Typical baselines to a given TA strategy are to apply that strategy while letting the model generate the tool's output instead of the tool, and using CoT prompting. However, the optimal way to solve the problem without tools may not be the same as the TA strategy in question. 
In this work, we implement three different baselines (§4) and find that there is no clear winner among two of them (we explore this empirically in §5)." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 665, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 665, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 665, + 291, + 772 + ], + "type": "text", + "content": "(3) Using one model across all comparisons. Often, a single model is chosen to use as the underlying model for the TA strategy. This limits the insights from the evaluation to this model in particular, since conclusions may not carry over to other models. In this work, we find that the best-performing strategies vary significantly across different LMs (we explore this empirically in §5)." + } + ] + } + ], + "index": 6 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 302, + 298, + 526, + 519 + ], + "type": "list", + "angle": 0, + "index": 10, + "blocks": [ + { + "bbox": [ + 302, + 298, + 526, + 433 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 298, + 526, + 433 + ], + "spans": [ + { + "bbox": [ + 302, + 298, + 526, + 433 + ], + "type": "text", + "content": "(4) Using one prompt and one set of demonstrations across all comparisons. Few-shot evaluation is known to be unreliable when using a single set of demonstrations as a single prompt (Perez et al., 2021). Furthermore, some prompts used in TA strategy evaluations—in particular, CoT demonstrations—appear so often on the internet that they are suspected to be part of the models' training data, further compromising their function (Jacovi et al., 2023)." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 439, + 526, + 519 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 439, + 526, + 519 + ], + "spans": [ + { + "bbox": [ + 302, + 439, + 526, + 519 + ], + "type": "text", + "content": "(5) Not considering TA strategy costs. 
In many cases, the TA strategy requires significantly more compute than no-tool baselines, and different TA strategies also require different amounts of computation. Computation cost is not traditionally considered in comparisons." + } + ] + } + ], + "index": 9 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 302, + 533, + 427, + 547 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 533, + 427, + 547 + ], + "spans": [ + { + "bbox": [ + 302, + 533, + 427, + 547 + ], + "type": "text", + "content": "4 Experimental Setup" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 555, + 525, + 636 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 555, + 525, + 636 + ], + "spans": [ + { + "bbox": [ + 302, + 555, + 525, + 636 + ], + "type": "text", + "content": "Our goal is to conduct a fair and reliable comparison of TA strategies, without being influenced by properties of specific models, tools or prompts. To this end, we focus on few-shot tool usage, a popular TA scheme that allows flexibility around using new tools and adapting tools to specific tasks." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 638, + 526, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 638, + 526, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 638, + 526, + 772 + ], + "type": "text", + "content": "In what follows, we describe our experimental setup. What guides this experimental setup is to perform a comprehensive, rigorous evaluation without the pitfalls of §3. Our evaluation covers 5 different TA strategies, 4 recent LMs, 4 complex reasoning datasets, 3 few-shot prompts, and 2 tools. For each TA strategy + dataset + model combination, we run three experiments with a different number of demonstrations. 
Overall, our evaluation includes an execution of 342 experiments, each of which" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "13859" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 291, + 113 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 291, + 113 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 291, + 113 + ], + "type": "text", + "content": "generates 250 (GPT-3) or 500 (non-GPT-3) long-form answers. Additional implementation details are in Appendix A." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 119, + 291, + 295 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 119, + 291, + 295 + ], + "spans": [ + { + "bbox": [ + 67, + 119, + 291, + 295 + ], + "type": "text", + "content": "Tool-assisted strategies. We evaluate the TA strategies shown in Figure 2: SelfAsk, Inline, Interleaving, C&F and RARR. We additionally include variants of SelfAsk and Inline where the model is separately called to summarize tool output in relevant context, as it can often be very long (SelfAskQA and InlineQA; see Appendix A for details). Finally, in the retrieval settings, we use Top-1 retrieval for all models, and additionally Top-5 retrieval for the Flan-PaLM-540B model (see \"Models\" below) to check whether additional retrieved information can improve performance despite the significantly longer input and processing cost." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 298, + 292, + 502 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 298, + 292, + 502 + ], + "spans": [ + { + "bbox": [ + 69, + 298, + 292, + 502 + ], + "type": "text", + "content": "For SelfAsk and RARR we use the original implementation provided by the methods' creators. We implement Interleaving (Trivedi et al., 2022a), as at the time of this research no implementation was available. Importantly, this implementation yields similar performance to that of existing approaches that combine CoT with retrieval from Wikipedia by He et al. (2022); Jiang et al. (2023) (see full results in Appendix B). Additionally, Jiang et al. (2023, Figure 4) implemented methods that apply retrieval and refinement over generated CoT that are similar to C&F and achieve similar performance to ours, as well (see Appendix B). For Inline, we are not aware of reports on few-shot performance of a similar strategy in the literature." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 508, + 291, + 618 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 508, + 291, + 618 + ], + "spans": [ + { + "bbox": [ + 67, + 508, + 291, + 618 + ], + "type": "text", + "content": "Baseline strategies. We use no-tool versions of SelfAsk, Inline, and standard CoT prompting. The SelfAsk and Inline baselines simply involve giving the model the prompts used for the tool-based versions, while disabling tool calls (such that the model generates the output in-place of the tools). These are the baselines used by Press et al. (2023) and Schick et al. (2023) respectively." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 624, + 291, + 774 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 624, + 291, + 774 + ], + "spans": [ + { + "bbox": [ + 67, + 624, + 291, + 774 + ], + "type": "text", + "content": "Datasets. 
We consider tasks that require complex reasoning, where models could potentially benefit from external tool usage. Specifically, we use StrategyQA (Geva et al., 2021) and MuSiQue (Trivedi et al., 2022b), which require reasoning about entity knowledge, and GSM8k (Cobbe et al., 2021) and DROP (Dua et al., 2019) that evaluate arithmetic reasoning. In DROP we select examples that have numerical answers. We randomly sample 500 examples from the development set of each dataset (with the exception of StrategyQA, whose" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 302, + 71, + 527, + 167 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 527, + 167 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 527, + 167 + ], + "type": "text", + "content": "test set has 229 examples), and use it for performance evaluation of UL2, Flan-PaLM-540B and Flan-PaLM-62B. For GPT-3, we use a subset of 250 examples of that set, due to cost. We use standard evaluation measures for every dataset (F1 in the case of MuSiQue). We provide data examples in Appendix A." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 179, + 527, + 316 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 179, + 527, + 316 + ], + "spans": [ + { + "bbox": [ + 302, + 179, + 527, + 316 + ], + "type": "text", + "content": "Models. We evaluate the methods across four LMs: Flan-UL2-20B (Tay et al., 2023), GPT-3 (text-davinci-003) (Brown et al., 2020), Flan-PaLM-540B and Flan-PaLM-62B (Chung et al., 2022). We omit GPT-3 experiments on RARR and Interleaving due to cost. Importantly, our focus is not in comparing performance of these models, but to use them as samples of different model instances and training schemes against which to compare different TA strategies." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 327, + 527, + 530 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 327, + 527, + 530 + ], + "spans": [ + { + "bbox": [ + 302, + 327, + 527, + 530 + ], + "type": "text", + "content": "Tools. We strictly use the same tools across all strategies, to ensure a fair comparison: Google Search (Press et al., 2023; Schick et al., 2023; Lewis et al., 2021) for knowledge tasks, and a calculator (Schick et al., 2023; Qin et al., 2023) for the calculation tasks. RARR, SelfAsk and Interleaving are designed for retrieval settings only, while Inline and Check & Fix can be used in all settings. For the retrieval settings using Google Search and Flan-PaLM-540B, we test retrieval with both the top 1 and top 5 tool-retrieved snippets: The two formats are designed to cover both cases where a shorter tool output may prevent the model's answer from degenerating, and a longer tool output may help the model with more relevant information." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 544, + 527, + 774 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 544, + 527, + 774 + ], + "spans": [ + { + "bbox": [ + 302, + 544, + 527, + 774 + ], + "type": "text", + "content": "Few-shot demonstrations. In order to overcome bias from using demonstrations from prior work that were likely seen during training (Jacovi et al., 2023), we re-annotate prompts for all TA strategies, datasets and tools. We randomly sample 8 examples from each dataset's training set, and annotate each example with demonstrations for each TA strategy. Some of the strategies call the model multiple times with different prompts (e.g., Check & Fix, RARR), which requires separate annotations. This effort results in a total of 184 annotated demonstrations, which we release as a resource for future work on TA generation. 
From each set of 8 demonstrations, we then construct three separate prompts—3-shot, 5-shot and 7-shot—randomly sampled from the original 8 demonstrations, to get a better estimation of few-shot performance." + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 314, + 792 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 314, + 792 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 314, + 792 + ], + "type": "text", + "content": "13860" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 69, + 71, + 521, + 167 + ], + "blocks": [ + { + "bbox": [ + 69, + 71, + 521, + 167 + ], + "lines": [ + { + "bbox": [ + 69, + 71, + 521, + 167 + ], + "spans": [ + { + "bbox": [ + 69, + 71, + 521, + 167 + ], + "type": "image", + "image_path": "ad09b9fc8f1984f62245138a07a699d525eb34e52bdace5161196d53592a2e6a.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 69, + 170, + 521, + 260 + ], + "blocks": [ + { + "bbox": [ + 69, + 170, + 521, + 260 + ], + "lines": [ + { + "bbox": [ + 69, + 170, + 521, + 260 + ], + "spans": [ + { + "bbox": [ + 69, + 170, + 521, + 260 + ], + "type": "image", + "image_path": "a06af40183a6ac224b425d7bba84e8385dc98a63a1760df3a658f1b03fc2b068.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 66, + 271, + 525, + 309 + ], + "lines": [ + { + "bbox": [ + 66, + 271, + 525, + 309 + ], + "spans": [ + { + "bbox": [ + 66, + 271, + 525, + 309 + ], + "type": "text", + "content": "Figure 3: A comparison of evaluation scores across two areas (" + }, + { + "bbox": [ + 66, + 271, + 525, + 309 + ], + "type": "inline_equation", + "content": "\\S 5" + }, + { + "bbox": [ + 66, + 271, + 525, + 309 + ], + "type": "text", + "content": "): (a) No-tool baselines vs. 
TA strategies; (b) Tool usage via refinement of generated text vs. tool usage during generation, where the generated text contains tool arguments and is conditioned on tool outputs. The dark line marks the confidence interval among samples." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 319, + 196, + 333 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 319, + 196, + 333 + ], + "spans": [ + { + "bbox": [ + 67, + 319, + 196, + 333 + ], + "type": "text", + "content": "5 Comparative Results" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 340, + 247, + 353 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 340, + 247, + 353 + ], + "spans": [ + { + "bbox": [ + 67, + 340, + 247, + 353 + ], + "type": "text", + "content": "Organization of the results. Due to the" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 354, + 290, + 420 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 354, + 290, + 420 + ], + "spans": [ + { + "bbox": [ + 67, + 354, + 290, + 420 + ], + "type": "text", + "content": "Tool vs. no tool. Previous work that proposes TA strategies found that using such strategies consistently improves performance in comparison to no-tool baselines (Press et al., 2023; Jiang et al., 2023; Trivedi et al., 2022a, inter alia)." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 421, + 291, + 609 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 421, + 291, + 609 + ], + "spans": [ + { + "bbox": [ + 67, + 421, + 291, + 609 + ], + "type": "text", + "content": "Figure 3 shows that the TA strategies do not improve performance over the no-tool baselines in our selection of datasets. The figure shows results against the average of the different few-shot scores, though we observe similar trends when using the maximum of scores as well. Full results are in Appendix B. 
Similarly to us, Gao et al. (2023a, §6.2) found that StrategyQA performance slightly decreased with tools in RARR compared to no-tool baselines for PaLM-540B (Chowdhery et al., 2022), and Jiang et al. (2023, §6.2) found that performance decreased on StrategyQA in two settings comparable to our implementations of Interleaving and Check & Fix with GPT-3." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 611, + 290, + 705 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 611, + 290, + 705 + ], + "spans": [ + { + "bbox": [ + 67, + 611, + 290, + 705 + ], + "type": "text", + "content": "We conclude that for the settings in this work, the no-tool baselines are stronger than initially expected based on the literature. More research is required to investigate whether this relationship holds in other contexts, though we note that the datasets and models used in our experiments are common in TA research (Mialon et al., 2023)." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 706, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 706, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 706, + 291, + 772 + ], + "type": "text", + "content": "Additionally, our experiments provide empirical justification to Recommendations (2) and (3) in §3. First, we find that the CoT and Inline baselines outperform each other at a roughly equal rate, and neither emerges as a clear winner. This shows" + } + ] + } + ], + "index": 8 + }, + { + "type": "table", + "bbox": [ + 315, + 317, + 514, + 535 + ], + "blocks": [ + { + "bbox": [ + 315, + 317, + 514, + 535 + ], + "lines": [ + { + "bbox": [ + 315, + 317, + 514, + 535 + ], + "spans": [ + { + "bbox": [ + 315, + 317, + 514, + 535 + ], + "type": "table", + "html": "
<table><tr><th>Model</th><th>Dataset</th><th>Best strategy</th></tr>
<tr><td>GPT-3</td><td>StrategyQA</td><td>Baseline-Inline</td></tr>
<tr><td>GPT-3</td><td>DROP</td><td>Baseline-Inline</td></tr>
<tr><td>GPT-3</td><td>GSM8K</td><td>Check & Fix</td></tr>
<tr><td>GPT-3</td><td>MuSiQue</td><td>Inline</td></tr>
<tr><td>Flan-PaLM-540B</td><td>StrategyQA</td><td>Baseline-CoT</td></tr>
<tr><td>Flan-PaLM-540B</td><td>DROP</td><td>Baseline-Inline</td></tr>
<tr><td>Flan-PaLM-540B</td><td>GSM8K</td><td>Baseline-Inline</td></tr>
<tr><td>Flan-PaLM-540B</td><td>MuSiQue</td><td>RARR-Top5</td></tr>
<tr><td>Flan-UL2-20B</td><td>StrategyQA</td><td>Baseline-Inline</td></tr>
<tr><td>Flan-UL2-20B</td><td>DROP</td><td>Baseline-Inline</td></tr>
<tr><td>Flan-UL2-20B</td><td>GSM8K</td><td>Inline</td></tr>
<tr><td>Flan-UL2-20B</td><td>MuSiQue</td><td>Baseline-CoT</td></tr>
<tr><td>Flan-PaLM-62B</td><td>StrategyQA</td><td>Baseline-CoT</td></tr>
<tr><td>Flan-PaLM-62B</td><td>DROP</td><td>Baseline-CoT</td></tr>
<tr><td>Flan-PaLM-62B</td><td>GSM8K</td><td>Inline</td></tr>
<tr><td>Flan-PaLM-62B</td><td>MuSiQue</td><td>Check & Fix</td></tr></table>
", + "image_path": "b1069e3adec691b1a3429c547844e3c6ae619081af760ba7dfcedd0fb3161e12.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_body" + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 543, + 525, + 628 + ], + "lines": [ + { + "bbox": [ + 302, + 543, + 525, + 628 + ], + "spans": [ + { + "bbox": [ + 302, + 543, + 525, + 628 + ], + "type": "text", + "content": "Table 2: For each combination of dataset and model, we derive the best-performing strategy on the average score across the few-shot prompts. Notably, the best-performing strategy varies across different models, datasets or prompts, which means that it is necessary to evaluate over all axes to get a better estimation of general performance." + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 302, + 651, + 526, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 651, + 526, + 773 + ], + "spans": [ + { + "bbox": [ + 302, + 651, + 526, + 773 + ], + "type": "text", + "content": "that different baselines obtain different results, and so, relying on only a single baseline in evaluation does not necessarily provide a good estimation for no-tool performance (recommendation (2)). 
Also, the best-performing strategies vary significantly across models, which highlights the importance of using multiple models for evaluation (recommendation (3))—for illustration, we report the highest-performing strategies in each setting in Table 2, to" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 311, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 311, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 311, + 791 + ], + "type": "text", + "content": "13861" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 127, + 68, + 466, + 228 + ], + "blocks": [ + { + "bbox": [ + 127, + 68, + 466, + 228 + ], + "lines": [ + { + "bbox": [ + 127, + 68, + 466, + 228 + ], + "spans": [ + { + "bbox": [ + 127, + 68, + 466, + 228 + ], + "type": "table", + "html": "
TA strategyPrompt tokens (canonical)Prompt tokens (empirical)
GPT-3RetrievalGPT-3Calculator
Baselinen3533531418801
SelfAskt(n+kt+1/2)22811399--
SelfAskQAt(2n+k)35892736--
Inlinet(n+kt+1/2)1793177534531083
InlineQAt(2n+k)33753672--
Check & fixt(2n+k)3839354775483647
RARR3n(t+1)4729--
Interleavingt(n+kt+1/2)3221--
", + "image_path": "4f8aebf138ab26d7d2873e49d6d58b19e29cfc71354db056804c3f4f786266c9.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 127, + 282, + 466, + 441 + ], + "blocks": [ + { + "bbox": [ + 67, + 235, + 525, + 273 + ], + "lines": [ + { + "bbox": [ + 67, + 235, + 525, + 273 + ], + "spans": [ + { + "bbox": [ + 67, + 235, + 525, + 273 + ], + "type": "text", + "content": "Table 3: Average number of prompt tokens per strategy (5-shot), with " + }, + { + "bbox": [ + 67, + 235, + 525, + 273 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 67, + 235, + 525, + 273 + ], + "type": "text", + "content": " as the CoT prompt length, " + }, + { + "bbox": [ + 67, + 235, + 525, + 273 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 67, + 235, + 525, + 273 + ], + "type": "text", + "content": " as the number of tool calls, " + }, + { + "bbox": [ + 67, + 235, + 525, + 273 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 67, + 235, + 525, + 273 + ], + "type": "text", + "content": " as the tool's output length. Flan-PaLM-540B has a shorter context window than GPT-3, which limits prompt length. The canonical formula for RARR favorably assumes a single verification question." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 127, + 282, + 466, + 441 + ], + "lines": [ + { + "bbox": [ + 127, + 282, + 466, + 441 + ], + "spans": [ + { + "bbox": [ + 127, + 282, + 466, + 441 + ], + "type": "table", + "html": "
TA strategyAnswer tokens (canonical)Answer tokens (empirical)
GPT-3RetrievalGPT-3Calculator
Baselinem44425888
SelfAskm2072--
SelfAskQA2m5964--
Inlinem10324862102
InlineQA2m114256--
Check & fix2m8917775177
RARR3m181--
Interleavingm72--
", + "image_path": "874550e24f4d8b69d2d2da186f71d66a3453552cdcbcbbd9444b3b3bbb363d22.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 449, + 525, + 475 + ], + "lines": [ + { + "bbox": [ + 67, + 449, + 525, + 475 + ], + "spans": [ + { + "bbox": [ + 67, + 449, + 525, + 475 + ], + "type": "text", + "content": "Table 4: Average number of answer tokens across the 5-shot experiments, for each strategy. The RARR formula assumes a single verification question per step." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 67, + 485, + 290, + 526 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 485, + 290, + 526 + ], + "spans": [ + { + "bbox": [ + 67, + 485, + 290, + 526 + ], + "type": "text", + "content": "show that the overall conclusion can be distorted by choosing a particular model or strategy. Extended details are in Appendix B.1." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 534, + 291, + 683 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 534, + 291, + 683 + ], + "spans": [ + { + "bbox": [ + 67, + 534, + 291, + 683 + ], + "type": "text", + "content": "Tool use during generation vs. post-generation refinement. In Figure 3 we compare the strategies that use tools during generation against the strategies that first generate an answer, and then use tools to improve the answer. For retrieval tasks, refinement clearly outperforms non-refinement strategies, but the same does not apply to the calculation tasks. We conjecture that planning calculations ahead of time during generation is more aligned with LM pretraining data, based on internet text, than planning retrieval queries in similar contexts." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 692, + 291, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 692, + 291, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 692, + 291, + 773 + ], + "type": "text", + "content": "Token efficiency. TA strategies are typically evaluated in terms of task performance and properties such as factuality and logic correctness. We argue that computational cost is another important factor to consider. Specifically, we propose to evaluate token efficiency, that is, the amount of prompt tokens" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 485, + 526, + 647 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 485, + 526, + 647 + ], + "spans": [ + { + "bbox": [ + 302, + 485, + 526, + 647 + ], + "type": "text", + "content": "and generated tokens, which have a direct effect on the cost of the TA strategy. Notably, the cost of a TA strategy depends on various variables, including model size, GPU type, caching optimizations, vocabulary size, beam search size, and so on. However, token counts can serve as a plausibly generic proxy for the purpose of comparing the cost of different TA strategies, as other factors are roughly equal across strategies, as long as the same models and tools are used. We consider prompt tokens and generated tokens separately, as they often have different consequences on cost." + }, + { + "bbox": [ + 302, + 485, + 526, + 647 + ], + "type": "inline_equation", + "content": "^2" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 649, + 525, + 718 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 649, + 525, + 718 + ], + "spans": [ + { + "bbox": [ + 302, + 649, + 525, + 718 + ], + "type": "text", + "content": "Tables 3, 4 show both canonical and empirical comparisons across TA strategies with regard to token efficiency. 
The canonical comparison is a function of the relevant variables in the \"canonical\" setting where the model was expected to answer" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 731, + 525, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 731, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 731, + 525, + 772 + ], + "type": "text", + "content": "2Depending on model architecture and quantity of times reusing the same prompt, prompt processing cost can be optimized, whereas the token generation cost varies with other factors such as vocabulary size." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "13862" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 291, + 248 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 291, + 248 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 291, + 248 + ], + "type": "text", + "content": "the question perfectly, and use the tool perfectly as intended. Across all TA strategy experiments, we found no general correlation between token efficiency and performance. Concretely: (1) All TA strategies are significantly more expensive than the no-tool baselines by orders of magnitude, while not incurring an improvement worthy of this extra cost. 
Empirically, using tools in each case can incur extra costs by a factor of " + }, + { + "bbox": [ + 67, + 71, + 291, + 248 + ], + "type": "inline_equation", + "content": "5x" + }, + { + "bbox": [ + 67, + 71, + 291, + 248 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 67, + 71, + 291, + 248 + ], + "type": "inline_equation", + "content": "10x" + }, + { + "bbox": [ + 67, + 71, + 291, + 248 + ], + "type": "text", + "content": " for prompt processing, and " + }, + { + "bbox": [ + 67, + 71, + 291, + 248 + ], + "type": "inline_equation", + "content": "2x" + }, + { + "bbox": [ + 67, + 71, + 291, + 248 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 67, + 71, + 291, + 248 + ], + "type": "inline_equation", + "content": "5x" + }, + { + "bbox": [ + 67, + 71, + 291, + 248 + ], + "type": "text", + "content": " for generation. (2) The refinement strategies are more expensive than the no-refinement strategies. So while they improve performance for retrieval tasks, it comes at a cost." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 257, + 183, + 271 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 257, + 183, + 271 + ], + "spans": [ + { + "bbox": [ + 67, + 257, + 183, + 271 + ], + "type": "text", + "content": "6 Analytical Results" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 278, + 291, + 360 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 278, + 291, + 360 + ], + "spans": [ + { + "bbox": [ + 67, + 278, + 291, + 360 + ], + "type": "text", + "content": "We discuss further analyses of our results, finding that (a) our observations generally hold across different levels of example difficulty, and (b) most prediction errors of tool-augmented LMs stem from incorrect inputs to the tool and bad outputs from it, and not from a lack of tool usage." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 369, + 184, + 382 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 369, + 184, + 382 + ], + "spans": [ + { + "bbox": [ + 67, + 369, + 184, + 382 + ], + "type": "text", + "content": "6.1 Example Difficulty" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 386, + 291, + 521 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 386, + 291, + 521 + ], + "spans": [ + { + "bbox": [ + 67, + 386, + 291, + 521 + ], + "type": "text", + "content": "It has been shown that LMs have difficulty solving problems involving long-tail entities (Kandpal et al., 2022; Mallen et al., 2022) and complex mathematical reasoning challenges (Mishra et al., 2022; Imani et al., 2023). Accordingly, we ablate the results from §5 along the following axes of example difficulty, in order to understand how tools can affect performance on difficult examples. We provide an overview of the trends here, and extended results are available in Appendix B." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 525, + 291, + 702 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 525, + 291, + 702 + ], + "spans": [ + { + "bbox": [ + 67, + 525, + 291, + 702 + ], + "type": "text", + "content": "Measures of difficulty. We investigate the effectiveness of tool-usage across varying levels of example difficulty, which we approximate in two axes: (A) Long-tail entities (retrieval): Following Mallen et al. (2022), we extract the entities from the question and associated gold answers in StrategyQA and MuSiQue, and use the corresponding entity Wikipedia page views as a measure of popularity. (B) Large numbers (calculation): We segment the examples in the calculation tasks based on the range of the median and largest number in the example (question and gold solution in GSM8k, or question and context paragraph in DROP)." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 705, + 291, + 774 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 705, + 291, + 774 + ], + "spans": [ + { + "bbox": [ + 67, + 705, + 291, + 774 + ], + "type": "text", + "content": "Results. Performance across increasing levels of entity popularity and computation complexity, with different LMs and TA strategies, is shown in Figure 4a and Figure 4b, respectively. We find that performance uniformly decreases for harder ex" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 301, + 71, + 526, + 275 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 71, + 526, + 275 + ], + "spans": [ + { + "bbox": [ + 301, + 71, + 526, + 275 + ], + "type": "text", + "content": "amples in the retrieval setting for all models, but in the calculation setting, this only manifests for Flan-UL2-20B (implying that the larger models are more robust to the numerical ranges in GSM8K and DROP). Overall, in all cases tool use does not improve upon the baselines even when controlling for the harder cases where tools are expected to be more useful. This conclusion is aligned with our error analysis in §6.3, which shows that the common errors stem more from incorrect tool arguments than from correct tool arguments with incorrect inferences based on them. Flan-UL2 with a calculator is an exception, where tool use indeed helps, though more so on the easier examples, likely due to a higher rate of correct arguments to the calculator." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 283, + 427, + 296 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 283, + 427, + 296 + ], + "spans": [ + { + "bbox": [ + 302, + 283, + 427, + 296 + ], + "type": "text", + "content": "6.2 Tool Usage Statistics" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 301, + 301, + 526, + 544 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 301, + 526, + 544 + ], + "spans": [ + { + "bbox": [ + 301, + 301, + 526, + 544 + ], + "type": "text", + "content": "A possible explanation for the similar performance of no-tool baselines could be a lack of tool usage. To check this, we aggregate usage over the different TA strategies, and find that the models indeed use tools in the majority of the cases; " + }, + { + "bbox": [ + 301, + 301, + 526, + 544 + ], + "type": "inline_equation", + "content": "70\% - 80\%" + }, + { + "bbox": [ + 301, + 301, + 526, + 544 + ], + "type": "text", + "content": " in SelfAsk, and " + }, + { + "bbox": [ + 301, + 301, + 526, + 544 + ], + "type": "inline_equation", + "content": ">90\%" + }, + { + "bbox": [ + 301, + 301, + 526, + 544 + ], + "type": "text", + "content": " in others (see Appendix B). We also investigate usage across other axes, such as models and number of demonstrations, and find similar trends. However, the datasets and tasks we investigate are designed to benefit from the tools in all cases, which shows that few-shot demonstrations are not always sufficient in inducing tool use in models. In particular, the SelfAsk strategies receive the lowest tool use, being the strategies that use natural language to query whether to use the tool (the answer begins with \"Are follow up questions needed here:\", to which the model answers \"No\" in the cases where the tool is not used)." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 553, + 400, + 566 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 553, + 400, + 566 + ], + "spans": [ + { + "bbox": [ + 302, + 553, + 400, + 566 + ], + "type": "text", + "content": "6.3 Error Analysis" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 301, + 570, + 526, + 774 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 570, + 526, + 774 + ], + "spans": [ + { + "bbox": [ + 301, + 570, + 526, + 774 + ], + "type": "text", + "content": "We sampled 50 instances for which an error was made by the TA models, randomly across the 5-shot experiments, and categorized them into three categories: (A) Incorrect tool input; (B) incorrect tool output; (C) incorrect model inferences based on correct tool usage. Error B applies only to the retrieval settings, where the retrieval tool (Google Search in our case) retrieved a wrong or irrelevant snippet. The errors were distributed approximately as " + }, + { + "bbox": [ + 301, + 570, + 526, + 774 + ], + "type": "inline_equation", + "content": "60\%" + }, + { + "bbox": [ + 301, + 570, + 526, + 774 + ], + "type": "text", + "content": " (A), " + }, + { + "bbox": [ + 301, + 570, + 526, + 774 + ], + "type": "inline_equation", + "content": "10\%" + }, + { + "bbox": [ + 301, + 570, + 526, + 774 + ], + "type": "text", + "content": " (B), and " + }, + { + "bbox": [ + 301, + 570, + 526, + 774 + ], + "type": "inline_equation", + "content": "30\%" + }, + { + "bbox": [ + 301, + 570, + 526, + 774 + ], + "type": "text", + "content": " (C) in the retrieval setting, and " + }, + { + "bbox": [ + 301, + 570, + 526, + 774 + ], + "type": "inline_equation", + "content": "80\%" + }, + { + "bbox": [ + 301, + 570, + 526, + 774 + ], + "type": "text", + "content": " (A) and " + }, + { + "bbox": [ + 301, + 570, + 526, + 774 + ], + "type": "inline_equation", + "content": "20\%" + }, + { + "bbox": [ + 301, + 570, + 526, + 774 + ], +
"type": "text", + "content": " (C) in the calculation setting. Li et al. (2023) reported an error analysis for tool-assistance in dialogue customer assistance settings, with similar conclusions regarding error A, although errors B and C do not apply in their" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 792 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 792 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 792 + ], + "type": "text", + "content": "13863" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 71, + 71, + 520, + 281 + ], + "blocks": [ + { + "bbox": [ + 71, + 71, + 520, + 281 + ], + "lines": [ + { + "bbox": [ + 71, + 71, + 520, + 281 + ], + "spans": [ + { + "bbox": [ + 71, + 71, + 520, + 281 + ], + "type": "image", + "image_path": "7039a5cdcf52dcaa696b43188a45a62ab3501ce48d2f6c8a6d93a2e5da6c93b3.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 66, + 292, + 525, + 341 + ], + "lines": [ + { + "bbox": [ + 66, + 292, + 525, + 341 + ], + "spans": [ + { + "bbox": [ + 66, + 292, + 525, + 341 + ], + "type": "text", + "content": "Figure 4: We analyze performance of the strategies across two areas (no-tool baselines vs. TA strategies), conditioned on example difficulty as defined by the existence of rare or common entities in the retrieval settings (via percentile of page views) and small or large numbers in the calculation settings (via percentile of numeric range). In (a), lower page views imply higher difficulty, and in (b), larger numbers imply higher difficulty." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 352, + 276, + 364 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 352, + 276, + 364 + ], + "spans": [ + { + "bbox": [ + 67, + 352, + 276, + 364 + ], + "type": "text", + "content": "context, and other error types manifest instead." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 366, + 290, + 448 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 366, + 290, + 448 + ], + "spans": [ + { + "bbox": [ + 67, + 366, + 290, + 448 + ], + "type": "text", + "content": "Our results suggest that the majority of errors are not due to incorrect tool responses (i.e., issues with Google Search as a choice of retriever), and are overall influenced more by incorrectly invoking tools to begin with than by invoking them correctly but composing the solution incorrectly." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 462, + 233, + 476 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 462, + 233, + 476 + ], + "spans": [ + { + "bbox": [ + 67, + 462, + 233, + 476 + ], + "type": "text", + "content": "7 Conclusions and Takeaways" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 487, + 291, + 677 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 487, + 291, + 677 + ], + "spans": [ + { + "bbox": [ + 67, + 487, + 291, + 677 + ], + "type": "text", + "content": "We conduct a comprehensive assessment of few-shot tool augmentation strategies for LMs, covering hundreds of experiments with multiple LMs, datasets, and tools. Our experiments show that current tool-usage integration approaches are presently a false promise; prompting strategies that do not use tools typically obtain similar task performance, without the high cost of tool execution. 
Controlling for example difficulty, where tools are expected to provide the most benefit, does not explain the relative strength of the no-tool baselines. Instead, the primary errors we observe are related to incorrect usage of the tools to begin with (i.e., generating incorrect arguments to the tool)." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 678, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 678, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 678, + 291, + 772 + ], + "type": "text", + "content": "Our findings call for more robust evaluation of future TA strategies, primarily in more practical settings where models are not expected to leverage inherent abilities to solve tasks. To this end, our work provides concrete evaluation guidelines, such as employing stronger baselines and factoring in computation costs." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 303, + 350, + 365, + 364 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 350, + 365, + 364 + ], + "spans": [ + { + "bbox": [ + 303, + 350, + 365, + 364 + ], + "type": "text", + "content": "Limitations" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 376, + 525, + 512 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 376, + 525, + 512 + ], + "spans": [ + { + "bbox": [ + 302, + 376, + 525, + 512 + ], + "type": "text", + "content": "While our study aims to provide a comprehensive evaluation of TA strategies, there are some limitations. First, recent work (Dodge et al., 2021; Magar and Schwartz, 2022; OpenAI, 2023) suggests that examples from public datasets, like those used in our evaluation, may have leaked to the training data of recent LMs. Such contamination can introduce biases to the evaluation, such as lack of need for external tools. We are not aware of alternatives without this issue at the time of this writing." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 514, + 525, + 622 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 514, + 525, + 622 + ], + "spans": [ + { + "bbox": [ + 302, + 514, + 525, + 622 + ], + "type": "text", + "content": "Second, due to the high cost of executing large LMs in an exhaustive evaluation, we ran only a single experiment for each combination of TA strategy, model, dataset, and number of demonstrations. However, given the sensitivity of models to the demonstrations (Perez et al., 2021), future work should extend this evaluation to use multiple sets of demonstrations for each such combination." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 624, + 526, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 624, + 526, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 624, + 526, + 772 + ], + "type": "text", + "content": "Last, while our findings show that non-tool models often perform on par with existing TA strategies, our setting favors tool usage. For example, our tasks only require a single type of tool such that the model does not need to choose between multiple tools. Future work that investigates when and how tools can improve performance should consider more realistic evaluation settings, for example, by considering tasks where the model may need to use multiple types of tools together, or tasks where tools may sometimes give unhelpful answers." 
+ } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "13864" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 71, + 127, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 71, + 127, + 83 + ], + "spans": [ + { + "bbox": [ + 69, + 71, + 127, + 83 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 90, + 291, + 773 + ], + "type": "list", + "angle": 0, + "index": 6, + "blocks": [ + { + "bbox": [ + 69, + 90, + 291, + 191 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 90, + 291, + 191 + ], + "spans": [ + { + "bbox": [ + 69, + 90, + 291, + 191 + ], + "type": "text", + "content": "Daniel Andor, Luheng He, Kenton Lee, and Emily Pitler. 2019. Giving BERT a calculator: Finding operations and arguments with reading comprehension. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5947-5952, Hong Kong, China. Association for Computational Linguistics." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 200, + 291, + 334 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 200, + 291, + 334 + ], + "spans": [ + { + "bbox": [ + 69, + 200, + 291, + 334 + ], + "type": "text", + "content": "Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. 
Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. CoRR, abs/2005.14165." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 343, + 291, + 389 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 343, + 291, + 389 + ], + "spans": [ + { + "bbox": [ + 69, + 343, + 291, + 389 + ], + "type": "text", + "content": "Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. 2022. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 398, + 291, + 653 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 398, + 291, + 653 + ], + "spans": [ + { + "bbox": [ + 69, + 398, + 291, + 653 + ], + "type": "text", + "content": "Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. 
Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 661, + 291, + 773 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 661, + 291, + 773 + ], + "spans": [ + { + "bbox": [ + 69, + 661, + 291, + 773 + ], + "type": "text", + "content": "Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le," + } + ] + } + ], + "index": 5 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 526, + 773 + ], + "type": "list", + "angle": 0, + "index": 19, + "blocks": [ + { + "bbox": [ + 314, + 72, + 524, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 72, + 524, + 95 + ], + "spans": [ + { + "bbox": [ + 314, + 72, + 524, + 95 + ], + "type": "text", + "content": "and Jason Wei. 2022. Scaling instruction-finetuned language models." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 304, + 102, + 526, + 169 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 102, + 526, + 169 + ], + "spans": [ + { + "bbox": [ + 304, + 102, + 526, + 169 + ], + "type": "text", + "content": "Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. CoRR, abs/2110.14168." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 304, + 177, + 526, + 277 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 177, + 526, + 277 + ], + "spans": [ + { + "bbox": [ + 304, + 177, + 526, + 277 + ], + "type": "text", + "content": "Jesse Dodge, Maarten Sap, Ana Marasovic, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, and Matt Gardner. 2021. Documenting large webtext corpora: A case study on the colossal clean crawled corpus. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1286-1305, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 304, + 285, + 526, + 352 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 285, + 526, + 352 + ], + "spans": [ + { + "bbox": [ + 304, + 285, + 526, + 352 + ], + "type": "text", + "content": "Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. Drop: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In North American Chapter of the Association for Computational Linguistics." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 304, + 359, + 526, + 425 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 359, + 526, + 425 + ], + "spans": [ + { + "bbox": [ + 304, + 359, + 526, + 425 + ], + "type": "text", + "content": "Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Y. Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, and Kelvin Guu. 2023a. Rarr: Researching and revising what language models say, using language models." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 304, + 433, + 526, + 478 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 433, + 526, + 478 + ], + "spans": [ + { + "bbox": [ + 304, + 433, + 526, + 478 + ], + "type": "text", + "content": "Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023b. Pal: Program-aided language models." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 486, + 526, + 553 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 486, + 526, + 553 + ], + "spans": [ + { + "bbox": [ + 304, + 486, + 526, + 553 + ], + "type": "text", + "content": "Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies. Transactions of the Association for Computational Linguistics (TACL)." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 560, + 526, + 596 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 560, + 526, + 596 + ], + "spans": [ + { + "bbox": [ + 304, + 560, + 526, + 596 + ], + "type": "text", + "content": "Shibo Hao, Tianyang Liu, Zhen Wang, and Zhiting Hu. 2023. Toolkengpt: Augmenting frozen language models with massive tools via tool embeddings." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 602, + 526, + 637 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 602, + 526, + 637 + ], + "spans": [ + { + "bbox": [ + 304, + 602, + 526, + 637 + ], + "type": "text", + "content": "Hangfeng He, Hongming Zhang, and Dan Roth. 2022. Rethinking with retrieval: Faithful large language model inference. arXiv preprint arXiv:2301.00303." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 644, + 526, + 679 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 644, + 526, + 679 + ], + "spans": [ + { + "bbox": [ + 304, + 644, + 526, + 679 + ], + "type": "text", + "content": "Shima Imani, Liang Du, and Harsh Shrivastava. 2023. Mathprompter: Mathematical reasoning using large language models. arXiv preprint arXiv:2303.05398." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 686, + 526, + 731 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 686, + 526, + 731 + ], + "spans": [ + { + "bbox": [ + 304, + 686, + 526, + 731 + ], + "type": "text", + "content": "Alon Jacovi, Avi Caciularu, Omer Goldman, and Yoav Goldberg. 2023. Stop uploading test data in plain text: Practical strategies for mitigating data contamination by evaluation benchmarks." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 738, + 526, + 773 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 738, + 526, + 773 + ], + "spans": [ + { + "bbox": [ + 304, + 738, + 526, + 773 + ], + "type": "text", + "content": "Alon Jacovi, Guy Hadash, Einat Kermany, Boaz Carmeli, Ofer Lavi, George Kour, and Jonathan Berant. 2019. 
Neural network gradient-based learning" + } + ] + } + ], + "index": 18 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "text", + "content": "13865" + } + ] + } + ], + "index": 20 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 290, + 772 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 80, + 72, + 290, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 72, + 290, + 116 + ], + "spans": [ + { + "bbox": [ + 80, + 72, + 290, + 116 + ], + "type": "text", + "content": "of black-box function interfaces. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 126, + 289, + 171 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 126, + 289, + 171 + ], + "spans": [ + { + "bbox": [ + 69, + 126, + 289, + 171 + ], + "type": "text", + "content": "Zhengbao Jiang, Frank F. Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Active retrieval augmented generation." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 181, + 289, + 226 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 181, + 289, + 226 + ], + "spans": [ + { + "bbox": [ + 69, + 181, + 289, + 226 + ], + "type": "text", + "content": "Nikhil Kandpal, Haikang Deng, Adam Roberts, Eric Wallace, and Colin Raffel. 2022. Large language models struggle to learn long-tail knowledge. arXiv preprint arXiv:2211.08411." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 236, + 289, + 291 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 236, + 289, + 291 + ], + "spans": [ + { + "bbox": [ + 69, + 236, + 289, + 291 + ], + "type": "text", + "content": "Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, and Matei Zaharia. 2023. Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive nlp." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 301, + 289, + 346 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 301, + 289, + 346 + ], + "spans": [ + { + "bbox": [ + 69, + 301, + 289, + 346 + ], + "type": "text", + "content": "Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, and Ashish Sabharwal. 2023. Decomposed prompting: A modular approach for solving complex tasks." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 356, + 289, + 423 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 356, + 289, + 423 + ], + "spans": [ + { + "bbox": [ + 69, + 356, + 289, + 423 + ], + "type": "text", + "content": "Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2021. Retrieval-augmented generation for knowledge-intensive nlp tasks." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 433, + 289, + 466 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 433, + 289, + 466 + ], + "spans": [ + { + "bbox": [ + 69, + 433, + 289, + 466 + ], + "type": "text", + "content": "Minghao Li, Feifan Song, Bowen Yu, Haiyang Yu, Zhoujun Li, Fei Huang, and Yongbin Li. 2023. Api-bank: A benchmark for tool-augmented llms." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 476, + 289, + 510 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 476, + 289, + 510 + ], + "spans": [ + { + "bbox": [ + 69, + 476, + 289, + 510 + ], + "type": "text", + "content": "Zhan Ling, Yunhao Fang, Xuanlin Li, Zhiao Huang, Mingu Lee, Roland Memisevic, and Hao Su. 2023. Deductive verification of chain-of-thought reasoning." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 520, + 289, + 565 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 520, + 289, + 565 + ], + "spans": [ + { + "bbox": [ + 69, + 520, + 289, + 565 + ], + "type": "text", + "content": "Ruibo Liu, Jason Wei, Shixiang Shane Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, and Andrew M. Dai. 2022. Mind's eye: Grounded language model reasoning through simulation." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 575, + 289, + 618 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 575, + 289, + 618 + ], + "spans": [ + { + "bbox": [ + 69, + 575, + 289, + 618 + ], + "type": "text", + "content": "Qing Lyu, Shreya Havaldar, Adam Stein, Li Zhang, Delip Rao, Eric Wong, Marianna Apidianaki, and Chris Callison-Burch. 2023. Faithful chain-of-thought reasoning." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 69, + 629, + 289, + 695 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 629, + 289, + 695 + ], + "spans": [ + { + "bbox": [ + 69, + 629, + 289, + 695 + ], + "type": "text", + "content": "Inbal Magar and Roy Schwartz. 2022. Data contamination: From memorization to exploitation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 157-165, Dublin, Ireland. Association for Computational Linguistics." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 69, + 706, + 289, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 706, + 289, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 706, + 289, + 772 + ], + "type": "text", + "content": "Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Hannaneh Hajishirzi, and Daniel Khashabi. 2022. When not to trust language models: Investigating effectiveness and limitations of parametric and non-parametric memories. arXiv preprint arXiv:2212.10511." + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 525, + 772 + ], + "type": "list", + "angle": 0, + "index": 25, + "blocks": [ + { + "bbox": [ + 304, + 72, + 525, + 138 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 72, + 525, + 138 + ], + "spans": [ + { + "bbox": [ + 304, + 72, + 525, + 138 + ], + "type": "text", + "content": "Grégoire Mialon, Roberto Dessi, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, and Thomas Scialom. 2023. Augmented language models: a survey." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 148, + 525, + 236 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 148, + 525, + 236 + ], + "spans": [ + { + "bbox": [ + 304, + 148, + 525, + 236 + ], + "type": "text", + "content": "Swaroop Mishra, Arindam Mitra, Neeraj Varshney, Bhavdeep Sachdeva, Peter Clark, Chitta Baral, and Ashwin Kalyan. 2022. NumGLUE: A suite of fundamental yet challenging mathematical reasoning tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3505-3523, Dublin, Ireland. Association for Computational Linguistics." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 245, + 525, + 322 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 245, + 525, + 322 + ], + "spans": [ + { + "bbox": [ + 304, + 245, + 525, + 322 + ], + "type": "text", + "content": "Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, and John Schulman. 2022. Webgpt: Browser-assisted question-answering with human feedback." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 332, + 525, + 386 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 332, + 525, + 386 + ], + "spans": [ + { + "bbox": [ + 304, + 332, + 525, + 386 + ], + "type": "text", + "content": "Arvind Neelakantan, Quoc V. Le, Martin Abadi, Andrew McCallum, and Dario Amodei. 2017. Learning a natural language interface with neural programmer. In International Conference on Learning Representations." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 396, + 525, + 484 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 396, + 525, + 484 + ], + "spans": [ + { + "bbox": [ + 304, + 396, + 525, + 484 + ], + "type": "text", + "content": "Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernandez Abrego, Ji Ma, Vincent Zhao, Yi Luan, Keith Hall, Ming-Wei Chang, and Yinfei Yang. 2022. Large dual encoders are generalizable retrievers. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9844–9855, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics." 
+ } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 493, + 459, + 506 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 493, + 459, + 506 + ], + "spans": [ + { + "bbox": [ + 304, + 493, + 459, + 506 + ], + "type": "text", + "content": "OpenAI. 2023. Gpt-4 technical report." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 514, + 525, + 569 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 514, + 525, + 569 + ], + "spans": [ + { + "bbox": [ + 304, + 514, + 525, + 569 + ], + "type": "text", + "content": "Bhargavi Paranjape, Scott Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, and Marco Tulio Ribeiro. 2023. Art: Automatic multi-step reasoning and tool-use for large language models." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 579, + 525, + 602 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 579, + 525, + 602 + ], + "spans": [ + { + "bbox": [ + 304, + 579, + 525, + 602 + ], + "type": "text", + "content": "Aaron Parisi, Yao Zhao, and Noah Fiedel. 2022. Talm: Tool augmented language models." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 304, + 611, + 525, + 644 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 611, + 525, + 644 + ], + "spans": [ + { + "bbox": [ + 304, + 611, + 525, + 644 + ], + "type": "text", + "content": "Shishir G. Patil, Tianjun Zhang, Xin Wang, and Joseph E. Gonzalez. 2023. Gorilla: Large language model connected with massive apis." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 304, + 653, + 525, + 676 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 653, + 525, + 676 + ], + "spans": [ + { + "bbox": [ + 304, + 653, + 525, + 676 + ], + "type": "text", + "content": "Ethan Perez, Douwe Kiela, and Kyunghyun Cho. 2021. True few-shot learning with language models." 
+ } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 304, + 685, + 525, + 728 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 685, + 525, + 728 + ], + "spans": [ + { + "bbox": [ + 304, + 685, + 525, + 728 + ], + "type": "text", + "content": "Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, and Mike Lewis. 2023. Measuring and narrowing the compositionality gap in language models." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 304, + 738, + 525, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 738, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 304, + 738, + 525, + 772 + ], + "type": "text", + "content": "Jing Qian, Hong Wang, Zekun Li, Shiyang Li, and Xifeng Yan. 2022. Limitations of language models in arithmetic and symbolic induction." + } + ] + } + ], + "index": 24 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "text", + "content": "13866" + } + ] + } + ], + "index": 26 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 291, + 543 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 70, + 72, + 291, + 203 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 70, + 72, + 291, + 203 + ], + "spans": [ + { + "bbox": [ + 70, + 72, + 291, + 203 + ], + "type": "text", + "content": "Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, Yi Ren Fung, Yusheng Su, Huadong Wang, Cheng Qian, Runchu Tian, Kunlun Zhu, Shihao Liang, Xingyu Shen, Bokai Xu, Zhen Zhang, Yining Ye, Bowen Li, Ziwei Tang, Jing Yi, Yuzhang Zhu, Zhenning Dai, Lan Yan, Xin Cong, Yaxi Lu, Weilin Zhao, 
Yuxiang Huang, Junxi Yan, Xu Han, Xian Sun, Dahai Li, Jason Phang, Cheng Yang, Tongshuang Wu, Heng Ji, Zhiyuan Liu, and Maosong Sun. 2023. Tool learning with foundation models." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 212, + 290, + 257 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 212, + 290, + 257 + ], + "spans": [ + { + "bbox": [ + 69, + 212, + 290, + 257 + ], + "type": "text", + "content": "Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to use tools." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 264, + 290, + 332 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 264, + 290, + 332 + ], + "spans": [ + { + "bbox": [ + 69, + 264, + 290, + 332 + ], + "type": "text", + "content": "Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Jason Wei, Xuezhi Wang, Hyung Won Chung, Siamak Shakeri, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Denny Zhou, Neil Houlsby, and Donald Metzler. 2023. UL2: Unifying language learning paradigms." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 339, + 289, + 373 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 339, + 289, + 373 + ], + "spans": [ + { + "bbox": [ + 69, + 339, + 289, + 373 + ], + "type": "text", + "content": "Andrew Trask, Felix Hill, Scott Reed, Jack Rae, Chris Dyer, and Phil Blunsom. 2018. Neural arithmetic logic units." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 381, + 290, + 426 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 381, + 290, + 426 + ], + "spans": [ + { + "bbox": [ + 69, + 381, + 290, + 426 + ], + "type": "text", + "content": "Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022a. 
Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 434, + 290, + 490 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 434, + 290, + 490 + ], + "spans": [ + { + "bbox": [ + 69, + 434, + 290, + 490 + ], + "type": "text", + "content": "Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022b. MuSiQue: Multi-hop questions via single-hop question composition. Transactions of the Association for Computational Linguistics." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 498, + 290, + 543 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 498, + 290, + 543 + ], + "spans": [ + { + "bbox": [ + 69, + 498, + 290, + 543 + ], + "type": "text", + "content": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elicits reasoning in large language models." 
+ } + ] + } + ], + "index": 6 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "13867" + } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "bbox": [ + 68, + 71, + 212, + 84 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 71, + 212, + 84 + ], + "spans": [ + { + "bbox": [ + 68, + 71, + 212, + 84 + ], + "type": "text", + "content": "A Implementation Details" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 68, + 94, + 211, + 106 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 94, + 211, + 106 + ], + "spans": [ + { + "bbox": [ + 68, + 94, + 211, + 106 + ], + "type": "text", + "content": "A.1 Tool-Assisted Strategies." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 112, + 291, + 260 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 112, + 291, + 260 + ], + "spans": [ + { + "bbox": [ + 67, + 112, + 291, + 260 + ], + "type": "text", + "content": "General Details. In all cases, if the tool invocation fails (e.g., with an ill-formatted calculation, or a null response from Google Search), the model is used to generate the tool's output instead. For all retrieval settings using Google Search, we test both Top-1 and Top-5 retrieval: The two formats are designed to cover both cases where a shorter tool output may prevent the model's answer from degenerating, and a longer tool output may help the model with more relevant information. Illustrative examples of the data are available in Table 5." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 66, + 270, + 291, + 417 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 270, + 291, + 417 + ], + "spans": [ + { + "bbox": [ + 66, + 270, + 291, + 417 + ], + "type": "text", + "content": "SelfAsk and SelfAskQA. SelfAsk involves decomposing each question into a series of simpler sub-questions, and calling the tool directly for each sub-question. The tool's output is inserted into the prompt as an intermediate answer. When the model generates a step that begins with the string \"So the answer is:\", it is expected to generate an answer that builds on the previous intermediate answers which were tool outputs. In this work, we use Google Search as the tool as in the original work by Press et al. (2023)." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 420, + 291, + 487 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 420, + 291, + 487 + ], + "spans": [ + { + "bbox": [ + 67, + 420, + 291, + 487 + ], + "type": "text", + "content": "Our SelfAsk implementation reuses the original implementation by Press et al. (2023). Since Self-Ask is designed specifically for knowledge-based QA, we only evaluate this strategy for the knowledge tasks MuSiQue and StrategyQA." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 489, + 291, + 690 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 489, + 291, + 690 + ], + "spans": [ + { + "bbox": [ + 67, + 489, + 291, + 690 + ], + "type": "text", + "content": "The SelfAskQA variant involves calling the model for each pair of sub-question and retrieved snippet that (hopefully) contains its answer. This method of recursively calling the model with a different prompt as if it were another tool is a technique proposed by Khot et al. (2023). We collect all sub-questions from the SelfAsk prompts in order to construct QA prompts (using the tool to retrieve supporting snippets).
The model is called with the QA prompts in order to answer each sub-question based on its snippet. The SelfAskQA variant in essence summarizes each Google Search snippet, which can be as long as a paragraph, into a short answer to the given sub-question, effectively simplifying and shortening the overall answer." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 692, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 692, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 692, + 291, + 772 + ], + "type": "text", + "content": "Among the two SelfAsk implementations, neither decisively outperforms the other: SelfAskQA outperforms SelfAsk for GPT-3 and Flan-PaLM-62B on both MuSiQue and StrategyQA, but for Flan-PaLM-540B and Flan-UL2-20B the relationship flips." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 71, + 526, + 274 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 274 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 274 + ], + "type": "text", + "content": "Inline and InlineQA. The Inline strategy format largely mimics the Toolformer format by Schick et al. (2023), but can also be cast into the ART framework by Paranjape et al. (2023) or the Decomposed Prompting framework by Khot et al. (2023). In general, the strategy simply calls for generating the tool call in a predefined format—in our case, square brackets and the tool name. The tool is invoked with the arguments generated by the model inside the brackets, and the tool's output is inserted into the model. Our implementation is based on the inference code implemented by Schick et al. (2023), although notably, we focus on few-shot usage, and do not perform the tool-usage pretraining step that largely concerns the referenced work." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 275, + 526, + 423 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 275, + 526, + 423 + ], + "spans": [ + { + "bbox": [ + 302, + 275, + 526, + 423 + ], + "type": "text", + "content": "We implement two variants: Inline, which uses a tool called \"Search\" that appends the retrieved snippet or calculation output directly into the prompt, and InlineQA, which uses a tool called \"QA\" that calls the model with a separate prompt in order to summarize the retrieved snippet into a concise answer, identically to the aforementioned SelfAskQA variant. As with the SelfAsk and SelfAskQA variants, among Inline and InlineQA in the knowledge-based tasks, neither consistently outperforms the other." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 435, + 525, + 625 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 435, + 525, + 625 + ], + "spans": [ + { + "bbox": [ + 302, + 435, + 525, + 625 + ], + "type": "text", + "content": "Interleaving. The Interleaving Retrieval strategy (Trivedi et al., 2022a) proposes to use each reasoning step by the model in its CoT answer as a query to a retrieval model. The retrieved snippet is then added to the prompt in order to provide additional information to the model. The structure for each demonstration becomes: (1) All retrieved documents thus far; (2) The question; (3) The generated answer thus far (see Trivedi et al., 2022a for details). In this way, the tool is used heuristically without explicit demonstrations from the model, but the generation of the answer at each CoT step is still conditioned on tool usage based on the previous steps."
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 637, + 525, + 731 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 637, + 525, + 731 + ], + "spans": [ + { + "bbox": [ + 302, + 637, + 525, + 731 + ], + "type": "text", + "content": "Check & Fix. We propose this strategy as a more lightweight tool-based refinement variant than RARR, and it is comparable to the contemporaneously proposed approach of Jiang et al. (2023): After each CoT step, the step is checked for accuracy using a tool, and if found inaccurate, a new fixed step is generated to replace it." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 733, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 733, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 733, + 525, + 772 + ], + "type": "text", + "content": "In the retrieval setting, each step is verified and fixed by prompting the model to classify whether the step is contradicted by the retrieved paragraphs," + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "13868" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 73, + 68, + 523, + 343 + ], + "blocks": [ + { + "bbox": [ + 73, + 68, + 523, + 343 + ], + "lines": [ + { + "bbox": [ + 73, + 68, + 523, + 343 + ], + "spans": [ + { + "bbox": [ + 73, + 68, + 523, + 343 + ], + "type": "table", + "html": "
<tr><th>Model</th><th>Dataset</th><th>Strategy</th><th>Example</th><th>Model Answer</th></tr>
<tr><td>GPT-3</td><td>StrategyQA</td><td>Baseline-CoT</td><td>Question: Was a woman Prime Minister directly before or after Stanley Baldwin? Answer: No.</td><td>Stanley Baldwin was Prime Minister of the United Kingdom from 1923 to 1929. The woman Prime Minister directly before him was Margaret Thatcher, who served from 1979 to 1990. The woman Prime Minister directly after him was Theresa May, who served from 2016 to 2019. So the answer is no.</td></tr>
<tr><td>Flan-UL2</td><td>MuSiQue</td><td>SelfAsk</td><td>Question: What administrative territorial entity contains the place where KPRM is licensed to broadcast to? Answer: Hubbard County.</td><td>Are follow up questions needed here: Yes. What is the place where KPRM is licensed to broadcast to? Intermediate answer: KPRM is licensed to broadcast to the city of Houston. Follow up: What administrative territorial entity contains the city of Houston? Intermediate answer: The city of Houston is in Texas state. Follow up: What is the administrative territorial entity that contains Texas state? Intermediate answer: Texas state is in the United States. So the final answer is: The United States</td></tr>
<tr><td>Flan-PaLM-62B</td><td>DROP</td><td>Check & Fix</td><td>Context: [omitted paragraph about the 2000 census] Question: How many households and families reside in the county according to the 2000 census? Answer: 26484</td><td>26484 households and 12068 families reside in the county. There were 40543 people, 15416 households, and 11068 families residing in the county. Thus, 26484 + 12068 = 38552. So the final answer is: 38552.</td></tr>
<tr><td>Flan-PaLM-540B</td><td>GSM8K</td><td>Inline</td><td>Question: How many girls are in the school if 40% of a school population is made up of 240 boys? Answer: 360</td><td>If 40% of the school population is made up of 240 boys, then 1% is made up of [Calculator(240 / 40) -> 6.0] 6 boys. Therefore, the total school population is made up of [Calculator(6 * 100) -> 600] 600 students. If the school population is made up of 600 students, and 40% of them are boys, then the remaining 60% are girls, a total of [ Calculator(60 / 100) * 600 -> 360] 360 girls. So the final answer is: 360.</td></tr>
", + "image_path": "b946b0f1e03b0d2abe3348f065673ba189de3b2058b279813692f75960394dbc.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 351, + 525, + 375 + ], + "lines": [ + { + "bbox": [ + 67, + 351, + 525, + 375 + ], + "spans": [ + { + "bbox": [ + 67, + 351, + 525, + 375 + ], + "type": "text", + "content": "Table 5: Illustrative examples of various datasets, strategies and model outputs. The answers from the Interleaving, Check & Fix and RARR models are of the same format as the CoT baseline." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 67, + 396, + 290, + 533 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 396, + 290, + 533 + ], + "spans": [ + { + "bbox": [ + 67, + 396, + 290, + 533 + ], + "type": "text", + "content": "and if so, to generate the fixed step based on demonstrations. In the calculation setting, each step is first heuristically checked for whether it contains a calculation, and if so, the calculation is inserted into the calculator tool, and the model is prompted to verify whether the tool output is consistent with the calculation in the text. If this is incorrect, the model generates the fixed step. In both cases, the answer generation continues where the fixed step completely replaces the original incorrect step." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 543, + 290, + 677 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 543, + 290, + 677 + ], + "spans": [ + { + "bbox": [ + 67, + 543, + 290, + 677 + ], + "type": "text", + "content": "RARR. RARR (Retrofit Attribution using Research and Revision, Gao et al., 2023a) was proposed as a post processing method for refining any text, including LM chain-of-thought outputs. 
This is done via automatically finding attribution for each claim in the text, and post-editing the output to fix unsupported content while preserving the original output as much as possible. Our RARR implementation reuses the original implementation by Gao et al. (2023a)." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 679, + 290, + 706 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 679, + 290, + 706 + ], + "spans": [ + { + "bbox": [ + 67, + 679, + 290, + 706 + ], + "type": "text", + "content": "The RARR process involves the following steps, with each considered as a separate tool:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 77, + 719, + 290, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 719, + 290, + 773 + ], + "spans": [ + { + "bbox": [ + 77, + 719, + 290, + 773 + ], + "type": "text", + "content": "1. Question Generation: First, they generate a series of questions that cover various aspects of a passage, referred to as passage x. The questions generated aim to verify and attribute" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 324, + 396, + 524, + 423 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 324, + 396, + 524, + 423 + ], + "spans": [ + { + "bbox": [ + 324, + 396, + 524, + 423 + ], + "type": "text", + "content": "information from the passage. This is done via prompting the LM with few-shot examples." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 311, + 430, + 525, + 772 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 311, + 430, + 525, + 497 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 430, + 525, + 497 + ], + "spans": [ + { + "bbox": [ + 311, + 430, + 525, + 497 + ], + "type": "text", + "content": "2. 
Evidence Retrieval: For each generated question, the Google Search tool is utilized to retrieve the top-" + }, + { + "bbox": [ + 311, + 430, + 525, + 497 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 311, + 430, + 525, + 497 + ], + "type": "text", + "content": " passages that are related to the question. In this work, we evaluate both Top-1 and Top-5." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 311, + 504, + 524, + 624 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 504, + 524, + 624 + ], + "spans": [ + { + "bbox": [ + 311, + 504, + 524, + 624 + ], + "type": "text", + "content": "3. Evidence Ranking: The retrieved evidence passages are next ranked using a query-document relevance scorer. Unlike the original RARR implementation (Gao et al., 2023a), which uses the GTR retrieval model (Ni et al., 2022), we instead implement the scorer via few-shot LM prompting, as suggested by the authors. The output of this stage is thus the top-1 ranked evidence." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 311, + 632, + 524, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 632, + 524, + 713 + ], + "spans": [ + { + "bbox": [ + 311, + 632, + 524, + 713 + ], + "type": "text", + "content": "4. Agreement Phase: Given a triplet of a text, a question, and an evidence passage, this phase determines whether both the text and the evidence imply the same answer to the question. This is implemented via few-shot LM prompting using a chain-of-thought style prompt." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 311, + 719, + 524, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 719, + 524, + 772 + ], + "spans": [ + { + "bbox": [ + 311, + 719, + 524, + 772 + ], + "type": "text", + "content": "5. 
Editing Phase: If the previous Agreement Phase outputs disagreement between the text and the evidence, the (text, question, evidence) triplet is fed to a model that outputs a revised" + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "13869" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 83, + 68, + 274, + 304 + ], + "blocks": [ + { + "bbox": [ + 83, + 68, + 274, + 304 + ], + "lines": [ + { + "bbox": [ + 83, + 68, + 274, + 304 + ], + "spans": [ + { + "bbox": [ + 83, + 68, + 274, + 304 + ], + "type": "table", + "html": "
<tr><th>Model</th><th>Dataset</th><th>Best baseline</th></tr>
<tr><td>GPT-3</td><td>StrategyQA</td><td>Inline</td></tr>
<tr><td>GPT-3</td><td>DROP</td><td>Inline</td></tr>
<tr><td>GPT-3</td><td>GSM8K</td><td>CoT</td></tr>
<tr><td>GPT-3</td><td>MuSiQue</td><td>Inline</td></tr>
<tr><td>Flan-UL2-20B</td><td>StrategyQA</td><td>Inline</td></tr>
<tr><td>Flan-UL2-20B</td><td>DROP</td><td>Inline</td></tr>
<tr><td>Flan-UL2-20B</td><td>GSM8K</td><td>CoT</td></tr>
<tr><td>Flan-UL2-20B</td><td>MuSiQue</td><td>CoT</td></tr>
<tr><td>Flan-PaLM-540B</td><td>StrategyQA</td><td>CoT</td></tr>
<tr><td>Flan-PaLM-540B</td><td>DROP</td><td>Inline</td></tr>
<tr><td>Flan-PaLM-540B</td><td>GSM8K</td><td>Inline</td></tr>
<tr><td>Flan-PaLM-540B</td><td>MuSiQue</td><td>CoT</td></tr>
<tr><td>Flan-PaLM-62B</td><td>StrategyQA</td><td>CoT</td></tr>
<tr><td>Flan-PaLM-62B</td><td>DROP</td><td>CoT</td></tr>
<tr><td>Flan-PaLM-62B</td><td>GSM8K</td><td>Inline</td></tr>
<tr><td>Flan-PaLM-62B</td><td>MuSiQue</td><td>CoT</td></tr>
", + "image_path": "055d637244534fe5b256bba514b353da7eaf90c3a9ab23736fd5fd801ffdec3b.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 89, + 393, + 291, + 515 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 89, + 393, + 291, + 515 + ], + "spans": [ + { + "bbox": [ + 89, + 393, + 291, + 515 + ], + "type": "text", + "content": "version of the text, considering the discrepancy between the previous text and the evidence. This is implemented via few-shot LM prompting using a similar chain-of-thought style prompt from the previous stage (see Gao et al., 2023a for the exact prompting template). The agreement and editing phases run iteratively until there are no needed revisions, detected in the Agreement Phase." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 68, + 524, + 142, + 535 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 524, + 142, + 535 + ], + "spans": [ + { + "bbox": [ + 68, + 524, + 142, + 535 + ], + "type": "text", + "content": "A.2 Baselines" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 541, + 291, + 663 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 541, + 291, + 663 + ], + "spans": [ + { + "bbox": [ + 67, + 541, + 291, + 663 + ], + "type": "text", + "content": "Chain-of-Thought. The CoT baseline is the standard baseline proposed by Wei et al. (2023) and implemented as a baseline by Press et al. (2023); Paranjape et al. (2023), inter alia. Often, the demonstrations used for this baseline are those originally published by Wei et al. (2023). In this work we annotate a new sample of examples with CoT answers for the purpose of a better estimation of CoT few-shot performance, and release our annotations." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 671, + 291, + 739 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 671, + 291, + 739 + ], + "spans": [ + { + "bbox": [ + 67, + 671, + 291, + 739 + ], + "type": "text", + "content": "Self-Ask. The Self-Ask baseline uses the Self-Ask tool demonstrations, but does not invoke the tool after each \"Follow up:\" call, and instead generates the entire answer. This is the original no-tool baseline in Press et al. (2023)." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 746, + 290, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 746, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 746, + 290, + 772 + ], + "type": "text", + "content": "Inline. The Inline baseline uses the Inline tool demonstrations, but does not invoke the tool after" + } + ] + } + ], + "index": 6 + }, + { + "type": "table", + "bbox": [ + 350, + 68, + 478, + 148 + ], + "blocks": [ + { + "bbox": [ + 67, + 312, + 291, + 371 + ], + "lines": [ + { + "bbox": [ + 67, + 312, + 291, + 371 + ], + "spans": [ + { + "bbox": [ + 67, + 312, + 291, + 371 + ], + "type": "text", + "content": "Table 6: For each combination of dataset and model, we derive the best-performing baseline on the average score across the few-shot experiments. There is no clear winner: Two of the baselines achieve the best score in " + }, + { + "bbox": [ + 67, + 312, + 291, + 371 + ], + "type": "inline_equation", + "content": "50\\%" + }, + { + "bbox": [ + 67, + 312, + 291, + 371 + ], + "type": "text", + "content": " of cases." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 350, + 68, + 478, + 148 + ], + "lines": [ + { + "bbox": [ + 350, + 68, + 478, + 148 + ], + "spans": [ + { + "bbox": [ + 350, + 68, + 478, + 148 + ], + "type": "table", + "html": "
<tr><th>Model</th><th>Usage (%)</th></tr>
<tr><td>Flan-PaLM-540B</td><td>70.9</td></tr>
<tr><td>Flan-PaLM-62B</td><td>80.6</td></tr>
<tr><td>Flan-UL2-20B</td><td>82.6</td></tr>
<tr><td>GPT-3</td><td>95.1</td></tr>
", + "image_path": "0925aa44a4efd33b4180ed8edcbb477c5e4e83bc090aaf1ae25529fb13dab0a8.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_body" + } + ], + "index": 7 + }, + { + "type": "table", + "bbox": [ + 358, + 191, + 471, + 285 + ], + "blocks": [ + { + "bbox": [ + 302, + 156, + 525, + 180 + ], + "lines": [ + { + "bbox": [ + 302, + 156, + 525, + 180 + ], + "spans": [ + { + "bbox": [ + 302, + 156, + 525, + 180 + ], + "type": "text", + "content": "Table 7: Note that RARR and Interleaving are guaranteed to use tools so they are omitted." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 358, + 191, + 471, + 285 + ], + "lines": [ + { + "bbox": [ + 358, + 191, + 471, + 285 + ], + "spans": [ + { + "bbox": [ + 358, + 191, + 471, + 285 + ], + "type": "table", + "html": "
<tr><th>Strategy</th><th>Usage (%)</th></tr>
<tr><td>Check & Fix</td><td>92.9</td></tr>
<tr><td>SelfAsk</td><td>80.4</td></tr>
<tr><td>SelfAskQA</td><td>72.8</td></tr>
<tr><td>Inline</td><td>99.9</td></tr>
<tr><td>InlineQA</td><td>96.1</td></tr>
", + "image_path": "7736a5dafe824533877271c1cd1c4eac67b9a1641ba3b940dc7940f16c000c9e.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_body" + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 293, + 525, + 328 + ], + "lines": [ + { + "bbox": [ + 302, + 293, + 525, + 328 + ], + "spans": [ + { + "bbox": [ + 302, + 293, + 525, + 328 + ], + "type": "text", + "content": "Table 8: Overview of average rate of tool usage across experiments. Note that RARR and Interleaving are guaranteed to use tools." + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 302, + 351, + 525, + 391 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 351, + 525, + 391 + ], + "spans": [ + { + "bbox": [ + 302, + 351, + 525, + 391 + ], + "type": "text", + "content": "each tool call, and instead generates the entire answer. This is the original no-tool baseline in Schick et al. (2023)." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 403, + 415, + 415 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 403, + 415, + 415 + ], + "spans": [ + { + "bbox": [ + 302, + 403, + 415, + 415 + ], + "type": "text", + "content": "B Extended Results" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 425, + 525, + 466 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 425, + 525, + 466 + ], + "spans": [ + { + "bbox": [ + 302, + 425, + 525, + 466 + ], + "type": "text", + "content": "We provide the full results for our experiments (described in §4) in §B.1, and further analysis of TA strategy performance and tool usage in §B.2." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 476, + 446, + 488 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 476, + 446, + 488 + ], + "spans": [ + { + "bbox": [ + 302, + 476, + 446, + 488 + ], + "type": "text", + "content": "B.1 Full Experiment Results" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 493, + 525, + 587 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 493, + 525, + 587 + ], + "spans": [ + { + "bbox": [ + 302, + 493, + 525, + 587 + ], + "type": "text", + "content": "Tables 9, 10 detail our experiment results. Tables 11, 12, 13, 14 detail average and max aggregations over the few-shot prompts. As mentioned, we sample 500 examples for Flan-PaLM-62B, Flan-PaLM-540B and Flan-UL2-20B experiments, and 250 for GPT-3 experiments, with the exception of StrategyQA whose test set has 229 examples." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 588, + 525, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 588, + 525, + 696 + ], + "spans": [ + { + "bbox": [ + 302, + 588, + 525, + 696 + ], + "type": "text", + "content": "For DROP and MuSiQue, we report the F1 measures using the evaluation scripts provided by Dua et al. (2019) and Trivedi et al. (2022b), respectively. For GSM8K, we normalize the numerical answers and measure exact-match. For StrategyQA, we normalize the answers (for capitalization, prefix and suffix punctuation, and so on) and measure exact-match to \"yes\" and \"no\"." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 302, + 706, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 706, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 706, + 525, + 772 + ], + "type": "text", + "content": "Best-performing strategies and baselines in each setting. 
In Tables 2, 6 we show the best-performing baseline and best-performing general strategy for each setting of model and dataset, among the average scores across the three few-shot" + } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "13870" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 291, + 122 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 291, + 122 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 291, + 122 + ], + "type": "text", + "content": "experiments. For strategies in general (Table 2), we see that the winning strategies vary significantly for different models, which supports Guideline (3) in Table 1." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 126, + 291, + 232 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 126, + 291, + 232 + ], + "spans": [ + { + "bbox": [ + 67, + 126, + 291, + 232 + ], + "type": "text", + "content": "The distribution among the baselines is split " + }, + { + "bbox": [ + 67, + 126, + 291, + 232 + ], + "type": "inline_equation", + "content": "50\\% - 50\\%" + }, + { + "bbox": [ + 67, + 126, + 291, + 232 + ], + "type": "text", + "content": " among CoT and Inline. 
When considering each few-shot experiment separately (i.e., not taking the average), the distribution is " + }, + { + "bbox": [ + 67, + 126, + 291, + 232 + ], + "type": "inline_equation", + "content": "60.0\\%" + }, + { + "bbox": [ + 67, + 126, + 291, + 232 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 67, + 126, + 291, + 232 + ], + "type": "inline_equation", + "content": "37.5\\%" + }, + { + "bbox": [ + 67, + 126, + 291, + 232 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 67, + 126, + 291, + 232 + ], + "type": "inline_equation", + "content": "2\\%" + }, + { + "bbox": [ + 67, + 126, + 291, + 232 + ], + "type": "text", + "content": " for Baseline-CoT, Baseline-Inline and Baseline-SelfAsk respectively for which baseline achieves the best-performing score. This supports Guideline (2) in Table 1." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 243, + 138, + 255 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 243, + 138, + 255 + ], + "spans": [ + { + "bbox": [ + 67, + 243, + 138, + 255 + ], + "type": "text", + "content": "B.2 Analysis" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 260, + 291, + 435 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 260, + 291, + 435 + ], + "spans": [ + { + "bbox": [ + 67, + 260, + 291, + 435 + ], + "type": "text", + "content": "Example Difficulty. Figures 5, 6 show extended results for the example difficulty analyses in §6. Here we consider the median of each difficulty metric—i.e., the difficulty across all entities or numbers in the example—rather than the minimum or maximum, as well as the ablation of refinement strategies against no-refinement strategies. We additionally checked for two alternative axes: operation complexity (addition and subtraction as “easy” examples, and multiplication and division as “hard” examples) and popularity links rather than popularity views. 
The trends we observe in the main paper hold in all of these cases." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 444, + 291, + 510 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 444, + 291, + 510 + ], + "spans": [ + { + "bbox": [ + 67, + 444, + 291, + 510 + ], + "type": "text", + "content": "Tool Usage. Tables 7, 8 show aggregate tool usage percentages over multiple axes. Overall, few-shot demonstrations induce tool usage in the majority of cases, though not completely so (i.e., below " + }, + { + "bbox": [ + 67, + 444, + 291, + 510 + ], + "type": "inline_equation", + "content": "100\\%" + }, + { + "bbox": [ + 67, + 444, + 291, + 510 + ], + "type": "text", + "content": ")." + } + ] + } + ], + "index": 4 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 311, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 311, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 311, + 791 + ], + "type": "text", + "content": "13871" + } + ] + } + ], + "index": 5 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 86, + 222, + 505, + 311 + ], + "blocks": [ + { + "bbox": [ + 86, + 222, + 505, + 311 + ], + "lines": [ + { + "bbox": [ + 86, + 222, + 505, + 311 + ], + "spans": [ + { + "bbox": [ + 86, + 222, + 505, + 311 + ], + "type": "image", + "image_path": "ac4a0dde76cc7d60efb1b1a72b7167e5d83cc236108f7ff0a0d445f57c449897.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 85, + 316, + 505, + 403 + ], + "blocks": [ + { + "bbox": [ + 85, + 316, + 505, + 403 + ], + "lines": [ + { + "bbox": [ + 85, + 316, + 505, + 403 + ], + "spans": [ + { + "bbox": [ + 85, + 316, + 505, + 403 + ], + "type": "image", + "image_path": "9e9aa15aaaedd0de974d1aef476063f9aed9e6102bea1d72aeaac09fbf896ab9.jpg" + } + ] + } + ], + "index": 1, + "angle": 
0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 85, + 408, + 505, + 491 + ], + "blocks": [ + { + "bbox": [ + 85, + 408, + 505, + 491 + ], + "lines": [ + { + "bbox": [ + 85, + 408, + 505, + 491 + ], + "spans": [ + { + "bbox": [ + 85, + 408, + 505, + 491 + ], + "type": "image", + "image_path": "d891c4106bc180293d8705b06b931c249a0fa7197a273df43947839153b0b755.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 85, + 495, + 505, + 579 + ], + "blocks": [ + { + "bbox": [ + 85, + 495, + 505, + 579 + ], + "lines": [ + { + "bbox": [ + 85, + 495, + 505, + 579 + ], + "spans": [ + { + "bbox": [ + 85, + 495, + 505, + 579 + ], + "type": "image", + "image_path": "90fceda5634230bcbc932418e38619436b8dcecefc9b958c8161556d30ba5cfa.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 591, + 527, + 629 + ], + "lines": [ + { + "bbox": [ + 67, + 591, + 527, + 629 + ], + "spans": [ + { + "bbox": [ + 67, + 591, + 527, + 629 + ], + "type": "text", + "content": "Figure 5: An extension of Table 3 with results for both the average across few-shot experiments (a-b) and the maximum across few-shot experiments (c-d)—i.e., the maximum between 3-shot, 5-shot and 7-shot for each experiment setting."
+ } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 313, + 792 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 313, + 792 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 313, + 792 + ], + "type": "text", + "content": "13872" + } + ] + } + ], + "index": 5 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 16 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 94, + 79, + 501, + 716 + ], + "blocks": [ + { + "bbox": [ + 94, + 79, + 501, + 716 + ], + "lines": [ + { + "bbox": [ + 94, + 79, + 501, + 716 + ], + "spans": [ + { + "bbox": [ + 94, + 79, + 501, + 716 + ], + "type": "table", + "html": "
<table><tr><td rowspan="2">Strategy</td><td rowspan="2">Model</td><td colspan="3">MuSiQue</td><td colspan="3">StrategyQA</td></tr>
<tr><td>3-shot</td><td>5-shot</td><td>7-shot</td><td>3-shot</td><td>5-shot</td><td>7-shot</td></tr>
<tr><td>RARR</td><td>Flan-PaLM-540B</td><td>34.86</td><td>35.09</td><td>34.14</td><td>80.35</td><td>81.22</td><td>80.79</td></tr>
<tr><td>RARR</td><td>Flan-UL2-20B</td><td>13.40</td><td>12.01</td><td>12.98</td><td>55.90</td><td>40.17</td><td>42.79</td></tr>
<tr><td>RARR</td><td>Flan-PaLM-62B</td><td>23.60</td><td>23.42</td><td>24.07</td><td>75.98</td><td>77.73</td><td>77.73</td></tr>
<tr><td>Baseline-CoT</td><td>Flan-PaLM-540B</td><td>33.07</td><td>33.36</td><td>33.80</td><td>79.91</td><td>84.28</td><td>82.10</td></tr>
<tr><td>Baseline-CoT</td><td>Flan-UL2-20B</td><td>15.14</td><td>16.50</td><td>16.10</td><td>67.25</td><td>71.62</td><td>72.05</td></tr>
<tr><td>Baseline-CoT</td><td>GPT-3</td><td>27.37</td><td>29.31</td><td>30.25</td><td>70.74</td><td>71.62</td><td>71.62</td></tr>
<tr><td>Baseline-CoT</td><td>Flan-PaLM-62B</td><td>23.60</td><td>23.42</td><td>24.27</td><td>75.98</td><td>79.04</td><td>80.35</td></tr>
<tr><td>Baseline-SelfAsk</td><td>Flan-PaLM-540B</td><td>25.80</td><td>25.34</td><td>24.31</td><td>76.86</td><td>73.36</td><td>75.55</td></tr>
<tr><td>Baseline-SelfAsk</td><td>Flan-UL2-20B</td><td>11.40</td><td>11.52</td><td>11.52</td><td>34.06</td><td>48.47</td><td>53.71</td></tr>
<tr><td>Baseline-SelfAsk</td><td>GPT-3</td><td>27.98</td><td>28.13</td><td>29.80</td><td>72.05</td><td>74.24</td><td>73.36</td></tr>
<tr><td>Baseline-SelfAsk</td><td>Flan-PaLM-62B</td><td>5.28</td><td>9.52</td><td>5.43</td><td>58.95</td><td>75.98</td><td>74.24</td></tr>
<tr><td>Baseline-Inline</td><td>Flan-PaLM-540B</td><td>30.39</td><td>30.71</td><td>31.19</td><td>71.62</td><td>79.91</td><td>72.49</td></tr>
<tr><td>Baseline-Inline</td><td>Flan-UL2-20B</td><td>13.66</td><td>13.33</td><td>9.74</td><td>72.05</td><td>68.56</td><td>71.18</td></tr>
<tr><td>Baseline-Inline</td><td>GPT-3</td><td>29.11</td><td>30.33</td><td>28.15</td><td>70.31</td><td>75.98</td><td>78.60</td></tr>
<tr><td>Baseline-Inline</td><td>Flan-PaLM-62B</td><td>23.42</td><td>22.69</td><td>21.86</td><td>75.11</td><td>73.36</td><td>75.55</td></tr>
<tr><td>SelfAsk</td><td>Flan-PaLM-540B</td><td>20.02</td><td>23.14</td><td>23.26</td><td>71.62</td><td>71.18</td><td>73.80</td></tr>
<tr><td>SelfAsk</td><td>Flan-UL2-20B</td><td>11.86</td><td>7.68</td><td>7.41</td><td>49.78</td><td>25.76</td><td>23.14</td></tr>
<tr><td>SelfAsk</td><td>GPT-3</td><td>24.38</td><td>24.15</td><td>22.33</td><td>64.19</td><td>67.25</td><td>65.94</td></tr>
<tr><td>SelfAsk</td><td>Flan-PaLM-62B</td><td>13.79</td><td>14.80</td><td>12.68</td><td>67.25</td><td>67.69</td><td>66.38</td></tr>
<tr><td>SelfAskQA</td><td>Flan-PaLM-540B</td><td>21.08</td><td>21.92</td><td>22.91</td><td>71.62</td><td>69.43</td><td>73.80</td></tr>
<tr><td>SelfAskQA</td><td>Flan-UL2-20B</td><td>8.53</td><td>5.35</td><td>2.30</td><td>47.16</td><td>17.03</td><td>11.79</td></tr>
<tr><td>SelfAskQA</td><td>GPT-3</td><td>32.74</td><td>31.30</td><td>30.34</td><td>65.50</td><td>67.69</td><td>70.31</td></tr>
<tr><td>SelfAskQA</td><td>Flan-PaLM-62B</td><td>15.42</td><td>17.49</td><td>14.51</td><td>67.25</td><td>68.12</td><td>69.00</td></tr>
<tr><td>InlineQA</td><td>Flan-PaLM-540B</td><td>31.86</td><td>32.78</td><td>32.10</td><td>70.31</td><td>72.93</td><td>73.36</td></tr>
<tr><td>InlineQA</td><td>Flan-UL2-20B</td><td>18.07</td><td>17.94</td><td>1.56</td><td>71.18</td><td>70.31</td><td>56.77</td></tr>
<tr><td>InlineQA</td><td>GPT-3</td><td>34.90</td><td>36.65</td><td>31.32</td><td>70.31</td><td>72.05</td><td>70.31</td></tr>
<tr><td>InlineQA</td><td>Flan-PaLM-62B</td><td>12.52</td><td>11.65</td><td>10.55</td><td>61.14</td><td>63.32</td><td>61.57</td></tr>
<tr><td>Check & Fix</td><td>Flan-PaLM-540B</td><td>30.73</td><td>33.17</td><td>33.48</td><td>80.35</td><td>80.79</td><td>78.17</td></tr>
<tr><td>Check & Fix</td><td>Flan-UL2-20B</td><td>10.90</td><td>11.77</td><td>13.52</td><td>52.40</td><td>60.70</td><td>69.87</td></tr>
<tr><td>Check & Fix</td><td>GPT-3</td><td>29.66</td><td>32.95</td><td>32.26</td><td>72.05</td><td>73.80</td><td>70.74</td></tr>
<tr><td>Check & Fix</td><td>Flan-PaLM-62B</td><td>25.21</td><td>26.39</td><td>26.47</td><td>75.55</td><td>71.18</td><td>76.42</td></tr>
<tr><td>Inline</td><td>Flan-PaLM-540B</td><td>18.97</td><td>24.42</td><td>22.61</td><td>74.24</td><td>74.24</td><td>75.11</td></tr>
<tr><td>Inline</td><td>Flan-UL2-20B</td><td>14.70</td><td>14.93</td><td>14.78</td><td>48.47</td><td>52.84</td><td>44.98</td></tr>
<tr><td>Inline</td><td>GPT-3</td><td>28.85</td><td>31.03</td><td>33.54</td><td>70.31</td><td>69.43</td><td>68.56</td></tr>
<tr><td>Inline</td><td>Flan-PaLM-62B</td><td>9.95</td><td>9.45</td><td>13.32</td><td>54.59</td><td>68.56</td><td>70.31</td></tr>
<tr><td>Interleaving</td><td>Flan-PaLM-540B</td><td>23.71</td><td>21.29</td><td>20.51</td><td>76.86</td><td>78.60</td><td>75.98</td></tr>
<tr><td>Interleaving</td><td>Flan-PaLM-62B</td><td>23.43</td><td>23.71</td><td>24.42</td><td>74.67</td><td>71.62</td><td>74.24</td></tr>
<tr><td>RARR-Top5</td><td>Flan-PaLM-540B</td><td>36.12</td><td>35.40</td><td>35.44</td><td>80.35</td><td>79.91</td><td>79.91</td></tr>
<tr><td>SelfAskQA-Top5</td><td>Flan-PaLM-540B</td><td>19.75</td><td>21.60</td><td>21.99</td><td>69.87</td><td>70.31</td><td>72.05</td></tr>
<tr><td>Inline-Top5</td><td>Flan-PaLM-540B</td><td>32.67</td><td>34.53</td><td>31.69</td><td>65.50</td><td>77.73</td><td>72.93</td></tr>
<tr><td>Check & Fix-Top5</td><td>Flan-PaLM-540B</td><td>31.74</td><td>32.68</td><td>33.87</td><td>78.60</td><td>81.66</td><td>81.22</td></tr>
</table>
", + "image_path": "757db40d7275d21575c568699949beaf95b45cb6610312417691092f485cdf1b.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + }, + { + "bbox": [ + 69, + 727, + 524, + 761 + ], + "lines": [ + { + "bbox": [ + 69, + 727, + 524, + 761 + ], + "spans": [ + { + "bbox": [ + 69, + 727, + 524, + 761 + ], + "type": "text", + "content": "Table 9: Results for the knowledge-retrieval tasks of MuSiQue and StrategyQA. MuSiQue scores are F1 scores. Missing cells, such as \"Interleaving\" with Flan-UL2-20B, are experiments where the model failed to converge to an answer." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_footnote" + } + ], + "index": 0 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 285, + 781, + 311, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 285, + 781, + 311, + 791 + ], + "spans": [ + { + "bbox": [ + 285, + 781, + 311, + 791 + ], + "type": "text", + "content": "13873" + } + ] + } + ], + "index": 2 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 17 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 94, + 72, + 500, + 360 + ], + "blocks": [ + { + "bbox": [ + 94, + 72, + 500, + 360 + ], + "lines": [ + { + "bbox": [ + 94, + 72, + 500, + 360 + ], + "spans": [ + { + "bbox": [ + 94, + 72, + 500, + 360 + ], + "type": "table", + "html": "
<table><tr><td rowspan="2">Strategy</td><td rowspan="2">Model</td><td colspan="3">DROP</td><td colspan="3">GSM8K</td></tr>
<tr><td>3-shot</td><td>5-shot</td><td>7-shot</td><td>3-shot</td><td>5-shot</td><td>7-shot</td></tr>
<tr><td>Baseline-CoT</td><td>Flan-PaLM-540B</td><td>77.2</td><td>75.0</td><td>74.2</td><td>67.4</td><td>70.8</td><td>70.8</td></tr>
<tr><td>Baseline-CoT</td><td>Flan-UL2-20B</td><td></td><td></td><td></td><td>7.2</td><td>27.2</td><td>26.2</td></tr>
<tr><td>Baseline-CoT</td><td>GPT-3</td><td>57.6</td><td>55.6</td><td>55.6</td><td>58.8</td><td>58.0</td><td>58.4</td></tr>
<tr><td>Baseline-CoT</td><td>Flan-PaLM-62B</td><td>65.6</td><td>63.6</td><td>59.2</td><td>47.4</td><td>46.2</td><td>47.4</td></tr>
<tr><td>Baseline-Inline</td><td>Flan-PaLM-540B</td><td>77.8</td><td>75.6</td><td>74.4</td><td>69.8</td><td>72.6</td><td>71.2</td></tr>
<tr><td>Baseline-Inline</td><td>Flan-UL2-20B</td><td></td><td></td><td></td><td>3.6</td><td>5.6</td><td>3.6</td></tr>
<tr><td>Baseline-Inline</td><td>GPT-3</td><td>57.6</td><td>66.0</td><td>59.6</td><td>51.6</td><td>54.0</td><td>53.2</td></tr>
<tr><td>Baseline-Inline</td><td>Flan-PaLM-62B</td><td>59.0</td><td>64.0</td><td>59.2</td><td>48.8</td><td>47.8</td><td>48.0</td></tr>
<tr><td>Inline</td><td>Flan-PaLM-540B</td><td>76.2</td><td>75.2</td><td>74.4</td><td>61.4</td><td>61.8</td><td>70.6</td></tr>
<tr><td>Inline</td><td>Flan-UL2-20B</td><td></td><td></td><td></td><td>26.6</td><td>26.2</td><td>26.0</td></tr>
<tr><td>Inline</td><td>GPT-3</td><td>56.8</td><td>66.0</td><td>45.2</td><td>50.8</td><td>52.4</td><td>52.8</td></tr>
<tr><td>Inline</td><td>Flan-PaLM-62B</td><td>57.0</td><td>64.0</td><td>57.8</td><td>48.8</td><td>47.8</td><td>48.2</td></tr>
<tr><td>Check & Fix</td><td>Flan-PaLM-540B</td><td>76.0</td><td>73.6</td><td>45.0</td><td>68.4</td><td>70.4</td><td>70.2</td></tr>
<tr><td>Check & Fix</td><td>Flan-UL2-20B</td><td></td><td></td><td></td><td>23.2</td><td>25.8</td><td>23.2</td></tr>
<tr><td>Check & Fix</td><td>GPT-3</td><td>54.8</td><td>54.4</td><td>54.8</td><td>56.0</td><td>58.4</td><td>61.6</td></tr>
<tr><td>Check & Fix</td><td>Flan-PaLM-62B</td><td>65.0</td><td>63.6</td><td>44.2</td><td>46.8</td><td>44.0</td><td>46.6</td></tr>
</table>
<table><tr><td>Strategy</td><td>Aggregation</td><td>Model</td><td>MuSiQue</td><td>StrategyQA</td></tr>
<tr><td>Baseline-CoT</td><td>Max</td><td>GPT-3</td><td>30.2</td><td>71.6</td></tr>
<tr><td>Baseline-CoT</td><td>Average</td><td>GPT-3</td><td>29.0</td><td>71.3</td></tr>
<tr><td>Baseline-CoT</td><td>Max</td><td>Flan-UL2-20B</td><td>16.5</td><td>72.1</td></tr>
<tr><td>Baseline-CoT</td><td>Average</td><td>Flan-UL2-20B</td><td>15.9</td><td>70.3</td></tr>
<tr><td>Baseline-CoT</td><td>Max</td><td>Flan-PaLM-62B</td><td>24.3</td><td>80.3</td></tr>
<tr><td>Baseline-CoT</td><td>Average</td><td>Flan-PaLM-62B</td><td>23.8</td><td>78.5</td></tr>
<tr><td>Baseline-CoT</td><td>Max</td><td>Flan-PaLM-540B</td><td>33.8</td><td>84.3</td></tr>
<tr><td>Baseline-CoT</td><td>Average</td><td>Flan-PaLM-540B</td><td>33.4</td><td>82.1</td></tr>
<tr><td>Baseline-SelfAsk</td><td>Max</td><td>GPT-3</td><td>29.8</td><td>74.2</td></tr>
<tr><td>Baseline-SelfAsk</td><td>Average</td><td>GPT-3</td><td>28.6</td><td>73.2</td></tr>
<tr><td>Baseline-SelfAsk</td><td>Max</td><td>Flan-UL2-20B</td><td>11.5</td><td>53.7</td></tr>
<tr><td>Baseline-SelfAsk</td><td>Average</td><td>Flan-UL2-20B</td><td>11.5</td><td>45.4</td></tr>
<tr><td>Baseline-SelfAsk</td><td>Max</td><td>Flan-PaLM-62B</td><td>9.5</td><td>76.0</td></tr>
<tr><td>Baseline-SelfAsk</td><td>Average</td><td>Flan-PaLM-62B</td><td>6.7</td><td>69.7</td></tr>
<tr><td>Baseline-SelfAsk</td><td>Max</td><td>Flan-PaLM-540B</td><td>25.8</td><td>76.9</td></tr>
<tr><td>Baseline-SelfAsk</td><td>Average</td><td>Flan-PaLM-540B</td><td>25.1</td><td>75.3</td></tr>
<tr><td>Baseline-Inline</td><td>Max</td><td>GPT-3</td><td>30.3</td><td>78.6</td></tr>
<tr><td>Baseline-Inline</td><td>Average</td><td>GPT-3</td><td>29.2</td><td>75.0</td></tr>
<tr><td>Baseline-Inline</td><td>Max</td><td>Flan-UL2-20B</td><td>13.7</td><td>72.1</td></tr>
<tr><td>Baseline-Inline</td><td>Average</td><td>Flan-UL2-20B</td><td>12.2</td><td>70.6</td></tr>
<tr><td>Baseline-Inline</td><td>Max</td><td>Flan-PaLM-62B</td><td>23.4</td><td>75.5</td></tr>
<tr><td>Baseline-Inline</td><td>Average</td><td>Flan-PaLM-62B</td><td>22.7</td><td>74.7</td></tr>
<tr><td>Baseline-Inline</td><td>Max</td><td>Flan-PaLM-540B</td><td>31.2</td><td>79.9</td></tr>
<tr><td>Baseline-Inline</td><td>Average</td><td>Flan-PaLM-540B</td><td>30.8</td><td>74.7</td></tr>
</table>
", + "image_path": "312edb6df25d872382dc3f317cb783a660696c70e4aed1c28b3932aeafa168fc.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 135, + 753, + 458, + 766 + ], + "lines": [ + { + "bbox": [ + 135, + 753, + 458, + 766 + ], + "spans": [ + { + "bbox": [ + 135, + 753, + 458, + 766 + ], + "type": "text", + "content": "Table 11: Aggregations by few-shot prompt of the results in Table 9 (basiines)." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "13874" + } + ] + } + ], + "index": 4 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 18 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 89, + 108, + 502, + 693 + ], + "blocks": [ + { + "bbox": [ + 89, + 108, + 502, + 693 + ], + "lines": [ + { + "bbox": [ + 89, + 108, + 502, + 693 + ], + "spans": [ + { + "bbox": [ + 89, + 108, + 502, + 693 + ], + "type": "image", + "image_path": "3bea2fe03cccd890f0ff782f744e84a91fdfbf876666e0691569d43890f62e6b.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 704, + 526, + 741 + ], + "lines": [ + { + "bbox": [ + 67, + 704, + 526, + 741 + ], + "spans": [ + { + "bbox": [ + 67, + 704, + 526, + 741 + ], + "type": "text", + "content": "Figure 6: An extension of Table 4. (a-b) refer to taking the minimum of entity page views to ablate examples that have rare entities, and maximum of numbers to ablate examples with large numbers. (c-e) take the median in both cases, and (f) shows the results when comparing TA strategies between refinement and non-refinement types." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "13875" + } + ] + } + ], + "index": 2 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 19 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 139, + 255, + 456, + 561 + ], + "blocks": [ + { + "bbox": [ + 139, + 255, + 456, + 561 + ], + "lines": [ + { + "bbox": [ + 139, + 255, + 456, + 561 + ], + "spans": [ + { + "bbox": [ + 139, + 255, + 456, + 561 + ], + "type": "table", + "html": "
<table><tr><td>Strategy</td><td>Aggregation</td><td>Model</td><td>MuSiQue</td><td>StrategyQA</td></tr>
<tr><td>Interleaving</td><td>Max</td><td>Flan-PaLM-62B</td><td>24.4</td><td>74.7</td></tr>
<tr><td>Interleaving</td><td>Average</td><td>Flan-PaLM-62B</td><td>23.9</td><td>73.9</td></tr>
<tr><td>Interleaving</td><td>Max</td><td>Flan-PaLM-540B</td><td>23.7</td><td>78.6</td></tr>
<tr><td>Interleaving</td><td>Average</td><td>Flan-PaLM-540B</td><td>21.8</td><td>77.0</td></tr>
<tr><td>RARR</td><td>Max</td><td>Flan-UL2-20B</td><td>13.4</td><td>55.9</td></tr>
<tr><td>RARR</td><td>Average</td><td>Flan-UL2-20B</td><td>12.8</td><td>46.3</td></tr>
<tr><td>RARR</td><td>Max</td><td>Flan-PaLM-62B</td><td>24.1</td><td>77.7</td></tr>
<tr><td>RARR</td><td>Average</td><td>Flan-PaLM-62B</td><td>23.7</td><td>77.1</td></tr>
<tr><td>RARR</td><td>Max</td><td>Flan-PaLM-540B</td><td>35.1</td><td>81.2</td></tr>
<tr><td>RARR</td><td>Average</td><td>Flan-PaLM-540B</td><td>34.7</td><td>80.6</td></tr>
<tr><td>RARR-Top5</td><td>Max</td><td>Flan-PaLM-540B</td><td>36.1</td><td>80.3</td></tr>
<tr><td>RARR-Top5</td><td>Average</td><td>Flan-PaLM-540B</td><td>35.7</td><td>80.1</td></tr>
<tr><td>Check & Fix</td><td>Max</td><td>GPT-3</td><td>32.9</td><td>73.8</td></tr>
<tr><td>Check & Fix</td><td>Average</td><td>GPT-3</td><td>31.6</td><td>72.2</td></tr>
<tr><td>Check & Fix</td><td>Max</td><td>Flan-UL2-20B</td><td>13.5</td><td>69.9</td></tr>
<tr><td>Check & Fix</td><td>Average</td><td>Flan-UL2-20B</td><td>12.1</td><td>61.0</td></tr>
<tr><td>Check & Fix</td><td>Max</td><td>Flan-PaLM-62B</td><td>26.5</td><td>76.4</td></tr>
<tr><td>Check & Fix</td><td>Average</td><td>Flan-PaLM-62B</td><td>26.0</td><td>74.4</td></tr>
<tr><td>Check & Fix</td><td>Max</td><td>Flan-PaLM-540B</td><td>33.5</td><td>80.8</td></tr>
<tr><td>Check & Fix</td><td>Average</td><td>Flan-PaLM-540B</td><td>32.3</td><td>79.6</td></tr>
<tr><td>Check & Fix-Top5</td><td>Max</td><td>Flan-PaLM-540B</td><td>33.9</td><td>81.7</td></tr>
<tr><td>Check & Fix-Top5</td><td>Average</td><td>Flan-PaLM-540B</td><td>32.8</td><td>80.5</td></tr>
</table>
", + "image_path": "f250cf129b18f01d2e3ed98880c61ce7682534398a8988b8e50e58f536b2511e.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 127, + 570, + 464, + 583 + ], + "lines": [ + { + "bbox": [ + 127, + 570, + 464, + 583 + ], + "spans": [ + { + "bbox": [ + 127, + 570, + 464, + 583 + ], + "type": "text", + "content": "Table 12: Aggregations by few-shot prompt of the results in Table 9 (TA strategies)." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "text", + "content": "13876" + } + ] + } + ], + "index": 2 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 20 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 117, + 127, + 479, + 690 + ], + "blocks": [ + { + "bbox": [ + 117, + 127, + 479, + 690 + ], + "lines": [ + { + "bbox": [ + 117, + 127, + 479, + 690 + ], + "spans": [ + { + "bbox": [ + 117, + 127, + 479, + 690 + ], + "type": "table", + "html": "
<table><tr><td>Strategy</td><td>Aggregation</td><td>Model</td><td>MuSiQue</td><td>StrategyQA</td></tr>
<tr><td>SelfAsk</td><td>Max</td><td>GPT-3</td><td>24.4</td><td>67.2</td></tr>
<tr><td>SelfAsk</td><td>Average</td><td>GPT-3</td><td>23.6</td><td>65.8</td></tr>
<tr><td>SelfAsk</td><td>Max</td><td>Flan-UL2-20B</td><td>11.9</td><td>49.8</td></tr>
<tr><td>SelfAsk</td><td>Average</td><td>Flan-UL2-20B</td><td>9.0</td><td>32.9</td></tr>
<tr><td>SelfAsk</td><td>Max</td><td>Flan-PaLM-62B</td><td>14.8</td><td>67.7</td></tr>
<tr><td>SelfAsk</td><td>Average</td><td>Flan-PaLM-62B</td><td>13.8</td><td>67.1</td></tr>
<tr><td>SelfAsk</td><td>Average</td><td>Flan-PaLM-540B</td><td>22.3</td><td>72.2</td></tr>
<tr><td>SelfAsk</td><td>Max</td><td>Flan-PaLM-540B</td><td>23.4</td><td>74.2</td></tr>
<tr><td>SelfAskQA</td><td>Max</td><td>GPT-3</td><td>32.7</td><td>70.3</td></tr>
<tr><td>SelfAskQA</td><td>Average</td><td>GPT-3</td><td>31.5</td><td>67.8</td></tr>
<tr><td>SelfAskQA</td><td>Max</td><td>Flan-UL2-20B</td><td>8.5</td><td>47.2</td></tr>
<tr><td>SelfAskQA</td><td>Average</td><td>Flan-UL2-20B</td><td>5.4</td><td>25.3</td></tr>
<tr><td>SelfAskQA</td><td>Max</td><td>Flan-PaLM-62B</td><td>17.5</td><td>69.0</td></tr>
<tr><td>SelfAskQA</td><td>Average</td><td>Flan-PaLM-62B</td><td>15.8</td><td>68.1</td></tr>
<tr><td>SelfAskQA</td><td>Max</td><td>Flan-PaLM-540B</td><td>22.8</td><td>75.1</td></tr>
<tr><td>SelfAskQA</td><td>Average</td><td>Flan-PaLM-540B</td><td>21.9</td><td>71.9</td></tr>
<tr><td>SelfAskQA-Top5</td><td>Max</td><td>Flan-PaLM-540B</td><td>22.0</td><td>72.1</td></tr>
<tr><td>SelfAskQA-Top5</td><td>Average</td><td>Flan-PaLM-540B</td><td>21.1</td><td>70.7</td></tr>
<tr><td>InlineQA</td><td>Max</td><td>GPT-3</td><td>36.7</td><td>72.1</td></tr>
<tr><td>InlineQA</td><td>Average</td><td>GPT-3</td><td>34.3</td><td>70.9</td></tr>
<tr><td>InlineQA</td><td>Max</td><td>Flan-UL2-20B</td><td>18.1</td><td>71.2</td></tr>
<tr><td>InlineQA</td><td>Average</td><td>Flan-UL2-20B</td><td>12.5</td><td>66.1</td></tr>
<tr><td>InlineQA</td><td>Max</td><td>Flan-PaLM-62B</td><td>12.5</td><td>63.3</td></tr>
<tr><td>InlineQA</td><td>Average</td><td>Flan-PaLM-62B</td><td>11.6</td><td>62.0</td></tr>
<tr><td>InlineQA</td><td>Max</td><td>Flan-PaLM-540B</td><td>32.4</td><td>73.4</td></tr>
<tr><td>InlineQA</td><td>Average</td><td>Flan-PaLM-540B</td><td>32.1</td><td>72.2</td></tr>
<tr><td>Inline</td><td>Max</td><td>GPT-3</td><td>33.5</td><td>70.3</td></tr>
<tr><td>Inline</td><td>Average</td><td>GPT-3</td><td>31.1</td><td>69.4</td></tr>
<tr><td>Inline</td><td>Max</td><td>Flan-UL2-20B</td><td>14.9</td><td>52.8</td></tr>
<tr><td>Inline</td><td>Average</td><td>Flan-UL2-20B</td><td>14.8</td><td>48.8</td></tr>
<tr><td>Inline</td><td>Max</td><td>Flan-PaLM-62B</td><td>13.3</td><td>70.3</td></tr>
<tr><td>Inline</td><td>Average</td><td>Flan-PaLM-62B</td><td>10.9</td><td>64.5</td></tr>
<tr><td>Inline</td><td>Max</td><td>Flan-PaLM-540B</td><td>24.3</td><td>74.7</td></tr>
<tr><td>Inline</td><td>Average</td><td>Flan-PaLM-540B</td><td>22.0</td><td>74.2</td></tr>
<tr><td>InlineQA-Top5</td><td>Max</td><td>Flan-PaLM-540B</td><td>34.5</td><td>77.7</td></tr>
<tr><td>InlineQA-Top5</td><td>Average</td><td>Flan-PaLM-540B</td><td>33.0</td><td>72.1</td></tr>
</table>
", + "image_path": "1c64a02b1a185df962b9b3e7638b6449b334d051949fba3f12899e13e37e14cd.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 127, + 698, + 464, + 710 + ], + "lines": [ + { + "bbox": [ + 127, + 698, + 464, + 710 + ], + "spans": [ + { + "bbox": [ + 127, + 698, + 464, + 710 + ], + "type": "text", + "content": "Table 13: Aggregations by few-shot prompt of the results in Table 9 (TA strategies)." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "13877" + } + ] + } + ], + "index": 2 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 21 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 139, + 163, + 455, + 654 + ], + "blocks": [ + { + "bbox": [ + 139, + 163, + 455, + 654 + ], + "lines": [ + { + "bbox": [ + 139, + 163, + 455, + 654 + ], + "spans": [ + { + "bbox": [ + 139, + 163, + 455, + 654 + ], + "type": "table", + "html": "
<table><tr><td>Strategy</td><td>Aggregation</td><td>Model</td><td>DROP</td><td>GSM8K</td></tr>
<tr><td>Baseline-CoT</td><td>Max</td><td>GPT-3</td><td>57.6</td><td>58.8</td></tr>
<tr><td>Baseline-CoT</td><td>Average</td><td>GPT-3</td><td>56.3</td><td>58.4</td></tr>
<tr><td>Baseline-CoT</td><td>Max</td><td>Flan-UL2-20B</td><td></td><td>27.2</td></tr>
<tr><td>Baseline-CoT</td><td>Average</td><td>Flan-UL2-20B</td><td></td><td>20.2</td></tr>
<tr><td>Baseline-CoT</td><td>Max</td><td>Flan-PaLM-62B</td><td>65.6</td><td>47.4</td></tr>
<tr><td>Baseline-CoT</td><td>Average</td><td>Flan-PaLM-62B</td><td>62.8</td><td>47.0</td></tr>
<tr><td>Baseline-CoT</td><td>Max</td><td>Flan-PaLM-540B</td><td>77.2</td><td>70.8</td></tr>
<tr><td>Baseline-CoT</td><td>Average</td><td>Flan-PaLM-540B</td><td>75.5</td><td>69.7</td></tr>
<tr><td>Baseline-Inline</td><td>Max</td><td>GPT-3</td><td>66.0</td><td>54.0</td></tr>
<tr><td>Baseline-Inline</td><td>Average</td><td>GPT-3</td><td>61.1</td><td>52.9</td></tr>
<tr><td>Baseline-Inline</td><td>Max</td><td>Flan-UL2-20B</td><td>9.2</td><td>5.6</td></tr>
<tr><td>Baseline-Inline</td><td>Average</td><td>Flan-UL2-20B</td><td>4.2</td><td>4.3</td></tr>
<tr><td>Baseline-Inline</td><td>Max</td><td>Flan-PaLM-62B</td><td>64.0</td><td>48.8</td></tr>
<tr><td>Baseline-Inline</td><td>Average</td><td>Flan-PaLM-62B</td><td>60.7</td><td>48.2</td></tr>
<tr><td>Baseline-Inline</td><td>Max</td><td>Flan-PaLM-540B</td><td>77.8</td><td>72.6</td></tr>
<tr><td>Baseline-Inline</td><td>Average</td><td>Flan-PaLM-540B</td><td>75.9</td><td>71.2</td></tr>
<tr><td>Check & Fix</td><td>Max</td><td>GPT-3</td><td>54.8</td><td>61.6</td></tr>
<tr><td>Check & Fix</td><td>Average</td><td>GPT-3</td><td>54.7</td><td>58.7</td></tr>
<tr><td>Check & Fix</td><td>Max</td><td>Flan-UL2-20B</td><td></td><td>25.8</td></tr>
<tr><td>Check & Fix</td><td>Average</td><td>Flan-UL2-20B</td><td></td><td>24.1</td></tr>
<tr><td>Check & Fix</td><td>Max</td><td>Flan-PaLM-62B</td><td>65.0</td><td>46.8</td></tr>
<tr><td>Check & Fix</td><td>Average</td><td>Flan-PaLM-62B</td><td>57.6</td><td>45.8</td></tr>
<tr><td>Check & Fix</td><td>Max</td><td>Flan-PaLM-540B</td><td>76.0</td><td>70.4</td></tr>
<tr><td>Check & Fix</td><td>Average</td><td>Flan-PaLM-540B</td><td>64.9</td><td>69.7</td></tr>
<tr><td>Inline</td><td>Max</td><td>GPT-3</td><td>66.0</td><td>52.8</td></tr>
<tr><td>Inline</td><td>Average</td><td>GPT-3</td><td>56.0</td><td>52.0</td></tr>
<tr><td>Inline</td><td>Max</td><td>Flan-UL2-20B</td><td></td><td>26.6</td></tr>
<tr><td>Inline</td><td>Average</td><td>Flan-UL2-20B</td><td></td><td>26.3</td></tr>
<tr><td>Inline</td><td>Max</td><td>Flan-PaLM-62B</td><td>64.0</td><td>48.8</td></tr>
<tr><td>Inline</td><td>Average</td><td>Flan-PaLM-62B</td><td>59.6</td><td>48.3</td></tr>
<tr><td>Inline</td><td>Max</td><td>Flan-PaLM-540B</td><td>76.2</td><td>70.8</td></tr>
<tr><td>Inline</td><td>Average</td><td>Flan-PaLM-540B</td><td>75.3</td><td>64.5</td></tr>
</table>
", + "image_path": "0fa32e1c134f1cd3d2d3f844e902ee3bfb3faa979ff62b8eb735b3636532e218.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 155, + 663, + 437, + 676 + ], + "lines": [ + { + "bbox": [ + 155, + 663, + 437, + 676 + ], + "spans": [ + { + "bbox": [ + 155, + 663, + 437, + 676 + ], + "type": "text", + "content": "Table 14: Aggregations by few-shot prompt of the results in Table 10." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "text", + "content": "13878" + } + ] + } + ], + "index": 2 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 22 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2023/A Computational Interface to Translate Strategic Intent from Unstructured Language in a Low-Data Setting/0ecff77c-66e5-47c5-93be-49d90731c30d_content_list.json b/2023/A Computational Interface to Translate Strategic Intent from Unstructured Language in a Low-Data Setting/0ecff77c-66e5-47c5-93be-49d90731c30d_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..b348504ecbf2f10f6bdce06e1a2b6e0d4d43db63 --- /dev/null +++ b/2023/A Computational Interface to Translate Strategic Intent from Unstructured Language in a Low-Data Setting/0ecff77c-66e5-47c5-93be-49d90731c30d_content_list.json @@ -0,0 +1,2788 @@ +[ + { + "type": "text", + "text": "A Computational Interface to Translate Strategic Intent from Unstructured Language in a Low-Data Setting", + "text_level": 1, + "bbox": [ + 114, + 79, + 880, + 118 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Pradyumna Tambwekar1, Lakshita Dodeja2*, Nathan Vaska3*, Wei Xu1, and Matthew Gombolay1", + "bbox": [ 
+ 178, + 124, + 821, + 159 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1School of Interactive Computing, Georgia Institute of Technology", + "bbox": [ + 228, + 159, + 771, + 175 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{2}$ Computer Science Department, Brown University", + "bbox": [ + 294, + 175, + 707, + 192 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "3Massachusetts Institute of Technology, Lincoln Laboratory", + "bbox": [ + 257, + 192, + 744, + 209 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "pradyumna.tambwekar@.gatech.edu, lakshita_dodeja@brown.edu, nathan.vaska@ll.mit.edu,{wei.xu, matthew.gombolay}@cc.gatech.edu", + "bbox": [ + 178, + 209, + 823, + 241 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 260, + 252, + 339, + 266 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Many real-world tasks involve a mixed-initiative setup, wherein humans and AI systems collaboratively perform a task. While significant work has been conducted towards enabling humans to specify, through language, exactly how an agent should complete a task (i.e., low-level specification), prior work lacks on interpreting the high-level strategic intent of the human commanders. Parsing strategic intent from language will allow autonomous systems to independently operate according to the user's plan without frequent guidance or instruction. In this paper, we build a computational interface capable of translating unstructured language strategies into actionable intent in the form of goals and constraints. Leveraging a game environment, we collect a dataset of over 1000 examples, mapping language strategies to the corresponding goals and constraints, and show that our model, trained on this dataset, significantly outperforms human interpreters in inferring strategic intent (i.e., goals and constraints) from language $(p < 0.05)$ . 
Furthermore, we show that our model (125M parameters) significantly outperforms ChatGPT for this task $(p < 0.05)$ in a low-data setting.", + "bbox": [ + 141, + 281, + 460, + 651 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 114, + 665, + 258, + 680 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Effective communication is essential for the proper functioning of organizational teams. \"Commander's Intent\" is a method for developing a theory of mind utilized in many domains such as the search and rescue, pandemic response, military, etc (Mercado et al., 2016; Rosen et al., 2002; Kruijff et al., 2014). Commanders and leaders often utilize the formulation of \"Commander's Intent\" to convey the tasks that need to be accomplished and engender an understanding of the criteria for success to their subordinates (Dempsey and Chavous, 2013). Commander's Intent could similarly function as", + "bbox": [ + 112, + 690, + 489, + 883 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/ce21f28bb23cd9ce31105d2c2a8f6a2a73c430a3946f368f8d44af13693a963f.jpg", + "image_caption": [ + "Figure 1: Our work aims to facilitate humans to specify their strategy to an AI system via language. Using the board game Risk as a simulated environment, we collect language descriptions of a strategy (top-left) corresponding to a player's troop deployments (bottom-left). The player's selections are shown by the white icons, and the grey and black icons denote the troops of the two opposing players. Each strategy corresponds to a set of goals (bottom-right) and constraints (top-right) The green and orange text corresponds to the language relating to constraints and goals respectively." 
+ ], + "image_footnote": [], + "bbox": [ + 510, + 250, + 884, + 492 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "an effective scaffold to represent a human's strategic intent in a mixed-initiative interaction (Novick and Sutton, 1997). Commander's Intent provides a functionality for expert-specifiers to engender a degree of \"shared-cognition,\" between an AI-collaborator and a human-specifier, by aligning the actions of the AI system to the human-specifiers values or reward function.", + "bbox": [ + 507, + 690, + 884, + 818 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Commander's intent is formally represented by a set of goals and constraints. Goals (or preferences) are categorized as a desirable set of states or affairs that the agent intends to obtain (Moskowitz and Grant, 2009; Kruglanski, 1996) and constraints refer to conditions that are imposed on solutions", + "bbox": [ + 507, + 822, + 882, + 917 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "*These authors contributed to this paper while they were at Georgia Institute of Technology.", + "bbox": [ + 112, + 892, + 487, + 917 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "12801", + "bbox": [ + 475, + 927, + 522, + 940 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 12801-12819", + "bbox": [ + 208, + 945, + 786, + 958 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "December 6-10, 2023 ©2023 Association for Computational Linguistics", + "bbox": [ + 277, + 958, + 719, + 971 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "formulated by an agent (Nickles, 1978). Translating unstructured language-based strategy into this machine-readable specification is a non-trivial challenge. This translation could be conducted via a human interpreter, however, interpreters with the requisite expertise will not always be available. 
Alternatively, humans could utilize a structured interface to specify their intent. However, interfaces can become overly complicated, and humans become demotivated to work with an AI system when they cannot easily navigate the interface (Hayes, 1985). Enabling humans to express their strategic intent in everyday language provides an effective solution to these issues.", + "bbox": [ + 112, + 84, + 492, + 311 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this paper, we develop an approach to solve a task we call automatic strategy translation, wherein we learn to infer strategic intent, in the form of goals and constraints, from language. Prior work has developed methods to utilize language to specify policies of an AI agent (Tambwekar et al., 2021; Gopalan et al., 2018; Thomason et al., 2019; Blukis et al., 2019) or specify reward functions or tasks which can be optimized for, via reinforcement learning (RL) or a planner (Gopalan et al., 2018; Padmakumar et al., 2021; Silva et al., 2021a). However, our work is the first to translate language into goals and constraints, which can be applied towards constrained optimization approaches for directing agent behavior independent of the original human specifier. Unlike prior work, we focus on interpreting language description of complex gameplay strategies, rather than simple individual commands (e.g., \"move from A to B; open the door\").", + "bbox": [ + 115, + 317, + 490, + 623 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "First, we collect a dataset of over 1000 examples mapping language to goals and constraints, leveraging a game environment of Risk. Next, we fine-tuned a pretrained RoBERTa model (Liu et al., 2019), equipped with model augmentations and customized loss functions such as Order-Agnostic Cross Entropy (Du et al., 2021), to infer goals and constraints from language strategy specifications. Finally, we employ a human evaluation to test our approach. 
Recent work has shown that automated evaluation metrics for language models may provide a misleading measure of performance (Liang et al., 2022). Therefore, we design a head-to-head evaluation, whereby, we can directly compare our model to the average human interpreter. In addition to humans, we prompted ChatGPT to perform the same task on a held-out set of 30 examples. We computed the statistical difference between our", + "bbox": [ + 112, + 629, + 490, + 919 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "model and these baselines, providing a concrete measure of the relative efficacy of our approach. Our contributions are as follows:", + "bbox": [ + 507, + 84, + 884, + 131 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- We propose one of the first complete machine learning pipelines including data collection, augmentation and model training for inferring structured strategic intent from human language.", + "- Through a human study, we show that our proposed approach can interpret goals and constraints from language descriptions better than the average human $(p < 0.001)$ .", + "- Through in-context learning, we evaluate ChatGPT's performance to gauge the relative efficacy of our approach, and show that our approach significantly outperforms ChatGPT (p < 0.05)." 
+ ], + "bbox": [ + 515, + 143, + 885, + 353 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2 Related Work", + "text_level": 1, + "bbox": [ + 509, + 362, + 665, + 378 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "This section covers prior work on learning strategies from language, as well as methods and datasets to enable humans to specify AI-behavior in a mixed-initiative setting.", + "bbox": [ + 507, + 388, + 885, + 453 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.1 Learning strategies from Language", + "text_level": 1, + "bbox": [ + 507, + 464, + 833, + 479 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "A common approach for specifying strategies through language has been through encoding language instructions, via planning-based representation languages, such as PDDL or LTL (Williams et al., 2018; Bahdanau et al., 2018; Thomason et al., 2019; Tellex et al., 2020), or deep learning (Fu et al., 2019; Blukis et al., 2019; Gopalan et al., 2018). Such formulations facilitate the ability to constrain actions taken by the agent to the instruction specified, e.g. \"Go around the tree to your left and place the ball.\" Another popular alternative is language-conditioned learning, where language is employed to specify a reward function, or a task (Silva et al., 2021a; Goyal et al., 2019; Andreas et al., 2017; Shridhar et al., 2022). Such approaches seek to improve the ability of an agent to complete a task(s) through intermediate language inputs, such as \"take the ladder to your left\". However, these approaches do not allow a supervisor to specify their strategic intent, such that the agent can complete it's primary task while still adhering to the specifier's plan. 
Recent work proposed a novel approach to mapping language to constraints and rewards via a dependency tree (Rankin et al., 2021), however their approach relies on a pre-trained grammar to extract a dependency tree, thus may not scale to human-like language.", + "bbox": [ + 507, + 485, + 885, + 919 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "12802", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Formally, the process of optimizing AI systems given goals and constraints has been broadly categorized as Seldonian Optimization (Thomas et al., 2019, 2017). In this framework, the goal is to optimize the priorities of an objective function while adhering to a given set of constraints as opposed to simply optimizing based on the reward or loss function. (Yang et al., 2020) proposed a Seldonian optimization approach to translate constraints into a feature representation, encoding invalid regions in the state space, which is then applied towards safe RL. However their application is restricted to learning to parse individual constraint statements such as \"Don't get too close to the water,\" rather than facilitating constraint extraction from more realistic descriptions pertaining to an entire strategy. In our work, we provide a first-of-its-kind dataset, and correspondent model, to capacitate seldonian optimization through unstructured language.", + "bbox": [ + 115, + 84, + 490, + 390 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.2 Language and Strategy Datasets", + "text_level": 1, + "bbox": [ + 115, + 407, + 413, + 423 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Prior datasets for instruction following and policy specifications are often comprised of shorter instructions describing individual tasks. In contrast, our dataset consists of larger, unstructured descriptions of strategies which may be more reflective of potential strategy descriptions from in-the-wild users. 
Recent work has published a dataset of policy descriptions which are similar to the language descriptions we collect (Tambwekar et al., 2021) - however, they describe specific policies, rather than broad strategies for a task. Other datasets look to map language to trajectories or goals states within the trajectory (Padmakumar et al., 2021; Misra et al., 2018; Suhr et al., 2019). These datasets typically serve as a means of replacing physical demonstrations with language. These datasets lack explicit goals and constraints corresponding to the language collected, that can be applied towards seldonian optimization. Recent work provided a dataset with constraint statements (Yang et al., 2020) which are designer-specific; however, each constraint is associated with an isolated statement, making it unclear whether this approach will generalize to unprompted language describing multiple constraints. Unlike prior work, our dataset provides the ability to apply Seldonian optimization approaches from unstructured language. Furthermore, we conduct a study wherein we provide a human and ChatGPT baseline for our dataset to highlight the challenging nature of this task.", + "bbox": [ + 115, + 437, + 489, + 917 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3 Natural Language Strategies in RISK", + "text_level": 1, + "bbox": [ + 512, + 83, + 865, + 99 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Our work aims to facilitate humans to specify their strategy or commander's intent to an AI system via language. 
In this section, we utilize the board game Risk to create a dataset that maps unstructured natural language descriptions of strategies to actionable intent in the form of goals and constraints.", + "bbox": [ + 512, + 112, + 880, + 208 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1 Board Game - RISK", + "text_level": 1, + "bbox": [ + 512, + 229, + 717, + 244 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Risk (Gibson et al., 2010) is a multiplayer strategy board game of diplomacy, conflict, and conquest, which was first invented in 1957. The gameplay of Risk consists of four phases: Draft, Recruit, Attack, and Move. The draft phase is conducted at the start of the game wherein each player drafts an initial set of continents and deploys a fixed number of troops onto those continents. This allocation of troops is a crucial participatory task (Muller and Kuhn, 1993) which involves humans reasoning about their strategy and setting up for the rest of the game. Participants may choose any of the empty territories on the map to deploy their troops, with a wide range of strategies that may depend on their opponent's troop allocation. For example, a more conservative player may draft troops to only one continent for better defense, whereas a player with a more aggressive strategy may choose to spread out their troops. After the draft phase, each subsequent turn for a player involves iteratively conducting the recruit, attack, and move phases. Further details about Risk can be found in Appendix-I.", + "bbox": [ + 512, + 256, + 882, + 609 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In our setting, we use a map layout that has 5 continents with a total of 21 territories/countries, as illustrated in Figure 1. Instead of real country names used in the Risk game, we use ad-hoc names for each continent (e.g., Red, Green, Blue, etc.) to mitigate participant bias. In the draft phase, each player takes turns to deploy 14 troops. 
The specific set of tasks that humans need to complete for our study includes: (i) develop a strategy for Risk and deploy 14 troops after the two opposing players have completed their draft; (ii) provide six goals (on a 200-point scale) and up to eight constraints that were relevant to their allocation of troops and broader intents; (iii) use natural language to describe their overall strategy and the goals and constraints they considered. The troops of the opposing players are shown to the participants prior to completing these tasks. More details about this data collection process are discussed in Section 3.3.", + "bbox": [ + 512, + 614, + 882, + 917 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "12803", + "bbox": [ + 478, + 928, + 524, + 940 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.2 Task Definition", + "text_level": 1, + "bbox": [ + 114, + 84, + 280, + 98 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Our goal is to develop a computational interface capable of inferring strategic intent from unstructured language descriptions of strategies. Formally, we define the task of Automatic Strategy Translation as follows: Given the troop deployments $S$ , a map $M$ , and the strategy $W$ , which is a paragraph written in natural language, our task is to automatically derive a set of goals $G$ and constraints $C$ . The troop selections $S$ include the name and number of troops for each territory drafted by the player. We have a total of 6 predefined goals, each of which takes a numeric value in $[-100, 100]$ . This numeric value corresponds to whether the goal positively or negatively aligns with the strategy. For example, for the goal \"maximize battles\", 100 implies that the player intends to battle as much as possible, and -100 implies that the player intends to battle as infrequently as possible. Each constraint comprises a class and a value.
We restrict the number of possible constraints to 8 as a reasonable upper bound per strategy. To summarize, each example $\langle M, W, S, C, G \rangle \in \mathcal{D}$ consists of a strategy $W$ described in natural language, for a player's troop selections, $S$ , on a map, $M$ , from which $C$ and $G$ are the gold standard constraints and goals.", + "bbox": [ + 115, + 108, + 490, + 510 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.3 Data Collection", + "text_level": 1, + "bbox": [ + 112, + 525, + 284, + 539 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "We collected a dataset $\mathcal{D}$ of 1053 unique examples by recruiting participants on Amazon Mechanical Turk and Prolific (pro, 2014). First, to familiarize participants with the game, we designed a tutorial that provided a description and annotated examples to explain the rules of the game and the tasks that participants needed to perform. As a further measure to improve data quality, participants were quizzed on the rules of Risk to reinforce their understanding (the full quiz is provided in §A.2). They were given three attempts to answer correctly, after which they were shown the answers. Upon completing the quiz, participants began the task. We showed participants a map, which shows the drafted troops of the two opposing players, and asked them to provide their own troop deployments. Following their draft, participants are asked to provide the goals and constraints they considered for their gameplay strategy/deployments and finally provide a language description of their strategy. The language strategy they provided needed to have at least 200 characters. Each participant was asked to repeat this task 5 times to create 5 data points,", + "bbox": [ + 115, + 549, + 489, + 919 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "each time with a different map.
The maps seen by participants were selected from a set of 15 unique initial troop settings.", + "bbox": [ + 507, + 84, + 880, + 131 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Participants needed approximately 10 minutes per data point. Figure 1 depicts the format of our dataset. Our dataset included data from 230 participants. The average length of language descriptions in our dataset was 99.21 words, and the overall vocabulary size was 2,356 words. Additional details regarding our data collection protocol are available in Appendix A.", + "bbox": [ + 507, + 133, + 884, + 261 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "4 Automatic Strategy Translation", + "text_level": 1, + "bbox": [ + 507, + 277, + 816, + 293 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Following the data collection in Section 3, our goal is to leverage this dataset to develop a model that can perform the task of automatic strategy translation. Inferring strategic intent from language is a non-trivial endeavor, as unstructured language can be vague, leading to ambiguous interpretations. We seek to develop an approach capable of performing this task better than the average human, so as to enable strategy specification via language, reducing the potential for human error and the need for third-party expert interpreters. In this section, we cover the technical details that make this task possible in a low-data setting.", + "bbox": [ + 507, + 303, + 884, + 513 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "4.1 Text Encoder", + "text_level": 1, + "bbox": [ + 507, + 526, + 662, + 539 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "We adopted the pretrained RoBERTa model (Liu et al., 2019) as our encoder, which is parameterized by $\theta$ .
The input sequence to our model comprises the language description of the strategy, $W = [w_{1}, w_{2}, \ldots, w_{|W|}]$ , and troop selections $S = [s_{1}, s_{2}, \ldots, s_{|S|}]$ , where each troop selection comprises the country name along with the number of troops placed on that country (e.g., $S = [Red\_A = 2, Red\_C = 8, Purple\_D = 4]$ ). The encoder learns the embedding function, which maps the text input, comprised of the strategy $W$ and selections $S$ , to a $d$ -dimensional real-valued vector, which is then used to predict goals ( $\S 4.2$ ) and constraints ( $\S 4.3$ ).", + "bbox": [ + 507, + 548, + 882, + 772 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Ordinarily, the final embedding for the single [CLS] token learned by RoBERTa, i.e., $E_{\theta} = BERT_{[CLS]}(W,S)$ , is used for classification. In this work, we incorporate multiple classification tokens (Chang et al., 2023), each of which corresponds to an individual goal or constraint. For the $i$ th goal or constraint, we learn a separate classification embedding, $E_{\theta}^{i} = BERT_{[CLS_{i}]}(W,S)$ . Using individual class-specific tokens gives the model", + "bbox": [ + 507, + 774, + 884, + 919 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "12804", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/6a01acf28d3193848848e84979e9032a2cce94525988538bceebcd6133cac663.jpg", + "image_caption": [ + "Figure 2: Illustration of our Automatic Strategy Translation model. The input to the model includes the classification tokens, language description, and troop selections (Section 4.1). The encoder then generates embeddings for each classification token, and passes them onto an individual classification head. Each classification head is a fully-connected layer that predicts a probability distribution for the respective goal ( $\S 4.2$ ) or constraint ( $\S 4.3$ )."
+ ], + "image_footnote": [], + "bbox": [ + 168, + 80, + 803, + 269 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "the capability to learn different attention weights corresponding to the classification embeddings for each goal or constraint. We utilize different encoders for predicting goals and constraints, which are parameterized by $\theta_{g}$ and $\theta_{c}$ , respectively.", + "bbox": [ + 112, + 363, + 487, + 447 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.2 Goal Extraction Model", + "text_level": 1, + "bbox": [ + 112, + 455, + 341, + 469 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We treat the subtask of deriving goals from language as an ordinal classification task. In our dataset, goals are originally specified as continuous values in $[-100, 100]$ , which we discretize by creating 5 uniform buckets, i.e., $[-100, -60)$ , $[-60, -20)$ , etc. That is, for each goal, we predict a bucket assignment via 5-class classification:", + "bbox": [ + 112, + 475, + 489, + 588 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nP_{j} = L_{\phi_{j}} \left(E_{\theta_{g}}^{j}\right), \tag{1}\n$$\n", + "text_format": "latex", + "bbox": [ + 236, + 596, + 487, + 618 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $P_{j}$ represents the probability distribution across assignments for the $j$ th goal and $E_{\theta_g}^j$ corresponds to the embedding from the encoder. Each goal uses a separate classification layer $L$ parameterized by $\phi_j$ .
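As a concrete illustration of Equation 1 and the multi-[CLS] scheme, the following NumPy sketch uses random matrices in place of the fine-tuned RoBERTa encoder and its learned heads; the toy sequence length and weight scale are our assumptions, not the paper's:

```python
import numpy as np

# Minimal sketch of Equation 1: one classification head per goal, each
# reading its own [CLS_i] embedding. Random matrices stand in for the
# fine-tuned encoder; shapes follow the paper (6 goals, 5 buckets).
N_GOALS, N_BUCKETS, D = 6, 5, 768

rng = np.random.default_rng(0)

def softmax(x):
    z = np.exp(x - x.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)

# Stand-in encoder output: the first N_GOALS positions are the [CLS_i]
# tokens, followed by the tokenized strategy W and troop selections S.
seq_len = N_GOALS + 40  # toy length (assumption)
H = rng.standard_normal((seq_len, D))

# A separate linear head L_phi_j per goal.
heads = [rng.standard_normal((D, N_BUCKETS)) * 0.02 for _ in range(N_GOALS)]

# P_j = L_phi_j(E_theta_g^j): head j reads only the j-th [CLS_j] embedding.
P = np.stack([softmax(H[j] @ heads[j]) for j in range(N_GOALS)])
buckets = P.argmax(axis=-1)  # one of the 5 ordinal buckets per goal
```

Because each head attends to its own classification embedding, the six goal predictions are decoupled while still sharing one encoder pass.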
The goal extraction model is trained on a dual-criteria loss function that combines cross-entropy (CE) and mean-squared-error (MSE) loss:", + "bbox": [ + 112, + 624, + 489, + 739 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\mathcal{L}_{\text{goal}} = \alpha \mathcal{L}_{CE} + (1 - \alpha) \mathcal{L}_{MSE}, \tag{2}\n$$\n", + "text_format": "latex", + "bbox": [ + 173, + 747, + 487, + 766 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\alpha$ is a simple weighting hyperparameter. The addition of MSE loss helps to account for the ordinal nature of goal value predictions.", + "bbox": [ + 112, + 774, + 487, + 822 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.3 Constraint Extraction Model", + "text_level": 1, + "bbox": [ + 112, + 832, + 389, + 847 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Similar to the goal extraction model, the input to each classification head for constraint prediction is $E_{\theta_c}^k$ , which corresponds to the classification embedding learned by the encoder for the $k^{th}$ constraint.", + "bbox": [ + 112, + 853, + 489, + 917 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "However, unlike in the goal extraction model, each of the eight constraint classification heads learns to predict the constraint itself rather than a value for a fixed goal. Therefore, the model needs to predict the set of unordered constraints $\{c_1, c_2, \ldots, c_8\}$ , wherein each $c_k$ is predicted from the set of all possible constraints $C$ (190 total possible constraints). Each strategy can have at most eight constraints; accordingly, the set $C$ includes a null value.", + "bbox": [ + 507, + 363, + 884, + 508 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "While providing constraints during data collection, participants merely assigned constraints to their strategy but did not rank the ordering of constraints.
As such, the order of constraints in our dataset does not necessarily correspond to the order in which each classification head needs to predict the constraints. Therefore, each classification head does not have a strict label it can utilize to compute a classification loss, making this task distinct from conventional sequence prediction or multiclass classification tasks. For instance, if the constraints predicted by the model are $\\{C,\\emptyset ,B,D\\}$ and the labels for this strategy are $\\{A,B,C,\\emptyset \\}$ , utilizing a standard classification loss function, such as cross-entropy, would result in a higher loss than what is representative of the prediction, as three out of four constraints have been predicted correctly. As such, this task requires a loss function that allows us to train our model to predict the correct constraints for a language strategy agnostic of the ordering of the labels. We chose to adopt a recently proposed loss function called Order-Agnostic Cross Entropy (OaXE) (Du et al., 2021). Intuitively, OaXE is defined as the cross entropy for the best possible alignment of output tokens.", + "bbox": [ + 507, + 517, + 884, + 919 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "12805", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/ebaba61375dbc6c286c471dabf32dbdc274ca8b42d201e4a03cb5df8ea25f89d.jpg", + "image_caption": [ + "Figure 3: Pipeline for augmenting synthetic or human-created data ( $\\S 4.4$ ). A strategy description is first split into sentences, then passed into the PEGASUS (Zhang et al., 2020) paraphrasing model and data quality filter." 
+ ], + "image_footnote": [], + "bbox": [ + 114, + 80, + 884, + 190 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Let $O = \{O_1, O_2, \ldots, O_{|O|}\}$ be the ordering space of all possible orderings of the target sequence of constraints, where each $O_l$ is one possible ordering of the target tokens. The final loss function is computed as:", + "bbox": [ + 112, + 252, + 489, + 332 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\mathcal{L}_{OaXE} = -\log P \left(O^{*} \mid X\right) \tag{3}\n$$\n", + "text_format": "latex", + "bbox": [ + 201, + 347, + 487, + 363 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "where $O^{*}$ represents the best possible alignment from $O$ . This alignment is computed by applying the Hungarian algorithm, after casting this problem as maximum bipartite matching (Du et al., 2021). As our final loss function, we follow Du et al. (2021) in combining OaXE with cross-entropy loss:", + "bbox": [ + 112, + 376, + 489, + 474 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\mathcal{L}_{\text{constraint}} = T_{m} * \mathcal{L}_{CE} + (1 - T_{m}) * \mathcal{L}_{OaXE} \tag{4}\n$$\n", + "text_format": "latex", + "bbox": [ + 122, + 487, + 487, + 504 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "where $T_{m}$ is a temperature parameter that is logistically annealed from 1 to 0. In our case, cross entropy $(\mathcal{L}_{CE})$ is computed using the default ordering of labels in our dataset.", + "bbox": [ + 112, + 516, + 489, + 581 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.4 Data Augmentation Methods", + "text_level": 1, + "bbox": [ + 112, + 592, + 386, + 607 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Finally, we applied data augmentation procedures to improve our model's performance.
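Before moving to data augmentation, the order-agnostic constraint loss of Equations 3 and 4 can be sketched concretely. This toy version uses 4 constraint slots and a 6-way vocabulary so the best ordering $O^{*}$ can be found by brute force over permutations; Du et al. (2021) obtain the same bipartite matching with the Hungarian algorithm. All numeric values below are illustrative assumptions:

```python
import itertools
import math

def cross_entropy(probs, labels):
    """Sum of -log p(label) over aligned (slot, label) pairs."""
    return -sum(math.log(probs[h][l]) for h, l in enumerate(labels))

def oaxe_loss(probs, label_set):
    """Cross entropy under the best ordering O* of the unordered labels."""
    return min(cross_entropy(probs, perm)
               for perm in itertools.permutations(label_set))

# Toy predicted distributions for 4 constraint slots over a 6-way
# vocabulary (index 5 = the null constraint); values are made up.
probs = [
    [0.05, 0.05, 0.60, 0.10, 0.10, 0.10],  # slot 0 is confident in C
    [0.10, 0.10, 0.10, 0.10, 0.10, 0.50],  # slot 1 predicts null
    [0.50, 0.20, 0.10, 0.10, 0.05, 0.05],  # slot 2 is confident in A
    [0.10, 0.40, 0.20, 0.20, 0.05, 0.05],  # slot 3 leans toward B
]

labels = (0, 1, 2, 5)  # unordered gold set {A, B, C, null}
default_ce = cross_entropy(probs, labels)  # penalizes the arbitrary order
oaxe = oaxe_loss(probs, labels)            # rewards the best alignment

# Annealed combination from Equation 4 (T_m goes from 1 to 0 in training).
T_m = 0.5
loss = T_m * default_ce + (1 - T_m) * oaxe
```

Since the default ordering is itself one of the candidate orderings, the order-agnostic term can never exceed the plain cross entropy, which is what makes annealing between the two well behaved.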
First, we randomly generated 4000 unique sets of goals and constraints, and applied a text template to produce descriptions to develop a Synthetic (S) training corpus. For example, the constraint \"I must have troops on Red\" could be represented as \"My strategy is to take over Red,\" or \"I need a large army on Red,\" or \"I need to place troops on Red.\" We further augmented this synthetic corpus with a pretrained PEGASUS (Zhang et al., 2020) paraphrasing model to create an Augmented-Synthetic (AS) dataset. We split each language description from the synthetic corpus into individual sentences and employed the paraphrasing model to generate candidate paraphrases. Sentences that replaced important keywords, such as continent names, or were too similar to the original sentence in terms of edit distance were removed. We randomly chose", + "bbox": [ + 112, + 613, + 489, + 919 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "a sentence from the remaining candidates as a replacement sentence, and combined the replacement sentences to form an augmented data point (see Figure 3). The two Synthetic datasets (S, AS) were used to pretrain our model prior to training on human data. The same techniques were also applied to our human dataset to form an Augmented-Human dataset (AH). Our final Augmented-Human dataset is a version of our original crowdsourced dataset where each example is rephrased using our augmentation pipeline, and it is twice the size of our original human dataset. We experiment with utilizing the AH dataset in place of the original human dataset to see if the added diversity in our corpus through paraphrasing improves downstream performance.
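The candidate-paraphrase filter described above can be sketched as follows. The keyword list, the 0.9 similarity cutoff, and the use of difflib's ratio as a stand-in for an edit-distance threshold are our illustrative assumptions, not the paper's exact settings:

```python
import difflib

# Ad-hoc continent names used as must-preserve keywords (assumption).
KEYWORDS = {"Red", "Green", "Blue", "Purple", "Yellow"}

def _keywords(sentence: str) -> set:
    return {w.strip(".,;!") for w in sentence.split()} & KEYWORDS

def keep_candidate(original: str, candidate: str, max_sim: float = 0.9) -> bool:
    # Reject paraphrases that drop or alter important keywords.
    if _keywords(original) != _keywords(candidate):
        return False
    # Reject paraphrases that are near-duplicates of the original sentence.
    sim = difflib.SequenceMatcher(None, original, candidate).ratio()
    return sim < max_sim

orig = "I need to place troops on Red."
keep_candidate(orig, "My plan is to build a large army on Red.")  # kept
keep_candidate(orig, "I need to place troops on Blue.")           # keyword changed
keep_candidate(orig, "I need to place troops on Red!")            # too similar
```

Candidates passing both checks are then sampled to assemble the replacement sentences for an augmented data point, as in Figure 3.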
Examples of Synthetic (S), Augmented-Synthetic (AS), and Augmented-Human (AH) data are provided in Figure 6 in the Appendix.", + "bbox": [ + 507, + 253, + 884, + 542 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "5 Experiments", + "text_level": 1, + "bbox": [ + 507, + 554, + 655, + 571 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "This section will present the empirical evaluations of our approach. We design two evaluation experiments to contrast our model's performance with humans, as well as ChatGPT trained to perform our task through in-context learning. Both human and ChatGPT performance was computed using the 30 held-out examples in our test set. We statistically measure the difference in the average number of goals/constraints predicted correctly per data point between our model and the two baselines (Human + ChatGPT). We conclude with an ablation analysis across the model and data augmentations utilized in this approach.", + "bbox": [ + 505, + 581, + 882, + 790 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "5.1 Human Performance", + "text_level": 1, + "bbox": [ + 507, + 801, + 721, + 815 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "In our first study, we ask how well the average human can perform on the task of parsing strategic intent from language (see Table 1). We recruited 114 participants for our study from Prolific. Participants begin with a tutorial of the task and are provided an annotated example explaining how to", + "bbox": [ + 507, + 822, + 884, + 919 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "12806", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/23e8047ff7ae2deb553702cacf0680411b732ecac722fd3ec8d5877fce21afaf.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Baseline | Goals (Total = 6) | Constraints (Total = 8)
Model (Ours) | 2.76 ± 1.05 | 5.53 ± 1.26
Human | 1.87 ± 1.12 | 4.28 ± 1.83
ChatGPT | 2.10 ± 1.27 | 3.80 ± 1.51
", + "bbox": [ + 119, + 80, + 485, + 141 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "assign goals and constraints given a language description and map. Following this tutorial, each participant is provided three randomly selected maps and language descriptions from our test set of 30 unique data points and is asked to annotate the goals and constraints for each given strategy. Our study included attention checks to ensure participants who were submitting random responses could be excluded. The average time taken for our study was 21 minutes, and participants were paid $3.6 for completing our task. We utilized a data filtering rubric to identify and remove individual data points which were inadequate or were from participants who appeared to blatantly ignore or misunderstand the instructions. The rubric is included in Appendix F. After filtering, a total of 270 responses remained.", + "bbox": [ + 112, + 209, + 490, + 467 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.2 ChatGPT Performance", + "text_level": 1, + "bbox": [ + 112, + 478, + 344, + 492 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We also evaluate ChatGPT (GPT-3.5 Default) as a baseline for our task (see Table 1). We design a 1000-word language prompt to train ChatGPT to perform the same task (see full prompt in Appendix G.1). This prompt includes a description of the environment and task, as well as an annotated example translating goals and constraints from language. Crucially, we design our prompt such that ChatGPT receives the same information that humans receive in our study in §5.1. Following this prompt, we iteratively input each strategy and troop deployment in our test set and store the constraints selected by ChatGPT. 
The additional prompt engineering we conduct is to notify ChatGPT when it makes formatting mistakes while predicting constraints, such as predicting more than the maximum number of constraints or creating new constraint classes.", + "bbox": [ + 112, + 499, + 489, + 788 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.3 Results for Goal Extraction", + "text_level": 1, + "bbox": [ + 112, + 801, + 376, + 816 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "The average number of goals predicted correctly per map can be seen in the first column of Table 1. We applied multivariate linear regression to compare the results of our model with our ChatGPT and human baselines, with the Akaike information criterion (AIC) as our Occam's razor. AIC is a mathematical", + "bbox": [ + 112, + 822, + 490, + 917 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/1ca1827b36b5d3a298e1667ca1ba6f14a539e7029235d0bca29fa5e50e3ffc0f.jpg", + "table_caption": [ + "Table 1: Mean and standard deviations for the number of correct predictions of each approach." + ], + "table_footnote": [], + "table_body": "
Model Type | Data | Pretraining | Accuracy (Std)
RoBERTa base | - | - | 44.37 (1.33)
w/ troop | AH | AS | 46.04 (1.85)
w/ troop + [CLS_i] | AH | AS | 45.52 (1.48)
w/ troop + [CLS_i] | AH | S | 45.32 (1.01)
w/ troop + [CLS_i] | AH | - | 45.89 (1.26)
w/ [CLS_i] | AH | AS | 44.29 (1.14)
w/ troop + [CLS_i] | H | - | 45.07 (1.33)
", + "bbox": [ + 510, + 82, + 884, + 199 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/704680a6d53e65d7f75bf301abc229b00e6438a923c23630a3784b54987aaa1a.jpg", + "table_caption": [ + "Table 2: Ablation study (10-fold cross-validation) with respect to model and data augmentations for goal extraction. H: the human-created dataset (\\$3.3); S: the synthetic dataset created from templates; AH/AS: the augmented version of H/S via paraphrasing (\\$4.4). $[\\mathrm{CLS}_i]$ represents the use of individual classification tokens for each goal/constraint (\\$4.1); \"troop\" represents the inclusion of troop selections as a part of the input." + ], + "table_footnote": [], + "table_body": "
Model | Data | Pretraining | Accuracy (Std)
RoBERTa base | H | - | 62.60 (1.60)
w/ troop + [CLS_i] | H | S | 68.21 (1.08)
w/ troop + [CLS_i] | AH | S | 67.79 (1.58)
w/ troop + [CLS_i] | H | AS | 67.09 (1.28)
w/ troop | H | S | 65.96 (1.12)
w/ troop + [CLS_i] | H | - | 65.76 (1.13)
w/ troop + [CLS_i] | AH | - | 65.52 (1.42)
w/ [CLS_i] | H | S | 65.31 (1.12)
", + "bbox": [ + 510, + 338, + 884, + 467 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Table 3: Ablation study (10-fold cross-validation) for constraint extraction.", + "bbox": [ + 507, + 478, + 882, + 506 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "method for determining a model-fit so as to choose the regression model which best fits our data. For the goals model, we modeled each baseline (human vs. model vs. ChatGPT) as a fixed effects co-variate, and the datapoint number as a mixed effects variable. The datapoint corresponded to the numerical index (between 1 - 30) of the datapoint from the test set. We performed the Levene's test (Glass, 1966) to show homoscedasticity $(F(2,327) = 0.5435$ , $p = 0.581)$ . The residuals for our model were not normally distributed; however, prior work has shown that an F-test is robust to non-normality (Blanca Mena et al., 2017; Cochran, 1947). Therefore, we proceeded with our linear regression analysis. The dependent variable within our analysis was the number of goals predicted correctly. An ANOVA with respect to our dependent variable yielded a significant difference across conditions $(F(2,299.95) = 10.605$ , $p < 0.001)$ . 
A Tukey post-hoc test (Abdi and Williams, 2010) for pairwise significance further revealed a significant difference between the performance of our model vs. humans $(p < 0.001)$ and vs. ChatGPT $(p < 0.05)$ , i.e., our approach predicted", + "bbox": [ + 505, + 533, + 885, + 919 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "12807", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "goals significantly better than humans and ChatGPT.", + "bbox": [ + 114, + 84, + 410, + 99 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5.4 Results for Constraint Extraction", + "text_level": 1, + "bbox": [ + 112, + 120, + 421, + 134 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "The average number of constraints predicted correctly per map can be seen in column 2 of Table 1. To compare our constraint prediction model to our human and ChatGPT baselines, we conducted a non-parametric Friedman's test (Pereira et al., 2015). We could not employ a multivariate regression analysis, as the regression model for constraints did not satisfy the assumption of homoscedasticity as per Levene's test $(F(2,327) = 5.4294, p < 0.01)$ . Friedman's test yielded a significant difference across conditions for the task of predicting constraints $(\chi^2 (2,90) = 16.768, p < 0.001)$ .
A further pairwise Wilcoxon signed-rank test (Woolson, 2007) revealed a significant difference between humans and our model $(p < 0.001)$ as well as ChatGPT and our model $(p < 0.001)$ , indicating that our approach significantly outperforms not just humans but also ChatGPT for inferring constraints from language.", + "bbox": [ + 112, + 146, + 489, + 451 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5.5 Discussion", + "text_level": 1, + "bbox": [ + 112, + 470, + 243, + 486 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Our results emphasize that inferring strategic intent from language is a non-trivial task, as language interpretation can be subjective and malleable. ChatGPT is capable of performing novel tasks such as text classification (Li et al., 2023), mathematical problem solving (Frieder et al., 2023), and information extraction (He et al., 2023) through in-context learning. However, despite these capabilities, our model was found to significantly outperform ChatGPT for inferring strategic intent from language. Success in highly specific and complex language interpretation tasks, such as ours, requires the model to build an understanding of the domain and the task itself, as the generic language interpretation learned by the majority of pretrained language models may not be applicable.", + "bbox": [ + 112, + 498, + 489, + 755 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Recent work on evaluating open question-answering on a challenge dataset has shown that even for large-scale language models with between 6B and 100B parameters, none of these models outperformed humans (Peinl and Wirth, 2023).
By developing a computational interface that can infer strategic intent from language significantly better than humans, we show the usefulness of our pipeline for solving complex, domain-specific tasks in a low-data, low-resource setting.", + "bbox": [ + 112, + 758, + 489, + 919 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/aea04a342117037367786e3345f5ff6cbea2f8af54cd266add4e8de766652812.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Baseline | Constraints | Goals
RoBERTa-base (Best) | 68.21 (1.08) | 46.04 (1.85)
GPT-Neo 125M (Best) | 65.22 (1.21) | 46.08 (0.73)
", + "bbox": [ + 510, + 80, + 884, + 136 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Table 4: This table depicts the performance when the roberta-base encoder is substituted with a SOTA autoregressive model, i.e. GPT-Neo (125 million parameters).", + "bbox": [ + 507, + 149, + 882, + 193 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5.6 Ablation Study", + "text_level": 1, + "bbox": [ + 507, + 218, + 673, + 233 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Tables 3 and 2 provide the results from abating each model augmentation discussed in Section 4. The effects of these augmentations are more prominent in the model for predicting constraints ( $\\sim$ 6% performance boost) than predicting goals ( $\\sim$ 1.5% performance boost). For the constraints model, when any parameter, i.e. troop selections, pretraining, or CLS-Token, were removed, the accuracy dropped by $\\sim$ 3% individually. For predicting goals, the inclusion of troop selections was the only model-augmentation which seemed to have a decisive impact performance, as all models with selections had an accuracy of $\\sim$ 1% more than those without. We attribute the difficulty in improving the performance of the goals model to the contextual ambiguity for values assigned to each goal. Participants may not always follow the same metric while specifying goal values. Each participant could have a unique interpretation, for what any rating between -100 to 100 means for a particular goal, and description of that value through language (see Appendix for the data distribution corresponding to each goal). This disparity in interpreting values could be affecting the consistency of language descriptions for goals in our dataset.", + "bbox": [ + 505, + 237, + 884, + 640 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Finally, the last ablation conducted studied the effect of the type of encoder utilized in our approach. 
Therefore, we performed a comparison with a model that replaced the encoder with a SOTA pretrained autoregressive model. We utilized GPT-Neo (Black et al., 2021) for our experiments, as it has the same number of parameters as RoBERTa-base (125 million). Our findings (see Table 4) show that utilizing an autoregressive model as our encoder offers no benefit over a RoBERTa-base model: the GPT-Neo model performed equivalently for predicting goals and about $3\%$ worse for predicting constraints.", + "bbox": [ + 507, + 640, + 882, + 848 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6 Conclusion", + "text_level": 1, + "bbox": [ + 507, + 862, + 640, + 876 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this paper, we develop a novel computational interface to automate inferring strategic intent, in the", + "bbox": [ + 507, + 887, + 882, + 917 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "12808", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "form of goals and constraints, from unstructured language descriptions of strategies. We develop a new benchmark for our dataset and broader task, and further conduct a novel head-to-head evaluation to determine the relative efficacy of our approach. We show that in a low-data setting, our approach towards inferring goals and constraints from language strategy descriptions can significantly outperform humans for the same tasks. Furthermore, we also found that our approach, with only 125 million parameters, was able to significantly outperform ChatGPT for inferring strategic intent from language.
Our work endows researchers with valuable tools to further Seldonian optimization approaches for mixed-initiative interaction.", + "bbox": [ + 115, + 84, + 485, + 324 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Future Work", + "text_level": 1, + "bbox": [ + 115, + 338, + 230, + 353 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "To measure ChatGPT performance, we employ a one-shot chain-of-thought prompt method with detailed instructions for the task. We chose this method to maintain consistency between the information shown to humans and ChatGPT. Future work may explore ablations on the size of the initial prompt or the number of annotated examples in the prompt to tune the performance of ChatGPT on our strategy translation task. Secondly, an important next step that stems from this research pertains to multi-round inference and updating the initially learned strategy. In future work, it would be helpful to develop methods to allow users to modify their initial strategy throughout the game or task as their goals or values change. These methods could utilize approaches proposed in prior work wherein language inputs were leveraged to change the sub-goals that an agent is considering (Fu et al., 2019; Goyal et al., 2019). Furthermore, recent work has shown promise for the capabilities of ChatGPT/GPT-3.5 towards dialog-state tracking and task-oriented dialog (Labruna et al., 2023; Heck et al., 2023).
Future work could also formulate this task of updating the initial strategy over the course of the game as a goal-oriented dialog, and tune GPT-3.5 or GPT-4 to update a user's initially translated strategy after multiple rounds of the game through language feedback.", + "bbox": [ + 115, + 365, + 485, + 814 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Limitations", + "text_level": 1, + "bbox": [ + 115, + 829, + 216, + 843 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Firstly, we asked participants to provide natural language descriptions after providing their structured intent in the form of goals and constraints. This potentially biased the participant towards specifically", + "bbox": [ + 115, + 854, + 485, + 917 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "referencing the terminology utilized in the goals and constraints. While our dataset provides explanations that are the closest to natural, human-like descriptions of strategies, an important next step would entail comparing how our model performs on strategies collected \"in-the-wild.\" Secondly, in this paper we assume that utilizing language is more accessible for users than learning to specify their intent to an intelligent agent directly through mathematical specifications. However, we do not test whether this assumption bears out in practice. In future work, we hope to conduct a human-subjects study to confirm this hypothesis. Finally, despite converting language to goals and constraints, in this work we do not directly train a Seldonian optimization approach. In this work, we focus on showing the capability of our machine learning pipeline in a low-data setting. However, we have provided all the components needed to train a reinforcement learning approach that constrains an RL agent's behavior through unstructured language (including a novel OpenAI Gym RL domain for the game Risk, see Appendix). 
Developing this approach is currently outside the scope of this work, and we therefore leave this exploration for future work.", + "bbox": [ + 512, + 84, + 880, + 485 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Ethics Statement", + "text_level": 1, + "bbox": [ + 512, + 500, + 658, + 514 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "As pretrained large-language models are utilized in our approach for automated strategy translation, we need to be cognizant of the prevalence of bias within these models. If these systems are translating strategies in safety-critical settings, it is important to make sure that the language models make decisions solely based on the provided context rather than any inherent bias. Many prior works have studied approaches to identify and mitigate bias (Abid et al., 2021; Silva et al., 2021b; Guo et al., 2022; Viswanath and Zhang, 2023). We encourage authors to seek out such works prior to deploying any strategy translation module in a real-world task.", + "bbox": [ + 512, + 526, + 880, + 749 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Acknowledgements", + "text_level": 1, + "bbox": [ + 512, + 764, + 678, + 778 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "This work was supported by the Office of Naval Research under awards N00014-19-1-2076, N00014-22-1-2834, and N00014-23-1-2887, and the National Science Foundation under award FMRG-2229260. We also thank Konica Minolta for their contribution to this work via a gift to the Georgia Tech Research Foundation.", + "bbox": [ + 512, + 791, + 880, + 901 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "12809", + "bbox": [ + 478, + 928, + 524, + 940 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 115, + 84, + 213, + 98 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "2014. 
Online participant recruitment for surveys and market research.", + "Herve Abdi and Lynne J Williams. 2010. Tukey's honestly significant difference (HSD) test. Encyclopedia of research design, 3(1):1-5.", + "Abubakar Abid, Maheen Farooqi, and James Zou. 2021. Persistent anti-Muslim bias in large language models. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pages 298-306.", + "Léo Andeol. 2018. Leoandeol/gym-risk: Gym environment for the Risk game by Hasbro.", + "Jacob Andreas, Dan Klein, and Sergey Levine. 2017. Modular multitask reinforcement learning with policy sketches. In International Conference on Machine Learning, pages 166-175. PMLR.", + "Dzmitry Bahdanau, Felix Hill, Jan Leike, Edward Hughes, Arian Hosseini, Pushmeet Kohli, and Edward Grefenstette. 2018. Learning to understand goal specifications by modelling reward. arXiv preprint arXiv:1806.01946.", + "Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. 2021. GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow. If you use this software, please cite it using these metadata.", + "María José Blanca Mena, Rafael Alarcón Postigo, Jaume Arnau Gras, Roser Bono Cabré, Rebecca Bendayan, et al. 2017. Non-normal data: Is ANOVA still a valid option? Psicothema.", + "Valts Blukis, Yannick Terme, Eyvind Niklasson, Ross A Knepper, and Yoav Artzi. 2019. Learning to map natural language instructions to physical quadcopter control using simulated flight. arXiv preprint arXiv:1910.09664.", + "Haw-Shiuan Chang, Ruei-Yao Sun, Kathryn Ricci, and Andrew McCallum. 2023. Multi-CLS BERT: An efficient alternative to traditional ensembling. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics.", + "William G Cochran. 1947. Some consequences when the assumptions for the analysis of variance are not satisfied. Biometrics, 3(1):22-38.", + "Richard Dempsey and Jonathan M Chavous. 2013. Commander's intent and concept of operations. 
Military Review, 93(6):58-66.", + "Cunxiao Du, Zhaopeng Tu, and Jing Jiang. 2021. Order-agnostic cross entropy for non-autoregressive machine translation. arXiv preprint arXiv:2106.05093.", + "Simon Frieder, Luca Pinchetti, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz, Philipp Christian Petersen, Alexis Chevalier, and Julius Berner. 2023. Mathematical capabilities of ChatGPT. arXiv preprint arXiv:2301.13867." + ], + "bbox": [ + 115, + 105, + 485, + 917 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Justin Fu, Anoop Korattikara, Sergey Levine, and Sergio Guadarrama. 2019. From language to goals: Inverse reinforcement learning for vision-based instruction following. arXiv preprint arXiv:1902.07742.", + "Richard Gibson, Neesha Desai, and Richard Zhao. 2010. An automated technique for drafting territories in the board game Risk. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 6(1):15-20.", + "Gene V Glass. 1966. Testing homogeneity of variances. American Educational Research Journal, 3(3):187-190.", + "Nakul Gopalan, Dilip Arumugam, Lawson Wong, and Stefanie Tellex. 2018. Sequence-to-Sequence Language Grounding of Non-Markovian Task Specifications. In Proceedings of Robotics: Science and Systems, Pittsburgh, Pennsylvania.", + "Prasoon Goyal, Scott Niekum, and Raymond J Mooney. 2019. Using natural language for reward shaping in reinforcement learning. arXiv preprint arXiv:1903.02020.", + "Yue Guo, Yi Yang, and Ahmed Abbasi. 2022. Auto-Debias: Debiasing masked language models with automated biased prompts. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1012-1023.", + "Philip J Hayes. 1985. The utility of natural language interfaces (panel session). 
In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, page 19.", + "Jiabang He, Lei Wang, Yi Hu, Ning Liu, Hui Liu, Xing Xu, and Heng Tao Shen. 2023. ICL-D3IE: In-context learning with diverse demonstrations updating for document information extraction. arXiv preprint arXiv:2303.05063.", + "Michael Heck, Nurul Lubis, Benjamin Ruppik, Renato Vukovic, Shutong Feng, Christian Geishauser, Hsien-Chin Lin, Carel van Niekerk, and Milica Gašić. 2023. ChatGPT for zero-shot dialogue state tracking: A solution or an opportunity? arXiv preprint arXiv:2306.01386.", + "Arie W Kruglanski. 1996. Goals as knowledge structures. P. M. Gollwitzer & J. A. Bargh (Eds.), The psychology of action: Linking cognition and motivation to behavior, pages 599-618.", + "Geert-Jan M Kruijff, M Janicek, Shanker Keshavdas, Benoit Larochelle, Hendrik Zender, Ninja JJM Smets, Tina Mioch, Mark A Neerincx, Jurriaan Van Diggelen, Francis Colas, et al. 2014. Experience in system design for human-robot teaming in urban search and rescue. In Field and Service Robotics, pages 111-125. Springer." + ], + "bbox": [ + 510, + 85, + 880, + 917 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "12810", + "bbox": [ + 477, + 928, + 524, + 940 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Tiziano Labruna, Sofia Brenna, Andrea Zaninello, and Bernardo Magnini. 2023. Unraveling ChatGPT: A critical analysis of AI-generated goal-oriented dialogues and annotations. arXiv preprint arXiv:2305.14556.", + "Jiazheng Li, Runcong Zhao, Yulan He, and Lin Gui. 2023. OverPrompt: Enhancing ChatGPT capabilities through an efficient in-context learning approach. arXiv preprint arXiv:2305.14973.", + "Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2022. Holistic evaluation of language models. 
arXiv preprint arXiv:2211.09110.", + "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.", + "Joseph E Mercado, Michael A Rupp, Jessie YC Chen, Michael J Barnes, Daniel Barber, and Katelyn Procci. 2016. Intelligent agent transparency in human-agent teaming for multi-UxV management. Human factors, 58(3):401-415.", + "Dipendra Misra, Andrew Bennett, Valts Blukis, Eyvind Niklasson, Max Shatkhin, and Yoav Artzi. 2018. Mapping instructions to actions in 3D environments with visual goal prediction. arXiv preprint arXiv:1809.00786.", + "Gordon B Moskowitz and Heidi Grant. 2009. The psychology of goals. Guilford press.", + "Michael J Muller and Sarah Kuhn. 1993. Participatory design. Communications of the ACM, 36(6):24-28.", + "Thomas Nickles. 1978. Scientific problems and constraints. In PSA: Proceedings of the biennial meeting of the Philosophy of Science Association, volume 1978, pages 134-148. Philosophy of Science Association.", + "David G Novick and Stephen Sutton. 1997. What is mixed-initiative interaction. In Proceedings of the AAAI spring symposium on computational models for mixed initiative interaction, volume 2, page 12.", + "Aishwarya Padmakumar, Jesse Thomason, Ayush Shrivastava, Patrick Lange, Anjali Narayan-Chen, Spandana Gella, Robinson Piramuthu, Gokhan Tur, and Dilek Hakkani-Tur. 2021. TEACh: Task-driven embodied agents that chat. arXiv preprint arXiv:2110.00534.", + "Réné Peinl and Johannes Wirth. 2023. Evaluation of medium-large language models at zero-shot closed book generative question answering. arXiv preprint arXiv:2305.11991.", + "Dulce G Pereira, Anabela Afonso, and Fátima Melo Medeiros. 2015. Overview of Friedman's test and post-hoc analysis. Communications in Statistics-Simulation and Computation, 44(10):2636-2653." 
+ ], + "bbox": [ + 115, + 85, + 485, + 917 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Ian C Rankin, Seth McCammon, and Geoffrey A Hollinger. 2021. Robotic information gathering using semantic language instructions. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 4882-4888. IEEE.", + "Joseph Rosen, Eliot Grigg, Jaron Lanier, Susan McGrath, Scott Lillibridge, David Sargent, and C Everett Koop. 2002. The future of command and control for disaster response. IEEE engineering in medicine and biology magazine, 21(5):56-68.", + "Mohit Shridhar, Lucas Manuelli, and Dieter Fox. 2022. CLIPort: What and where pathways for robotic manipulation. In Conference on Robot Learning, pages 894–906. PMLR.", + "Andrew Silva, Nina Moorman, William Silva, Zulfiqar Zaidi, Nakul Gopalan, and Matthew Gombolay. 2021a. Lancon-learn: Learning with language to enable generalization in multi-task manipulation. IEEE Robotics and Automation Letters.", + "Andrew Silva, Pradyumna Tambwekar, and Matthew Gombolay. 2021b. Towards a comprehensive understanding and accurate evaluation of societal biases in pre-trained transformers. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2383-2389.", + "Alane Suhr, Claudia Yan, Jacob Schluger, Stanley Yu, Hadi Khader, Marwa Mouallem, Iris Zhang, and Yoav Artzi. 2019. Executing instructions in situated collaborative interactions. arXiv preprint arXiv:1910.03655.", + "Pradyumna Tambwekar, Andrew Silva, Nakul Gopalan, and Matthew Gombolay. 2021. Interpretable policy specification and synthesis through natural language and RL.", + "Stefanie Tellex, Nakul Gopalan, Hadas Kress-Gazit, and Cynthia Matuszek. 2020. Robots that use language. Annual Review of Control, Robotics, and Autonomous Systems, 3:25-55.", + "Philip S Thomas, Bruno Castro da Silva, Andrew G Barto, and Emma Brunskill. 2017. 
On ensuring that intelligent machines are well-behaved. arXiv preprint arXiv:1708.05448.", + "Philip S Thomas, Bruno Castro da Silva, Andrew G Barto, Stephen Giguere, Yuriy Brun, and Emma Brunskill. 2019. Preventing undesirable behavior of intelligent machines. Science, 366(6468):999-1004.", + "Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Nick Walker, Yuqian Jiang, Harel Yedidsion, Justin Hart, Peter Stone, and Raymond J Mooney. 2019. Improving grounded natural language understanding through human-robot dialog. In 2019 International Conference on Robotics and Automation (ICRA), pages 6934-6941. IEEE." + ], + "bbox": [ + 510, + 85, + 880, + 917 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "12811", + "bbox": [ + 477, + 928, + 522, + 940 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Hrishikesh Viswanath and Tianyi Zhang. 2023. Fairpy: A toolkit for evaluation of social biases and their mitigation in large language models. arXiv preprint arXiv:2302.05508.", + "bbox": [ + 115, + 85, + 487, + 137 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Edward C Williams, Nakul Gopalan, Mine Rhee, and Stefanie Tellex. 2018. Learning to parse natural language to grounded reward functions with weak supervision. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 4430-4436. IEEE.", + "bbox": [ + 115, + 147, + 487, + 225 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Robert F Woolson. 2007. Wilcoxon signed-rank test. Wiley encyclopedia of clinical trials, pages 1-3.", + "bbox": [ + 115, + 237, + 487, + 263 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Tsung-Yen Yang, Michael Hu, Yinlam Chow, Peter J Ramadge, and Karthik Narasimhan. 2020. Safe reinforcement learning with natural language constraints. 
arXiv preprint arXiv:2010.05150.", + "bbox": [ + 115, + 273, + 487, + 325 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In International Conference on Machine Learning, pages 11328-11339. PMLR.", + "bbox": [ + 115, + 336, + 487, + 400 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "A Additional Data Collection Details", + "text_level": 1, + "bbox": [ + 115, + 426, + 442, + 441 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Our study applied participatory design principles (Muller and Kuhn, 1993) to ensure that participants were engaged in the task and provided meaningful strategy descriptions. Each participant was initially given a partially set up map, where two other \"opponents\" had placed their troops. The participant was then asked to provide their troop placements, based on these initial placements. In Risk, the initial troop placements have a substantial impact on the strategies that a player can pursue for the rest of the game. As such, troop initialization provides a stand-in for a player's overall strategy in a game. By asking participants to take part in an actual aspect of the gameplay, e.g., deploying troops, we encouraged them to envision future situations, think about how their decisions could affect future gameplay, and develop grounded strategies that could actually function as viable Risk gameplay strategies.", + "bbox": [ + 115, + 451, + 487, + 757 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Next, participants were asked to provide the goals and constraints which they considered after selecting their troop placements. These specific goals and constraints were selected as they cater to potential strategies that could be employed while playing Risk. 
The presence of these templates provided a scaffold within which participants, who may or may not have any experience with Risk, could ground their strategies. However, it is important to acknowledge the presence of an inductive", + "bbox": [ + 115, + 758, + 487, + 917 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "bias, due to the specific wording of the goals and constraint templates, which could have impacted the strategies submitted by the participants. For goals, participants were asked to rate how important each goal was to their strategy on a scale of -100 to 100. A score of -100 indicated that pursuing the goal was completely detrimental to their strategy, while 100 indicated that pursuing the goal was essential to their strategy. For constraints, participants were provided 9 constraint templates, and were asked to select and fill in the constraints represented in their strategy. Participants were required to provide at least three constraints to ensure that they did not skip this question. The specific goals and constraints in our dataset are shown in Table 5. Finally, participants were asked to summarize their strategy for the given map as a language description. Participants were encouraged to include references to their goals and constraints, but these descriptions were otherwise unprompted. Participants were paid up to $8.50 based on the number of adequate responses submitted. The payment scale was updated if the average time taken significantly changed.", + "bbox": [ + 507, + 84, + 882, + 469 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "As mentioned in the paper, we created three additional augmented datasets from our original corpus. Figure 6 provides some examples of the effect of the various augmentations we employed in each augmented dataset. 
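The annotation format described in this appendix — goal importance ratings on a scale of -100 to 100, plus at least three filled-in constraint templates per response — can be sketched as a small data structure. This is an illustrative sketch only; the class and field names are hypothetical and do not reflect the released dataset's schema:

```python
from dataclasses import dataclass, field

# The six goals from Table 5 (G1-G6).
GOALS = [
    "Surround enemy territories",
    "Maximize number of countries occupied",
    "Keep our troops close together",
    "Maximize battles throughout the game",
    "Fortify borders for the continents you control",
    "Battle opposing players one at a time",
]

@dataclass
class StrategyAnnotation:
    description: str                                   # free-form strategy text
    goal_scores: dict = field(default_factory=dict)    # goal -> score in [-100, 100]
    constraints: list = field(default_factory=list)    # filled-in constraint templates

    def validate(self):
        # Goals are rated from -100 (completely detrimental) to 100 (essential).
        assert all(-100 <= v <= 100 for v in self.goal_scores.values())
        # Participants were required to provide at least three constraints.
        assert len(self.constraints) >= 3

ann = StrategyAnnotation(
    description="I will take the Green continent and defend its borders.",
    goal_scores={"Fortify borders for the continents you control": 80},
    constraints=[
        "I must have troops on Green",
        "I need to protect the borders of Green",
        "I must have at least 5 countries",
    ],
)
ann.validate()
```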
Our full dataset can be found at the following anonymized GitHub repository: Anonymized Data Repository.", + "bbox": [ + 507, + 473, + 882, + 583 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "A.1 Data Cleaning/Filtering", + "text_level": 1, + "bbox": [ + 509, + 604, + 746, + 619 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "We made the fewest possible modifications to participants' responses, ensuring responses were self-consistent while preserving the integrity of the organic data collection task. If a participant specifically referenced a goal or a constraint in their language, and did not include it in their response, then their response was modified to include it, and vice versa. We also corrected typos within a participant's specifications, such as if they meant to reference the \"Blue\" continent instead of the \"Red\" continent. If a response was not salvageable with minimal modifications, it was discarded. Discarded responses included responses where participants simply did not understand the task or submitted blatantly insincere responses, such as copying text from the study multiple times to reach the character limit. 
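The cross-referencing rule above (a goal or constraint mentioned in the language should also appear in the structured response) could in principle be approximated by a lexical pre-check that surfaces candidate mismatches for review. The helper below is hypothetical and purely illustrative; the actual curation was performed manually:

```python
def flag_inconsistencies(description: str, constraints: list) -> list:
    """Return constraints whose filled-in slot values never appear in the
    free-form description. A lexical heuristic for illustration only; the
    dataset itself was cleaned by hand, not by this function."""
    flagged = []
    for constraint in constraints:
        # Capitalized words (other than "I") and digits approximate the
        # filled-in template slots, e.g. "Blue" in "I must have troops on Blue".
        terms = [t for t in constraint.split()
                 if (t[0].isupper() and t != "I") or t.isdigit()]
        if terms and not any(t.lower() in description.lower() for t in terms):
            flagged.append(constraint)
    return flagged

desc = "I will hold Blue and keep 4 troops on every border country."
print(flag_inconsistencies(desc, ["I must have troops on Blue",
                                  "I must not have troops on Red"]))
# → ['I must not have troops on Red']
```

A flagged constraint would then go to human reviewers rather than being modified automatically.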
These decisions were made upon agreement of multiple reviewers.", + "bbox": [ + 507, + 629, + 882, + 917 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "12812", + "bbox": [ + 477, + 928, + 524, + 940 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/ef39334c75b613e69998f9cfe5416b0ac7d83775b49097924d7e6282415f388d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 115, + 85, + 349, + 209 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/5c7c744ea4ec893f71e9ddc9ffbd2de8edb6c9ac7fb5d1c76b8a7f0d3fc96dab.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 371, + 85, + 608, + 209 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/f30a288e4c9660dd62936fc80e7fdb0f706b5ace52919d0b2bbb8fab2771db9a.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 636, + 86, + 870, + 209 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/9d6347c06cd61aab75f2603fd61a59540ed133c304d7a954400a37ae0053be06.jpg", + "image_caption": [ + "Figure 4: Distribution of assigned values for each goal. The titles for each goal have been shortened for readability." 
+ ], + "image_footnote": [], + "bbox": [ + 115, + 227, + 347, + 351 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/fb8ec1db6599a6415c401b1919a81aa5aeda7e9a5b3e55f5f8e10151a3c27c7c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 373, + 228, + 608, + 350 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/c4d254aad5d3d1bdefdde1079b6c7a34a2e32ebda6e3a60488f0447ca70a3e1a.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 636, + 228, + 880, + 350 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/7dadec3d8286e861f9961dde765dde8738ff492805ebb010b0c4e4c5fb9df309.jpg", + "image_caption": [ + "Figure 5: Distribution of assigned values for each constraint type" + ], + "image_footnote": [], + "bbox": [ + 315, + 393, + 687, + 588 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "A.2 Data Collection Quiz", + "text_level": 1, + "bbox": [ + 112, + 643, + 331, + 659 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "In order to ensure that participants understood the rules of Risk prior to providing strategies for our dataset, each participant was asked to answer a five-question quiz. Participants needed to answer all questions correctly to proceed. Participants were given three tries to answer the questions, after which they were shown the correct answers. The five questions in our quiz were as follows (correct answers to each question are in bold):", + "bbox": [ + 112, + 665, + 489, + 809 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "1. Which of these are NOT a phase in the game?", + "bbox": [ + 129, + 822, + 487, + 838 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "(a) Attack", + "(b) Recruit", + "(c) Control opponent's troops", + "(d) Maneuver" + ], + "bbox": [ + 159, + 847, + 396, + 917 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "2. 
What is the objective of the game?", + "bbox": [ + 522, + 643, + 805, + 659 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "(a) Control the rightmost continent", + "(b) Have the maximum number of island territories", + "(c) Have the most territories after 10 turns", + "(d) Occupy all territories on the board" + ], + "bbox": [ + 552, + 665, + 884, + 751 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "3. Which of these decides how many troops you receive at the start of each turn? (TWO CORRECT ANSWERS)", + "bbox": [ + 522, + 762, + 884, + 809 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "(a) The number of territories you control", + "(b) The number of coastal territories on the map", + "(c) The physical size of the board game", + "(d) The number of continents you fully occupy" + ], + "bbox": [ + 552, + 816, + 884, + 917 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "12813", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "4. Which of the following statements are correct about attacking enemy territories in the game? (TWO CORRECT ANSWERS)", + "bbox": [ + 129, + 84, + 487, + 131 + ], + "page_idx": 13 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "(a) When you attack a territory you've already attacked, your attack points are doubled", + "(b) You CANNOT attack in the opposite direction of the arrows", + "(c) You can only attack territories you have access to", + "(d) You can never attack a territory in the same continent" + ], + "bbox": [ + 159, + 141, + 487, + 291 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "5. Which of the following statements are true regarding how attacks are conducted? 
(TWO CORRECT ANSWERS)", + "bbox": [ + 129, + 303, + 487, + 351 + ], + "page_idx": 13 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "(a) A player with scattered troops always wins", + "(b) A player attacking from the left side always wins", + "(c) Both players roll a number of dice dependent on the number of their troops involved in the battle to decide the outcome", + "(d) A player can attack with up to 3 troops and defend with up to 2 troops in one battle" + ], + "bbox": [ + 159, + 360, + 487, + 541 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "B Dataset Utility", + "text_level": 1, + "bbox": [ + 114, + 555, + 278, + 571 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "This section provides a brief discussion of the potential future utility of our collated dataset. Firstly, this dataset provides strategy specifications in Risk that can be used to test Seldonian optimization approaches in future work. Our dataset provides the first such instance of language descriptions of strategic intent. Future work can analyze the flaws and strengths of our data to modify our data collection protocol and generate the specific examples needed for individual applications. However, there are many tangential applications for this data that are unrelated to the use-case specified in this paper. There is a dearth of natural language datasets which contain language with human-like speech patterns and are not scraped from internet corpora. 
Many NLP techniques can be applied to further study this language data, such as summarization, to determine whether these policies can be condensed into a more easily digestible format; sentiment analysis, to broadly categorize the language descriptions as aggressive, defensive, etc.;", + "bbox": [ + 112, + 581, + 489, + 917 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "or Q&A comprehension-based methods, to train AI agents to answer questions regarding a user's preferences by reading their strategy description.", + "bbox": [ + 507, + 84, + 880, + 133 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "C Dataset Distributions", + "text_level": 1, + "bbox": [ + 509, + 143, + 732, + 159 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "The data distributions for goals and constraints selected by participants are shown in Figure 4 and Figure 5, respectively. For Goals 3 (Keep your troops close together) and 5 (Maximize Battles), participants tended to skew towards answers in the 60-100 range. For the other goals, the responses were relatively uniform. On average, participants submitted 5.62 unique constraints per response.", + "bbox": [ + 507, + 168, + 882, + 298 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "D Implementation Details", + "text_level": 1, + "bbox": [ + 509, + 309, + 752, + 326 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Hyperparameters for both models were computed through a grid search. The constraints model was trained for 10 epochs with a batch size of 16 using a learning rate of 0.0005. The goals model was trained for 25 epochs with a batch size of 8 using a learning rate of 0.00001. Both models utilized an AdamW optimizer. The constraints model employed a cosine learning rate scheduler, and the goals model employed a linear learning rate scheduler. We held out 30 randomly selected examples for our human/ChatGPT evaluation (Section 5). 
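The hyperparameter selection described here can be sketched as a plain exhaustive grid search; `train_and_evaluate` is a hypothetical stand-in for the actual fine-tuning loop, and the search space shown is illustrative rather than the exact grid we searched:

```python
from itertools import product

# Hypothetical search space for illustration; the grids actually searched
# over are not reproduced here.
SEARCH_SPACE = {
    "epochs": [10, 25],
    "batch_size": [8, 16],
    "learning_rate": [1e-5, 5e-4],
    "scheduler": ["linear", "cosine"],
}

def train_and_evaluate(config):
    # Stand-in for fine-tuning a model with `config` and returning its
    # validation accuracy; replace with a real training loop.
    return 0.0

def grid_search(search_space, evaluate):
    """Exhaustively evaluate every hyperparameter combination and return
    the best configuration with its score."""
    best_config, best_score = None, float("-inf")
    for values in product(*search_space.values()):
        config = dict(zip(search_space.keys(), values))
        score = evaluate(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score
```

Exhaustive search is feasible here because the space is small; larger spaces would call for random or Bayesian search instead.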
We split the remaining 1023 examples into an 85/15 train/validation split to perform our grid search over hyperparameters. Finally, to report the accuracy of our model, we computed the 10-fold cross-validation accuracy for the best-performing hyperparameter setting. The best-performing model for predicting constraints was pretrained on the synthetic corpus and trained on the un-augmented human corpus. The best goals model was pretrained on the synthetic-augmented dataset and trained on the human-augmented dataset. All experiments were conducted on a 48GB NVIDIA Quadro RTX GPU. Our code can be found at the following anonymized repository for further reference - Anonymized Code Repository.", + "bbox": [ + 507, + 335, + 884, + 770 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "E Human Evaluation Study - Additional Details", + "text_level": 1, + "bbox": [ + 509, + 780, + 875, + 812 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "In this section, we report some additional details regarding our human-evaluation experiment. Firstly, we report that on average, the difference between scores for a participant's first and last response was -0.2143 for goals and -0.0102 for constraints, indicating that there is a negligible impact of factors
Synthetic DataSynthetic-Augmented Data
Why would I care about battling. I plan to attack players in the game one at a time. I don't think I can handle having troops on more than 2 continents. I need to spread my troops out as far as possible. I can't win if I put any troops on Blue. I need to place troops on at least 5 countries. This time I will use a different strategy. I need to have troops on at least 5 continents. I don't intend to control continents.I don't know why I care about fighting. I plan to attack players in the game one at a time. I don't think I can handle having troops on more than 2 continents. My troops need to be spread out as much as possible. If I put any troops on Blue, I will not win. I need to place troops on at least 5 countries. I will be using a different strategy this time. I need to have troops on at least 5 continents. I don't intend to control continents.
Human DataHuman-Augmented Data
I am going to attack and take over green c. That country is ripe for the taking since I have cut it off from other grey troops. I also want 4 troops to present a strong force in green a in case of a grey attack from yellow d. Once the green continent is secure I will look to move my armies out to the red continent to battle black there. Hopefully, while this is going on grey and black will be fighting over yellow and blue, but in case they don't I'm keeping all of my troops together on GreenI am going to attack and take over green c. Since I cut it off from other grey troops, that country is ripe for taking. I also want 4 troops to present a strong force in green a in case of a grey attack from yellow d. I will move my armies to the red continent to fight black once the green continent is secure. Hopefully, while this is going on grey and black will be fighting over yellow and blue, but in case they don't I'm keeping all of my troops together on Green.
", + "bbox": [ + 193, + 80, + 803, + 288 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Figure 6: Examples of data from Synthetic (top-left), Synthetic-Augmented (top-right), Human (bottom-left) and Human-Augmented (bottom-right). Highlighted sections represent the specific sentences changed by our augmentation procedure.", + "bbox": [ + 112, + 300, + 884, + 344 + ], + "page_idx": 14 + }, + { + "type": "table", + "img_path": "images/af0b89556f1bf0f40bd8061080d061efd1b079b1b95a9eb4b340cf5ce33b2ce3.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
GoalsConstraints
G1: Surround enemy territoriesC1: I must have troops on (continent)
G2: Maximize number of countries occupiedC2: I must not have troops on (continent)
G3: Keep our troops close togetherC3: I must be able to access (continent) in one move
G4: Maximize battles throughout the gameC4: I need to protect the borders of (continent)
G5: Fortify borders for the continents you controlC5: I need a total of at least (number) troops to defend a continent
G6: Battle opposing players one at a timeC6: I must have at least (number) countries
C7: I must have troops on at least (number) continents
C8: I must place at least (number) troops to effectively defend a country
C9: I must have troops on at most (number) continents
", + "bbox": [ + 156, + 356, + 842, + 476 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Table 5: Goals and Constraints Selected for our Dataset", + "bbox": [ + 307, + 492, + 687, + 505 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "such as cognitive load or a learning curve. Secondly, it is important to note that we did not have the same number of responses per map from humans, as the map condition was randomly assigned to each participant. While this may slightly impact the results of the constraints model, as we aggregated performance across maps, due to the strong significant difference across baselines, it is unlikely to change our result.", + "bbox": [ + 112, + 532, + 489, + 677 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "F Human Evaluation Study - Data Filtering Rubric", + "text_level": 1, + "bbox": [ + 112, + 690, + 428, + 724 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Next, we cover the rubric we applied to filter data for the human-subjects study. Each response was independently evaluated by two graders and was included if both graders deemed it acceptable as per the predefined rubric. The rubric was as follows:", + "bbox": [ + 112, + 734, + 487, + 814 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "1. If constraints clearly don't match the selections for locations or access", + "bbox": [ + 129, + 829, + 489, + 859 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "- e.g. if someone has selected, \"I must have troops on Blue\" when there are no troops on Blue", + "bbox": [ + 157, + 871, + 487, + 917 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "2. If someone has submitted invalid constraints", + "bbox": [ + 522, + 532, + 878, + 546 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- e.g. 
If someone selects both \"I need troops on at least 2 continents\" + \"I need troops on at most 1 continent\"", + "- If someone mistakes \"country\" for \"continent\"" + ], + "bbox": [ + 552, + 556, + 882, + 638 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "3. If someone has selected the same value for all goals (or values within a small range, say $\pm 10$), when this clearly does not align with the strategy", + "bbox": [ + 522, + 650, + 882, + 715 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "- e.g. someone selects $-100$ for all goals when the strategy involves protecting a continent", + "bbox": [ + 552, + 722, + 882, + 768 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "G ChatGPT Prompt", + "text_level": 1, + "bbox": [ + 507, + 781, + 705, + 799 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "We utilized the following prompt for ChatGPT which included a description of the domain and task, as well as an annotated example.", + "bbox": [ + 507, + 807, + 882, + 854 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "G.1 Full Prompt", + "text_level": 1, + "bbox": [ + 507, + 866, + 658, + 882 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Reading the following section carefully will provide you with the information needed to complete", + "bbox": [ + 507, + 887, + 882, + 917 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "12815", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "this task.", + "bbox": [ + 112, + 85, + 184, + 98 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Risk is a board game in which an army commander tries to take over the world by defeating all enemy troops and controlling all countries. Risk is a simplified version of real conflict, and has rules designed to reflect this. 
These include the following:", + "bbox": [ + 112, + 101, + 489, + 197 + ], + "page_idx": 15 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Players control countries by having troops in them", + "- The more countries and continents a player controls, the more resources they get", + "- Players win countries from other players by battling with their troops", + "- The more troops a player has when battling, the more likely they are to win", + "- Players can only attack or be attacked by countries that are next to them" + ], + "bbox": [ + 121, + 210, + 489, + 420 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "In this task, you will be asked to provide a set of constraints corresponding to the human player's strategy for the board game Risk. This includes their troop placements and a text description, which explains why the player decided to place their troops and how they plan to win this game of Risk given their opponents' choices.", + "bbox": [ + 112, + 434, + 487, + 546 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Your task will be to think about the player's strategy (selections and description) and predict what their constraints are with respect to the strategy. Constraints are rules that you think need to be followed to successfully execute a strategy.", + "bbox": [ + 112, + 546, + 489, + 627 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "CONSTRAINTS: Note: For predicting goals, this section would be replaced with a description of what goals are", + "bbox": [ + 112, + 627, + 487, + 675 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Constraints are comprised of constraint classes and constraint values. Your job is to assign constraints to the human's strategy. Each constraint is comprised of a constraint class and a constraint value. You will be provided a list of possible constraint classes and values to choose from. 
You may choose the same class of constraint more than once, but you may not submit duplicate constraints. For example, you may submit \"I must have troops on Green\" and \"I must have troops on Blue\" but you may not submit \"I must have troops on Green\" twice. Choose all constraints relevant to the strategy. You may choose up to 8 constraints per strategy.", + "bbox": [ + 112, + 678, + 489, + 901 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "The constraints you can choose from are", + "bbox": [ + 131, + 903, + 435, + 917 + ], + "page_idx": 15 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- I must have troops on [Continent]", + "- I must not have troops on [Continent]", + "- I must be able to access [Continent] with one move", + "- I need to protect the borders of [Continent]", + "- I need a total of at least [Number] troops to defend a continent", + "- I must have at least [Number] countries", + "- I must have troops on at least [Number] continents", + "- I must place at least [Number] troops to effectively defend a country", + "- I must have troops on at most [Number] continents" + ], + "bbox": [ + 515, + 84, + 882, + 401 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "The possible constraint values you can choose from are", + "bbox": [ + 507, + 416, + 880, + 445 + ], + "page_idx": 15 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Continent - Blue, Green, Yellow, Red, Purple", + "- Number - 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14" + ], + "bbox": [ + 515, + 460, + 865, + 502 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Our modified RISK Map contains 5 continents - Red, Green, Purple, Yellow and Blue. Each continent is made up of countries. Red continent has 3 countries, Green has 5 countries, Purple has 5 countries, Yellow has 4 countries and Blue has 4 countries. Green_A, Yellow_B, Blue_C, etc. 
are referred to as countries or territories. Green, Yellow, Blue, Red, Purple are referred to as continents. Continents also have different connections between them through which the troops can move. These connections are one way, i.e., troops from the source country can only move to the destination country and not the other way round.", + "bbox": [ + 505, + 516, + 882, + 724 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "The map has the following connections - Yellow_D is connected to Green_A, Green_D is connected to Red_A, Red_A is connected to Green_D, Red_B is connected to Purple_E, Red_C is connected to Yellow_B, Red_C is connected to Blue_B, Blue_A is connected to Yellow_C, Yellow_C is connected to Blue_D, Blue_C is connected to Purple_A, Purple_A is connected to Green_E and Green_E is connected to Purple_A", + "bbox": [ + 507, + 726, + 882, + 870 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "We will now give you a tutorial on how to ascertain the goals from a human player's strategy and placements on the RISK board.", + "bbox": [ + 507, + 871, + 882, + 917 + ], + "page_idx": 15 + }, + { + "type": "page_number", + "text": "12816", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "The two opposing players are denoted by the \"grey\" and \"black\" player. In this scenario, the grey player has placed its troops on the following territories - 5 troops on Yellow_C, 4 troops on Yellow_D, 1 troop on Red_A, 2 troops on Red_B, 2 troops on Red_C. 
The black player has placed its troops on the following territories - 4 troops on Blue_A, 2 troops on Blue_C, 2 troops on Green_E, 5 troops on Purple_A and 1 troop on Purple_B.", + "bbox": [ + 110, + 84, + 487, + 228 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Now that you have seen where the opposition troops are, you will now be shown how the human player has decided to deploy their troops and the strategy they used.", + "bbox": [ + 112, + 229, + 485, + 293 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "The human player (white) has placed 14 troops to battle the opponents. They have placed the troops on the following territories - 7 troops on Purple_E, 5 troops on Purple_C and 2 troops on Purple_D. You will now be guessing the constraints the human player (white) focused on while coming up with their strategy. The following text contains the human player's description of the strategy they used to place their troops. It is critical that you read this description, as it contains information about the constraints considered by the human player.", + "bbox": [ + 112, + 294, + 485, + 469 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "\"I put all my troops in Purple, because I felt as though I needed all my available troops to defend Purple. I wanted to protect Purple. With 7 troops on Purple_E, I feel like I cannot be beat on purple. I wasn't too keen on getting involved in battles, or taking an overly aggressive strategy. I would like to focus on beating the black player first, I don't think I can battle two people at the same time. I'm going to avoid Red for now since it seems to be the hardest continent to control.\"", + "bbox": [ + 112, + 470, + 487, + 630 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "We will now show you how to determine constraints from a strategy and via an example. 
Please carefully review the example and use the given information about both selections and text to fill out constraints for this strategy.", + "bbox": [ + 112, + 631, + 489, + 711 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "An appropriate set of constraints for the strategy shown above would be", + "bbox": [ + 112, + 712, + 485, + 743 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "- I must have troops on Purple", + "bbox": [ + 121, + 755, + 351, + 771 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "- Reason: The player mentioned that \"they put all their troops on Purple\"", + "bbox": [ + 139, + 778, + 485, + 810 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "- I must not have troops on Red", + "bbox": [ + 121, + 821, + 359, + 835 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "- Reason: The player mentioned that \"they would like to avoid Red for now\"", + "bbox": [ + 139, + 844, + 485, + 875 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "- I must place at least 7 troops to effectively defend a country", + "bbox": [ + 121, + 887, + 487, + 917 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "- Reason: The player mentioned that \"with 7 troops on Purple_E, I cannot be beaten on Purple\"", + "bbox": [ + 534, + 84, + 880, + 131 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "H Risk Reinforcement Learning Simulator", + "text_level": 1, + "bbox": [ + 509, + 145, + 806, + 177 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "We have shown that our proposed computational interface can remove the need for human-interpreters for the task of parsing intent from unstructured language. However, to test how well commander's intent interpreted from language can be applied towards optimizing an agent's behavior, we require a reinforcement learning domain to train our agent. 
As such, to enable Seldonian optimization via unstructured language descriptions, we developed a novel OpenAI Gym environment for simulating Risk gameplay. This environment closes the loop on the methods presented in this paper by providing all the necessary components for humans to specify their intent to an AI agent and evaluate whether their specifications have been satisfied by the learnt agent. Our environment also provides an additional means of collecting data and conducting studies for human-specification within multi-player team scenarios.", + "bbox": [ + 507, + 187, + 884, + 492 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "For this task, we adapted an existing OpenAI Gym environment for Risk (Andeol, 2018). We modified the codebase to allow for RL agents to be trained to play all phases of Risk, according to the setup utilized in our approach. We also developed a Pygame UI for our simulator (see Figure 7). A detailed description of the functionality of the domain and the state space is provided in the appendix. In future work, we aim to leverage our domain to develop approaches which allow humans to constrain an agent's optimization methods through human-like language specifications of intent, which has not been accomplished in any prior work. We also provide a link to an anonymized GitHub repository with the Risk environment for further reference - Anonymized Gym-Risk Environment", + "bbox": [ + 507, + 494, + 882, + 751 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "I Risk Domain - Additional Domain Information", + "text_level": 1, + "bbox": [ + 507, + 763, + 836, + 796 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "This section provides additional information about our setup for the Risk domain. In our version of Risk, the ego player (Alpha) plays against two opponents (Charlie and Bravo) whose actions are controlled by a pre-determined heuristic. 
The gameplay within our Risk simulator is comprised of four phases", + "bbox": [ + 507, + 806, + 884, + 917 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "12817", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/ab78e42bf6b832df7dbc6c6d574f87a5929dbb91b923c2801758750045b620c6.jpg", + "image_caption": [ + "Figure 7: This figure shows our Risk simulator with the playable (teal) and two other (orange and pink) agents." + ], + "image_footnote": [], + "bbox": [ + 144, + 80, + 458, + 250 + ], + "page_idx": 17 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. Drafting - Players draft their initial troops on empty territories.", + "2. Reinforce - Players assign reinforcements to their existing territories.", + "3. Attack - Players can choose to attack a neighboring territory with their troops.", + "4. Freemove - Players can move their troops between their territories." + ], + "bbox": [ + 127, + 313, + 487, + 472 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "The game begins with a drafting phase. During this phase, the agent decides where to place their initial 14 troops amongst the available territories. The two opposing players draft their troops before the agent is allowed to draft any troops. The opposing players drafts are either hard-coded to match one of the maps utilized in our study, or they are drafted based on a drafting heuristic. The drafting phase occurs only once in the game. Following drafting, the agent executes the next three phases in sequence. First, in the \"Reinforce\" phase, the agent receives a specific number of reinforcements based on the number of territories and continents they control. The agent needs to assign the given reinforcements to the territories they control. Each country reinforced is an individual action. Next, the agent moves on to the \"Attack\" phase. 
In this phase, the agent can attack adjacent territories with their troops. Within each attack action, the agent specifies which opposing territory they would like to attack, along with the territory they would like to attack from. The agent must also specify the number of troops they would like to move into the opposing territory should they win the conflict. Each combat sequence between two territories is executed in a similar manner to the physical board game:", + "bbox": [ + 112, + 483, + 489, + 917 + ], + "page_idx": 17 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. A maximum of three troops are chosen from the attacking territory, and a maximum of two troops are chosen from the defending territory", + "2. For both the attacker and defender, a number of dice are rolled based on the number of troops involved in each attack.", + "3. The rolls are sorted in descending order, and each roll is compared between the attacking and defending country.", + "4. For each comparison, the country with the lower roll loses one troop. The defending territory wins all ties.", + "5. The above steps are repeated until either the attacking or defending player has been defeated." + ], + "bbox": [ + 522, + 84, + 882, + 357 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Following combat, the agent can move all but one troop into the conquered territory. Once the agent has finished attacking, they move on to the final phase in their turn, \"Freemove.\" In the \"Freemove\" phase, the player can move troops from one territory they control to another, as long as the territories are connected. Once the agent executes all their actions, the actions of the two agents are simulated and the player is reset to the \"Reinforce\" phase to start their next turn. 
The game is complete when either the agent is out of troops or controls all territories.", + "bbox": [ + 507, + 370, + 884, + 561 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "An action is specified by a four-item tuple, i.e., $\langle p, s, t, tr \rangle$. The first item, $p$, specifies which type of action is being conducted, among the four possible phases in the game. Item two, $s$, denotes the source country for the action. For reinforce and drafting actions, this is the country that the agent wants to add troops to, whereas for the attack and freemove actions, $s$ denotes the country you will be attacking or moving from. The final two items, $t$ and $tr$, are specifically for attack and move actions. $t$ specifies the country that you would like to attack or move to. For the attack action, $tr$ specifies the number of troops you would like to move from the attacking country if you win the combat. When the agent specifies a move action, $tr$ denotes the number of troops to be moved from $s$ to $t$.", + "bbox": [ + 507, + 564, + 882, + 821 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "I.1 State Space", + "text_level": 1, + "bbox": [ + 507, + 833, + 643, + 848 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "The state of the game is stored as a dictionary. The state dictionary records information such as country ownership, number of troops on each country, continent ownership, etc. We also record information", + "bbox": [ + 507, + 854, + 882, + 917 + ], + "page_idx": 17 + }, + { + "type": "page_number", + "text": "12818", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "about players such as number of reinforcements available to a player, number of players alive, current turn number, etc. 
We have provided six functions to encode the state space which can be passed as an input to a Reinforcement Learning model.", + "bbox": [ + 112, + 84, + 487, + 164 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "The first function encodes the state using 54 features. The initial 42 features contain country related information for each opponent (21 features each) and the next 5 features contain continent ownership data. The remaining features are used for other information related to the game like number of areas controlled by the player, troops left to be drafted by the player, troops left for reinforcement, number of players alive, current turn number and if the current turn belongs to the player.", + "bbox": [ + 112, + 167, + 487, + 326 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "The second function encodes the information in the form of one hots. It has a total of 132 features, the first 84 features contain information regarding country ownership as one hots, 21 each for the player, opponents and countries with no owner. The next 21 features denote the number of troops on each country. The next 20 features contain information regarding continent ownership, 5 each for the player, opponents and no owner. The remaining features contain other relevant information as described for the first function. For both of the first two functions described, we also provide normalized versions of these functions where all the real valued spaces are divided by a normalising constant.", + "bbox": [ + 115, + 330, + 489, + 568 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "The fifth encoding function contains all the 132 features of the third function and additional information for the current phase. It contains 134 features in total. This function returns normalised values. The last encoding function contains 298 features. The initial features are similar to the ones present in the third encoding function. 
Apart from that, it explicitly contains information about where an agent or player can attack and execute a freemove. This information can help the reinforcement learning model learn more easily. This function also returns normalised values.", + "bbox": [ + 112, + 571, + 489, + 764 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "I.2 Reward Functions", + "text_level": 1, + "bbox": [ + 112, + 782, + 304, + 797 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "We have set up four different types of reward functions ranging from sparse to dense. The recommended reward function is the rules-based reward, which provides rewards for successful actions, finishing a phase, successful action in a phase and winning the game. The rewards for winning the game are weighted by a factor of 10 compared to", + "bbox": [ + 112, + 806, + 489, + 917 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "others which are weighted by a factor of 1.", + "bbox": [ + 507, + 84, + 828, + 99 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "The simplest reward function available is a sparse reward function, which provides negative rewards for losing the game and positive rewards for winning the game. In order to increase the number of rewards given throughout the game, we created the turn count reward function, which rewards the agent for every turn it plays. The survival reward function was built on top of this to provide an additional negative reward for losing apart from the reward for surviving.", + "bbox": [ + 507, + 99, + 884, + 261 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "I.3 Human Drafting", + "text_level": 1, + "bbox": [ + 507, + 272, + 685, + 288 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Finally, we have also set up functionality in our simulator that allows the player or the opponents to skip the drafting phase and follow a fixed draft based on a predefined map. 
In such cases, we have predefined fifteen types of map initialisation containing troops for both opponents, which correspond to the exact maps utilized in our data collection procedure. Our setup chooses one of the map initializations and corresponding selections made by a participant in the user study to simulate the game.", + "bbox": [ + 507, + 293, + 884, + 469 + ], + "page_idx": 18 + }, + { + "type": "page_number", + "text": "12819", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 18 + } +] \ No newline at end of file diff --git a/2023/A Computational Interface to Translate Strategic Intent from Unstructured Language in a Low-Data Setting/0ecff77c-66e5-47c5-93be-49d90731c30d_model.json b/2023/A Computational Interface to Translate Strategic Intent from Unstructured Language in a Low-Data Setting/0ecff77c-66e5-47c5-93be-49d90731c30d_model.json new file mode 100644 index 0000000000000000000000000000000000000000..ac9c64d7d042067f0f392c6b8984aa70efb5d041 --- /dev/null +++ b/2023/A Computational Interface to Translate Strategic Intent from Unstructured Language in a Low-Data Setting/0ecff77c-66e5-47c5-93be-49d90731c30d_model.json @@ -0,0 +1,3769 @@ +[ + [ + { + "type": "title", + "bbox": [ + 0.115, + 0.08, + 0.881, + 0.12 + ], + "angle": 0, + "content": "A Computational Interface to Translate Strategic Intent from Unstructured Language in a Low-Data Setting" + }, + { + "type": "text", + "bbox": [ + 0.179, + 0.125, + 0.823, + 0.16 + ], + "angle": 0, + "content": "Pradyumna Tambwekar1, Lakshita Dodeja2*, Nathan Vaska3*, Wei Xu1, and Matthew Gombolay1" + }, + { + "type": "text", + "bbox": [ + 0.23, + 0.16, + 0.772, + 0.176 + ], + "angle": 0, + "content": "1School of Interactive Computing, Georgia Institute of Technology" + }, + { + "type": "text", + "bbox": [ + 0.295, + 0.177, + 0.709, + 0.193 + ], + "angle": 0, + "content": "\\(^{2}\\)Computer Science Department, Brown University" + }, + { + "type": "text", + "bbox": [ + 0.258, + 0.193, + 0.746, + 0.21 + ], 
+ "angle": 0, + "content": "3Massachusetts Institute of Technology, Lincoln Laboratory" + }, + { + "type": "text", + "bbox": [ + 0.179, + 0.21, + 0.825, + 0.242 + ], + "angle": 0, + "content": "pradyumna.tambwekar@.gatech.edu, lakshita_dodeja@brown.edu, nathan.vaska@ll.mit.edu,{wei.xu, matthew.gombolay}@cc.gatech.edu" + }, + { + "type": "title", + "bbox": [ + 0.261, + 0.253, + 0.341, + 0.267 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.142, + 0.282, + 0.461, + 0.652 + ], + "angle": 0, + "content": "Many real-world tasks involve a mixed-initiative setup, wherein humans and AI systems collaboratively perform a task. While significant work has been conducted towards enabling humans to specify, through language, exactly how an agent should complete a task (i.e., low-level specification), prior work falls short in interpreting the high-level strategic intent of the human commanders. Parsing strategic intent from language will allow autonomous systems to independently operate according to the user's plan without frequent guidance or instruction. In this paper, we build a computational interface capable of translating unstructured language strategies into actionable intent in the form of goals and constraints. Leveraging a game environment, we collect a dataset of over 1000 examples, mapping language strategies to the corresponding goals and constraints, and show that our model, trained on this dataset, significantly outperforms human interpreters in inferring strategic intent (i.e., goals and constraints) from language \\((p < 0.05)\\). Furthermore, we show that our model (125M parameters) significantly outperforms ChatGPT for this task \\((p < 0.05)\\) in a low-data setting." 
+ }, + { + "type": "title", + "bbox": [ + 0.115, + 0.666, + 0.26, + 0.681 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.692, + 0.49, + 0.884 + ], + "angle": 0, + "content": "Effective communication is essential for the proper functioning of organizational teams. \"Commander's Intent\" is a method for developing a theory of mind utilized in many domains such as search and rescue, pandemic response, military, etc. (Mercado et al., 2016; Rosen et al., 2002; Kruijff et al., 2014). Commanders and leaders often utilize the formulation of \"Commander's Intent\" to convey the tasks that need to be accomplished and engender an understanding of the criteria for success to their subordinates (Dempsey and Chavous, 2013). Commander's Intent could similarly function as" + }, + { + "type": "image", + "bbox": [ + 0.512, + 0.251, + 0.885, + 0.493 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.501, + 0.885, + 0.658 + ], + "angle": 0, + "content": "Figure 1: Our work aims to facilitate humans to specify their strategy to an AI system via language. Using the board game Risk as a simulated environment, we collect language descriptions of a strategy (top-left) corresponding to a player's troop deployments (bottom-left). The player's selections are shown by the white icons, and the grey and black icons denote the troops of the two opposing players. Each strategy corresponds to a set of goals (bottom-right) and constraints (top-right). The green and orange text corresponds to the language relating to constraints and goals, respectively." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.691, + 0.885, + 0.819 + ], + "angle": 0, + "content": "an effective scaffold to represent a human's strategic intent in a mixed-initiative interaction (Novick and Sutton, 1997). 
Commander's Intent provides a functionality for expert-specifiers to engender a degree of \"shared-cognition\" between an AI-collaborator and a human-specifier by aligning the actions of the AI system to the human-specifier's values or reward function." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.823, + 0.884, + 0.919 + ], + "angle": 0, + "content": "Commander's intent is formally represented by a set of goals and constraints. Goals (or preferences) are categorized as a desirable set of states or affairs that the agent intends to obtain (Moskowitz and Grant, 2009; Kruglanski, 1996) and constraints refer to conditions that are imposed on solutions" + }, + { + "type": "page_footnote", + "bbox": [ + 0.114, + 0.894, + 0.488, + 0.919 + ], + "angle": 0, + "content": "*These authors contributed to this paper while they were at Georgia Institute of Technology." + }, + { + "type": "page_number", + "bbox": [ + 0.477, + 0.928, + 0.524, + 0.941 + ], + "angle": 0, + "content": "12801" + }, + { + "type": "footer", + "bbox": [ + 0.21, + 0.946, + 0.788, + 0.959 + ], + "angle": 0, + "content": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 12801-12819" + }, + { + "type": "footer", + "bbox": [ + 0.278, + 0.959, + 0.72, + 0.972 + ], + "angle": 0, + "content": "December 6-10, 2023 ©2023 Association for Computational Linguistics" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.493, + 0.312 + ], + "angle": 0, + "content": "formulated by an agent (Nickles, 1978). Translating unstructured language-based strategy into this machine-readable specification is a non-trivial challenge. This translation could be conducted via a human interpreter; however, interpreters with the requisite expertise will not always be available. Alternatively, humans could utilize a structured interface to specify their intent. 
However, interfaces can become overly complicated, and humans become demotivated to work with an AI system when they cannot easily navigate the interface (Hayes, 1985). Enabling humans to express their strategic intent in everyday language provides an effective solution to these issues." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.318, + 0.492, + 0.624 + ], + "angle": 0, + "content": "In this paper, we develop an approach to solve a task we call automatic strategy translation, wherein we learn to infer strategic intent, in the form of goals and constraints, from language. Prior work has developed methods to utilize language to specify policies of an AI agent (Tambwekar et al., 2021; Gopalan et al., 2018; Thomason et al., 2019; Blukis et al., 2019) or specify reward functions or tasks which can be optimized for, via reinforcement learning (RL) or a planner (Gopalan et al., 2018; Padmakumar et al., 2021; Silva et al., 2021a). However, our work is the first to translate language into goals and constraints, which can be applied towards constrained optimization approaches for directing agent behavior independent of the original human specifier. Unlike prior work, we focus on interpreting language descriptions of complex gameplay strategies, rather than simple individual commands (e.g., \"move from A to B; open the door\")." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.63, + 0.492, + 0.92 + ], + "angle": 0, + "content": "First, we collect a dataset of over 1000 examples mapping language to goals and constraints, leveraging the game environment of Risk. Next, we fine-tune a pretrained RoBERTa model (Liu et al., 2019), equipped with model augmentations and customized loss functions such as Order-Agnostic Cross Entropy (Du et al., 2021), to infer goals and constraints from language strategy specifications. Finally, we employ a human evaluation to test our approach. 
Recent work has shown that automated evaluation metrics for language models may provide a misleading measure of performance (Liang et al., 2022). Therefore, we design a head-to-head evaluation whereby we can directly compare our model to the average human interpreter. In addition to humans, we prompt ChatGPT to perform the same task on a held-out set of 30 examples. We compute the statistical difference between our" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.085, + 0.885, + 0.133 + ], + "angle": 0, + "content": "model and these baselines, providing a concrete measure of the relative efficacy of our approach. Our contributions are as follows:" + }, + { + "type": "text", + "bbox": [ + 0.517, + 0.144, + 0.887, + 0.21 + ], + "angle": 0, + "content": "- We propose one of the first complete machine learning pipelines, including data collection, augmentation, and model training, for inferring structured strategic intent from human language." + }, + { + "type": "text", + "bbox": [ + 0.517, + 0.22, + 0.887, + 0.285 + ], + "angle": 0, + "content": "- Through a human study, we show that our proposed approach can interpret goals and constraints from language descriptions better than the average human \\((p < 0.001)\\)." + }, + { + "type": "text", + "bbox": [ + 0.517, + 0.289, + 0.887, + 0.354 + ], + "angle": 0, + "content": "- Through in-context learning, we evaluate ChatGPT's performance to gauge the relative efficacy of our approach, and show that our approach significantly outperforms ChatGPT \\((p < 0.05)\\)." 
+ }, + { + "type": "list", + "bbox": [ + 0.517, + 0.144, + 0.887, + 0.354 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.363, + 0.666, + 0.379 + ], + "angle": 0, + "content": "2 Related Work" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.389, + 0.886, + 0.454 + ], + "angle": 0, + "content": "This section covers prior work on learning strategies from language, as well as methods and datasets to enable humans to specify AI-behavior in a mixed-initiative setting." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.465, + 0.834, + 0.48 + ], + "angle": 0, + "content": "2.1 Learning strategies from Language" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.486, + 0.886, + 0.92 + ], + "angle": 0, + "content": "A common approach for specifying strategies through language has been to encode language instructions via planning-based representation languages, such as PDDL or LTL (Williams et al., 2018; Bahdanau et al., 2018; Thomason et al., 2019; Tellex et al., 2020), or deep learning (Fu et al., 2019; Blukis et al., 2019; Gopalan et al., 2018). Such formulations facilitate the ability to constrain actions taken by the agent to the instruction specified, e.g., \"Go around the tree to your left and place the ball.\" Another popular alternative is language-conditioned learning, where language is employed to specify a reward function, or a task (Silva et al., 2021a; Goyal et al., 2019; Andreas et al., 2017; Shridhar et al., 2022). Such approaches seek to improve the ability of an agent to complete a task(s) through intermediate language inputs, such as \"take the ladder to your left\". However, these approaches do not allow a supervisor to specify their strategic intent, such that the agent can complete its primary task while still adhering to the specifier's plan. 
Recent work proposed a novel approach to mapping language to constraints and rewards via a dependency tree (Rankin et al., 2021); however, their approach relies on a pre-trained grammar to extract a dependency tree and thus may not scale to human-like language." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "12802" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.117, + 0.085, + 0.491, + 0.391 + ], + "angle": 0, + "content": "Formally, the process of optimizing AI systems given goals and constraints has been broadly categorized as Seldonian Optimization (Thomas et al., 2019, 2017). In this framework, the goal is to optimize the priorities of an objective function while adhering to a given set of constraints, as opposed to simply optimizing based on the reward or loss function. Yang et al. (2020) proposed a Seldonian optimization approach to translate constraints into a feature representation, encoding invalid regions in the state space, which is then applied towards safe RL. However, their application is restricted to learning to parse individual constraint statements such as \"Don't get too close to the water,\" rather than facilitating constraint extraction from more realistic descriptions pertaining to an entire strategy. In our work, we provide a first-of-its-kind dataset, and a corresponding model, to enable Seldonian optimization through unstructured language." + }, + { + "type": "title", + "bbox": [ + 0.117, + 0.408, + 0.414, + 0.424 + ], + "angle": 0, + "content": "2.2 Language and Strategy Datasets" + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.438, + 0.49, + 0.919 + ], + "angle": 0, + "content": "Prior datasets for instruction following and policy specifications are often comprised of shorter instructions describing individual tasks. 
In contrast, our dataset consists of larger, unstructured descriptions of strategies, which may be more reflective of potential strategy descriptions from in-the-wild users. Recent work has published a dataset of policy descriptions which are similar to the language descriptions we collect (Tambwekar et al., 2021); however, they describe specific policies, rather than broad strategies for a task. Other datasets look to map language to trajectories or goal states within the trajectory (Padmakumar et al., 2021; Misra et al., 2018; Suhr et al., 2019). These datasets typically serve as a means of replacing physical demonstrations with language, but they lack explicit goals and constraints corresponding to the language collected that can be applied towards Seldonian optimization. Recent work provided a dataset with constraint statements (Yang et al., 2020) which are designer-specific; however, each constraint is associated with an isolated statement, making it unclear whether this approach will generalize to unprompted language describing multiple constraints. Unlike prior work, our dataset provides the ability to apply Seldonian optimization approaches from unstructured language. Furthermore, we conduct a study wherein we provide a human and ChatGPT baseline for our dataset to highlight the challenging nature of this task." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.084, + 0.867, + 0.101 + ], + "angle": 0, + "content": "3 Natural Language Strategies in RISK" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.114, + 0.882, + 0.209 + ], + "angle": 0, + "content": "Our work aims to help humans specify their strategy, or commander's intent, to an AI system via language. In this section, we utilize the board game Risk to create a dataset that maps unstructured natural language descriptions of strategies to actionable intent in the form of goals and constraints." 
+ }, + { + "type": "title", + "bbox": [ + 0.514, + 0.23, + 0.719, + 0.245 + ], + "angle": 0, + "content": "3.1 Board Game - RISK" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.258, + 0.884, + 0.61 + ], + "angle": 0, + "content": "Risk (Gibson et al., 2010) is a multiplayer strategy board game of diplomacy, conflict, and conquest, which was first invented in 1957. The gameplay of Risk consists of four phases: Draft, Recruit, Attack, and Move. The draft phase is conducted at the start of the game wherein each player drafts an initial set of continents and deploys a fixed number of troops onto those continents. This allocation of troops is a crucial participatory task (Muller and Kuhn, 1993) which involves humans reasoning about their strategy and setting up for the rest of the game. Participants may choose any of the empty territories on the map to deploy their troops, with a wide range of strategies that may depend on their opponent's troop allocation. For example, a more conservative player may draft troops to only one continent for better defense, whereas a player with a more aggressive strategy may choose to spread out their troops. After the draft phase, each subsequent turn for a player involves iteratively conducting the recruit, attack, and move phases. Further details about Risk can be found in Appendix-I." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.615, + 0.884, + 0.918 + ], + "angle": 0, + "content": "In our setting, we use a map layout that has 5 continents with a total of 21 territories/countries, as illustrated in Figure 1. Instead of real country names used in the Risk game, we use ad-hoc names for each continent (e.g., Red, Green, Blue, etc.) to mitigate participant bias. In the draft phase, each player takes turns to deploy 14 troops. 
The specific set of tasks that humans need to complete for our study include: (i) develop a strategy for Risk and deploy 14 troops after the two opposing players have completed their draft; (ii) provide six goals (on a 200-point scale) and up to eight constraints that were relevant to their allocation of troops and broader intents; (iii) use natural language to describe their overall strategy and the goals and constraints they considered. The troops of the opposing player are shown to the participants prior to completing these tasks. More details about this data collection process are discussed in Section 3.3." + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.929, + 0.525, + 0.941 + ], + "angle": 0, + "content": "12803" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.115, + 0.085, + 0.282, + 0.099 + ], + "angle": 0, + "content": "3.2 Task Definition" + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.109, + 0.491, + 0.511 + ], + "angle": 0, + "content": "Our goal is to develop a computational interface capable of inferring strategic intent from unstructured language descriptions of strategies. Formally, we define the task of Automatic Strategy Translation as follows: Given the troop deployments \\( S \\), a map \\( M \\), and the strategy \\( W \\), which is a paragraph written in natural language, our task is to automatically derive a set of goals \\( G \\) and constraints \\( C \\). The troop selections \\( S \\) include the name and number of troops for each territory drafted by the player. We have a total of 6 predefined goals, each of which takes a numeric value between \\([-100, 100]\\). This numeric value corresponds to whether the goal positively or negatively aligns with the strategy. For example, for the goal \"maximize battles\", 100 implies that the player intends to battle as much as possible, and -100 implies that the player intends to battle as infrequently as possible. Each constraint is comprised of a class and value. 
We restrict the number of possible constraints to 8 as a reasonable upper bound per strategy. To summarize, each example \\(\\langle M, W, S, C, G \\rangle \\in \\mathcal{D}\\) consists of a strategy \\( W \\) described in natural language, for a player's troop selections, \\( S \\), on a map, \\( M \\), from which \\( C \\) and \\( G \\) are the gold standard constraints and goals." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.526, + 0.285, + 0.54 + ], + "angle": 0, + "content": "3.3 Data Collection" + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.55, + 0.49, + 0.92 + ], + "angle": 0, + "content": "We collected a dataset \\(\\mathcal{D}\\) of 1053 unique examples by recruiting participants on Amazon Mechanical Turk and Prolific (pro, 2014). First, to familiarize participants with the game, we designed a tutorial that provided a description and annotated examples to explain the rules of the game and the tasks that participants needed to perform. As a further measure of improving data quality, participants were quizzed on the rules of Risk to reinforce their understanding (the full quiz is provided in §A.2). They were given three attempts to answer correctly, after which they were shown the answers. Upon completing the quiz, participants began the task. We showed participants a map, which shows the drafted troops of the two opposing players, and asked them to provide their own troop deployments. Following their draft, participants were asked to provide the goals and constraints they considered for their gameplay strategy/deployments and finally provide a language description of their strategy. The language strategy they provided needed to have at least 200 characters. Each participant was asked to repeat this task 5 times to create 5 data points,
The maps seen by participants were selected from a set of 15 unique initial troop settings." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.134, + 0.885, + 0.262 + ], + "angle": 0, + "content": "Participants needed approximately 10 minutes per data point. Figure 1 depicts the format of our dataset. Our dataset included data from 230 participants. The average length of language descriptions in our dataset was 99.21 words, and the overall vocabulary size was 2,356 words. Additional details regarding our data collection protocol are available in Appendix A." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.278, + 0.818, + 0.294 + ], + "angle": 0, + "content": "4 Automatic Strategy Translation" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.304, + 0.885, + 0.514 + ], + "angle": 0, + "content": "Following the data collection in Section 3, our goal is to leverage this dataset to develop a model that can perform the task of automatic strategy translation. Inferring strategic intent from language is a non-trivial endeavor, as unstructured language can be vague, leading to ambiguous interpretations. We seek to develop an approach capable of performing this task better than the average human, so as to enable strategy specification via language while reducing the potential risk of human errors or the need for third-party expert interpreters. In this section, we cover the technical details which make this task possible in a low-data setting." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.527, + 0.663, + 0.541 + ], + "angle": 0, + "content": "4.1 Text Encoder" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.549, + 0.884, + 0.773 + ], + "angle": 0, + "content": "We adopted the pretrained RoBERTa model (Liu et al., 2019) as our encoder, which is parameterized by \\(\\theta\\). 
The input sequence to our model is comprised of the language description of the strategy, \\(W = [w_{1}, w_{2}, \\ldots, w_{|W|}]\\), and troop selections \\(S = [s_{1}, s_{2}, \\ldots, s_{|S|}]\\), where each troop selection is comprised of the country name along with the number of troops placed on that country (e.g., \\(S = [Red\\_A = 2, Red\\_C = 8, Purple\\_D = 4]\\)). The encoder learns the embedding function, which maps the text input, comprised of the strategy \\(W\\) and selections \\(S\\), to a \\(d\\)-dimensional real-valued vector, which is then used to predict goals (\\(\\S 4.2\\)) and constraints (\\(\\S 4.3\\))." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.775, + 0.885, + 0.92 + ], + "angle": 0, + "content": "Ordinarily, the final embedding for the single [CLS] token learned by RoBERTa, i.e., \\( E_{\\theta} = BERT_{[CLS]}(W,S) \\), is used for classification. In this work, we incorporate multiple classification tokens (Chang et al., 2023), each of which corresponds to an individual goal or constraint. For the \\( i \\)th goal or constraint, we learn a separate classification embedding, \\( E_{\\theta}^{i} = BERT_{[CLS_{i}]}(W,S) \\). Using individual class-specific tokens improves the model" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "12804" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.169, + 0.081, + 0.805, + 0.271 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.283, + 0.885, + 0.341 + ], + "angle": 0, + "content": "Figure 2: Illustration of our Automatic Strategy Translation model. The input to the model includes the classification tokens, language description, and troop selections (Section 4.1). The encoder then generates embeddings for each classification token, and passes them onto an individual classification head. 
Each classification head is a fully-connected layer that predicts a probability distribution for the respective goal (\\(\\S 4.2\\)) or constraint (\\(\\S 4.3\\))." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.365, + 0.489, + 0.448 + ], + "angle": 0, + "content": "by giving it the capability to learn different attention weights corresponding to the classification embeddings for each goal or constraint. We utilize different encoders for predicting goals and constraints, which are parameterized by \\(\\theta_{g}\\) and \\(\\theta_{c}\\), respectively." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.456, + 0.342, + 0.47 + ], + "angle": 0, + "content": "4.2 Goal Extraction Model" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.476, + 0.49, + 0.589 + ], + "angle": 0, + "content": "We treat the subtask of deriving goals from language as an ordinal classification task. Originally, in our dataset, goals are specified as continuous values ranging from \\([-100, 100]\\), which we discretize by creating 5 uniform buckets, i.e., \\([-100, -60)\\), \\([-60, -20)\\), etc. That is, for each goal, we predict an assignment as a 5-class classification as:" + }, + { + "type": "equation", + "bbox": [ + 0.237, + 0.597, + 0.488, + 0.619 + ], + "angle": 0, + "content": "\\[\nP_{j} = L_{\\phi_{j}} \\left(E_{\\theta_{g}}^{j}\\right), \\tag{1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.625, + 0.49, + 0.74 + ], + "angle": 0, + "content": "where \\(P_{j}\\) represents the probability distribution across assignments for the \\(j\\)th goal and \\(E_{\\theta_g}^j\\) corresponds to the embedding from the encoder. Each goal uses a separate classification layer \\(L\\) parameterized by \\(\\phi_j\\). 
The goal extraction model is trained on a dual-criteria loss function that combines cross-entropy (CE) and mean-square-error (MSE) loss:" + }, + { + "type": "equation", + "bbox": [ + 0.174, + 0.749, + 0.488, + 0.767 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {\\text {g o a l}} = \\alpha \\mathcal {L} _ {C E} + (1 - \\alpha) \\mathcal {L} _ {M S E}, \\tag {2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.775, + 0.489, + 0.824 + ], + "angle": 0, + "content": "where \\(\\alpha\\) is a simple weighting hyperparameter. The addition of MSE loss helps to account for the ordinal nature of goal value predictions." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.833, + 0.391, + 0.848 + ], + "angle": 0, + "content": "4.3 Constraint Extraction Model" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.854, + 0.49, + 0.919 + ], + "angle": 0, + "content": "Similar to the goal extraction model, the input to each classification head for constraint prediction is \\( E_{\\theta_c}^k \\), which corresponds to the classification embedding learned by the encoder for the \\( k^{th} \\) constraint." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.365, + 0.885, + 0.51 + ], + "angle": 0, + "content": "However, unlike for the goal extraction model, each of the eight constraint classification heads learns to predict the constraint itself rather than a value for a fixed goal. Therefore, the model needs to predict the set of unordered constraints \\(\\{c_1, c_2, \\ldots, c_8\\}\\), wherein each \\(c_k\\) is predicted from the set of all possible constraints \\(C\\) (190 total possible constraints). Each strategy can have a maximum of eight constraints, i.e., the set \\(C\\) includes a null value." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.518, + 0.885, + 0.92 + ], + "angle": 0, + "content": "While providing constraints during data collection, participants merely assigned constraints to their strategy, but did not rank the ordering of constraints. 
As such, the order of constraints in our dataset does not necessarily correspond to the order in which each classification head needs to predict the constraints. Therefore, each classification head does not have a strict label it can utilize to compute a classification loss, making this task distinct from conventional sequence prediction or multiclass classification tasks. For instance, if the constraints predicted by the model are \\(\\{C,\\emptyset ,B,D\\}\\) and the labels for this strategy are \\(\\{A,B,C,\\emptyset \\}\\), utilizing a standard classification loss function, such as cross-entropy, would result in a higher loss than what is representative of the prediction, as three out of four constraints have been predicted correctly. As such, this task requires a loss function that allows us to train our model to predict the correct constraints for a language strategy agnostic of the ordering of the labels. We chose to adopt a recently proposed loss function called Order-Agnostic Cross Entropy (OaXE) (Du et al., 2021). Intuitively, OaXE is defined as the cross entropy for the best possible alignment of output tokens." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "12805" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.115, + 0.082, + 0.885, + 0.191 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.199, + 0.884, + 0.23 + ], + "angle": 0, + "content": "Figure 3: Pipeline for augmenting synthetic or human-created data (\\(\\S 4.4\\)). A strategy description is first split into sentences, then passed into the PEGASUS (Zhang et al., 2020) paraphrasing model and data quality filter." 
+ }, + { + "type": "text", + "bbox": [ + 0.113, + 0.253, + 0.49, + 0.334 + ], + "angle": 0, + "content": "Let \\( O = \\{O_1, O_2, \\ldots, O_{|O|}\\} \\) be the ordering space of all possible orderings of the target sequence of constraints, where each \\( O_l \\) is one possible ordering of the target tokens. The final loss function is computed as:" + }, + { + "type": "equation", + "bbox": [ + 0.202, + 0.348, + 0.489, + 0.365 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {O a X E} = - \\log P \\left(O ^ {*} \\mid X\\right) \\tag {3}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.377, + 0.49, + 0.475 + ], + "angle": 0, + "content": "where \\(O^{*}\\) represents the best possible alignment from \\(O\\). This alignment is computed by applying the Hungarian algorithm, after casting this problem as maximum bipartite matching (Du et al., 2021). As our final loss function, we follow Du et al. (2021) in combining OaXE with cross-entropy loss:" + }, + { + "type": "equation", + "bbox": [ + 0.124, + 0.488, + 0.489, + 0.505 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {\\text {c o n s t r a i n t}} = T _ {m} * \\mathcal {L} _ {C E} + (1 - T _ {m}) * \\mathcal {L} _ {O a X E} \\tag {4}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.517, + 0.49, + 0.582 + ], + "angle": 0, + "content": "where \\(T_{m}\\) is a temperature parameter that is logistically annealed from 1 to 0. In our case, cross entropy \\((\\mathcal{L}_{CE})\\) is computed using the default ordering of labels in our dataset." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.593, + 0.388, + 0.608 + ], + "angle": 0, + "content": "4.4 Data Augmentation Methods" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.614, + 0.49, + 0.92 + ], + "angle": 0, + "content": "Finally, we applied data augmentation procedures to improve our model's performance. 
First, we randomly generated 4000 unique sets of goals and constraints, and applied a text template to produce descriptions to develop a Synthetic (S) training corpus. For example, the constraint \"I must have troops on Red\" could be represented as \"My strategy is to take over Red,\" or \"I need a large army on Red,\" or \"I need to place troops on Red.\" We further augmented this synthetic corpus with a pretrained PEGASUS (Zhang et al., 2020) paraphrasing model to create an Augmented-Synthetic (AS) dataset. We split each language description from the synthetic corpus into individual sentences and employed the paraphrasing model to generate candidate paraphrases. Sentences that replaced important keywords, such as continent names, or were too similar to the original sentence in terms of edit distance were removed. We randomly chose" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.254, + 0.885, + 0.543 + ], + "angle": 0, + "content": "a sentence from the remaining candidates as a replacement sentence, and combined the replacement sentences to form an augmented data point (see Figure 3). The two Synthetic datasets (S, AS) were used to pretrain our model prior to training on human data. The same techniques were also applied to our human dataset to form an Augmented-Human dataset (AH). Our final Augmented-Human dataset is a version of our original crowdsourced dataset where each example is rephrased using our augmentation pipeline, and is twice the size of our original human dataset. We experiment with utilizing the AH dataset in place of the original human dataset to see if the added diversity in our corpus through paraphrasing improves downstream performance. Examples of Synthetic (S), Augmented-Synthetic (AS), and Augmented-Human (AH) data are provided in Figure 6 in the Appendix." 
+ }, + { + "type": "title", + "bbox": [ + 0.509, + 0.555, + 0.656, + 0.572 + ], + "angle": 0, + "content": "5 Experiments" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.582, + 0.884, + 0.791 + ], + "angle": 0, + "content": "This section will present the empirical evaluations of our approach. We design two evaluation experiments to contrast our model's performance with humans, as well as ChatGPT trained to perform our task through in-context learning. Both human and ChatGPT performance was computed using the 30 held-out examples in our test set. We statistically measure the difference in the average number of goals/constraints predicted correctly per data point between our model and the two baselines (Human + ChatGPT). We conclude with an ablation analysis across the model and data augmentations utilized in this approach." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.802, + 0.722, + 0.816 + ], + "angle": 0, + "content": "5.1 Human Performance" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.823, + 0.885, + 0.92 + ], + "angle": 0, + "content": "In our first study, we ask how well the average human can perform on the task of parsing strategic intent from language (see Table 1). We recruited 114 participants for our study from Prolific. Participants begin with a tutorial of the task and are provided an annotated example explaining how to" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "12806" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.12, + 0.082, + 0.486, + 0.142 + ], + "angle": 0, + "content": "
<table><tr><td>Baseline</td><td>Goals (Total = 6)</td><td>Constraints (Total = 8)</td></tr><tr><td>Model (Ours)</td><td>2.76 ± 1.05</td><td>5.53 ± 1.26</td></tr><tr><td>Human</td><td>1.87 ± 1.12</td><td>4.28 ± 1.83</td></tr><tr><td>ChatGPT</td><td>2.10 ± 1.27</td><td>3.80 ± 1.51</td></tr></table>
" + }, + { + "type": "table_caption", + "bbox": [ + 0.114, + 0.154, + 0.489, + 0.184 + ], + "angle": 0, + "content": "Table 1: Mean and standard deviations for the number of correct predictions of each approach." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.21, + 0.492, + 0.468 + ], + "angle": 0, + "content": "assign goals and constraints given a language description and map. Following this tutorial, each participant is provided three randomly selected maps and language descriptions from our test set of 30 unique data points and is asked to annotate the goals and constraints for each given strategy. Our study included attention checks to ensure participants who were submitting random responses could be excluded. The average time taken for our study was 21 minutes, and participants were paid $3.6 for completing our task. We utilized a data filtering rubric to identify and remove individual data points which were inadequate or were from participants who appeared to blatantly ignore or misunderstand the instructions. The rubric is included in Appendix F. After filtering, a total of 270 responses remained." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.479, + 0.345, + 0.493 + ], + "angle": 0, + "content": "5.2 ChatGPT Performance" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.5, + 0.49, + 0.789 + ], + "angle": 0, + "content": "We also evaluate ChatGPT (GPT-3.5 Default) as a baseline for our task (see Table 1). We design a 1000-word language prompt to train ChatGPT to perform the same task (see full prompt in Appendix G.1). This prompt includes a description of the environment and task, as well as an annotated example translating goals and constraints from language. Crucially, we design our prompt such that ChatGPT receives the same information that humans receive in our study in §5.1. Following this prompt, we iteratively input each strategy and troop deployment in our test set and store the constraints selected by ChatGPT. 
The additional prompt engineering we conduct is to notify ChatGPT when it makes formatting mistakes while predicting constraints, such as predicting more than the maximum number of constraints or creating new constraint classes." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.802, + 0.378, + 0.817 + ], + "angle": 0, + "content": "5.3 Results for Goal Extraction" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.823, + 0.491, + 0.919 + ], + "angle": 0, + "content": "The average number of goals predicted correctly per map can be seen in the first column of Table 1. We applied multivariate linear regression to compare the results of our model with our ChatGPT and human baselines, with the Akaike information criterion (AIC) as our Occam's razor. AIC is a mathematical" + }, + { + "type": "table", + "bbox": [ + 0.512, + 0.083, + 0.885, + 0.2 + ], + "angle": 0, + "content": "
Model Type | Data | Pretraining | Accuracy (Std)
RoBERTa base | - | - | 44.37 (1.33)
w/ troop | AH | AS | 46.04 (1.85)
w/ troop + [CLSi] | AH | AS | 45.52 (1.48)
w/ troop + [CLSi] | AH | S | 45.32 (1.01)
w/ troop + [CLSi] | AH | - | 45.89 (1.26)
w/ [CLSi] | AH | AS | 44.29 (1.14)
w/ troop + [CLSi] | H | - | 45.07 (1.33)
" + }, + { + "type": "table_caption", + "bbox": [ + 0.508, + 0.21, + 0.886, + 0.326 + ], + "angle": 0, + "content": "Table 2: Ablation study (10-fold cross-validation) with respect to model and data augmentations for goal extraction. H: the human-created dataset (§3.3); S: the synthetic dataset created from templates; AH/AS: the augmented version of H/S via paraphrasing (§4.4). \\([\\mathrm{CLS}_i]\\) represents the use of individual classification tokens for each goal/constraint (§4.1); \"troop\" represents the inclusion of troop selections as a part of the input." + }, + { + "type": "table", + "bbox": [ + 0.512, + 0.339, + 0.885, + 0.468 + ], + "angle": 0, + "content": "
Model | Data | Pretraining | Accuracy (Std)
RoBERTa base | H | - | 62.60 (1.60)
w/ troop + [CLSi] | H | S | 68.21 (1.08)
w/ troop + [CLSi] | AH | S | 67.79 (1.58)
w/ troop + [CLSi] | H | AS | 67.09 (1.28)
w/ troop | H | S | 65.96 (1.12)
w/ troop + [CLSi] | H | - | 65.76 (1.13)
w/ troop + [CLSi] | AH | - | 65.52 (1.42)
w/ [CLSi] | H | S | 65.31 (1.12)
" + }, + { + "type": "table_caption", + "bbox": [ + 0.508, + 0.479, + 0.884, + 0.507 + ], + "angle": 0, + "content": "Table 3: Ablation study (10-fold cross-validation) for constraint extraction." + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.534, + 0.886, + 0.92 + ], + "angle": 0, + "content": "method for assessing model fit so as to choose the regression model which best fits our data. For the goals model, we modeled each baseline (human vs. model vs. ChatGPT) as a fixed-effects covariate, and the datapoint number as a mixed-effects variable. The datapoint corresponded to the numerical index (between 1 and 30) of the datapoint from the test set. We performed Levene's test (Glass, 1966) to show homoscedasticity \\(F(2,327) = 0.5435, p = 0.581\\). The residuals for our model were not normally distributed; however, prior work has shown that an F-test is robust to non-normality (Blanca Mena et al., 2017; Cochran, 1947). Therefore, we proceeded with our linear regression analysis. The dependent variable within our analysis was the number of goals predicted correctly. An ANOVA with respect to our dependent variable yielded a significant difference across conditions \\(F(2,299.95) = 10.605, p < 0.001\\). A Tukey post-hoc test (Abdi and Williams, 2010) for pairwise significance further revealed a significant difference between the performance of our model vs. humans \\(p < 0.001\\) and vs. ChatGPT \\(p < 0.05\\), i.e., our approach was able to significantly predict" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "12807" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.115, + 0.085, + 0.411, + 0.1 + ], + "angle": 0, + "content": "goals better than humans and ChatGPT."
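The analysis pipeline for goal extraction (Levene's test, a one-way ANOVA, then a Tukey post-hoc test) can be sketched with standard statistical tooling. The scores below are synthetic stand-ins for the per-datapoint correct-prediction counts, so the resulting statistics are illustrative only, and a plain one-way ANOVA stands in for the paper's mixed-effects regression.

```python
# Illustrative sketch of the goal-extraction analysis: Levene's test for
# homoscedasticity followed by a one-way ANOVA. The scores are synthetic
# stand-ins for per-datapoint correct-prediction counts (not the paper's
# data), and a plain one-way ANOVA stands in for the mixed-effects model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
model   = rng.normal(4.0, 1.0, 30)   # our model, one score per test datapoint
human   = rng.normal(3.2, 1.0, 30)   # human baseline
chatgpt = rng.normal(3.5, 1.0, 30)   # ChatGPT baseline

# Levene's test: a large p-value means equal group variances are plausible,
# supporting the F-test's homoscedasticity assumption.
lev_stat, lev_p = stats.levene(model, human, chatgpt)

# One-way ANOVA: do the mean correct-prediction counts differ at all?
f_stat, anova_p = stats.f_oneway(model, human, chatgpt)
# A significant omnibus result would then be followed by a Tukey HSD
# post-hoc test for the pairwise comparisons (model vs. human, model vs. ChatGPT).
print(f"Levene p = {lev_p:.3f}; ANOVA F = {f_stat:.2f}, p = {anova_p:.4f}")
```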
+ }, + { + "type": "title", + "bbox": [ + 0.114, + 0.121, + 0.423, + 0.135 + ], + "angle": 0, + "content": "5.4 Results for Constraint Extraction" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.147, + 0.49, + 0.453 + ], + "angle": 0, + "content": "The average number of constraints predicted correctly per map can be seen in column 2 of Table 1. To compare our constraint prediction model to our human and ChatGPT baselines, we conducted a non-parametric Friedman's test (Pereira et al., 2015). We could not employ a multivariate regression analysis, as the regression model for constraints did not satisfy the assumption of homoscedasticity as per Levene's test \\(F(2,327) = 5.4294, p < 0.01\\). Friedman's test yielded a significant difference across conditions for the task of predicting constraints \\(\\chi^2(2,90) = 16.768, p < 0.001\\). A further pairwise Wilcoxon signed-rank test (Woolson, 2007) revealed a significant difference between humans and our model \\(p < 0.001\\) as well as ChatGPT and our model \\(p < 0.001\\), indicating that our approach is able to significantly outperform not just humans but also ChatGPT for inferring constraints from language." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.472, + 0.245, + 0.487 + ], + "angle": 0, + "content": "5.5 Discussion" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.499, + 0.49, + 0.756 + ], + "angle": 0, + "content": "Our results emphasize that inferring strategic intent from language is a non-trivial task, as language interpretation can be subjective and malleable. ChatGPT is capable of performing novel tasks such as text classification (Li et al., 2023), mathematical problem solving (Frieder et al., 2023), and information extraction (He et al., 2023) through in-context learning. However, despite these capabilities, our model was found to significantly outperform ChatGPT for inferring strategic intent from language.
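The non-parametric route used for constraint extraction in §5.4 (Friedman's test followed by pairwise Wilcoxon signed-rank tests) can be sketched as follows; the scores below are synthetic placeholders, not the study's data, so the statistics are illustrative only.

```python
# Illustrative sketch of the non-parametric analysis used for constraint
# extraction: Friedman's test over three related conditions, followed by
# pairwise Wilcoxon signed-rank tests. The scores are synthetic
# placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 30                                # one score per test datapoint
model   = rng.normal(5.0, 1.0, n)
human   = rng.normal(4.0, 1.0, n)
chatgpt = rng.normal(4.2, 1.0, n)

# Friedman's test: omnibus check across the paired conditions.
chi2, fried_p = stats.friedmanchisquare(model, human, chatgpt)

# Post-hoc pairwise comparisons on the paired scores.
_, p_model_human   = stats.wilcoxon(model, human)
_, p_model_chatgpt = stats.wilcoxon(model, chatgpt)
print(f"Friedman chi2 = {chi2:.2f} (p = {fried_p:.4f}); "
      f"model vs. human p = {p_model_human:.4f}, "
      f"model vs. ChatGPT p = {p_model_chatgpt:.4f}")
```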
Success in highly specific and complex language interpretation tasks, such as ours, requires the model to build an understanding of the domain and the task itself, as the generic language interpretation learned by the majority of pretrained language models may not be applicable." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.759, + 0.49, + 0.92 + ], + "angle": 0, + "content": "Recent work on evaluating open question-answering on a challenge dataset has shown that even for large-scale language models with between 6B and 100B parameters, none of these models outperformed humans (Peinl and Wirth, 2023). By developing a computational interface which can infer strategic intent from language significantly better than humans, we show the usefulness of our pipeline towards solving complex domain-specific tasks in a low-data, low-resource setting." + }, + { + "type": "table", + "bbox": [ + 0.511, + 0.082, + 0.885, + 0.137 + ], + "angle": 0, + "content": "
Baseline | Constraints | Goals
Roberta-base (Best) | 68.21 (1.08) | 46.04 (1.85)
GPT-Neo 125M (Best) | 65.22 (1.21) | 46.08 (0.73)
" + }, + { + "type": "table_caption", + "bbox": [ + 0.508, + 0.15, + 0.884, + 0.195 + ], + "angle": 0, + "content": "Table 4: This table depicts the performance when the roberta-base encoder is substituted with a SOTA autoregressive model, i.e., GPT-Neo (125 million parameters)." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.219, + 0.674, + 0.234 + ], + "angle": 0, + "content": "5.6 Ablation Study" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.239, + 0.885, + 0.641 + ], + "angle": 0, + "content": "Tables 2 and 3 provide the results from ablating each model augmentation discussed in Section 4. The effects of these augmentations are more prominent in the model for predicting constraints (\\(\\sim\\) 6% performance boost) than predicting goals (\\(\\sim\\) 1.5% performance boost). For the constraints model, when any component, i.e., troop selections, pretraining, or the CLS tokens, was removed, the accuracy dropped by \\(\\sim\\) 3% in each case. For predicting goals, the inclusion of troop selections was the only model augmentation which seemed to have a decisive impact on performance, as all models with selections had an accuracy \\(\\sim\\) 1% higher than those without. We attribute the difficulty in improving the performance of the goals model to the contextual ambiguity of the values assigned to each goal. Participants may not always follow the same metric while specifying goal values. Each participant could have a unique interpretation of what any rating between -100 and 100 means for a particular goal, and of how to describe that value through language (see Appendix for the data distribution corresponding to each goal). This disparity in interpreting values could affect the consistency of language descriptions for goals in our dataset." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.642, + 0.884, + 0.85 + ], + "angle": 0, + "content": "Finally, the last ablation studied the effect of the type of encoder utilized in our approach.
Therefore, we performed a comparison with a model which replaced the encoder with a SOTA pretrained autoregressive model. We utilized GPT-Neo (Black et al., 2021) for our experiments, as it has the same number of parameters as Roberta-base (125 million). Our findings (see Table 4) show that utilizing an autoregressive model as our encoder offers no benefit over a roberta-base model: the GPT-Neo model performed equivalently for predicting goals and about \\(3\\%\\) worse for the constraints model." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.863, + 0.642, + 0.877 + ], + "angle": 0, + "content": "6 Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.888, + 0.884, + 0.919 + ], + "angle": 0, + "content": "In this paper, we develop a novel computational interface to automate inferring strategic intent, in the" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "12808" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.117, + 0.085, + 0.487, + 0.325 + ], + "angle": 0, + "content": "form of goals and constraints, from unstructured language descriptions of strategies. We develop a new benchmark for our dataset and broader task, and further conduct a novel head-to-head evaluation to determine the relative efficacy of our approach. We show that in a low-data setting, our approach towards inferring goals and constraints from language strategy descriptions can significantly outperform humans for the same tasks. Furthermore, we also found that our approach, with only 125 million parameters, was able to significantly outperform ChatGPT for inferring strategic intent from language. Our work endows researchers with valuable tools to further Seldonian optimization approaches for mixed-initiative interaction."
+ }, + { + "type": "title", + "bbox": [ + 0.117, + 0.34, + 0.231, + 0.354 + ], + "angle": 0, + "content": "Future Work" + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.366, + 0.487, + 0.815 + ], + "angle": 0, + "content": "To measure ChatGPT performance, we employ a one-shot chain-of-thought prompt method with detailed instructions for the task. We chose this method to maintain consistency between the information shown to humans and ChatGPT. Future work may explore ablations on the size of the initial prompt or the number of annotated examples in the prompt to tune the performance of ChatGPT on our strategy translation task. Secondly, an important next step that stems from this research pertains to multi-round inference and updating the initially learned strategy. In future work, it would be helpful to develop methods to allow users to modify their initial strategy throughout the game or task as their goals or values change. These methods could utilize approaches proposed in prior work wherein language inputs were leveraged to change the sub-goals that an agent is considering (Fu et al., 2019; Goyal et al., 2019). Furthermore, recent work has shown promise for the capabilities of ChatGPT/GPT-3.5 towards dialog-state tracking and task-oriented dialog (Labruna et al., 2023; Heck et al., 2023). Future work could also formulate this task of updating the initial strategy over the course of the game as a goal-oriented dialog, and tune GPT-3.5 or GPT-4 to update a user's initially translated strategy after multiple rounds of the game through language feedback." + }, + { + "type": "title", + "bbox": [ + 0.117, + 0.83, + 0.218, + 0.844 + ], + "angle": 0, + "content": "Limitations" + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.856, + 0.487, + 0.919 + ], + "angle": 0, + "content": "Firstly, we asked participants to provide natural language descriptions after providing their structured intent in the form of goals and constraints.
This potentially biased the participant towards specifically" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.085, + 0.882, + 0.486 + ], + "angle": 0, + "content": "referencing the terminology utilized in the goals and constraints. While our dataset provides explanations that are the closest to natural, human-like descriptions of strategies, an important next step would entail comparing how our model performs on strategies collected \"in-the-wild.\" Secondly, in this paper we assume that utilizing language is more accessible than learning to use mathematical specifications directly to specify one's intent to an intelligent agent. However, we do not test whether this assumption bears out in practice. In future work, we hope to develop a human-subjects study to confirm this hypothesis. Finally, despite converting language to goals and constraints, in this work we do not directly train a Seldonian optimization approach. In this work, we focus on showing the capability of our machine learning pipeline in a low-data setting. However, we have provided all the components needed to train a reinforcement learning approach for constraining an RL agent's behavior through unstructured language (including a novel OpenAI Gym RL domain for the game Risk; see Appendix). Developing this approach is currently outside the scope of this work, and we therefore leave this exploration for future work." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.501, + 0.659, + 0.515 + ], + "angle": 0, + "content": "Ethics Statement" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.527, + 0.882, + 0.75 + ], + "angle": 0, + "content": "As pretrained large-language models are utilized in our approach for automated strategy translation, we need to be cognizant of the prevalence of bias within these models.
If these systems are translating strategies in safety-critical settings, it is important to make sure that the language models make decisions based solely on the provided context rather than any inherent bias. Many prior works have studied approaches to identify and mitigate bias (Abid et al., 2021; Silva et al., 2021b; Guo et al., 2022; Viswanath and Zhang, 2023). We encourage authors to seek out such works prior to deploying any strategy translation module for a real-world task." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.765, + 0.68, + 0.78 + ], + "angle": 0, + "content": "Acknowledgements" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.792, + 0.882, + 0.902 + ], + "angle": 0, + "content": "This work was supported by the Office of Naval Research under awards N00014-19-1-2076, N00014-22-1-2834, N00014-23-1-2887, and the National Science Foundation under award FMRG-2229260. We also thank Konica Minolta for their contribution to this work via a gift to the Georgia Tech Research Foundation." + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.929, + 0.525, + 0.941 + ], + "angle": 0, + "content": "12809" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.116, + 0.085, + 0.214, + 0.099 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.107, + 0.486, + 0.133 + ], + "angle": 0, + "content": "2014. Online participant recruitment for surveys and market research." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.142, + 0.487, + 0.182 + ], + "angle": 0, + "content": "Herve Abdi and Lynne J Williams. 2010. Tukey's honestly significant difference (HSD) test. Encyclopedia of research design, 3(1):1-5." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.19, + 0.487, + 0.243 + ], + "angle": 0, + "content": "Abubakar Abid, Maheen Farooqi, and James Zou. 2021. Persistent anti-Muslim bias in large language models.
In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pages 298-306." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.252, + 0.487, + 0.279 + ], + "angle": 0, + "content": "Léo Andeol. 2018. Leoandeol/gym-risk: Gym environment for the Risk game by Hasbro." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.287, + 0.487, + 0.34 + ], + "angle": 0, + "content": "Jacob Andreas, Dan Klein, and Sergey Levine. 2017. Modular multitask reinforcement learning with policy sketches. In International Conference on Machine Learning, pages 166-175. PMLR." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.348, + 0.487, + 0.413 + ], + "angle": 0, + "content": "Dzmitry Bahdanau, Felix Hill, Jan Leike, Edward Hughes, Arian Hosseini, Pushmeet Kohli, and Edward Grefenstette. 2018. Learning to understand goal specifications by modelling reward. arXiv preprint arXiv:1806.01946." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.422, + 0.487, + 0.489 + ], + "angle": 0, + "content": "Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. 2021. GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.497, + 0.487, + 0.55 + ], + "angle": 0, + "content": "María José Blanca Mena, Rafael Alarcón Postigo, Jaume Arnau Gras, Roser Bono Cabré, Rebecca Bendayan, et al. 2017. Non-normal data: Is ANOVA still a valid option? Psicothema." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.559, + 0.487, + 0.624 + ], + "angle": 0, + "content": "Valts Blukis, Yannick Terme, Eyvind Niklasson, Ross A Knepper, and Yoav Artzi. 2019. Learning to map natural language instructions to physical quadcopter control using simulated flight. arXiv preprint arXiv:1910.09664."
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.633, + 0.487, + 0.7 + ], + "angle": 0, + "content": "Haw-Shiuan Chang, Ruei-Yao Sun, Kathryn Ricci, and Andrew McCallum. 2023. Multi-CLS BERT: An efficient alternative to traditional ensembling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.707, + 0.487, + 0.747 + ], + "angle": 0, + "content": "William G Cochran. 1947. Some consequences when the assumptions for the analysis of variance are not satisfied. Biometrics, 3(1):22-38." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.756, + 0.487, + 0.796 + ], + "angle": 0, + "content": "Richard Dempsey and Jonathan M Chavous. 2013. Commander's intent and concept of operations. Military Review, 93(6):58-66." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.804, + 0.487, + 0.844 + ], + "angle": 0, + "content": "Cunxiao Du, Zhaopeng Tu, and Jing Jiang. 2021. Order-agnostic cross entropy for non-autoregressive machine translation. arXiv preprint arXiv:2106.05093." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.852, + 0.487, + 0.919 + ], + "angle": 0, + "content": "Simon Frieder, Luca Pinchetti, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz, Philipp Christian Petersen, Alexis Chevalier, and Julius Berner. 2023. Mathematical capabilities of chatgpt. arXiv preprint arXiv:2301.13867." + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.107, + 0.487, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.086, + 0.882, + 0.139 + ], + "angle": 0, + "content": "Justin Fu, Anoop Korattikara, Sergey Levine, and Sergio Guadarrama. 2019. From language to goals: Inverse reinforcement learning for vision-based instruction following. arXiv preprint arXiv:1902.07742." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.152, + 0.882, + 0.218 + ], + "angle": 0, + "content": "Richard Gibson, Neesha Desai, and Richard Zhao. 2010. An automated technique for drafting territories in the board game Risk. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 6(1):15-20." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.232, + 0.882, + 0.27 + ], + "angle": 0, + "content": "Gene V Glass. 1966. Testing homogeneity of variances. American Educational Research Journal, 3(3):187-190." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.285, + 0.882, + 0.35 + ], + "angle": 0, + "content": "Nakul Gopalan, Dilip Arumugam, Lawson Wong, and Stefanie Tellex. 2018. Sequence-to-Sequence Language Grounding of Non-Markovian Task Specifications. In Proceedings of Robotics: Science and Systems, Pittsburgh, Pennsylvania." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.364, + 0.882, + 0.416 + ], + "angle": 0, + "content": "Prasoon Goyal, Scott Niekum, and Raymond J Mooney. 2019. Using natural language for reward shaping in reinforcement learning. arXiv preprint arXiv:1903.02020." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.43, + 0.882, + 0.508 + ], + "angle": 0, + "content": "Yue Guo, Yi Yang, and Ahmed Abbasi. 2022. Autodebias: Debiasing masked language models with automated biased prompts. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1012-1023." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.522, + 0.882, + 0.576 + ], + "angle": 0, + "content": "Philip J Hayes. 1985. The utility of natural language interfaces (panel session). In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, page 19." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.589, + 0.882, + 0.654 + ], + "angle": 0, + "content": "Jiabang He, Lei Wang, Yi Hu, Ning Liu, Hui Liu, Xing Xu, and Heng Tao Shen. 2023. Icl-d3ie: In-context learning with diverse demonstrations updating for document information extraction. arXiv preprint arXiv:2303.05063." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.668, + 0.882, + 0.746 + ], + "angle": 0, + "content": "Michael Heck, Nurul Lubis, Benjamin Ruppik, Renato Vukovic, Shutong Feng, Christian Geishauser, Hsien-Chin Lin, Carel van Niekerk, and Milica Gašić. 2023. Chatgpt for zero-shot dialogue state tracking: A solution or an opportunity? arXiv preprint arXiv:2306.01386." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.761, + 0.882, + 0.813 + ], + "angle": 0, + "content": "Arie W Kruglanski. 1996. Goals as knowledge structures. P. M. Gollwitzer & J. A. Bargh (Eds.), The psychology of action: Linking cognition and motivation to behavior, pages 599-618." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.826, + 0.882, + 0.919 + ], + "angle": 0, + "content": "Geert-Jan M Kruijff, M Janicek, Shanker Keshavdas, Benoit Larochelle, Hendrik Zender, Ninja JJM Smets, Tina Mioch, Mark A Neerincx, Jurriaan Van Diggelen, Francis Colas, et al. 2014. Experience in system design for human-robot teaming in urban search and rescue. In Field and Service Robotics, pages 111-125. Springer." + }, + { + "type": "list", + "bbox": [ + 0.511, + 0.086, + 0.882, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.526, + 0.941 + ], + "angle": 0, + "content": "12810" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.086, + 0.487, + 0.139 + ], + "angle": 0, + "content": "Tiziano Labruna, Sofia Brenna, Andrea Zaninello, and Bernardo Magnini. 2023. Unraveling chatgpt: A critical analysis of ai-generated goal-oriented dialogues and annotations. arXiv preprint arXiv:2305.14556." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.148, + 0.487, + 0.2 + ], + "angle": 0, + "content": "Jiazheng Li, Runcong Zhao, Yulan He, and Lin Gui. 2023. Overprompt: Enhancing chatgpt capabilities through an efficient in-context learning approach. arXiv preprint arXiv:2305.14973." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.21, + 0.486, + 0.275 + ], + "angle": 0, + "content": "Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2022. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.284, + 0.487, + 0.35 + ], + "angle": 0, + "content": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.36, + 0.487, + 0.424 + ], + "angle": 0, + "content": "Joseph E Mercado, Michael A Rupp, Jessie YC Chen, Michael J Barnes, Daniel Barber, and Katelyn Procci. 2016. Intelligent agent transparency in human-agent teaming for multi-uxv management. Human factors, 58(3):401-415." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.434, + 0.487, + 0.498 + ], + "angle": 0, + "content": "Dipendra Misra, Andrew Bennett, Valts Blukis, Eyvind Niklasson, Max Shatkhin, and Yoav Artzi. 2018. Mapping instructions to actions in 3d environments with visual goal prediction. arXiv preprint arXiv:1809.00786." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.509, + 0.487, + 0.535 + ], + "angle": 0, + "content": "Gordon B Moskowitz and Heidi Grant. 2009. The psychology of goals. Guilford press." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.544, + 0.486, + 0.57 + ], + "angle": 0, + "content": "Michael J Muller and Sarah Kuhn. 1993. 
Participatory design. Communications of the ACM, 36(6):24-28." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.58, + 0.487, + 0.644 + ], + "angle": 0, + "content": "Thomas Nickles. 1978. Scientific problems and constraints. In PSA: Proceedings of the biennial meeting of the Philosophy of Science Association, volume 1978, pages 134-148. Philosophy of Science Association." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.655, + 0.486, + 0.707 + ], + "angle": 0, + "content": "David G Novick and Stephen Sutton. 1997. What is mixed-initiative interaction. In Proceedings of the AAAI spring symposium on computational models for mixed initiative interaction, volume 2, page 12." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.716, + 0.487, + 0.793 + ], + "angle": 0, + "content": "Aishwarya Padmakumar, Jesse Thomason, Ayush Shrivastava, Patrick Lange, Anjali Narayan-Chen, Spandana Gella, Robinson Piramuthu, Gokhan Tur, and Dilek Hakkani-Tur. 2021. Teach: Task-driven embodied agents that chat. arXiv preprint arXiv:2110.00534." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.804, + 0.487, + 0.856 + ], + "angle": 0, + "content": "René Peinl and Johannes Wirth. 2023. Evaluation of medium-large language models at zero-shot closed book generative question answering. arXiv preprint arXiv:2305.11991." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.866, + 0.487, + 0.918 + ], + "angle": 0, + "content": "Dulce G Pereira, Anabela Afonso, and Fátima Melo Medeiros. 2015. Overview of Friedman's test and post-hoc analysis. Communications in Statistics-Simulation and Computation, 44(10):2636-2653." + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.487, + 0.918 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.086, + 0.882, + 0.151 + ], + "angle": 0, + "content": "Ian C Rankin, Seth McCammon, and Geoffrey A Hollinger. 2021. Robotic information gathering using semantic language instructions.
In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 4882-4888. IEEE." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.166, + 0.882, + 0.23 + ], + "angle": 0, + "content": "Joseph Rosen, Eliot Grigg, Jaron Lanier, Susan McGrath, Scott Lillibridge, David Sargent, and C Everett Koop. 2002. The future of command and control for disaster response. IEEE engineering in medicine and biology magazine, 21(5):56-68." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.244, + 0.882, + 0.296 + ], + "angle": 0, + "content": "Mohit Shridhar, Lucas Manuelli, and Dieter Fox. 2022. Cliport: What and where pathways for robotic manipulation. In Conference on Robot Learning, pages 894–906. PMLR." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.311, + 0.882, + 0.375 + ], + "angle": 0, + "content": "Andrew Silva, Nina Moorman, William Silva, Zulfiqar Zaidi, Nakul Gopalan, and Matthew Gombolay. 2021a. Lancon-learn: Learning with language to enable generalization in multi-task manipulation. IEEE Robotics and Automation Letters." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.39, + 0.882, + 0.482 + ], + "angle": 0, + "content": "Andrew Silva, Pradyumna Tambwekar, and Matthew Gombolay. 2021b. Towards a comprehensive understanding and accurate evaluation of societal biases in pre-trained transformers. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2383-2389." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.496, + 0.882, + 0.56 + ], + "angle": 0, + "content": "Alane Suhr, Claudia Yan, Jacob Schluger, Stanley Yu, Hadi Khader, Marwa Mouallem, Iris Zhang, and Yoav Artzi. 2019. Executing instructions in situated collaborative interactions. arXiv preprint arXiv:1910.03655."
+ }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.575, + 0.882, + 0.626 + ], + "angle": 0, + "content": "Pradyumna Tambwekar, Andrew Silva, Nakul Gopalan, and Matthew Gombolay. 2021. Interpretable policy specification and synthesis through natural language and RL." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.641, + 0.882, + 0.68 + ], + "angle": 0, + "content": "Stefanie Tellex, Nakul Gopalan, Hadas Kress-Gazit, and Cynthia Matuszek. 2020. Robots that use language. Annual Review of Control, Robotics, and Autonomous Systems, 3:25-55." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.694, + 0.882, + 0.747 + ], + "angle": 0, + "content": "Philip S Thomas, Bruno Castro da Silva, Andrew G Barto, and Emma Brunskill. 2017. On ensuring that intelligent machines are well-behaved. arXiv preprint arXiv:1708.05448." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.761, + 0.882, + 0.813 + ], + "angle": 0, + "content": "Philip S Thomas, Bruno Castro da Silva, Andrew G Barto, Stephen Giguere, Yuriy Brun, and Emma Brunskill. 2019. Preventing undesirable behavior of intelligent machines. Science, 366(6468):999-1004." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.827, + 0.882, + 0.918 + ], + "angle": 0, + "content": "Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Nick Walker, Yuqian Jiang, Harel Yedidsion, Justin Hart, Peter Stone, and Raymond J Mooney. 2019. Improving grounded natural language understanding through human-robot dialog. In 2019 International Conference on Robotics and Automation (ICRA), pages 6934-6941. IEEE." + }, + { + "type": "list", + "bbox": [ + 0.511, + 0.086, + 0.882, + 0.918 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.524, + 0.941 + ], + "angle": 0, + "content": "12811" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.116, + 0.086, + 0.488, + 0.138 + ], + "angle": 0, + "content": "Hrishikesh Viswanath and Tianyi Zhang. 2023.
Fairpy: A toolkit for evaluation of social biases and their mitigation in large language models. arXiv preprint arXiv:2302.05508." + }, + { + "type": "text", + "bbox": [ + 0.116, + 0.148, + 0.488, + 0.226 + ], + "angle": 0, + "content": "Edward C Williams, Nakul Gopalan, Mine Rhee, and Stefanie Tellex. 2018. Learning to parse natural language to grounded reward functions with weak supervision. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 4430-4436. IEEE." + }, + { + "type": "text", + "bbox": [ + 0.116, + 0.238, + 0.488, + 0.264 + ], + "angle": 0, + "content": "Robert F Woolson. 2007. Wilcoxon signed-rank test. Wiley encyclopedia of clinical trials, pages 1-3." + }, + { + "type": "text", + "bbox": [ + 0.116, + 0.274, + 0.488, + 0.326 + ], + "angle": 0, + "content": "Tsung-Yen Yang, Michael Hu, Yinlam Chow, Peter J Ramadge, and Karthik Narasimhan. 2020. Safe reinforcement learning with natural language constraints. arXiv preprint arXiv:2010.05150." + }, + { + "type": "text", + "bbox": [ + 0.116, + 0.337, + 0.488, + 0.401 + ], + "angle": 0, + "content": "Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In International Conference on Machine Learning, pages 11328-11339. PMLR." + }, + { + "type": "title", + "bbox": [ + 0.116, + 0.428, + 0.443, + 0.442 + ], + "angle": 0, + "content": "A Additional Data Collection Details" + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.453, + 0.488, + 0.758 + ], + "angle": 0, + "content": "Our study applied participatory design principles (Muller and Kuhn, 1993) to ensure that participants were engaged in the task and provided meaningful strategy descriptions. Each participant was initially given a partially set up map, where two other \"opponents\" had placed their troops. The participant was then asked to provide their troop placements, based on these initial placements.
In Risk, the initial troop placements have a substantial impact on the strategies that a player can pursue for the rest of the game. As such, troop initialization provides a stand-in for a player's overall strategy in a game. By asking participants to take part in an actual aspect of the gameplay, e.g., deploying troops, participants were encouraged to envision future situations, think about how their decisions could affect future gameplay, and develop grounded strategies that could actually function as viable Risk gameplay strategies." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.759, + 0.488, + 0.918 + ], + "angle": 0, + "content": "Next, participants were asked to provide the goals and constraints that they considered after selecting their troop placements. These specific goals and constraints were selected as they cater to potential strategies that could be employed while playing Risk. The presence of these templates provided a scaffold within which participants, who may or may not have any experience with Risk, could ground their strategies. However, it is important to acknowledge the presence of an inductive" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.884, + 0.47 + ], + "angle": 0, + "content": "bias, due to the specific wording of the goal and constraint templates, which could have impacted the strategies submitted by the participants. For goals, participants were asked to rate how important each goal was to their strategy on a scale of -100 to 100. A score of -100 indicated that pursuing the goal was completely detrimental to their strategy, while 100 indicated that pursuing the goal was essential to their strategy. For constraints, participants were provided 9 constraint templates and were asked to select and fill in the appropriate constraint that was represented in their strategy. Participants were required to provide at least three constraints to ensure that they did not skip this question. 
The specific goals and constraints in our dataset are shown in Table 5. Finally, participants were asked to summarize their strategy for the given map as a language description. Participants were encouraged to include references to their goals and constraints, but these descriptions were otherwise unprompted. Participants were paid up to $8.50 based on the number of adequate responses submitted. The payment scale was updated if the average time taken significantly changed." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.474, + 0.884, + 0.585 + ], + "angle": 0, + "content": "As mentioned in the paper, we created three additional augmented datasets from our original corpus. Figure 6 provides some examples of the effect of the various augmentations we employed in each augmented dataset. Our full dataset can be found at the following anonymized GitHub repository - Anonymized Data Repository." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.605, + 0.747, + 0.62 + ], + "angle": 0, + "content": "A.1 Data Cleaning/Filtering" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.63, + 0.884, + 0.918 + ], + "angle": 0, + "content": "We made as few modifications as possible to participants' responses, ensuring that responses were self-consistent while preserving the integrity of the organic data collection task. If a participant specifically referenced a goal or a constraint in their language but did not include it in their response, then their response was modified to include it, and vice versa. We also corrected typos within a participant's specifications, such as if they meant to reference the \"Blue\" continent instead of the \"Red\" continent. If a response could not be salvaged with minimal modifications, it was discarded. Discarded responses included responses where participants simply did not understand the task or submitted blatantly insincere responses, such as copying text from the study multiple times to reach the character limit. 
These decisions were made upon the agreement of multiple reviewers." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.526, + 0.941 + ], + "angle": 0, + "content": "12812" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.117, + 0.086, + 0.351, + 0.21 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.372, + 0.086, + 0.609, + 0.21 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.637, + 0.087, + 0.871, + 0.21 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.117, + 0.228, + 0.349, + 0.352 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.374, + 0.229, + 0.609, + 0.351 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.637, + 0.229, + 0.882, + 0.351 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.361, + 0.882, + 0.377 + ], + "angle": 0, + "content": "Figure 4: Distribution of assigned values for each goal. The titles for each goal have been shortened for readability." + }, + { + "type": "image", + "bbox": [ + 0.317, + 0.394, + 0.688, + 0.589 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.278, + 0.604, + 0.72, + 0.62 + ], + "angle": 0, + "content": "Figure 5: Distribution of assigned values for each constraint type." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.644, + 0.332, + 0.66 + ], + "angle": 0, + "content": "A.2 Data Collection Quiz" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.666, + 0.49, + 0.81 + ], + "angle": 0, + "content": "In order to ensure that participants understood the rules of Risk prior to providing strategies for our dataset, each participant was asked to answer a five-question quiz. Participants needed to answer all questions correctly to proceed. Participants were given three tries to answer the questions, after which they were shown the correct answers. 
The five questions in our quiz were as follows (correct answers to each question are in bold):" + }, + { + "type": "text", + "bbox": [ + 0.13, + 0.823, + 0.489, + 0.839 + ], + "angle": 0, + "content": "1. Which of these is NOT a phase in the game?" + }, + { + "type": "text", + "bbox": [ + 0.16, + 0.848, + 0.243, + 0.862 + ], + "angle": 0, + "content": "(a) Attack" + }, + { + "type": "text", + "bbox": [ + 0.16, + 0.866, + 0.247, + 0.88 + ], + "angle": 0, + "content": "(b) Recruit" + }, + { + "type": "text", + "bbox": [ + 0.16, + 0.885, + 0.398, + 0.901 + ], + "angle": 0, + "content": "(c) Control opponent's troops" + }, + { + "type": "text", + "bbox": [ + 0.16, + 0.904, + 0.268, + 0.918 + ], + "angle": 0, + "content": "(d) Maneuver" + }, + { + "type": "list", + "bbox": [ + 0.16, + 0.848, + 0.398, + 0.918 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.644, + 0.806, + 0.66 + ], + "angle": 0, + "content": "2. What is the objective of the game?" + }, + { + "type": "text", + "bbox": [ + 0.554, + 0.667, + 0.82, + 0.682 + ], + "angle": 0, + "content": "(a) Control the rightmost continent" + }, + { + "type": "text", + "bbox": [ + 0.554, + 0.685, + 0.885, + 0.714 + ], + "angle": 0, + "content": "(b) Have the maximum number of island territories" + }, + { + "type": "text", + "bbox": [ + 0.555, + 0.719, + 0.872, + 0.734 + ], + "angle": 0, + "content": "(c) Have the most territories after 10 turns" + }, + { + "type": "text", + "bbox": [ + 0.555, + 0.737, + 0.86, + 0.752 + ], + "angle": 0, + "content": "(d) Occupy all territories on the board" + }, + { + "type": "list", + "bbox": [ + 0.554, + 0.667, + 0.885, + 0.752 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.763, + 0.885, + 0.81 + ], + "angle": 0, + "content": "3. Which of these decides how many troops you receive at the start of each turn? 
(TWO CORRECT ANSWERS)" + }, + { + "type": "text", + "bbox": [ + 0.554, + 0.818, + 0.882, + 0.834 + ], + "angle": 0, + "content": "(a) The number of territories you control" + }, + { + "type": "text", + "bbox": [ + 0.554, + 0.836, + 0.881, + 0.868 + ], + "angle": 0, + "content": "(b) The number of coastal territories on the map" + }, + { + "type": "text", + "bbox": [ + 0.554, + 0.87, + 0.86, + 0.886 + ], + "angle": 0, + "content": "(c) The physical size of the board game" + }, + { + "type": "text", + "bbox": [ + 0.554, + 0.888, + 0.885, + 0.919 + ], + "angle": 0, + "content": "(d) The number of continents you fully occupy" + }, + { + "type": "list", + "bbox": [ + 0.554, + 0.818, + 0.885, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.525, + 0.941 + ], + "angle": 0, + "content": "12813" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.13, + 0.085, + 0.488, + 0.133 + ], + "angle": 0, + "content": "4. Which of the following statements are correct about attacking enemy territories in the game? 
(TWO CORRECT ANSWERS)" + }, + { + "type": "text", + "bbox": [ + 0.16, + 0.142, + 0.488, + 0.188 + ], + "angle": 0, + "content": "(a) When you attack a territory you've already attacked, your attack points are doubled" + }, + { + "type": "text", + "bbox": [ + 0.16, + 0.192, + 0.486, + 0.223 + ], + "angle": 0, + "content": "(b) You CANNOT attack in the opposite direction of the arrows" + }, + { + "type": "text", + "bbox": [ + 0.16, + 0.227, + 0.486, + 0.257 + ], + "angle": 0, + "content": "(c) You can only attack territories you have access to" + }, + { + "type": "text", + "bbox": [ + 0.16, + 0.262, + 0.486, + 0.292 + ], + "angle": 0, + "content": "(d) You can never attack a territory in the same continent" + }, + { + "type": "list", + "bbox": [ + 0.16, + 0.142, + 0.488, + 0.292 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.13, + 0.304, + 0.488, + 0.352 + ], + "angle": 0, + "content": "5. Which of the following statements are true regarding how attacks are conducted? 
(TWO CORRECT ANSWERS)" + }, + { + "type": "text", + "bbox": [ + 0.16, + 0.361, + 0.486, + 0.39 + ], + "angle": 0, + "content": "(a) A player with scattered troops always wins" + }, + { + "type": "text", + "bbox": [ + 0.16, + 0.395, + 0.488, + 0.426 + ], + "angle": 0, + "content": "(b) A player attacking from the left side always wins" + }, + { + "type": "text", + "bbox": [ + 0.16, + 0.43, + 0.489, + 0.491 + ], + "angle": 0, + "content": "(c) Both players roll a number of dice dependent on the number of their troops involved in the battle to decide the outcome" + }, + { + "type": "text", + "bbox": [ + 0.16, + 0.497, + 0.486, + 0.542 + ], + "angle": 0, + "content": "(d) A player can attack with up to 3 troops and defend with up to 2 troops in one battle" + }, + { + "type": "list", + "bbox": [ + 0.16, + 0.361, + 0.489, + 0.542 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.556, + 0.279, + 0.573 + ], + "angle": 0, + "content": "B Dataset Utility" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.582, + 0.49, + 0.919 + ], + "angle": 0, + "content": "This section provides a brief discussion of the potential future utility of our collated dataset. Firstly, this dataset provides strategy specifications in Risk that can be used to test Seldonian optimization approaches in future work. Our dataset provides the first such instance of language descriptions of strategic intent. Future work can analyze the flaws and strengths of our data to modify our data collection protocol and generate the specific examples they may need for their individual applications. However, there are many tangential applications for this data that are unrelated to the use case specified in this paper. There is a dearth of natural language datasets which contain language with human-like speech patterns that is not scraped from internet corpora. 
Many NLP techniques can be applied to further study this language data, such as summarization, to determine whether these policies can be condensed into a more easily digestible format; sentiment analysis, to broadly categorize the language descriptions as aggressive, defensive, etc.;" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.085, + 0.882, + 0.134 + ], + "angle": 0, + "content": "or Q&A comprehension-based methods, to train AI agents to answer questions regarding a user's preferences by reading their strategy description." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.145, + 0.733, + 0.16 + ], + "angle": 0, + "content": "C Dataset Distributions" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.17, + 0.884, + 0.299 + ], + "angle": 0, + "content": "The data distributions for the goals and constraints selected by participants are shown in Figure 4 and Figure 5, respectively. For Goals 3 (Keep your troops close together) and 5 (Maximize Battles), participants tended to skew towards answers in the 60-100 range. For the other goals, the responses were relatively uniform. On average, participants submitted 5.62 unique constraints per response." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.31, + 0.753, + 0.327 + ], + "angle": 0, + "content": "D Implementation Details" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.336, + 0.885, + 0.771 + ], + "angle": 0, + "content": "Hyperparameters for both models were selected through a grid search. The constraints model was trained for 10 epochs with a batch size of 16 using a learning rate of 0.0005. The goals model was trained for 25 epochs with a batch size of 8 using a learning rate of 0.00001. Both models utilized an AdamW optimizer. The constraints model employed a cosine learning rate scheduler, and the goals model employed a linear learning rate scheduler. We held out 30 randomly selected examples for our human/ChatGPT evaluation (Section 5). 
We split the remaining 1023 examples into an 85/15 train/validation split to perform our grid search over hyperparameters. Finally, to report the accuracy of our model, we computed the 10-fold cross-validation accuracy on the best-performing hyperparameter setting. The best-performing model for predicting constraints was pretrained on the synthetic corpus and trained on the un-augmented human corpus. The best goals model was pretrained on the synthetic-augmented dataset and trained on the human-augmented dataset. All experiments were conducted on a 48GB NVIDIA Quadro RTX GPU. Our code can be found at the following anonymized repository for further reference - Anonymized Code Repository." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.781, + 0.877, + 0.813 + ], + "angle": 0, + "content": "E Human Evaluation Study - Additional Details" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.824, + 0.884, + 0.919 + ], + "angle": 0, + "content": "In this section, we report some additional details regarding our human evaluation experiment. Firstly, we report that, on average, the difference between scores for a participant's first and last response was -0.2143 for goals and -0.0102 for constraints, indicating that there is a negligible impact of factors
Synthetic Data | Synthetic-Augmented Data
Why would I care about battling. I plan to attack players in the game one at a time. I don't think I can handle having troops on more than 2 continents. I need to spread my troops out as far as possible. I can't win if I put any troops on Blue. I need to place troops on at least 5 countries. This time I will use a different strategy. I need to have troops on at least 5 continents. I don't intend to control continents. | I don't know why I care about fighting. I plan to attack players in the game one at a time. I don't think I can handle having troops on more than 2 continents. My troops need to be spread out as much as possible. If I put any troops on Blue, I will not win. I need to place troops on at least 5 countries. I will be using a different strategy this time. I need to have troops on at least 5 continents. I don't intend to control continents.
Human Data | Human-Augmented Data
I am going to attack and take over green c. That country is ripe for the taking since I have cut it off from other grey troops. I also want 4 troops to present a strong force in green a in case of a grey attack from yellow d. Once the green continent is secure I will look to move my armies out to the red continent to battle black there. Hopefully, while this is going on grey and black will be fighting over yellow and blue, but in case they don't I'm keeping all of my troops together on Green | I am going to attack and take over green c. Since I cut it off from other grey troops, that country is ripe for taking. I also want 4 troops to present a strong force in green a in case of a grey attack from yellow d. I will move my armies to the red continent to fight black once the green continent is secure. Hopefully, while this is going on grey and black will be fighting over yellow and blue, but in case they don't I'm keeping all of my troops together on Green.
" + }, + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.301, + 0.885, + 0.345 + ], + "angle": 0, + "content": "Figure 6: Examples of data from Synthetic (top-left), Synthetic-Augmented (top-right), Human (bottom-left) and Human-Augmented (bottom-right). Highlighted sections represent the specific sentences changed by our augmentation procedure." + }, + { + "type": "table", + "bbox": [ + 0.157, + 0.357, + 0.843, + 0.478 + ], + "angle": 0, + "content": "
Goals | Constraints
G1: Surround enemy territories | C1: I must have troops on (continent)
G2: Maximize number of countries occupied | C2: I must not have troops on (continent)
G3: Keep our troops close together | C3: I must be able to access (continent) in one move
G4: Maximize battles throughout the game | C4: I need to protect the borders of (continent)
G5: Fortify borders for the continents you control | C5: I need a total of at least (number) troops to defend a continent
G6: Battle opposing players one at a time | C6: I must have at least (number) countries
 | C7: I must have troops on at least (number) continents
 | C8: I must place at least (number) troops to effectively defend a country
 | C9: I must have troops on at most (number) continents
" + }, + { + "type": "table_caption", + "bbox": [ + 0.309, + 0.493, + 0.688, + 0.506 + ], + "angle": 0, + "content": "Table 5: Goals and Constraints Selected for our Dataset" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.533, + 0.49, + 0.678 + ], + "angle": 0, + "content": "such as cognitive load or a learning curve. Secondly, it is important to note that we did not have the same number of responses per map from humans, as the map condition was randomly assigned to each participant. While this may slightly impact the results of the constraints model, as we aggregated performance across maps, due to the strong significant difference across baselines, it is unlikely to change our result." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.692, + 0.43, + 0.725 + ], + "angle": 0, + "content": "F Human Evaluation Study - Data Filtering Rubric" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.736, + 0.489, + 0.815 + ], + "angle": 0, + "content": "Next, we cover the rubric we applied to filter data for the human-subjects study. Each response was independently evaluated by two graders and was included if both graders deemed it acceptable as per the predefined rubric. The rubric was as follows:" + }, + { + "type": "text", + "bbox": [ + 0.131, + 0.83, + 0.49, + 0.86 + ], + "angle": 0, + "content": "1. If constraints clearly don't match the selections for locations or access" + }, + { + "type": "text", + "bbox": [ + 0.158, + 0.872, + 0.488, + 0.918 + ], + "angle": 0, + "content": "- e.g. if someone has selected, \"I must have troops on Blue\" when there are no troops on Blue" + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.533, + 0.879, + 0.548 + ], + "angle": 0, + "content": "2. If someone has submitted invalid constraints" + }, + { + "type": "text", + "bbox": [ + 0.554, + 0.557, + 0.882, + 0.604 + ], + "angle": 0, + "content": "- e.g. 
If someone selects both \"I need troops on at least 2 continents\" + \"I need troops on at most 1 continent\"" + }, + { + "type": "text", + "bbox": [ + 0.554, + 0.608, + 0.884, + 0.639 + ], + "angle": 0, + "content": "- If someone mistakes \"country\" for \"continent\"" + }, + { + "type": "list", + "bbox": [ + 0.554, + 0.557, + 0.884, + 0.639 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.651, + 0.883, + 0.716 + ], + "angle": 0, + "content": "3. If someone has selected the same value for all goals (or values within a small range, say \\(\\pm 10\\)), when this clearly does not align with the strategy" + }, + { + "type": "text", + "bbox": [ + 0.554, + 0.723, + 0.884, + 0.769 + ], + "angle": 0, + "content": "- e.g., someone selects \\(-100\\) for all goals when the strategy involves protecting a continent" + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.782, + 0.707, + 0.8 + ], + "angle": 0, + "content": "G ChatGPT Prompt" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.808, + 0.883, + 0.856 + ], + "angle": 0, + "content": "We utilized the following prompt for ChatGPT, which included a description of the domain and task, as well as an annotated example." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.867, + 0.659, + 0.883 + ], + "angle": 0, + "content": "G.1 Full Prompt" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.888, + 0.884, + 0.919 + ], + "angle": 0, + "content": "Reading the following section carefully will provide you with the information needed to complete" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "12815" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.114, + 0.086, + 0.185, + 0.099 + ], + "angle": 0, + "content": "this task." 
+ }, + { + "type": "text", + "bbox": [ + 0.113, + 0.102, + 0.49, + 0.198 + ], + "angle": 0, + "content": "Risk is a board game in which an army commander tries to take over the world by defeating all enemy troops and controlling all countries. Risk is a simplified version of real conflict, and has rules designed to reflect this. These include the following:" + }, + { + "type": "text", + "bbox": [ + 0.122, + 0.211, + 0.486, + 0.242 + ], + "angle": 0, + "content": "- Players control countries by having troops in them" + }, + { + "type": "text", + "bbox": [ + 0.122, + 0.256, + 0.49, + 0.287 + ], + "angle": 0, + "content": "- The more countries and continents a player controls, the more resources they get" + }, + { + "type": "text", + "bbox": [ + 0.122, + 0.301, + 0.49, + 0.332 + ], + "angle": 0, + "content": "- Players win countries from other players by battling with their troops" + }, + { + "type": "text", + "bbox": [ + 0.122, + 0.345, + 0.486, + 0.377 + ], + "angle": 0, + "content": "- The more troops a player has when battling, the more likely they are to win" + }, + { + "type": "text", + "bbox": [ + 0.122, + 0.39, + 0.49, + 0.421 + ], + "angle": 0, + "content": "- Players can only attack or be attacked by countries that are next to them" + }, + { + "type": "list", + "bbox": [ + 0.122, + 0.211, + 0.49, + 0.421 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.435, + 0.488, + 0.547 + ], + "angle": 0, + "content": "In this task, you will be asked to provide a set of constraints corresponding to the human player's strategy for the board game Risk. This includes their troop placements and a text description, which explains why the player decided to place their troops and how they plan to win this game of Risk given their opponents' choices." 
+ }, + { + "type": "text", + "bbox": [ + 0.113, + 0.548, + 0.49, + 0.628 + ], + "angle": 0, + "content": "Your task will be to think about the player's strategy (selections and description) and predict what their constraints are with respect to the strategy. Constraints are rules that you think need to be followed to successfully execute a strategy." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.629, + 0.489, + 0.676 + ], + "angle": 0, + "content": "CONSTRAINTS: Note: For predicting goals, this section would be replaced with a description of what goals are" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.679, + 0.49, + 0.902 + ], + "angle": 0, + "content": "Constraints are comprised of constraint classes and constraint values. Your job is to assign constraints to the human's strategy. Each constraint is comprised of a constraint class and a constraint value. You will be provided a list of possible constraint classes and values to choose from. You may choose the same class of constraint more than once, but you may not submit duplicate constraints. For example, you may submit \"I must have troops on Green\" and \"I must have troops on Blue\" but you may not submit \"I must have troops on Green\" twice. Choose all constraints relevant to the strategy. You may choose up to 8 constraints per strategy." 
+ }, + { + "type": "text", + "bbox": [ + 0.132, + 0.904, + 0.436, + 0.919 + ], + "angle": 0, + "content": "The constraints you can choose from are" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.085, + 0.782, + 0.101 + ], + "angle": 0, + "content": "- I must have troops on [Continent]" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.114, + 0.81, + 0.129 + ], + "angle": 0, + "content": "- I must not have troops on [Continent]" + }, + { + "type": "text", + "bbox": [ + 0.517, + 0.141, + 0.882, + 0.171 + ], + "angle": 0, + "content": "- I must be able to access [Continent] with one move" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.185, + 0.85, + 0.201 + ], + "angle": 0, + "content": "- I need to protect the borders of [Continent]" + }, + { + "type": "text", + "bbox": [ + 0.517, + 0.213, + 0.882, + 0.243 + ], + "angle": 0, + "content": "- I need a total of at least [Number] troops to defend a continent" + }, + { + "type": "text", + "bbox": [ + 0.517, + 0.257, + 0.88, + 0.272 + ], + "angle": 0, + "content": "- I must have at least [Number] countries" + }, + { + "type": "text", + "bbox": [ + 0.517, + 0.285, + 0.884, + 0.315 + ], + "angle": 0, + "content": "- I must have troops on at least [Number] continents" + }, + { + "type": "text", + "bbox": [ + 0.517, + 0.329, + 0.884, + 0.36 + ], + "angle": 0, + "content": "- I must place at least [Number] troops to effectively defend a country" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.373, + 0.884, + 0.403 + ], + "angle": 0, + "content": "- I must have troops on at most [Number] continents" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.085, + 0.884, + 0.403 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.417, + 0.882, + 0.447 + ], + "angle": 0, + "content": "The possible constraint values you can choose from are" + }, + { + "type": "text", + "bbox": [ + 0.517, + 0.461, + 0.866, + 0.477 + ], + "angle": 0, + "content": "- Continent - Blue, Green, 
Yellow, Red, Purple" + }, + { + "type": "text", + "bbox": [ + 0.517, + 0.489, + 0.844, + 0.504 + ], + "angle": 0, + "content": "- Number - 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.461, + 0.866, + 0.504 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.517, + 0.884, + 0.725 + ], + "angle": 0, + "content": "Our modified RISK Map contains 5 continents - Red, Green, Purple, Yellow and Blue. Each continent is made up of countries. Red continent has 3 countries, Green has 5 countries, Purple has 5 countries, Yellow has 4 countries and Blue has 4 countries. Green_A, Yellow_B, Blue_C, etc. are referred to as countries or territories. Green, Yellow, Blue, Red, Purple are referred to as continents. Continents also have different connections between them through which the troops can move. These connections are one way, i.e., troops from the source country can only move to the destination country and not the other way round." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.727, + 0.884, + 0.871 + ], + "angle": 0, + "content": "The map has the following connections - Yellow_D is connected to Green_A, Green_D is connected to Red_A, Red_A is connected to Green_D, Red_B is connected to Purple_E, Red_C is connected to Yellow_B, Red_C is connected to Blue_B, Blue_A is connected to Yellow_C, Yellow_C is connected to Blue_D, Blue_C is connected to Purple_A, Purple_A is connected to Green_E and Green_E is connected to Purple_A" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.872, + 0.884, + 0.919 + ], + "angle": 0, + "content": "We will now give you a tutorial on how to ascertain the goals from a human player's strategy and placements on the RISK board." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "12816" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.112, + 0.085, + 0.488, + 0.229 + ], + "angle": 0, + "content": "The two opposing players are denoted by the \"grey\" and \"black\" player. In this scenario, the grey player has placed its troops on the following territories - 5 troops on Yellow_C, 4 troops on Yellow_D, 1 troop on Red_A, 2 troops on Red_B, 2 troops on Red_C. The black player has placed its troops on the following territories - 4 troops on Blue_A, 2 troops on Blue_C, 2 troops on Green_E, 5 troops on Purple_A and 1 troop on Purple_B." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.23, + 0.487, + 0.294 + ], + "angle": 0, + "content": "Now that you have seen where the opposition troops are, you will now be shown how the human player has decided to deploy their troops and the strategy they used." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.295, + 0.487, + 0.47 + ], + "angle": 0, + "content": "The human player (white) has placed 14 troops to battle the opponents. They have placed the troops on the following territories - 7 troops on Purple_E, 5 troops on Purple_C and 2 troops on Purple_D. You will now be guessing the constraints the human player (white) focused on while coming up with their strategy. The following text contains the human player's description of the strategy they used to place their troops. It is critical that you read this description, as it contains information about the constraints considered by the human player." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.472, + 0.489, + 0.631 + ], + "angle": 0, + "content": "\"I put all my troops in Purple, because I felt as though I needed all my available troops to defend Purple. I wanted to protect Purple. With 7 troops on Purple_E, I feel like I cannot be beat on purple. 
I wasn't too keen on getting involved in battles, or taking an overly aggressive strategy. I would like to focus on beating the black player first, I don't think I can battle two people at the same time. I'm going to avoid Red for now since it seems to be the hardest continent to control.\"" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.632, + 0.49, + 0.712 + ], + "angle": 0, + "content": "We will now show you how to determine constraints from a strategy via an example. Please carefully review the example and use the given information about both selections and text to fill out constraints for this strategy." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.713, + 0.487, + 0.744 + ], + "angle": 0, + "content": "An appropriate set of constraints for the strategy shown above would be" + }, + { + "type": "text", + "bbox": [ + 0.122, + 0.756, + 0.352, + 0.772 + ], + "angle": 0, + "content": "- I must have troops on Purple" + }, + { + "type": "text", + "bbox": [ + 0.14, + 0.78, + 0.486, + 0.811 + ], + "angle": 0, + "content": "- Reason: The player mentioned that \"they put all their troops on Purple\"" + }, + { + "type": "text", + "bbox": [ + 0.122, + 0.822, + 0.361, + 0.837 + ], + "angle": 0, + "content": "- I must not have troops on Red" + }, + { + "type": "text", + "bbox": [ + 0.14, + 0.845, + 0.486, + 0.876 + ], + "angle": 0, + "content": "- Reason: The player mentioned that \"they would like to avoid Red for now\"" + }, + { + "type": "text", + "bbox": [ + 0.122, + 0.888, + 0.489, + 0.919 + ], + "angle": 0, + "content": "- I must place at least 7 troops to effectively defend a country" + }, + { + "type": "text", + "bbox": [ + 0.535, + 0.085, + 0.882, + 0.133 + ], + "angle": 0, + "content": "- Reason: The player mentioned that \"with 7 troops on Purple_E, I cannot be beaten on Purple\"" + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.146, + 0.807, + 0.178 + ], + "angle": 0, + "content": "H Risk Reinforcement Learning Simulator" + }, + { + "type": 
"text", + "bbox": [ + 0.508, + 0.189, + 0.885, + 0.493 + ], + "angle": 0, + "content": "We have shown that our proposed computational interface can remove the need for human-interpreters for the task of parsing intent from unstructured language. However, to test how well commander's intent interpreted from language can be applied towards optimizing an agent's behavior, we require a reinforcement learning domain to train our agent. As such, to enable seldonian optimization, via unstructured language descriptions, we developed a novel open-ai gym environment for simulating Risk gameplay. This environment closes the loop on the methods presented in this paper by providing all the necessary components for humans to specify their intent to an AI agent and evaluate whether their specifications have been satisfied by the learnt agent. Our environment also provides an additional means of collecting data and conducting studies for human-specification within multi-player team scenarios." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.495, + 0.884, + 0.752 + ], + "angle": 0, + "content": "For this task, we adapted an existing open-air gym environment for Risk (Andeol, 2018). We modified the codebase to allow for RL agents to be trained to play all phases of Risk, according to the setup utilized in our approach. We also developed a pygame-UI for our simulator (see Figure 7). A detailed description of the functionality of the domain and the state space is provided in the appendix. In future work, we aim to leverage our domain to develop approaches which allow humans to constrain an agent's optimization methods through human-like language specifications of intent, which has not been accomplished in any prior work. 
We also provide a link to an anonymized github repository with the risk environment for further reference - Anonymized Gym-Risk Environment" + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.764, + 0.838, + 0.797 + ], + "angle": 0, + "content": "I Risk Domain - Additional Domain Information" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.807, + 0.885, + 0.919 + ], + "angle": 0, + "content": "This section provides additional information about our setup for Risk Domain. In our version of Risk, the ego player (Alpha), plays against two opponents (Charlie and Bravo) whose actions are controlled by a pre-determined heuristic. The gameplay within our Risk simulator is comprised of four phases" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "12817" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.145, + 0.082, + 0.46, + 0.252 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.114, + 0.262, + 0.489, + 0.293 + ], + "angle": 0, + "content": "Figure 7: This figure shows our Risk simulator with the playable (teal) and two other (orange and pink) agents." + }, + { + "type": "text", + "bbox": [ + 0.131, + 0.315, + 0.486, + 0.347 + ], + "angle": 0, + "content": "1. Drafting - Players draft their initial troops on empty territories." + }, + { + "type": "text", + "bbox": [ + 0.13, + 0.358, + 0.487, + 0.389 + ], + "angle": 0, + "content": "2. Reinforce - Players assign reinforcements to their existing territories." + }, + { + "type": "text", + "bbox": [ + 0.13, + 0.4, + 0.489, + 0.432 + ], + "angle": 0, + "content": "3. Attack - Players can choose to attack a neighboring territory with their troops." + }, + { + "type": "text", + "bbox": [ + 0.129, + 0.443, + 0.489, + 0.473 + ], + "angle": 0, + "content": "4. Freemove - Players can move their troops between their territories." 
+ }, + { + "type": "list", + "bbox": [ + 0.129, + 0.315, + 0.489, + 0.473 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.485, + 0.49, + 0.919 + ], + "angle": 0, + "content": "The game begins with a drafting phase. During this phase, the agent decides where to place their initial 14 troops amongst the available territories. The two opposing players draft their troops before the agent is allowed to draft any troops. The opposing players drafts are either hard-coded to match one of the maps utilized in our study, or they are drafted based on a drafting heuristic. The drafting phase occurs only once in the game. Following drafting, the agent executes the next three phases in sequence. First, in the \"Reinforce\" phase, the agent receives a specific number of reinforcements based on the number of territories and continents they control. The agent needs to assign the given reinforcements to the territories they control. Each country reinforced is an individual action. Next, the agent moves on to the \"Attack\" phase. In this phase, the agent can attack adjacent territories with their troops. Within each attack action, the agent specifies which opposing territory they would like to attack, along with the territory they would like to attack from. The agent must also specify the number of troops they would like to move into the opposing territory should the win the conflict. Each combat sequence between two territories is executed in a similar manner to the physical board game," + }, + { + "type": "text", + "bbox": [ + 0.526, + 0.085, + 0.882, + 0.134 + ], + "angle": 0, + "content": "1. A maximum of three troops are chosen from the attacking territory, and a maximum of two troops are chosen from the defending territory" + }, + { + "type": "text", + "bbox": [ + 0.525, + 0.145, + 0.883, + 0.192 + ], + "angle": 0, + "content": "2. 
For both the attacker and defender, a number of dice are rolled based on the number of troops involved in each attack." + }, + { + "type": "text", + "bbox": [ + 0.525, + 0.206, + 0.882, + 0.254 + ], + "angle": 0, + "content": "3. The rolls are sorted in descending order, and each roll is compared between the attacking and defending country." + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.267, + 0.882, + 0.314 + ], + "angle": 0, + "content": "4. For each comparison, the country with the lower roll loses one troop. The defending territory wins all ties." + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.327, + 0.884, + 0.358 + ], + "angle": 0, + "content": "5. The above steps are repeated until either the attacking or defending player has been defeated." + }, + { + "type": "list", + "bbox": [ + 0.524, + 0.085, + 0.884, + 0.358 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.371, + 0.885, + 0.562 + ], + "angle": 0, + "content": "Following combat, the agent can move all but one troop into the conquered territory. Once the agent has finished attacking, they move on to the final phase in their turn, \"Freemove.\" In the \"Freemove\" phase, the player can move troops from one territory they control to another, as long as the territories are connected. Once the agent executes all their actions, the actions of the two agents are simulated and the player is reset to the \"Reinforce\" phase to start their next turn. The game is complete when either the agent is out of troops or controls all territories." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.565, + 0.884, + 0.822 + ], + "angle": 0, + "content": "An action is specified by a four-item tuple, i.e. \\( < p, s, t, tr > \\). The first item, \\( p \\), specifies which type of action is being conducted, among the four possible phases in the game. Item two, \\( s \\), denotes the source country for the action. 
For reinforce and drafting actions, this is the country that the agent wants to add troops to, whereas for the attack and freemove actions, \\( s \\) denotes the country you will be attacking or moving from. The final two items, \\( t \\) and \\( tr \\), are specifically for attack and move actions. \\( t \\) specifies the country that you would like to attack or move to. For the attack action, \\( tr \\) specifies the number of troops you would like to move from the attacking country if you win the combat. When the agent specifies a move action, \\( tr \\) denotes the number of troops to be moved from \\( s \\) to \\( t \\)." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.834, + 0.645, + 0.85 + ], + "angle": 0, + "content": "I.1 State Space" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.855, + 0.884, + 0.919 + ], + "angle": 0, + "content": "The state of the game is stored as a dictionary. The state dictionary records information such as country ownership, number of troops on each country, continent ownership, etc. We also record information" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "12818" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.489, + 0.165 + ], + "angle": 0, + "content": "about players such as number of reinforcements available to a player, number of players alive, current turn number, etc. We have provided six functions to encode the state space which can be passed as an input to a Reinforcement Learning model." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.168, + 0.489, + 0.327 + ], + "angle": 0, + "content": "The first function encodes the state using 54 features. The initial 42 features contain country related information for each opponent (21 features each) and the next 5 features contain continent ownership data. 
The remaining features are used for other information related to the game like number of areas controlled by the player, troops left to be drafted by the player, troops left for reinforcement, number of players alive, current turn number and if the current turn belongs to the player." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.331, + 0.49, + 0.569 + ], + "angle": 0, + "content": "The second function encodes the information in the form of one-hot vectors. It has a total of 132 features: the first 84 features contain information regarding country ownership as one-hot vectors, 21 each for the player, opponents and countries with no owner. The next 21 features denote the number of troops on each country. The next 20 features contain information regarding continent ownership, 5 each for the player, opponents and no owner. The remaining features contain other relevant information as described for the first function. For both of the first two functions described, we also provide normalized versions of these functions where all the real valued spaces are divided by a normalising constant." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.573, + 0.49, + 0.765 + ], + "angle": 0, + "content": "The fifth encoding function contains all the 132 features of the third function and additional information for the current phase. It contains 134 features in total. This function returns normalised values. The last encoding function contains 298 features. The initial features are similar to the ones present in the third encoding function. Apart from that, it explicitly contains information about where an agent or player can attack and execute a freemove. This information can help the reinforcement learning model learn more easily. This function also returns normalised values." 
+ }, + { + "type": "title", + "bbox": [ + 0.114, + 0.783, + 0.305, + 0.798 + ], + "angle": 0, + "content": "I.2 Reward Functions" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.807, + 0.49, + 0.919 + ], + "angle": 0, + "content": "We have setup four different types of reward functions ranging from sparse to dense. The recommended reward function is the rules-based reward which provides rewards for successful actions, finishing a phase, successful action in a phase and winning the game. The rewards for winning the game are weighted by a factor of 10 compared to" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.085, + 0.829, + 0.101 + ], + "angle": 0, + "content": "others which are weighted by a factor of 1." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.101, + 0.885, + 0.262 + ], + "angle": 0, + "content": "The most simple reward function available is a sparse reward function which provides negative rewards for losing the game and positive rewards for winning the game. In order to increase the number of rewards given throughout the game, we created the turn count reward function which rewards the agent for every turn it plays. Survival reward function was built on top of this to provide an additional negative reward for losing apart from the reward for surviving." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.273, + 0.687, + 0.289 + ], + "angle": 0, + "content": "I.3 Human Drafting" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.294, + 0.885, + 0.47 + ], + "angle": 0, + "content": "Finally, we have also setup a functionality in our simulator that allows player or the opponents to skip the drafting phase and follow a fixed draft based on a predefined map. In such cases, we have predefined fifteen types of map initialisation containing troops for both opponents, which correspond to the exact maps utilized in our data collection procedure. 
Our setup chooses one of the map initializations and corresponding selections made by a participant in the user study to simulate the game." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "12819" + } + ] +] \ No newline at end of file diff --git a/2023/A Computational Interface to Translate Strategic Intent from Unstructured Language in a Low-Data Setting/0ecff77c-66e5-47c5-93be-49d90731c30d_origin.pdf b/2023/A Computational Interface to Translate Strategic Intent from Unstructured Language in a Low-Data Setting/0ecff77c-66e5-47c5-93be-49d90731c30d_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..e9e88a656cb6cae1bf972afc0eba158f46839217 --- /dev/null +++ b/2023/A Computational Interface to Translate Strategic Intent from Unstructured Language in a Low-Data Setting/0ecff77c-66e5-47c5-93be-49d90731c30d_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:33aa933e48dd818c74befdf6b287fcd9fa641deb638fd5e5001068821b1e445b +size 2933510 diff --git a/2023/A Computational Interface to Translate Strategic Intent from Unstructured Language in a Low-Data Setting/full.md b/2023/A Computational Interface to Translate Strategic Intent from Unstructured Language in a Low-Data Setting/full.md new file mode 100644 index 0000000000000000000000000000000000000000..bfc41850d6fd99113dc842b35530ad327177af90 --- /dev/null +++ b/2023/A Computational Interface to Translate Strategic Intent from Unstructured Language in a Low-Data Setting/full.md @@ -0,0 +1,520 @@ +# A Computational Interface to Translate Strategic Intent from Unstructured Language in a Low-Data Setting + +Pradyumna Tambwekar1, Lakshita Dodeja2*, Nathan Vaska3*, Wei Xu1, and Matthew Gombolay1 + +1School of Interactive Computing, Georgia Institute of Technology + +$^{2}$ Computer Science Department, Brown University + +3Massachusetts Institute of Technology, Lincoln Laboratory + 
+pradyumna.tambwekar@gatech.edu, lakshita_dodeja@brown.edu, nathan.vaska@ll.mit.edu,{wei.xu, matthew.gombolay}@cc.gatech.edu + +# Abstract + +Many real-world tasks involve a mixed-initiative setup, wherein humans and AI systems collaboratively perform a task. While significant work has been conducted towards enabling humans to specify, through language, exactly how an agent should complete a task (i.e., low-level specification), prior work has largely overlooked interpreting the high-level strategic intent of human commanders. Parsing strategic intent from language will allow autonomous systems to independently operate according to the user's plan without frequent guidance or instruction. In this paper, we build a computational interface capable of translating unstructured language strategies into actionable intent in the form of goals and constraints. Leveraging a game environment, we collect a dataset of over 1000 examples, mapping language strategies to the corresponding goals and constraints, and show that our model, trained on this dataset, significantly outperforms human interpreters in inferring strategic intent (i.e., goals and constraints) from language $(p < 0.05)$ . Furthermore, we show that our model (125M parameters) significantly outperforms ChatGPT for this task $(p < 0.05)$ in a low-data setting. + +# 1 Introduction + +Effective communication is essential for the proper functioning of organizational teams. "Commander's Intent" is a method for developing a theory of mind utilized in many domains such as search and rescue, pandemic response, and the military (Mercado et al., 2016; Rosen et al., 2002; Kruijff et al., 2014). Commanders and leaders often utilize the formulation of "Commander's Intent" to convey the tasks that need to be accomplished and engender an understanding of the criteria for success to their subordinates (Dempsey and Chavous, 2013). 
+ +![](images/ce21f28bb23cd9ce31105d2c2a8f6a2a73c430a3946f368f8d44af13693a963f.jpg) +Figure 1: Our work aims to enable humans to specify their strategy to an AI system via language. Using the board game Risk as a simulated environment, we collect language descriptions of a strategy (top-left) corresponding to a player's troop deployments (bottom-left). The player's selections are shown by the white icons, and the grey and black icons denote the troops of the two opposing players. Each strategy corresponds to a set of goals (bottom-right) and constraints (top-right). The green and orange text corresponds to the language relating to constraints and goals, respectively. + +Commander's Intent could similarly function as an effective scaffold to represent a human's strategic intent in a mixed-initiative interaction (Novick and Sutton, 1997). Commander's Intent provides a means for expert-specifiers to engender a degree of "shared cognition" between an AI-collaborator and a human-specifier, by aligning the actions of the AI system to the human-specifier's values or reward function. + +Commander's intent is formally represented by a set of goals and constraints. Goals (or preferences) are categorized as a desirable set of states or affairs that the agent intends to obtain (Moskowitz and Grant, 2009; Kruglanski, 1996), and constraints refer to conditions that are imposed on solutions formulated by an agent (Nickles, 1978). Translating unstructured language-based strategy into this machine-readable specification is a non-trivial challenge. This translation could be conducted via a human interpreter; however, interpreters with the requisite expertise will not always be available. Alternatively, humans could utilize a structured interface to specify their intent. However, interfaces can become overly complicated, and humans become demotivated to work with an AI system when they cannot easily navigate the interface (Hayes, 1985). 
Enabling humans to express their strategic intent in everyday language provides an effective solution to these issues. + +In this paper, we develop an approach to solve a task we call automatic strategy translation, wherein we learn to infer strategic intent, in the form of goals and constraints, from language. Prior work has developed methods to utilize language to specify policies of an AI agent (Tambwekar et al., 2021; Gopalan et al., 2018; Thomason et al., 2019; Blukis et al., 2019) or specify reward functions or tasks which can be optimized for, via reinforcement learning (RL) or a planner (Gopalan et al., 2018; Padmakumar et al., 2021; Silva et al., 2021a). However, our work is the first to translate language into goals and constraints, which can be applied towards constrained optimization approaches for directing agent behavior independent of the original human specifier. Unlike prior work, we focus on interpreting language descriptions of complex gameplay strategies, rather than simple individual commands (e.g., "move from A to B; open the door"). + +First, we collect a dataset of over 1000 examples mapping language to goals and constraints, leveraging the game environment of Risk. Next, we fine-tune a pretrained RoBERTa model (Liu et al., 2019), equipped with model augmentations and customized loss functions such as Order-Agnostic Cross Entropy (Du et al., 2021), to infer goals and constraints from language strategy specifications. Finally, we employ a human evaluation to test our approach. Recent work has shown that automated evaluation metrics for language models may provide a misleading measure of performance (Liang et al., 2022). Therefore, we design a head-to-head evaluation, whereby we can directly compare our model to the average human interpreter. In addition to humans, we prompted ChatGPT to perform the same task on a held-out set of 30 examples. 
We computed the statistical difference between our model and these baselines, providing a concrete measure of the relative efficacy of our approach. Our contributions are as follows: + +- We propose one of the first complete machine learning pipelines including data collection, augmentation, and model training for inferring structured strategic intent from human language. +- Through a human study, we show that our proposed approach can interpret goals and constraints from language descriptions better than the average human $(p < 0.001)$ . +- Through in-context learning, we evaluate ChatGPT's performance to gauge the relative efficacy of our approach, and show that our approach significantly outperforms ChatGPT $(p < 0.05)$ . + +# 2 Related Work + +This section covers prior work on learning strategies from language, as well as methods and datasets to enable humans to specify AI-behavior in a mixed-initiative setting. + +# 2.1 Learning strategies from Language + +A common approach for specifying strategies through language has been to encode language instructions via planning-based representation languages, such as PDDL or LTL (Williams et al., 2018; Bahdanau et al., 2018; Thomason et al., 2019; Tellex et al., 2020), or deep learning (Fu et al., 2019; Blukis et al., 2019; Gopalan et al., 2018). Such formulations facilitate the ability to constrain actions taken by the agent to the instruction specified, e.g. "Go around the tree to your left and place the ball." Another popular alternative is language-conditioned learning, where language is employed to specify a reward function or a task (Silva et al., 2021a; Goyal et al., 2019; Andreas et al., 2017; Shridhar et al., 2022). Such approaches seek to improve the ability of an agent to complete a task(s) through intermediate language inputs, such as "take the ladder to your left". 
However, these approaches do not allow a supervisor to specify their strategic intent, such that the agent can complete its primary task while still adhering to the specifier's plan. Recent work proposed a novel approach to mapping language to constraints and rewards via a dependency tree (Rankin et al., 2021); however, their approach relies on a pre-trained grammar to extract a dependency tree and thus may not scale to human-like language. + +Formally, the process of optimizing AI systems given goals and constraints has been broadly categorized as Seldonian Optimization (Thomas et al., 2019, 2017). In this framework, the goal is to optimize the priorities of an objective function while adhering to a given set of constraints as opposed to simply optimizing based on the reward or loss function. Yang et al. (2020) proposed a Seldonian optimization approach to translate constraints into a feature representation, encoding invalid regions in the state space, which is then applied towards safe RL. However, their application is restricted to learning to parse individual constraint statements such as "Don't get too close to the water," rather than facilitating constraint extraction from more realistic descriptions pertaining to an entire strategy. In our work, we provide a first-of-its-kind dataset, and a corresponding model, to enable Seldonian optimization through unstructured language. + +# 2.2 Language and Strategy Datasets + +Prior datasets for instruction following and policy specifications are often comprised of shorter instructions describing individual tasks. In contrast, our dataset consists of larger, unstructured descriptions of strategies which may be more reflective of potential strategy descriptions from in-the-wild users. Recent work has published a dataset of policy descriptions which are similar to the language descriptions we collect (Tambwekar et al., 2021); however, they describe specific policies, rather than broad strategies for a task. 
Other datasets look to map language to trajectories or goal states within the trajectory (Padmakumar et al., 2021; Misra et al., 2018; Suhr et al., 2019). These datasets typically serve as a means of replacing physical demonstrations with language. These datasets lack explicit goals and constraints corresponding to the collected language that can be applied towards Seldonian optimization. Recent work provided a dataset with constraint statements (Yang et al., 2020) which are designer-specific; however, each constraint is associated with an isolated statement, making it unclear whether this approach will generalize to unprompted language describing multiple constraints. Unlike prior work, our dataset provides the ability to apply Seldonian optimization approaches from unstructured language. Furthermore, we conduct a study wherein we provide a human and ChatGPT baseline for our dataset to highlight the challenging nature of this task. + +# 3 Natural Language Strategies in RISK + +Our work aims to enable humans to specify their strategy or commander's intent to an AI system via language. In this section, we utilize the board game Risk to create a dataset that maps unstructured natural language descriptions of strategies to actionable intent in the form of goals and constraints. + +# 3.1 Board Game - RISK + +Risk (Gibson et al., 2010) is a multiplayer strategy board game of diplomacy, conflict, and conquest, which was invented in 1957. The gameplay of Risk consists of four phases: Draft, Recruit, Attack, and Move. The draft phase is conducted at the start of the game wherein each player drafts an initial set of continents and deploys a fixed number of troops onto those continents. This allocation of troops is a crucial participatory task (Muller and Kuhn, 1993) which involves humans reasoning about their strategy and setting up for the rest of the game. 
Participants may choose any of the empty territories on the map to deploy their troops, with a wide range of strategies that may depend on their opponent's troop allocation. For example, a more conservative player may draft troops to only one continent for better defense, whereas a player with a more aggressive strategy may choose to spread out their troops. After the draft phase, each subsequent turn for a player involves iteratively conducting the recruit, attack, and move phases. Further details about Risk can be found in Appendix I. + +In our setting, we use a map layout that has 5 continents with a total of 21 territories/countries, as illustrated in Figure 1. Instead of real country names used in the Risk game, we use ad-hoc names for each continent (e.g., Red, Green, Blue) to mitigate participant bias. In the draft phase, each player takes turns to deploy 14 troops. The specific set of tasks that humans need to complete for our study includes: (i) develop a strategy for Risk and deploy 14 troops after the two opposing players have completed their draft; (ii) provide six goals (on a 200-point scale) and up to eight constraints that were relevant to their allocation of troops and broader intents; (iii) use natural language to describe their overall strategy and the goals and constraints they considered. The troops of the opposing players are shown to the participants prior to completing these tasks. More details about this data collection process are discussed in Section 3.3. 
The troop selections $S$ include the name and number of troops for each territory drafted by the player. We have a total of 6 predefined goals, each of which takes a numeric value in the range $[-100, 100]$ . This numeric value corresponds to whether the goal positively or negatively aligns with the strategy. For example, for the goal "maximize battles", 100 implies that the player intends to battle as much as possible, and -100 implies that the player intends to battle as infrequently as possible. Each constraint is comprised of a class and a value. We restrict the number of possible constraints to 8 as a reasonable upper bound per strategy. To summarize, each example $\langle M, W, S, C, G \rangle \in \mathcal{D}$ consists of a strategy $W$ described in natural language, for a player's troop selections, $S$ , on a map, $M$ , from which $C$ and $G$ are the gold standard constraints and goals. + +# 3.3 Data Collection + +We collected a dataset $\mathcal{D}$ of 1053 unique examples by recruiting participants on Amazon Mechanical Turk and Prolific (pro, 2014). Firstly, to familiarize participants with the game, we designed a tutorial that provided a description and annotated examples to explain the rules of the game and the tasks that participants needed to perform. As a further measure of improving data quality, participants were quizzed on the rules of Risk to reinforce their understanding (the full quiz is provided in §A.2). They were given three attempts to answer correctly, after which they were shown the answers. Upon completing the quiz, participants began the task. We showed participants a map, which shows the drafted troops of the two opposing players, and asked them to provide their own troop deployments. Following their draft, participants were asked to provide the goals and constraints they considered for their gameplay strategy/deployments and finally provide a language description of their strategy. 
The language strategy they provided needed to have at least 200 characters. Each participant was asked to repeat this task 5 times to create 5 data points, each time with a different map. The maps seen by participants were selected from a set of 15 unique initial troop settings. + +Participants needed approximately 10 minutes per data point. Figure 1 depicts the format of our dataset. Our dataset included data from 230 participants. The average length of language descriptions in our dataset was 99.21 words, and the overall vocabulary size was 2,356 words. Additional details regarding our data collection protocol are available in Appendix A. + +# 4 Automatic Strategy Translation + +Following the data collection in Section 3, our goal is to leverage this dataset to develop a model that can perform the task of automatic strategy translation. Inferring strategic intent from language is a non-trivial endeavor, as unstructured language can be vague, leading to ambiguous interpretations. We seek to develop an approach capable of performing this task better than the average human, so as to enable strategy specification via language to reduce the potential risk of human errors or the need for third-party expert interpreters. In this section, we cover the technical details which make this task possible in a low-data setting. + +# 4.1 Text Encoder + +We adopted the pretrained RoBERTa model (Liu et al., 2019) as our encoder, which is parameterized by $\theta$ . The input sequence to our model is comprised of the language description of the strategy, $W = [w_{1}, w_{2}, \ldots, w_{|W|}]$ , and troop selections $S = [s_{1}, s_{2}, \ldots, s_{|S|}]$ , where each troop selection is comprised of the country name along with the number of troops placed on that country (e.g., $S = [Red\_A = 2, Red\_C = 8, Purple\_D = 4]$ ). 
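As an illustration, such a $(W, S)$ pair might be serialized into a single encoder input string before tokenization; the separator token and the `Country = N` formatting below are assumptions for illustration, not the paper's exact preprocessing:

```python
def build_encoder_input(strategy_text, selections):
    """Serialize a strategy description W and troop selections S into one
    string for a text encoder. The '</s>' separator and 'Country = N'
    formatting are illustrative assumptions."""
    serialized = ", ".join(f"{country} = {n}" for country, n in selections)
    return f"{strategy_text} </s> {serialized}"

text = build_encoder_input(
    "I put all my troops in Purple to defend it.",
    [("Red_A", 2), ("Red_C", 8), ("Purple_D", 4)],
)
```

A tokenizer would then map this string (plus any classification tokens) to the model's input IDs.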
The encoder learns an embedding function which maps the text input, comprising the strategy $W$ and selections $S$, to a $d$-dimensional real-valued vector that is then used to predict goals ($\S 4.2$) and constraints ($\S 4.3$).

Ordinarily, the final embedding for the single [CLS] token learned by RoBERTa, i.e., $E_{\theta} = BERT_{[CLS]}(W,S)$, is used for classification. In this work, we incorporate multiple classification tokens (Chang et al., 2023), each of which corresponds to an individual goal or constraint. For the $i$-th goal or constraint, we learn a separate classification embedding, $E_{\theta}^{i} = BERT_{[CLS_{i}]}(W,S)$. Using individual class-specific tokens improves the model's capability to learn different attention weights corresponding to the classification embeddings for each goal or constraint. We utilize different encoders for predicting goals and constraints, parameterized by $\theta_{g}$ and $\theta_{c}$, respectively.

![](images/6a01acf28d3193848848e84979e9032a2cce94525988538bceebcd6133cac663.jpg)
Figure 2: Illustration of our Automatic Strategy Translation model. The input to the model includes the classification tokens, language description, and troop selections (Section 4.1). The encoder then generates embeddings for each classification token and passes them on to an individual classification head. Each classification head is a fully-connected layer that predicts a probability distribution for the respective goal ($\S 4.2$) or constraint ($\S 4.3$).

# 4.2 Goal Extraction Model

We treat the subtask of deriving goals from language as an ordinal classification task. In our dataset, goals are originally specified as continuous values in $[-100, 100]$, which we discretize by creating 5 uniform buckets, i.e., $[-100, -60)$, $[-60, -20)$, etc.
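As an illustrative sketch (the helper name and edge handling are our own assumptions, not from the paper), mapping a raw goal value to one of the five ordinal buckets could look like:

```python
def goal_bucket(value: float) -> int:
    """Map a raw goal value in [-100, 100] to one of 5 uniform ordinal
    buckets: [-100, -60) -> 0, [-60, -20) -> 1, ..., [60, 100] -> 4."""
    if not -100 <= value <= 100:
        raise ValueError("goal values lie in [-100, 100]")
    # Shift to [0, 200], divide by the bucket width (40), and clamp so
    # that the right edge (+100) falls into the top bucket.
    return min(int((value + 100) // 40), 4)
```

Keeping the bucket boundaries half-open (with the top bucket closed) ensures that every value in the range maps to exactly one of the five classes.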
That is, for each goal, we predict an assignment as a 5-class classification:

$$
P_{j} = L_{\phi_{j}}\left(E_{\theta_{g}}^{j}\right), \tag{1}
$$

where $P_{j}$ represents the probability distribution across assignments for the $j$-th goal and $E_{\theta_g}^j$ corresponds to the embedding from the encoder. Each goal uses a separate classification layer $L$ parameterized by $\phi_j$. The goal extraction model is trained on a dual-criteria loss function that combines cross-entropy (CE) and mean-squared-error (MSE) loss:

$$
\mathcal{L}_{\text{goal}} = \alpha \mathcal{L}_{CE} + (1 - \alpha) \mathcal{L}_{MSE}, \tag{2}
$$

where $\alpha$ is a simple weighting hyperparameter. The addition of the MSE loss helps account for the ordinal nature of goal value predictions.

# 4.3 Constraint Extraction Model

Similar to the goal extraction model, the input to each classification head for constraint prediction is $E_{\theta_c}^k$, which corresponds to the classification embedding learned by the encoder for the $k$-th constraint.

However, unlike in the goal extraction model, each of the eight constraint classification heads learns to predict the constraint itself rather than a value for a fixed goal. Therefore, the model needs to predict the set of unordered constraints $\{c_1, c_2, \ldots, c_8\}$, wherein each $c_k$ is predicted from the set of all possible constraints $C$ (190 total possible constraints). Each strategy can have at most eight constraints; to allow for strategies with fewer, the set $C$ includes a null value.

While providing constraints during data collection, participants merely assigned constraints to their strategy but did not rank their ordering. As such, the order of constraints in our dataset does not necessarily correspond to the order in which each classification head needs to predict the constraints.
Therefore, each classification head does not have a strict label it can utilize to compute a classification loss, making this task distinct from conventional sequence prediction or multiclass classification tasks. For instance, if the constraints predicted by the model are $\{C,\emptyset ,B,D\}$ and the labels for this strategy are $\{A,B,C,\emptyset \}$, utilizing a standard classification loss function, such as cross-entropy, would result in a higher loss than is representative of the prediction, as three out of four constraints have been predicted correctly. As such, this task requires a loss function that allows us to train our model to predict the correct constraints for a language strategy agnostic of the ordering of the labels. We chose to adopt a recently proposed loss function called Order-Agnostic Cross Entropy (OaXE) (Du et al., 2021). Intuitively, OaXE is defined as the cross-entropy for the best possible alignment of output tokens.

![](images/ebaba61375dbc6c286c471dabf32dbdc274ca8b42d201e4a03cb5df8ea25f89d.jpg)
Figure 3: Pipeline for augmenting synthetic or human-created data ($\S 4.4$). A strategy description is first split into sentences, then passed into the PEGASUS (Zhang et al., 2020) paraphrasing model and data quality filter.

Let $O = \{O_1, O_2, \ldots, O_{|O|}\}$ be the space of all possible orderings of the target sequence of constraints, where each $O_l$ is one possible ordering of the target tokens. The final loss function is computed as:

$$
\mathcal{L}_{OaXE} = -\log P\left(O^{*} \mid X\right), \tag{3}
$$

where $O^{*}$ represents the best possible alignment from $O$. This alignment is computed by applying the Hungarian algorithm, after casting the problem as maximum bipartite matching (Du et al., 2021). As our final loss function, we follow Du et al.
(2021) in combining OaXE with cross-entropy loss:

$$
\mathcal{L}_{\text{constraint}} = T_{m} \cdot \mathcal{L}_{CE} + (1 - T_{m}) \cdot \mathcal{L}_{OaXE}, \tag{4}
$$

where $T_{m}$ is a temperature parameter that is logistically annealed from 1 to 0. In our case, cross-entropy $(\mathcal{L}_{CE})$ is computed using the default ordering of labels in our dataset.

# 4.4 Data Augmentation Methods

Finally, we applied data augmentation procedures to improve our model's performance. First, we randomly generated 4000 unique sets of goals and constraints and applied text templates to produce descriptions, yielding a Synthetic (S) training corpus. For example, the constraint "I must have troops on Red" could be represented as "My strategy is to take over Red," "I need a large army on Red," or "I need to place troops on Red." We further augmented this synthetic corpus with a pretrained PEGASUS (Zhang et al., 2020) paraphrasing model to create an Augmented-Synthetic (AS) dataset. We split each language description from the synthetic corpus into individual sentences and employed the paraphrasing model to generate candidate paraphrases. Candidates that replaced important keywords, such as continent names, or that were too similar to the original sentence in terms of edit distance were removed. We randomly chose a sentence from the remaining candidates as a replacement sentence, and combined the replacement sentences to form an augmented data point (see Figure 3). The two synthetic datasets (S, AS) were used to pretrain our model prior to training on human data. The same techniques were also applied to our human dataset to form an Augmented-Human (AH) dataset. Our final Augmented-Human dataset is a version of our original crowdsourced dataset where each example is rephrased using our augmentation pipeline, and it is twice the size of our original human dataset.
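The keyword and edit-distance filter in the augmentation pipeline can be sketched as follows (a minimal illustration; the function name, keyword list, and similarity threshold are our own assumptions, and difflib's ratio stands in for a normalized edit distance):

```python
import difflib

# Hypothetical keyword list; in our pipeline these are map terms such as
# continent names that a paraphrase must preserve.
KEYWORDS = {"Red", "Blue", "Green", "Purple", "Yellow"}

def keep_paraphrase(original: str, candidate: str,
                    max_similarity: float = 0.9) -> bool:
    """Return True if a paraphrase candidate survives the quality filter:
    it must keep every keyword present in the original sentence and must
    not be a near-duplicate of it."""
    # Reject candidates that drop important keywords.
    if any(k in original and k not in candidate for k in KEYWORDS):
        return False
    # Reject candidates too similar to the original sentence.
    similarity = difflib.SequenceMatcher(None, original, candidate).ratio()
    return similarity < max_similarity
```

For the source sentence "I need a large army on Red.", a candidate that drops "Red" is filtered out, as is a verbatim copy, while a sufficiently reworded candidate that keeps "Red" is retained.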
We experiment with utilizing the AH dataset in place of the original human dataset to see if the added diversity introduced by paraphrasing improves downstream performance. Examples of Synthetic (S), Augmented-Synthetic (AS), and Augmented-Human (AH) data are provided in Figure 6 in the Appendix.

# 5 Experiments

This section presents the empirical evaluation of our approach. We design two evaluation experiments to contrast our model's performance with that of humans, as well as of ChatGPT prompted to perform our task through in-context learning. Both human and ChatGPT performance were computed using the 30 held-out examples in our test set. We statistically measure the difference in the average number of goals/constraints predicted correctly per data point between our model and the two baselines (human and ChatGPT). We conclude with an ablation analysis across the model and data augmentations utilized in this approach.

# 5.1 Human Performance

In our first study, we ask how well the average human can perform on the task of parsing strategic intent from language (see Table 1). We recruited 114 participants for our study from Prolific. Participants begin with a tutorial of the task and are provided an annotated example explaining how to
| Baseline | Goals (Total = 6) | Constraints (Total = 8) |
| --- | --- | --- |
| Model (Ours) | 2.76 ± 1.05 | 5.53 ± 1.26 |
| Human | 1.87 ± 1.12 | 4.28 ± 1.83 |
| ChatGPT | 2.10 ± 1.27 | 3.80 ± 1.51 |
assign goals and constraints given a language description and map. Following this tutorial, each participant is provided three randomly selected maps and language descriptions from our test set of 30 unique data points and is asked to annotate the goals and constraints for each given strategy. Our study included attention checks so that participants who submitted random responses could be excluded. The average time taken for our study was 21 minutes, and participants were paid $3.60 for completing our task. We utilized a data filtering rubric to identify and remove individual data points that were inadequate or that came from participants who appeared to blatantly ignore or misunderstand the instructions. The rubric is included in Appendix F. After filtering, a total of 270 responses remained.

# 5.2 ChatGPT Performance

We also evaluate ChatGPT (GPT-3.5 Default) as a baseline for our task (see Table 1). We design a 1000-word language prompt to instruct ChatGPT to perform the same task (see the full prompt in Appendix G.1). This prompt includes a description of the environment and task, as well as an annotated example translating goals and constraints from language. Crucially, we design our prompt such that ChatGPT receives the same information that humans receive in our study in §5.1. Following this prompt, we iteratively input each strategy and troop deployment in our test set and store the constraints selected by ChatGPT. The only additional prompt engineering we conduct is to notify ChatGPT when it makes formatting mistakes while predicting constraints, such as predicting more than the maximum number of constraints or creating new constraint classes.

# 5.3 Results for Goal Extraction

The average number of goals predicted correctly per map can be seen in the first column of Table 1. We applied multivariate linear regression to compare the results of our model with our ChatGPT and human baselines, with the Akaike information criterion (AIC) as our Occam's razor.
Table 1: Means and standard deviations for the number of correct predictions of each approach.

AIC is a mathematical method for determining model fit, which we use to choose the regression model that best fits our data. For the goals model, we modeled each baseline (human vs. model vs. ChatGPT) as a fixed-effects covariate and the datapoint number as a mixed-effects variable. The datapoint number corresponds to the numerical index (between 1 and 30) of the datapoint in the test set. We performed Levene's test (Glass, 1966), which showed homoscedasticity $(F(2,327) = 0.5435$, $p = 0.581)$. The residuals of our model were not normally distributed; however, prior work has shown that an F-test is robust to non-normality (Blanca Mena et al., 2017; Cochran, 1947). Therefore, we proceeded with our linear regression analysis. The dependent variable in our analysis was the number of goals predicted correctly. An ANOVA with respect to our dependent variable yielded a significant difference across conditions $(F(2,299.95) = 10.605$, $p < 0.001)$. A Tukey post-hoc test (Abdi and Williams, 2010) for pairwise significance further revealed a significant difference between the performance of our model vs. humans $(p < 0.001)$ and vs. ChatGPT $(p < 0.05)$; i.e., our approach predicted goals significantly better than both humans and ChatGPT.

| Model Type | Data | Pretraining | Accuracy (Std) |
| --- | --- | --- | --- |
| RoBERTa base | - | - | 44.37 (1.33) |
| w/ troop | AH | AS | 46.04 (1.85) |
| w/ troop + [CLSi] | AH | AS | 45.52 (1.48) |
| w/ troop + [CLSi] | AH | S | 45.32 (1.01) |
| w/ troop + [CLSi] | AH | - | 45.89 (1.26) |
| w/ [CLSi] | AH | AS | 44.29 (1.14) |
| w/ troop + [CLSi] | H | - | 45.07 (1.33) |

Table 2: Ablation study (10-fold cross-validation) with respect to model and data augmentations for goal extraction. H: the human-created dataset (§3.3); S: the synthetic dataset created from templates; AH/AS: the augmented version of H/S via paraphrasing (§4.4). $[\mathrm{CLS}_i]$ represents the use of individual classification tokens for each goal/constraint (§4.1); "troop" represents the inclusion of troop selections as a part of the input.

| Model | Data | Pretraining | Accuracy (Std) |
| --- | --- | --- | --- |
| RoBERTa base | H | - | 62.60 (1.60) |
| w/ troop + [CLSi] | H | S | 68.21 (1.08) |
| w/ troop + [CLSi] | AH | S | 67.79 (1.58) |
| w/ troop + [CLSi] | H | AS | 67.09 (1.28) |
| w/ troop | H | S | 65.96 (1.12) |
| w/ troop + [CLSi] | H | - | 65.76 (1.13) |
| w/ troop + [CLSi] | AH | - | 65.52 (1.42) |
| w/ [CLSi] | H | S | 65.31 (1.12) |

Table 3: Ablation study (10-fold cross-validation) for constraint extraction.

# 5.4 Results for Constraint Extraction

The average number of constraints predicted correctly per map can be seen in the second column of Table 1. To compare our constraint prediction model to our human and ChatGPT baselines, we conducted a non-parametric Friedman's test (Pereira et al., 2015). We could not employ a multivariate regression analysis, as the regression model for constraints did not satisfy the assumption of homoscedasticity per Levene's test $(F(2,327) = 5.4294, p < 0.01)$. The Friedman's test yielded a significant difference across conditions for the task of predicting constraints $(\chi^2(2,90) = 16.768, p < 0.001)$.
A further pairwise Wilcoxon signed-rank test (Woolson, 2007) revealed a significant difference between humans and our model $(p < 0.001)$ as well as between ChatGPT and our model $(p < 0.001)$, indicating that our approach significantly outperforms not just humans but also ChatGPT at inferring constraints from language.

# 5.5 Discussion

Our results emphasize that inferring strategic intent from language is a non-trivial task, as language interpretation can be subjective and malleable. ChatGPT is capable of performing novel tasks such as text classification (Li et al., 2023), mathematical problem solving (Frieder et al., 2023), and information extraction (He et al., 2023) through in-context learning. However, despite these capabilities, our model was found to significantly outperform ChatGPT at inferring strategic intent from language. Success in highly specific and complex language interpretation tasks, such as ours, requires the model to build an understanding of the domain and the task itself, as the generic language interpretation learned by the majority of pretrained language models may not be applicable.

Recent work on evaluating open question-answering on a challenge dataset has shown that even large-scale language models with between 6B and 100B parameters do not outperform humans (Peinl and Wirth, 2023). By developing a computational interface which can infer strategic intent from language significantly better than humans, we show the usefulness of our pipeline for solving complex, domain-specific tasks in a low-data, low-resource setting.
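As a pure-Python illustration of the Wilcoxon signed-rank statistic used in the pairwise comparison above (the test statistic only, not the p-value; any paired scores fed to it here are hypothetical, not the study's data):

```python
def wilcoxon_w(xs, ys):
    """Wilcoxon signed-rank test statistic W for paired samples.

    Ranks the absolute differences (zero differences are discarded and
    tied magnitudes receive their average rank) and returns the smaller
    of the positive- and negative-rank sums, which is then compared
    against a critical value or used to compute a p-value.
    """
    diffs = [x - y for x, y in zip(xs, ys) if x != y]
    ordered = sorted(diffs, key=abs)
    # Average-rank assignment for tied absolute differences.
    ranks = {}
    i = 0
    while i < len(ordered):
        j = i
        while j < len(ordered) and abs(ordered[j]) == abs(ordered[i]):
            j += 1
        avg = (i + 1 + j) / 2  # average of 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks.setdefault(abs(ordered[k]), avg)
        i = j
    w_pos = sum(ranks[abs(d)] for d in diffs if d > 0)
    w_neg = sum(ranks[abs(d)] for d in diffs if d < 0)
    return min(w_pos, w_neg)
```

In practice one would use a library routine such as `scipy.stats.wilcoxon`, which also reports the p-value; the sketch above only makes the rank-sum computation explicit.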
| Baseline | Constraints | Goals |
| --- | --- | --- |
| RoBERTa-base (Best) | 68.21 (1.08) | 46.04 (1.85) |
| GPT-Neo 125M (Best) | 65.22 (1.21) | 46.08 (0.73) |
Table 4: Performance when the RoBERTa-base encoder is substituted with a SOTA autoregressive model, i.e., GPT-Neo (125 million parameters).

# 5.6 Ablation Study

Tables 2 and 3 provide the results from ablating each model augmentation discussed in Section 4. The effects of these augmentations are more prominent in the model for predicting constraints ($\sim$6% performance boost) than in the model for predicting goals ($\sim$1.5% performance boost). For the constraints model, when any component, i.e., troop selections, pretraining, or the individual classification tokens, was removed, accuracy dropped by $\sim$3%. For predicting goals, the inclusion of troop selections was the only model augmentation that had a decisive impact on performance, as all models with selections achieved an accuracy $\sim$1% higher than those without. We attribute the difficulty in improving the performance of the goals model to the contextual ambiguity of the values assigned to each goal. Participants may not always follow the same metric when specifying goal values. Each participant could have a unique interpretation of what a rating between -100 and 100 means for a particular goal, and of how to describe that value through language (see the Appendix for the data distribution corresponding to each goal). This disparity in interpreting values could affect the consistency of language descriptions for goals in our dataset.

Finally, our last ablation studied the effect of the type of encoder utilized in our approach. We performed a comparison with a model which replaced the encoder with a SOTA pretrained autoregressive model. We utilized GPT-Neo (Black et al., 2021) for our experiments, as it has the same number of parameters as RoBERTa-base (125 million).
Our findings (see Table 4) show that utilizing an autoregressive model as our encoder offers no benefit over a RoBERTa-base model: the GPT-Neo model performed equivalently for predicting goals and about $3\%$ worse for the constraints model.

# 6 Conclusion

In this paper, we develop a novel computational interface to automate inferring strategic intent, in the form of goals and constraints, from unstructured language descriptions of strategies. We develop a new benchmark for our dataset and broader task, and further conduct a novel head-to-head evaluation to determine the relative efficacy of our approach. We show that in a low-data setting, our approach to inferring goals and constraints from language strategy descriptions can significantly outperform humans on the same tasks. Furthermore, we also found that our approach, with only 125 million parameters, was able to significantly outperform ChatGPT at inferring strategic intent from language. Our work endows researchers with valuable tools to further Seldonian optimization approaches for mixed-initiative interaction.

# Future Work

To measure ChatGPT performance, we employ a one-shot chain-of-thought prompting method with detailed instructions for the task. We chose this method to maintain consistency between the information shown to humans and to ChatGPT. Future work may explore ablations on the size of the initial prompt or the number of annotated examples in the prompt to tune the performance of ChatGPT on our strategy translation task. Secondly, an important next step that stems from this research pertains to multi-round inference and updating the initially learned strategy. In future work, it would be helpful to develop methods that allow users to modify their initial strategy throughout the game or task as their goals or values change.
These methods could utilize approaches proposed in prior work wherein language inputs were leveraged to change the sub-goals that an agent is considering (Fu et al., 2019; Goyal et al., 2019). Furthermore, recent work has shown that ChatGPT/GPT-3.5 holds promise for dialog-state tracking and task-oriented dialog (Labruna et al., 2023; Heck et al., 2023). Future work could also formulate the task of updating the initial strategy over the course of the game as a goal-oriented dialog, and tune GPT-3.5 or GPT-4 to update a user's initially translated strategy after multiple rounds of the game through language feedback.

# Limitations

Firstly, we asked participants to provide natural language descriptions after providing their structured intent in the form of goals and constraints. This potentially biased participants towards specifically referencing the terminology utilized in the goals and constraints. While our dataset provides explanations that are the closest to natural, human-like descriptions of strategies, an important next step would entail comparing how our model performs on strategies collected "in the wild." Secondly, in this paper we assume that utilizing language is more accessible than learning to use mathematical specifications directly to specify one's intent to an intelligent agent. However, we do not test whether this assumption bears out in practice. In future work, we hope to develop a human-subjects study to confirm this hypothesis. Finally, despite converting language to goals and constraints, in this work we do not directly train a Seldonian optimization approach; we focus instead on showing the capability of our machine learning pipeline in a low-data setting. However, we have provided all the components needed to train a reinforcement learning approach that constrains an RL agent's behavior through unstructured language (including a novel OpenAI RL domain for the game Risk; see the Appendix).
Developing this approach is currently outside the scope of this work, and we thereby leave this exploration for future work.

# Ethics Statement

As pretrained large language models are utilized in our approach for automated strategy translation, we need to be cognizant of the prevalence of bias within these models. If these systems are translating strategies in safety-critical settings, it is important to make sure that the language models make decisions solely based on the provided context rather than on any inherent bias. Many prior works have studied approaches to identify and mitigate bias (Abid et al., 2021; Silva et al., 2021b; Guo et al., 2022; Viswanath and Zhang, 2023). We encourage authors to seek out such works prior to deploying any strategy translation module for a real-world task.

# Acknowledgements

This work was supported by the Office of Naval Research under awards N00014-19-1-2076, N00014-22-1-2834, and N00014-23-1-2887, and the National Science Foundation under award FMRG-2229260. We also thank Konica Minolta for their contribution to this work via a gift to the Georgia Tech Research Foundation.

# References

2014. Online participant recruitment for surveys and market research.
Herve Abdi and Lynne J Williams. 2010. Tukey's honestly significant difference (HSD) test. Encyclopedia of Research Design, 3(1):1-5.
Abubakar Abid, Maheen Farooqi, and James Zou. 2021. Persistent anti-Muslim bias in large language models. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pages 298-306.
Léo Andeol. 2018. Leoandeol/gym-risk: Gym environment for the Risk game by Hasbro.
Jacob Andreas, Dan Klein, and Sergey Levine. 2017. Modular multitask reinforcement learning with policy sketches. In International Conference on Machine Learning, pages 166-175. PMLR.
Dzmitry Bahdanau, Felix Hill, Jan Leike, Edward Hughes, Arian Hosseini, Pushmeet Kohli, and Edward Grefenstette. 2018.
Learning to understand goal specifications by modelling reward. arXiv preprint arXiv:1806.01946.
Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. 2021. GPT-Neo: Large scale autoregressive language modeling with Mesh-Tensorflow.
María José Blanca Mena, Rafael Alarcón Postigo, Jaume Arnau Gras, Roser Bono Cabré, Rebecca Bendayan, et al. 2017. Non-normal data: Is ANOVA still a valid option? Psicothema.
Valts Blukis, Yannick Terme, Eyvind Niklasson, Ross A Knepper, and Yoav Artzi. 2019. Learning to map natural language instructions to physical quadcopter control using simulated flight. arXiv preprint arXiv:1910.09664.
Haw-Shiuan Chang, Ruei-Yao Sun, Kathryn Ricci, and Andrew McCallum. 2023. Multi-CLS BERT: An efficient alternative to traditional ensembling. In Proceedings of the Annual Meeting of the Association for Computational Linguistics.
William G Cochran. 1947. Some consequences when the assumptions for the analysis of variance are not satisfied. Biometrics, 3(1):22-38.
Richard Dempsey and Jonathan M Chavous. 2013. Commander's intent and concept of operations. Military Review, 93(6):58-66.
Cunxiao Du, Zhaopeng Tu, and Jing Jiang. 2021. Order-agnostic cross entropy for non-autoregressive machine translation. arXiv preprint arXiv:2106.05093.
Simon Frieder, Luca Pinchetti, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz, Philipp Christian Petersen, Alexis Chevalier, and Julius Berner. 2023. Mathematical capabilities of ChatGPT. arXiv preprint arXiv:2301.13867.
Justin Fu, Anoop Korattikara, Sergey Levine, and Sergio Guadarrama. 2019. From language to goals: Inverse reinforcement learning for vision-based instruction following. arXiv preprint arXiv:1902.07742.
Richard Gibson, Neesha Desai, and Richard Zhao. 2010. An automated technique for drafting territories in the board game Risk.
Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 6(1):15-20.
Gene V Glass. 1966. Testing homogeneity of variances. American Educational Research Journal, 3(3):187-190.
Nakul Gopalan, Dilip Arumugam, Lawson Wong, and Stefanie Tellex. 2018. Sequence-to-sequence language grounding of non-Markovian task specifications. In Proceedings of Robotics: Science and Systems, Pittsburgh, Pennsylvania.
Prasoon Goyal, Scott Niekum, and Raymond J Mooney. 2019. Using natural language for reward shaping in reinforcement learning. arXiv preprint arXiv:1903.02020.
Yue Guo, Yi Yang, and Ahmed Abbasi. 2022. Auto-Debias: Debiasing masked language models with automated biased prompts. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1012-1023.
Philip J Hayes. 1985. The utility of natural language interfaces (panel session). In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, page 19.
Jiabang He, Lei Wang, Yi Hu, Ning Liu, Hui Liu, Xing Xu, and Heng Tao Shen. 2023. ICL-D3IE: In-context learning with diverse demonstrations updating for document information extraction. arXiv preprint arXiv:2303.05063.
Michael Heck, Nurul Lubis, Benjamin Ruppik, Renato Vukovic, Shutong Feng, Christian Geishauser, Hsien-Chin Lin, Carel van Niekerk, and Milica Gašić. 2023. ChatGPT for zero-shot dialogue state tracking: A solution or an opportunity? arXiv preprint arXiv:2306.01386.
Arie W Kruglanski. 1996. Goals as knowledge structures. In P. M. Gollwitzer and J. A. Bargh (Eds.), The Psychology of Action: Linking Cognition and Motivation to Behavior, pages 599-618.
Geert-Jan M Kruijff, M Janicek, Shanker Keshavdas, Benoit Larochelle, Hendrik Zender, Ninja JJM Smets, Tina Mioch, Mark A Neerincx, Jurriaan Van Diggelen, Francis Colas, et al. 2014. Experience in system design for human-robot teaming in urban search and rescue.
In Field and Service Robotics, pages 111-125. Springer.
Tiziano Labruna, Sofia Brenna, Andrea Zaninello, and Bernardo Magnini. 2023. Unraveling ChatGPT: A critical analysis of AI-generated goal-oriented dialogues and annotations. arXiv preprint arXiv:2305.14556.
Jiazheng Li, Runcong Zhao, Yulan He, and Lin Gui. 2023. OverPrompt: Enhancing ChatGPT capabilities through an efficient in-context learning approach. arXiv preprint arXiv:2305.14973.
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2022. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Joseph E Mercado, Michael A Rupp, Jessie YC Chen, Michael J Barnes, Daniel Barber, and Katelyn Procci. 2016. Intelligent agent transparency in human-agent teaming for multi-UxV management. Human Factors, 58(3):401-415.
Dipendra Misra, Andrew Bennett, Valts Blukis, Eyvind Niklasson, Max Shatkhin, and Yoav Artzi. 2018. Mapping instructions to actions in 3D environments with visual goal prediction. arXiv preprint arXiv:1809.00786.
Gordon B Moskowitz and Heidi Grant. 2009. The Psychology of Goals. Guilford Press.
Michael J Muller and Sarah Kuhn. 1993. Participatory design. Communications of the ACM, 36(6):24-28.
Thomas Nickles. 1978. Scientific problems and constraints. In PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, volume 1978, pages 134-148. Philosophy of Science Association.
David G Novick and Stephen Sutton. 1997. What is mixed-initiative interaction. In Proceedings of the AAAI Spring Symposium on Computational Models for Mixed Initiative Interaction, volume 2, page 12.
Aishwarya Padmakumar, Jesse Thomason, Ayush Shrivastava, Patrick Lange, Anjali Narayan-Chen, Spandana Gella, Robinson Piramuthu, Gokhan Tur, and Dilek Hakkani-Tur. 2021. TEACh: Task-driven embodied agents that chat. arXiv preprint arXiv:2110.00534.
René Peinl and Johannes Wirth. 2023. Evaluation of medium-large language models at zero-shot closed book generative question answering. arXiv preprint arXiv:2305.11991.
Dulce G Pereira, Anabela Afonso, and Fátima Melo Medeiros. 2015. Overview of Friedman's test and post-hoc analysis. Communications in Statistics-Simulation and Computation, 44(10):2636-2653.
Ian C Rankin, Seth McCammon, and Geoffrey A Hollinger. 2021. Robotic information gathering using semantic language instructions. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 4882-4888. IEEE.
Joseph Rosen, Eliot Grigg, Jaron Lanier, Susan McGrath, Scott Lillibridge, David Sargent, and C Everett Koop. 2002. The future of command and control for disaster response. IEEE Engineering in Medicine and Biology Magazine, 21(5):56-68.
Mohit Shridhar, Lucas Manuelli, and Dieter Fox. 2022. CLIPort: What and where pathways for robotic manipulation. In Conference on Robot Learning, pages 894-906. PMLR.
Andrew Silva, Nina Moorman, William Silva, Zulfiqar Zaidi, Nakul Gopalan, and Matthew Gombolay. 2021a. LanCon-Learn: Learning with language to enable generalization in multi-task manipulation. IEEE Robotics and Automation Letters.
Andrew Silva, Pradyumna Tambwekar, and Matthew Gombolay. 2021b. Towards a comprehensive understanding and accurate evaluation of societal biases in pre-trained transformers. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2383-2389.
Alane Suhr, Claudia Yan, Jacob Schluger, Stanley Yu, Hadi Khader, Marwa Mouallem, Iris Zhang, and Yoav Artzi. 2019. Executing instructions in situated collaborative interactions.
arXiv preprint arXiv:1910.03655.
Pradyumna Tambwekar, Andrew Silva, Nakul Gopalan, and Matthew Gombolay. 2021. Interpretable policy specification and synthesis through natural language and RL.
Stefanie Tellex, Nakul Gopalan, Hadas Kress-Gazit, and Cynthia Matuszek. 2020. Annual Review of Control, Robotics, and Autonomous Systems, 3:25-55.
Philip S Thomas, Bruno Castro da Silva, Andrew G Barto, and Emma Brunskill. 2017. On ensuring that intelligent machines are well-behaved. arXiv preprint arXiv:1708.05448.
Philip S Thomas, Bruno Castro da Silva, Andrew G Barto, Stephen Giguere, Yuriy Brun, and Emma Brunskill. 2019. Preventing undesirable behavior of intelligent machines. Science, 366(6468):999-1004.
Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Nick Walker, Yuqian Jiang, Harel Yedidsion, Justin Hart, Peter Stone, and Raymond J Mooney. 2019. Improving grounded natural language understanding through human-robot dialog. In 2019 International Conference on Robotics and Automation (ICRA), pages 6934-6941. IEEE.
Hrishikesh Viswanath and Tianyi Zhang. 2023. FairPy: A toolkit for evaluation of social biases and their mitigation in large language models. arXiv preprint arXiv:2302.05508.
Edward C Williams, Nakul Gopalan, Mine Rhee, and Stefanie Tellex. 2018. Learning to parse natural language to grounded reward functions with weak supervision. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 4430-4436. IEEE.
Robert F Woolson. 2007. Wilcoxon signed-rank test. Wiley Encyclopedia of Clinical Trials, pages 1-3.
Tsung-Yen Yang, Michael Hu, Yinlam Chow, Peter J Ramadge, and Karthik Narasimhan. 2020. Safe reinforcement learning with natural language constraints. arXiv preprint arXiv:2010.05150.
Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In International Conference on Machine Learning, pages 11328-11339. PMLR.
+ +# A Additional Data Collection Details + +Our study applied participatory design principles (Muller and Kuhn, 1993) to ensure that participants were engaged in the task and provided meaningful strategy descriptions. Each participant was initially given a partially set up map on which two other "opponents" had already placed their troops. The participant was then asked to provide their own troop placements based on these initial placements. In Risk, the initial troop placements have a substantial impact on the strategies a player can pursue for the rest of the game; as such, troop initialization serves as a stand-in for a player's overall strategy. By asking participants to perform an actual aspect of gameplay, i.e., deploying troops, we encouraged them to envision future situations, consider how their decisions could affect later gameplay, and develop grounded strategies that could actually function as viable Risk strategies. + +Next, participants were asked to provide the goals and constraints they considered after selecting their troop placements. These specific goals and constraints were selected because they cater to potential strategies that could be employed while playing Risk. The templates provided a scaffold within which participants, who may or may not have had prior experience with Risk, could ground their strategies. However, it is important to acknowledge an inductive bias introduced by the specific wording of the goal and constraint templates, which could have influenced the strategies participants submitted. For goals, participants were asked to rate how important each goal was to their strategy on a scale of -100 to 100. A score of -100 indicated that pursuing the goal was completely detrimental to their strategy, while 100 indicated that pursuing it was essential.
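Concretely, each collected specification pairs goal importance ratings with selected constraint templates and a free-text description. The record below is a hypothetical illustration of this format; all names and values are invented for the example, not drawn from the dataset.

```python
# One participant's strategy specification, as a plain Python record.
# Goal ratings lie on the -100..100 importance scale described above;
# each constraint pairs a template with its filled-in value.
strategy = {
    "goal_ratings": {
        "maximize_countries_occupied": 85,   # near-essential to the strategy
        "keep_troops_close_together": 60,
        "maximize_battles": -40,             # actively avoided
    },
    "constraints": [
        ("I must have troops on", "Purple"),
        ("I must not have troops on", "Red"),
        ("I must have at least N countries", 5),
    ],
    "description": "I put all my troops in Purple to defend it.",
}

# Sanity checks mirroring the collection protocol: ratings stay in range,
# and at least three constraints were provided.
assert all(-100 <= r <= 100 for r in strategy["goal_ratings"].values())
assert len(strategy["constraints"]) >= 3
```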
For constraints, participants were provided 9 constraint templates and were asked to select and fill in the constraints represented in their strategy. Participants were required to provide at least three constraints to ensure that they did not skip this question. The specific goals and constraints in our dataset are shown in Table 5. Finally, participants were asked to summarize their strategy for the given map as a language description. Participants were encouraged to include references to their goals and constraints, but these descriptions were otherwise unprompted. Participants were paid up to $8.50 based on the number of adequate responses submitted, and the payment scale was updated if the average time taken changed significantly. + +As mentioned in the paper, we created three additional augmented datasets from our original corpus. Figure 6 provides examples of the effect of the augmentations employed in each augmented dataset. Our full dataset can be found at the following anonymized Github repository - Anonymized Data Repository. + +# A.1 Data Cleaning/Filtering + +We made as few modifications as possible to participants' responses, ensuring that responses were self-consistent while preserving the integrity of the organic data collection task. If a participant specifically referenced a goal or a constraint in their language but did not include it in their response, the response was modified to include it, and vice versa. We also corrected typos within a participant's specifications, such as when they meant to reference the "Blue" continent instead of the "Red" continent. If a response was not salvageable with minimal modifications, it was discarded. Discarded responses included cases where participants simply did not understand the task or submitted blatantly insincere responses, such as copying text from the study multiple times to reach the character limit.
These decisions were made upon the agreement of multiple reviewers. + +![](images/ef39334c75b613e69998f9cfe5416b0ac7d83775b49097924d7e6282415f388d.jpg) + +![](images/5c7c744ea4ec893f71e9ddc9ffbd2de8edb6c9ac7fb5d1c76b8a7f0d3fc96dab.jpg) + +![](images/f30a288e4c9660dd62936fc80e7fdb0f706b5ace52919d0b2bbb8fab2771db9a.jpg) + +![](images/9d6347c06cd61aab75f2603fd61a59540ed133c304d7a954400a37ae0053be06.jpg) +Figure 4: Distribution of assigned values for each goal. The titles for each goal have been shortened for readability. + +![](images/fb8ec1db6599a6415c401b1919a81aa5aeda7e9a5b3e55f5f8e10151a3c27c7c.jpg) + +![](images/c4d254aad5d3d1bdefdde1079b6c7a34a2e32ebda6e3a60488f0447ca70a3e1a.jpg) + +![](images/7dadec3d8286e861f9961dde765dde8738ff492805ebb010b0c4e4c5fb9df309.jpg) +Figure 5: Distribution of assigned values for each constraint type. + +# A.2 Data Collection Quiz + +To ensure that participants understood the rules of Risk before providing strategies for our dataset, each participant was asked to answer a five-question quiz. Participants needed to answer all questions correctly to proceed and were given three tries, after which they were shown the correct answers. The five questions in our quiz were as follows (correct answers to each question are in bold): + +1. Which of these are NOT a phase in the game? + +(a) Attack +(b) Recruit +**(c) Control opponent's troops** +(d) Maneuver + +2. What is the objective of the game? + +(a) Control the rightmost continent +(b) Have the maximum number of island territories +(c) Have the most territories after 10 turns +**(d) Occupy all territories on the board** + +3. Which of these decides how many troops you receive at the start of each turn? (TWO CORRECT ANSWERS) + +**(a) The number of territories you control** +(b) The number of coastal territories on the map +(c) The physical size of the board game +**(d) The number of continents you fully occupy** + +4.
Which of the following statements are correct about attacking enemy territories in the game? (TWO CORRECT ANSWERS) + +(a) When you attack a territory you've already attacked, your attack points are doubled +**(b) You CANNOT attack in the opposite direction of the arrows** +**(c) You can only attack territories you have access to** +(d) You can never attack a territory in the same continent + +5. Which of the following statements are true regarding how attacks are conducted? (TWO CORRECT ANSWERS) + +(a) A player with scattered troops always wins +(b) A player attacking from the left side always wins +**(c) Both players roll a number of dice dependent on the number of their troops involved in the battle to decide the outcome** +**(d) A player can attack with up to 3 troops and defend with up to 2 troops in one battle** + +# B Dataset Utility + +This section briefly discusses the potential future utility of our collated dataset. First, the dataset provides strategy specifications in Risk that can be used to test Seldonian optimization approaches in future work; it is the first such corpus of language descriptions of strategic intent. Future work can analyze the strengths and flaws of our data to refine our data collection protocol and generate the specific examples needed for individual applications. There are also many tangential applications for this data that are unrelated to the use case specified in this paper, as there is a dearth of natural language datasets containing human-like speech patterns that are not scraped from internet corpora.
Many NLP techniques could be applied to further study this language data: summarization, to determine whether these policies can be condensed into a more easily digestible format; sentiment analysis, to broadly categorize descriptions as aggressive, defensive, and so on; + +or Q&A comprehension-based methods, to train AI agents to answer questions about a user's preferences by reading their strategy description. + +# C Dataset Distributions + +The distributions of goals and constraints selected by participants are shown in Figure 4 and Figure 5, respectively. For Goals 3 (Keep your troops close together) and 5 (Maximize battles), participants tended to skew towards values in the 60-100 range; for the other goals, responses were relatively uniform. On average, participants submitted 5.62 unique constraints per response. + +# D Implementation Details + +Hyperparameters for both models were selected through a grid search. The constraints model was trained for 10 epochs with a batch size of 16 and a learning rate of 0.0005. The goals model was trained for 25 epochs with a batch size of 8 and a learning rate of 0.00001. Both models used the AdamW optimizer; the constraints model employed a cosine learning rate scheduler, and the goals model a linear learning rate scheduler. We held out 30 randomly selected examples for our human/ChatGPT evaluation (Section 5) and split the remaining 1023 examples into an 85/15 train/validation split to perform the grid search. Finally, to report model accuracy, we computed the 10-fold cross-validation accuracy for the best-performing hyperparameter setting. The best-performing constraints model was pretrained on the synthetic corpus and trained on the un-augmented human corpus; the best goals model was pretrained on the synthetic-augmented dataset and trained on the human-augmented dataset.
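The two learning-rate schedules can be written out explicitly. The sketch below is a minimal decay-to-zero version in plain Python; real schedulers often add a warmup period, and the step count here is hypothetical.

```python
import math

def cosine_lr(step, total_steps, base_lr):
    # Cosine decay from base_lr down to 0 (constraints model).
    return 0.5 * base_lr * (1 + math.cos(math.pi * step / total_steps))

def linear_lr(step, total_steps, base_lr):
    # Linear decay from base_lr down to 0 (goals model).
    return base_lr * (1 - step / total_steps)

total = 1000  # hypothetical number of optimizer steps
lr_constraints = [cosine_lr(s, total, 5e-4) for s in range(total + 1)]
lr_goals = [linear_lr(s, total, 1e-5) for s in range(total + 1)]

# Both schedules start at their base learning rate and end at zero.
assert abs(lr_constraints[0] - 5e-4) < 1e-12
assert abs(lr_constraints[-1]) < 1e-12
assert abs(lr_goals[0] - 1e-5) < 1e-15
assert abs(lr_goals[-1]) < 1e-15
```

The cosine schedule decays slowly at first and fastest mid-training, while the linear schedule decays at a constant rate throughout.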
All experiments were conducted on a 48GB NVIDIA Quadro RTX GPU. Our code can be found at the following anonymized repository for further reference - Anonymized Code Repository. + +# E Human Evaluation Study - Additional Details + +In this section, we report some additional details regarding our human-evaluation experiment. Firstly, we report that on average, the difference between scores for a participant's first and last response was -0.2143 for goals and -0.0102 for constraints, indicating that there is a negligible impact of factors + +
**Synthetic Data:** Why would I care about battling. I plan to attack players in the game one at a time. I don't think I can handle having troops on more than 2 continents. I need to spread my troops out as far as possible. I can't win if I put any troops on Blue. I need to place troops on at least 5 countries. This time I will use a different strategy. I need to have troops on at least 5 continents. I don't intend to control continents.

**Synthetic-Augmented Data:** I don't know why I care about fighting. I plan to attack players in the game one at a time. I don't think I can handle having troops on more than 2 continents. My troops need to be spread out as much as possible. If I put any troops on Blue, I will not win. I need to place troops on at least 5 countries. I will be using a different strategy this time. I need to have troops on at least 5 continents. I don't intend to control continents.

**Human Data:** I am going to attack and take over green c. That country is ripe for the taking since I have cut it off from other grey troops. I also want 4 troops to present a strong force in green a in case of a grey attack from yellow d. Once the green continent is secure I will look to move my armies out to the red continent to battle black there. Hopefully, while this is going on grey and black will be fighting over yellow and blue, but in case they don't I'm keeping all of my troops together on Green

**Human-Augmented Data:** I am going to attack and take over green c. Since I cut it off from other grey troops, that country is ripe for taking. I also want 4 troops to present a strong force in green a in case of a grey attack from yellow d. I will move my armies to the red continent to fight black once the green continent is secure. Hopefully, while this is going on grey and black will be fighting over yellow and blue, but in case they don't I'm keeping all of my troops together on Green.
+ +Figure 6: Examples of data from Synthetic (top-left), Synthetic-Augmented (top-right), Human (bottom-left) and Human-Augmented (bottom-right). Highlighted sections represent the specific sentences changed by our augmentation procedure. + +
| Goals | Constraints |
| --- | --- |
| G1: Surround enemy territories | C1: I must have troops on (continent) |
| G2: Maximize number of countries occupied | C2: I must not have troops on (continent) |
| G3: Keep our troops close together | C3: I must be able to access (continent) in one move |
| G4: Maximize battles throughout the game | C4: I need to protect the borders of (continent) |
| G5: Fortify borders for the continents you control | C5: I need a total of at least (number) troops to defend a continent |
| G6: Battle opposing players one at a time | C6: I must have at least (number) countries |
| | C7: I must have troops on at least (number) continents |
| | C8: I must place at least (number) troops to effectively defend a country |
| | C9: I must have troops on at most (number) continents |
+ +Table 5: Goals and Constraints Selected for our Dataset + +such as cognitive load or a learning curve. Secondly, it is important to note that we did not have the same number of responses per map, as the map condition was randomly assigned to each participant. While this may slightly impact the results for the constraints model, since we aggregated performance across maps, the strong significant difference across baselines makes it unlikely to change our result. + +# F Human Evaluation Study - Data Filtering Rubric + +Next, we cover the rubric we applied to filter data for the human-subjects study. Each response was independently evaluated by two graders and was included only if both graders deemed it acceptable per the predefined rubric. The rubric was as follows: + +1. If the constraints clearly do not match the selections for locations or access + +- e.g., if someone has selected "I must have troops on Blue" when there are no troops on Blue + +2. If someone has submitted invalid constraints + +- e.g., if someone selects both "I need troops on at least 2 continents" and "I need troops on at most 1 continent" +- If someone mistakes "country" for "continent" + +3. If someone has selected the same value for all goals (or values within a small range, say $\pm 10$), when this clearly does not align with the strategy + +- e.g., someone selects $-100$ for all goals when the strategy involves protecting a continent + +# G ChatGPT Prompt + +We utilized the following prompt for ChatGPT, which included a description of the domain and task as well as an annotated example. + +# G.1 Full Prompt + +Reading the following section carefully will provide you with the information needed to complete this task. + +Risk is a board game in which an army commander tries to take over the world by defeating all enemy troops and controlling all countries. Risk is a simplified version of real conflict, and has rules designed to reflect this.
These include the following: + +- Players control countries by having troops in them +- The more countries and continents a player controls, the more resources they get +- Players win countries from other players by battling with their troops +- The more troops a player has when battling, the more likely they are to win +- Players can only attack or be attacked by countries that are next to them + +In this task, you will be asked to provide a set of constraints corresponding to the human player's strategy for the board game Risk. This includes their troop placements and a text description, which explains why the player decided to place their troops and how they plan to win this game of Risk given their opponents' choices. + +Your task will be to think about the player's strategy (selections and description) and predict what their constraints are with respect to the strategy. Constraints are rules that you think need to be followed to successfully execute a strategy. + +CONSTRAINTS: Note: For predicting goals, this section would be replaced with a description of what goals are + +Constraints are comprised of constraint classes and constraint values. Your job is to assign constraints to the human's strategy. Each constraint is comprised of a constraint class and a constraint value. You will be provided a list of possible constraint classes and values to choose from. You may choose the same class of constraint more than once, but you may not submit duplicate constraints. For example, you may submit "I must have troops on Green" and "I must have troops on Blue" but you may not submit "I must have troops on Green" twice. Choose all constraints relevant to the strategy. You may choose up to 8 constraints per strategy. 
+ +The constraints you can choose from are + +- I must have troops on [Continent] +- I must not have troops on [Continent] +- I must be able to access [Continent] with one move +- I need to protect the borders of [Continent] +- I need a total of at least [Number] troops to defend a continent +- I must have at least [Number] countries +- I must have troops on at least [Number] continents +- I must place at least [Number] troops to effectively defend a country +- I must have troops on at most [Number] continents + +The possible constraint values you can choose from are + +- Continent - Blue, Green, Yellow, Red, Purple +- Number - 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 + +Our modified RISK Map contains 5 continents - Red, Green, Purple, Yellow and Blue. Each continent is made up of countries. Red continent has 3 countries, Green has 5 countries, Purple has 5 countries, Yellow has 4 countries and Blue has 4 countries. Green_A, Yellow_B, Blue_C, etc. are referred to as countries or territories; Green, Yellow, Blue, Red, Purple are referred to as continents. Continents also have different connections between them through which the troops can move. These connections are one way, i.e., troops from the source country can only move to the destination country and not the other way round. + +The map has the following connections - Yellow_D is connected to Green_A, Green_D is connected to Red_A, Red_A is connected to Green_D, Red_B is connected to Purple_E, Red_C is connected to Yellow_B, Red_C is connected to Blue_B, Blue_A is connected to Yellow_C, Yellow_C is connected to Blue_D, Blue_C is connected to Purple_A, Purple_A is connected to Green_E and Green_E is connected to Purple_A + +We will now give you a tutorial on how to ascertain the constraints from a human player's strategy and placements on the RISK board. + +The two opposing players are denoted by the "grey" and "black" player.
In this scenario, the grey player has placed its troops on the following territories - 5 troops on Yellow_C, 4 troops on Yellow_D, 1 troop on Red_A, 2 troops on Red_B, 2 troops on Red_C. The black player has placed its troops on the following territories - 4 troops on Blue_A, 2 troops on Blue_C, 2 troops on Green_E, 5 troops on Purple_A and 1 troop on Purple_B. + +Now that you have seen where the opposition troops are, you will now be shown how the human player has decided to deploy their troops and the strategy they used. + +The human player (white) has placed 14 troops to battle the opponents. They have placed the troops on the following territories - 7 troops on Purple_E, 5 troops on Purple_C and 2 troops on Purple_D. You will now be guessing the constraints the human player (white) focused on while coming up with their strategy. The following text contains the human player's description of the strategy they used to place their troops. It is critical that you read this description, as it contains information about the constraints considered by the human player. + +"I put all my troops in Purple, because I felt as though I needed all my available troops to defend Purple. I wanted to protect Purple. With 7 troops on Purple_E, I feel like I cannot be beat on purple. I wasn't too keen on getting involved in battles, or taking an overly aggressive strategy. I would like to focus on beating the black player first, I don't think I can battle two people at the same time. I'm going to avoid Red for now since it seems to be the hardest continent to control." + +We will now show you how to determine constraints from a strategy and via an example. Please carefully review the example and use the given information about both selections and text to fill out constraints for this strategy. 
+ +An appropriate set of constraints for the strategy shown above would be + +- I must have troops on Purple + +- Reason: The player mentioned that "they put all their troops on Purple" + +- I must not have troops on Red + +- Reason: The player mentioned that "they would like to avoid Red for now" + +- I must place at least 7 troops to effectively defend a country + +- Reason: The player mentioned that "with 7 troops on Purple_E, I cannot be beaten on Purple" + +# H Risk Reinforcement Learning Simulator + +We have shown that our proposed computational interface can remove the need for human interpreters when parsing intent from unstructured language. However, to test how well commander's intent interpreted from language can be applied towards optimizing an agent's behavior, we require a reinforcement learning domain in which to train our agent. As such, to enable Seldonian optimization via unstructured language descriptions, we developed a novel OpenAI Gym environment for simulating Risk gameplay. This environment closes the loop on the methods presented in this paper by providing all the components necessary for humans to specify their intent to an AI agent and evaluate whether their specifications have been satisfied by the learned agent. Our environment also provides an additional means of collecting data and conducting studies on human specification within multi-player team scenarios. + +For this task, we adapted an existing OpenAI Gym environment for Risk (Andeol, 2018). We modified the codebase to allow RL agents to be trained to play all phases of Risk, according to the setup utilized in our approach. We also developed a Pygame UI for our simulator (see Figure 7). A detailed description of the functionality of the domain and the state space is provided in Appendix I.
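The environment follows the standard Gym interaction pattern of `reset` and `step`. The skeleton below is a hypothetical, pared-down illustration of what such an interface looks like; the released environment's actual API, state layout, and rewards differ.

```python
class RiskEnvSketch:
    """Minimal Gym-style skeleton of a Risk simulator (illustrative only)."""

    N_TERRITORIES = 21  # 3 + 5 + 5 + 4 + 4 across the five continents
    PHASES = ("draft", "reinforce", "attack", "freemove")

    def reset(self):
        self.phase = "draft"
        self.owner = [None] * self.N_TERRITORIES   # territory -> player id
        self.troops = [0] * self.N_TERRITORIES
        self.turn = 0
        return self._obs()

    def step(self, action):
        # Actions are (phase, source, target, troops) tuples, mirroring the
        # four-item action description in Appendix I.
        phase, source, target, n = action
        assert phase == self.phase, "action must match the current phase"
        if phase == "draft":
            self.owner[source] = 0   # ego player id
            self.troops[source] += n
        # ... attack/freemove resolution and phase transitions omitted ...
        reward, done, info = 0.0, False, {}
        return self._obs(), reward, done, info

    def _obs(self):
        return {"phase": self.phase, "owner": list(self.owner),
                "troops": list(self.troops), "turn": self.turn}

env = RiskEnvSketch()
env.reset()
obs, reward, done, info = env.step(("draft", 3, None, 5))
```

The real environment additionally implements the opponents' heuristic turns, the combat and movement dynamics, and the reward functions described in Appendix I.2.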
In future work, we aim to leverage our domain to develop approaches that allow humans to constrain an agent's optimization through human-like language specifications of intent, which has not been accomplished in prior work. We also provide a link to an anonymized github repository with the Risk environment for further reference - Anonymized Gym-Risk Environment + +# I Risk Domain - Additional Domain Information + +This section provides additional information about our setup for the Risk domain. In our version of Risk, the ego player (Alpha) plays against two opponents (Charlie and Bravo) whose actions are controlled by a pre-determined heuristic. Gameplay within our Risk simulator comprises four phases: + +![](images/ab78e42bf6b832df7dbc6c6d574f87a5929dbb91b923c2801758750045b620c6.jpg) +Figure 7: This figure shows our Risk simulator with the playable (teal) and two other (orange and pink) agents. + +1. Drafting - Players draft their initial troops on empty territories. +2. Reinforce - Players assign reinforcements to their existing territories. +3. Attack - Players can choose to attack a neighboring territory with their troops. +4. Freemove - Players can move their troops between their territories. + +The game begins with the drafting phase, during which the agent decides where to place their initial 14 troops amongst the available territories. The two opposing players draft their troops before the agent is allowed to draft any. The opposing players' drafts are either hard-coded to match one of the maps utilized in our study or generated by a drafting heuristic. The drafting phase occurs only once per game. Following drafting, the agent executes the next three phases in sequence. First, in the "Reinforce" phase, the agent receives a number of reinforcements based on the number of territories and continents they control and must assign them to the territories they control.
Each country reinforced is an individual action. Next, the agent moves on to the "Attack" phase, in which the agent can attack adjacent territories with their troops. Within each attack action, the agent specifies which opposing territory they would like to attack, along with the territory they would like to attack from. The agent must also specify the number of troops they would like to move into the opposing territory should they win the conflict. Each combat sequence between two territories is executed in a similar manner to the physical board game, + +1. A maximum of three troops is chosen from the attacking territory, and a maximum of two troops from the defending territory. +2. For both the attacker and defender, a number of dice are rolled based on the number of troops involved in the attack. +3. The rolls are sorted in descending order, and each roll is compared between the attacking and defending country. +4. For each comparison, the country with the lower roll loses one troop. The defending territory wins all ties. +5. The above steps are repeated until either the attacking or defending force has been defeated. + +Following combat, the agent can move all but one troop into the conquered territory. Once the agent has finished attacking, they move on to the final phase of their turn, "Freemove," in which the player can move troops from one territory they control to another, as long as the territories are connected. Once the agent has executed all their actions, the opponents' actions are simulated and the player is returned to the "Reinforce" phase to start their next turn. The game ends when the agent either is out of troops or controls all territories. + +An action is specified by a four-item tuple, i.e. $\langle p, s, t, tr \rangle$. The first item, $p$, specifies which type of action is being conducted, among the four possible phases of the game. Item two, $s$, denotes the source country for the action.
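The dice-comparison procedure described above for a single battle round can be made concrete as a small deterministic helper that takes pre-rolled dice as input (a hypothetical sketch, not the simulator's implementation):

```python
def battle_round(attack_rolls, defend_rolls):
    """Resolve one Risk battle round given pre-rolled dice.

    attack_rolls: up to 3 dice for the attacker; defend_rolls: up to 2 dice
    for the defender. Returns (attacker_losses, defender_losses). The
    defender wins ties, as in the physical board game.
    """
    a = sorted(attack_rolls, reverse=True)
    d = sorted(defend_rolls, reverse=True)
    attacker_losses = defender_losses = 0
    for atk, dfn in zip(a, d):  # highest vs highest, second-highest vs second
        if atk > dfn:
            defender_losses += 1
        else:  # defender wins ties
            attacker_losses += 1
    return attacker_losses, defender_losses

# The highest pair (6 vs 6) is a tie, so the attacker loses a troop;
# the next pair (4 vs 3) goes to the attacker, so the defender loses one.
assert battle_round([6, 4, 2], [6, 3]) == (1, 1)
```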
For reinforce and drafting actions, $s$ is the country to which the agent wants to add troops, whereas for attack and freemove actions, $s$ denotes the country being attacked or moved from. The final two items, $t$ and $tr$, are specific to attack and move actions: $t$ specifies the country to attack or move to. For an attack action, $tr$ specifies the number of troops to move from the attacking country upon winning the combat; for a move action, $tr$ denotes the number of troops to be moved from $s$ to $t$. + +# I.1 State Space + +The state of the game is stored as a dictionary that records information such as country ownership, the number of troops on each country, and continent ownership. We also record information about players, such as the number of reinforcements available to a player, the number of players alive, and the current turn number. We provide six functions that encode the state space as an input for a reinforcement learning model. + +The first function encodes the state using 54 features. The initial 42 features contain country-related information for each opponent (21 features each), and the next 5 features contain continent ownership data. The remaining features capture other game information, such as the number of areas controlled by the player, troops left to be drafted, troops left for reinforcement, the number of players alive, the current turn number, and whether the current turn belongs to the player. + +The second function encodes the information as one-hot features. It has a total of 132 features: the first 84 contain country ownership as one-hots (21 each for the player, the opponents, and countries with no owner), the next 21 denote the number of troops on each country, and the next 20 contain continent ownership (5 each for the player, the opponents, and no owner).
The remaining features contain the other relevant information described for the first function. For both of the first two functions, we also provide normalized versions in which all real-valued features are divided by a normalizing constant. + +The fifth encoding function contains all 132 features of the third function plus additional information about the current phase, for 134 features in total; it returns normalized values. The last encoding function contains 298 features. Its initial features are similar to those of the third encoding function; in addition, it explicitly encodes where an agent can attack or execute a freemove, which can help the reinforcement learning model learn more easily. This function also returns normalized values. + +# I.2 Reward Functions + +We have set up four types of reward functions, ranging from sparse to dense. The recommended reward function is the rules-based reward, which provides rewards for successful actions, for finishing a phase, for successful actions within a phase, and for winning the game. Rewards for winning the game are weighted by a factor of 10; all others are weighted by a factor of 1. + +The simplest reward function is a sparse reward that provides a negative reward for losing the game and a positive reward for winning it. To increase the number of rewards given throughout the game, we created the turn-count reward function, which rewards the agent for every turn it plays. The survival reward function builds on this by adding a negative reward for losing in addition to the reward for surviving. + +# I.3 Human Drafting + +Finally, our simulator also allows the player or the opponents to skip the drafting phase and follow a fixed draft based on a predefined map.
In such cases, we have predefined fifteen types of map initialisation containing troops for both opponents, which correspond to the exact maps utilized in our data collection procedure. Our setup chooses one of the map initializations and corresponding selections made by a participant in the user study to simulate the game. \ No newline at end of file diff --git a/2023/A Computational Interface to Translate Strategic Intent from Unstructured Language in a Low-Data Setting/images.zip b/2023/A Computational Interface to Translate Strategic Intent from Unstructured Language in a Low-Data Setting/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..da93c95a5da757d1628ba7cc6170c1e33ea15654 --- /dev/null +++ b/2023/A Computational Interface to Translate Strategic Intent from Unstructured Language in a Low-Data Setting/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:768611b46d199ca6bbeb0c5f85e9db0b9f97e01c3d1581f0ac1f9b88d0129d89 +size 621436 diff --git a/2023/A Computational Interface to Translate Strategic Intent from Unstructured Language in a Low-Data Setting/layout.json b/2023/A Computational Interface to Translate Strategic Intent from Unstructured Language in a Low-Data Setting/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..347919030cec160079c9632c41d56fe3b10c997e --- /dev/null +++ b/2023/A Computational Interface to Translate Strategic Intent from Unstructured Language in a Low-Data Setting/layout.json @@ -0,0 +1,13042 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 68, + 67, + 524, + 100 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 67, + 524, + 100 + ], + "spans": [ + { + "bbox": [ + 68, + 67, + 524, + 100 + ], + "type": "text", + "content": "A Computational Interface to Translate Strategic Intent from Unstructured Language in a Low-Data Setting" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 106, + 105, + 489, + 134 + 
], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 105, + 489, + 134 + ], + "spans": [ + { + "bbox": [ + 106, + 105, + 489, + 134 + ], + "type": "text", + "content": "Pradyumna Tambwekar1, Lakshita Dodeja2*, Nathan Vaska3*, Wei Xu1, and Matthew Gombolay1" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 136, + 134, + 459, + 148 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 136, + 134, + 459, + 148 + ], + "spans": [ + { + "bbox": [ + 136, + 134, + 459, + 148 + ], + "type": "text", + "content": "1School of Interactive Computing, Georgia Institute of Technology" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 175, + 148, + 421, + 162 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 175, + 148, + 421, + 162 + ], + "spans": [ + { + "bbox": [ + 175, + 148, + 421, + 162 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 175, + 148, + 421, + 162 + ], + "type": "text", + "content": "Computer Science Department, Brown University" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 153, + 162, + 443, + 176 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 153, + 162, + 443, + 176 + ], + "spans": [ + { + "bbox": [ + 153, + 162, + 443, + 176 + ], + "type": "text", + "content": "3Massachusetts Institute of Technology, Lincoln Laboratory" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 106, + 176, + 490, + 203 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 176, + 490, + 203 + ], + "spans": [ + { + "bbox": [ + 106, + 176, + 490, + 203 + ], + "type": "text", + "content": "pradyumna.tambwekar@.gatech.edu, lakshita_dodeja@brown.edu, nathan.vaska@ll.mit.edu,{wei.xu, matthew.gombolay}@cc.gatech.edu" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 155, + 212, + 202, + 224 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 155, + 212, + 202, + 224 + ], + "spans": [ + { + "bbox": [ + 155, + 212, + 202, + 224 + ], + 
"type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 84, + 237, + 274, + 548 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 237, + 274, + 548 + ], + "spans": [ + { + "bbox": [ + 84, + 237, + 274, + 548 + ], + "type": "text", + "content": "Many real-world tasks involve a mixed-initiative setup, wherein humans and AI systems collaboratively perform a task. While significant work has been conducted towards enabling humans to specify, through language, exactly how an agent should complete a task (i.e., low-level specification), prior work falls short in interpreting the high-level strategic intent of the human commanders. Parsing strategic intent from language will allow autonomous systems to independently operate according to the user's plan without frequent guidance or instruction. In this paper, we build a computational interface capable of translating unstructured language strategies into actionable intent in the form of goals and constraints. Leveraging a game environment, we collect a dataset of over 1000 examples, mapping language strategies to the corresponding goals and constraints, and show that our model, trained on this dataset, significantly outperforms human interpreters in inferring strategic intent (i.e., goals and constraints) from language " + }, + { + "bbox": [ + 84, + 237, + 274, + 548 + ], + "type": "inline_equation", + "content": "(p < 0.05)" + }, + { + "bbox": [ + 84, + 237, + 274, + 548 + ], + "type": "text", + "content": ". Furthermore, we show that our model (125M parameters) significantly outperforms ChatGPT for this task " + }, + { + "bbox": [ + 84, + 237, + 274, + 548 + ], + "type": "inline_equation", + "content": "(p < 0.05)" + }, + { + "bbox": [ + 84, + 237, + 274, + 548 + ], + "type": "text", + "content": " in a low-data setting." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 68, + 560, + 154, + 572 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 560, + 154, + 572 + ], + "spans": [ + { + "bbox": [ + 68, + 560, + 154, + 572 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 67, + 581, + 291, + 743 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 581, + 291, + 743 + ], + "spans": [ + { + "bbox": [ + 67, + 581, + 291, + 743 + ], + "type": "text", + "content": "Effective communication is essential for the proper functioning of organizational teams. \"Commander's Intent\" is a method for developing a theory of mind utilized in many domains such as the search and rescue, pandemic response, military, etc (Mercado et al., 2016; Rosen et al., 2002; Kruijff et al., 2014). Commanders and leaders often utilize the formulation of \"Commander's Intent\" to convey the tasks that need to be accomplished and engender an understanding of the criteria for success to their subordinates (Dempsey and Chavous, 2013). Commander's Intent could similarly function as" + } + ] + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 304, + 211, + 526, + 414 + ], + "blocks": [ + { + "bbox": [ + 304, + 211, + 526, + 414 + ], + "lines": [ + { + "bbox": [ + 304, + 211, + 526, + 414 + ], + "spans": [ + { + "bbox": [ + 304, + 211, + 526, + 414 + ], + "type": "image", + "image_path": "ce21f28bb23cd9ce31105d2c2a8f6a2a73c430a3946f368f8d44af13693a963f.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 421, + 526, + 553 + ], + "lines": [ + { + "bbox": [ + 302, + 421, + 526, + 553 + ], + "spans": [ + { + "bbox": [ + 302, + 421, + 526, + 553 + ], + "type": "text", + "content": "Figure 1: Our work aims to facilitate humans to specify their strategy to an AI system via language. 
Using the board game Risk as a simulated environment, we collect language descriptions of a strategy (top-left) corresponding to a player's troop deployments (bottom-left). The player's selections are shown by the white icons, and the grey and black icons denote the troops of the two opposing players. Each strategy corresponds to a set of goals (bottom-right) and constraints (top-right). The green and orange text corresponds to the language relating to constraints and goals, respectively." + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 581, + 526, + 688 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 581, + 526, + 688 + ], + "spans": [ + { + "bbox": [ + 302, + 581, + 526, + 688 + ], + "type": "text", + "content": "an effective scaffold to represent a human's strategic intent in a mixed-initiative interaction (Novick and Sutton, 1997). Commander's Intent provides a functionality for expert-specifiers to engender a degree of \"shared-cognition\" between an AI-collaborator and a human-specifier, by aligning the actions of the AI system to the human-specifier's values or reward function." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 692, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 692, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 692, + 525, + 772 + ], + "type": "text", + "content": "Commander's intent is formally represented by a set of goals and constraints. 
Goals (or preferences) are categorized as a desirable set of states or affairs that the agent intends to obtain (Moskowitz and Grant, 2009; Kruglanski, 1996) and constraints refer to conditions that are imposed on solutions" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 67, + 751, + 290, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 751, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 751, + 290, + 772 + ], + "type": "text", + "content": "*These authors contributed to this paper while they were at Georgia Institute of Technology." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 283, + 780, + 311, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 283, + 780, + 311, + 791 + ], + "spans": [ + { + "bbox": [ + 283, + 780, + 311, + 791 + ], + "type": "text", + "content": "12801" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 124, + 795, + 468, + 806 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 124, + 795, + 468, + 806 + ], + "spans": [ + { + "bbox": [ + 124, + 795, + 468, + 806 + ], + "type": "text", + "content": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 12801-12819" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 165, + 806, + 428, + 817 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 165, + 806, + 428, + 817 + ], + "spans": [ + { + "bbox": [ + 165, + 806, + 428, + 817 + ], + "type": "text", + "content": "December 6-10, 2023 ©2023 Association for Computational Linguistics" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 293, + 262 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 293, + 262 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 293, + 262 + ], + "type": "text", + "content": "formulated by an agent (Nickles, 1978). 
Translating unstructured language-based strategy into this machine-readable specification is a non-trivial challenge. This translation could be conducted via a human interpreter; however, interpreters with the requisite expertise will not always be available. Alternatively, humans could utilize a structured interface to specify their intent. However, interfaces can become overly complicated, and humans become demotivated to work with an AI system when they cannot easily navigate the interface (Hayes, 1985). Enabling humans to express their strategic intent in everyday language provides an effective solution to these issues." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 267, + 292, + 524 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 267, + 292, + 524 + ], + "spans": [ + { + "bbox": [ + 69, + 267, + 292, + 524 + ], + "type": "text", + "content": "In this paper, we develop an approach to solve a task we call automatic strategy translation, wherein we learn to infer strategic intent, in the form of goals and constraints, from language. Prior work has developed methods to utilize language to specify policies of an AI agent (Tambwekar et al., 2021; Gopalan et al., 2018; Thomason et al., 2019; Blukis et al., 2019) or specify reward functions or tasks which can be optimized for, via reinforcement learning (RL) or a planner (Gopalan et al., 2018; Padmakumar et al., 2021; Silva et al., 2021a). However, our work is the first to translate language into goals and constraints, which can be applied towards constrained optimization approaches for directing agent behavior independent of the original human specifier. Unlike prior work, we focus on interpreting language descriptions of complex gameplay strategies, rather than simple individual commands (e.g., \"move from A to B; open the door\")." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 529, + 292, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 529, + 292, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 529, + 292, + 773 + ], + "type": "text", + "content": "First, we collect a dataset of over 1000 examples mapping language to goals and constraints, leveraging a game environment of Risk. Next, we fine-tune a pretrained RoBERTa model (Liu et al., 2019), equipped with model augmentations and customized loss functions such as Order-Agnostic Cross Entropy (Du et al., 2021), to infer goals and constraints from language strategy specifications. Finally, we employ a human evaluation to test our approach. Recent work has shown that automated evaluation metrics for language models may provide a misleading measure of performance (Liang et al., 2022). Therefore, we design a head-to-head evaluation, whereby we can directly compare our model to the average human interpreter. In addition to humans, we prompt ChatGPT to perform the same task on a held-out set of 30 examples. We compute the statistical difference between our" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 302, + 71, + 526, + 111 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 111 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 111 + ], + "type": "text", + "content": "model and these baselines, providing a concrete measure of the relative efficacy of our approach. 
Our contributions are as follows:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 307, + 121, + 527, + 297 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 307, + 121, + 527, + 176 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 121, + 527, + 176 + ], + "spans": [ + { + "bbox": [ + 307, + 121, + 527, + 176 + ], + "type": "text", + "content": "- We propose one of the first complete machine learning pipelines including data collection, augmentation and model training for inferring structured strategic intent from human language." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 307, + 185, + 527, + 239 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 185, + 527, + 239 + ], + "spans": [ + { + "bbox": [ + 307, + 185, + 527, + 239 + ], + "type": "text", + "content": "- Through a human study, we show that our proposed approach can interpret goals and constraints from language descriptions better than the average human " + }, + { + "bbox": [ + 307, + 185, + 527, + 239 + ], + "type": "inline_equation", + "content": "(p < 0.001)" + }, + { + "bbox": [ + 307, + 185, + 527, + 239 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 307, + 243, + 527, + 297 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 243, + 527, + 297 + ], + "spans": [ + { + "bbox": [ + 307, + 243, + 527, + 297 + ], + "type": "text", + "content": "- Through in-context learning, we evaluate ChatGPT's performance to gauge the relative efficacy of our approach, and show that our approach significantly outperforms ChatGPT (p < 0.05)." 
+ } + ] + } + ], + "index": 6 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 303, + 305, + 396, + 318 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 305, + 396, + 318 + ], + "spans": [ + { + "bbox": [ + 303, + 305, + 396, + 318 + ], + "type": "text", + "content": "2 Related Work" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 327, + 527, + 381 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 327, + 527, + 381 + ], + "spans": [ + { + "bbox": [ + 302, + 327, + 527, + 381 + ], + "type": "text", + "content": "This section covers prior work on learning strategies from language, as well as methods and datasets to enable humans to specify AI-behavior in a mixed-initiative setting." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 391, + 496, + 403 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 391, + 496, + 403 + ], + "spans": [ + { + "bbox": [ + 302, + 391, + 496, + 403 + ], + "type": "text", + "content": "2.1 Learning strategies from Language" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 408, + 527, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 408, + 527, + 773 + ], + "spans": [ + { + "bbox": [ + 302, + 408, + 527, + 773 + ], + "type": "text", + "content": "A common approach for specifying strategies through language has been through encoding language instructions, via planning-based representation languages, such as PDDL or LTL (Williams et al., 2018; Bahdanau et al., 2018; Thomason et al., 2019; Tellex et al., 2020), or deep learning (Fu et al., 2019; Blukis et al., 2019; Gopalan et al., 2018). Such formulations facilitate the ability to constrain actions taken by the agent to the instruction specified, e.g. 
\"Go around the tree to your left and place the ball.\" Another popular alternative is language-conditioned learning, where language is employed to specify a reward function, or a task (Silva et al., 2021a; Goyal et al., 2019; Andreas et al., 2017; Shridhar et al., 2022). Such approaches seek to improve the ability of an agent to complete a task(s) through intermediate language inputs, such as \"take the ladder to your left\". However, these approaches do not allow a supervisor to specify their strategic intent, such that the agent can complete its primary task while still adhering to the specifier's plan. Recent work proposed a novel approach to mapping language to constraints and rewards via a dependency tree (Rankin et al., 2021); however, their approach relies on a pre-trained grammar to extract a dependency tree and thus may not scale to human-like language." + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "12802" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 71, + 292, + 328 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 71, + 292, + 328 + ], + "spans": [ + { + "bbox": [ + 69, + 71, + 292, + 328 + ], + "type": "text", + "content": "Formally, the process of optimizing AI systems given goals and constraints has been broadly categorized as Seldonian Optimization (Thomas et al., 2019, 2017). In this framework, the goal is to optimize the priorities of an objective function while adhering to a given set of constraints as opposed to simply optimizing based on the reward or loss function. 
Yang et al. (2020) proposed a Seldonian optimization approach to translate constraints into a feature representation, encoding invalid regions in the state space, which is then applied towards safe RL. However, their application is restricted to learning to parse individual constraint statements such as \"Don't get too close to the water,\" rather than facilitating constraint extraction from more realistic descriptions pertaining to an entire strategy. In our work, we provide a first-of-its-kind dataset, and a corresponding model, to enable Seldonian optimization through unstructured language." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 343, + 246, + 356 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 343, + 246, + 356 + ], + "spans": [ + { + "bbox": [ + 69, + 343, + 246, + 356 + ], + "type": "text", + "content": "2.2 Language and Strategy Datasets" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 368, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 368, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 368, + 291, + 772 + ], + "type": "text", + "content": "Prior datasets for instruction following and policy specifications are often comprised of shorter instructions describing individual tasks. In contrast, our dataset consists of larger, unstructured descriptions of strategies which may be more reflective of potential strategy descriptions from in-the-wild users. Recent work has published a dataset of policy descriptions which are similar to the language descriptions we collect (Tambwekar et al., 2021) - however, they describe specific policies, rather than broad strategies for a task. Other datasets look to map language to trajectories or goal states within the trajectory (Padmakumar et al., 2021; Misra et al., 2018; Suhr et al., 2019). These datasets typically serve as a means of replacing physical demonstrations with language. 
These datasets lack explicit goals and constraints corresponding to the collected language that can be applied towards Seldonian optimization. Recent work provided a dataset with constraint statements (Yang et al., 2020) which are designer-specific; however, each constraint is associated with an isolated statement, making it unclear whether this approach will generalize to unprompted language describing multiple constraints. Unlike prior work, our dataset provides the ability to apply Seldonian optimization approaches from unstructured language. Furthermore, we conduct a study wherein we provide a human and ChatGPT baseline for our dataset to highlight the challenging nature of this task." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 305, + 70, + 515, + 84 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 70, + 515, + 84 + ], + "spans": [ + { + "bbox": [ + 305, + 70, + 515, + 84 + ], + "type": "text", + "content": "3 Natural Language Strategies in RISK" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 305, + 95, + 524, + 175 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 95, + 524, + 175 + ], + "spans": [ + { + "bbox": [ + 305, + 95, + 524, + 175 + ], + "type": "text", + "content": "Our work aims to facilitate humans to specify their strategy or commander's intent to an AI system via language. In this section, we utilize the board game Risk to create a dataset that maps unstructured natural language descriptions of strategies to actionable intent in the form of goals and constraints." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 305, + 193, + 427, + 206 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 193, + 427, + 206 + ], + "spans": [ + { + "bbox": [ + 305, + 193, + 427, + 206 + ], + "type": "text", + "content": "3.1 Board Game - RISK" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 305, + 216, + 525, + 513 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 216, + 525, + 513 + ], + "spans": [ + { + "bbox": [ + 305, + 216, + 525, + 513 + ], + "type": "text", + "content": "Risk (Gibson et al., 2010) is a multiplayer strategy board game of diplomacy, conflict, and conquest, which was first invented in 1957. The gameplay of Risk consists of four phases: Draft, Recruit, Attack, and Move. The draft phase is conducted at the start of the game wherein each player drafts an initial set of continents and deploys a fixed number of troops onto those continents. This allocation of troops is a crucial participatory task (Muller and Kuhn, 1993) which involves humans reasoning about their strategy and setting up for the rest of the game. Participants may choose any of the empty territories on the map to deploy their troops, with a wide range of strategies that may depend on their opponent's troop allocation. For example, a more conservative player may draft troops to only one continent for better defense, whereas a player with a more aggressive strategy may choose to spread out their troops. After the draft phase, each subsequent turn for a player involves iteratively conducting the recruit, attack, and move phases. Further details about Risk can be found in Appendix-I." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 305, + 517, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 517, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 305, + 517, + 525, + 772 + ], + "type": "text", + "content": "In our setting, we use a map layout that has 5 continents with a total of 21 territories/countries, as illustrated in Figure 1. Instead of real country names used in the Risk game, we use ad-hoc names for each continent (e.g., Red, Green, Blue, etc.) to mitigate participant bias. In the draft phase, each player takes turns to deploy 14 troops. The specific set of tasks that humans need to complete for our study include: (i) develop a strategy for Risk and deploy 14 troops after the two opposing players have completed their draft; (ii) provide six goals (on a 200-point scale) and up to eight constraints that were relevant to their allocation of troops and broader intents; (iii) use natural language to describe their overall strategy and the goals and constraints they considered. The troops of the opposing player are shown to the participants prior to completing these tasks. More details about this data collection process are discussed in Section 3.3." 
+ } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 285, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 285, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 285, + 781, + 312, + 791 + ], + "type": "text", + "content": "12803" + } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 68, + 71, + 167, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 71, + 167, + 83 + ], + "spans": [ + { + "bbox": [ + 68, + 71, + 167, + 83 + ], + "type": "text", + "content": "3.2 Task Definition" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "spans": [ + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "text", + "content": "Our goal is to develop a computational interface capable of inferring strategic intent from unstructured language descriptions of strategies. 
Formally, we define the task of Automatic Strategy Translation as follows: Given the troop deployments " + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "text", + "content": ", a map " + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "text", + "content": ", and the strategy " + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "inline_equation", + "content": "W" + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "text", + "content": ", which is a paragraph written in natural language, our task is to automatically derive a set of goals " + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "inline_equation", + "content": "G" + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "text", + "content": " and constraints " + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "text", + "content": ". The troop selections " + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "text", + "content": " include the name and number of troops for each territory drafted by the player. We have a total of 6 predefined goals, each of which takes a numeric value between " + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "inline_equation", + "content": "[-100, 100]" + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "text", + "content": ". This numeric value corresponds to whether the goal positively or negatively aligns with the strategy. 
For example, for the goal \"maximize battles\", 100 implies that the player intends to battle as much as possible, and -100 implies that the player intends to battle as infrequently as possible. Each constraint is comprised of a class and value. We restrict the number of possible constraints to 8 as a reasonable upper bound per strategy. To summarize, each example " + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "inline_equation", + "content": "\\langle M, W, S, C, G \\rangle \\in \\mathcal{D}" + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "text", + "content": " consists of a strategy " + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "inline_equation", + "content": "W" + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "text", + "content": " described in natural language, for a player's troop selections, " + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "text", + "content": ", on a map, " + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "text", + "content": ", from which " + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "inline_equation", + "content": "G" + }, + { + "bbox": [ + 69, + 91, + 292, + 429 + ], + "type": "text", + "content": " are the gold standard constraints and goals." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 442, + 169, + 454 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 442, + 169, + 454 + ], + "spans": [ + { + "bbox": [ + 67, + 442, + 169, + 454 + ], + "type": "text", + "content": "3.3 Data Collection" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 462, + 291, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 462, + 291, + 773 + ], + "spans": [ + { + "bbox": [ + 69, + 462, + 291, + 773 + ], + "type": "text", + "content": "We collected a dataset " + }, + { + "bbox": [ + 69, + 462, + 291, + 773 + ], + "type": "inline_equation", + "content": "\\mathcal{D}" + }, + { + "bbox": [ + 69, + 462, + 291, + 773 + ], + "type": "text", + "content": " of 1053 unique examples by recruiting participants on Amazon Mechanical Turk and Prolific (pro, 2014). Firstly, to familiarize participants with the game, we designed a tutorial that provided a description and annotated examples to explain the rules of the game and the tasks that participants needed to perform. As a further measure of improving data quality, participants were quizzed on the rules of Risk to reinforce their understanding (full quiz has been provided in §A.2). They were given three attempts to answer correctly, after which they were shown the answers. Upon completing the quiz, participants began the task. We showed participants a map, which shows the drafted troops of the two opposing players, and asked them to provide their own troop deployments. Following their draft, participants were asked to provide the goals and constraints they considered for their gameplay strategy/deployments and finally provide a language description of their strategy. The language strategy they provided needed to have at least 200 characters. 
Each participant was asked to repeat this task 5 times to create 5 data points," + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 302, + 71, + 524, + 111 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 524, + 111 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 524, + 111 + ], + "type": "text", + "content": "each time with a different map. The maps seen by participants were selected from a set of 15 unique initial troop settings." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 302, + 112, + 526, + 220 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 112, + 526, + 220 + ], + "spans": [ + { + "bbox": [ + 302, + 112, + 526, + 220 + ], + "type": "text", + "content": "Participants needed approximately 10 minutes per data point. Figure 1 depicts the format of our dataset. Our dataset included data from 230 participants. The average length of language descriptions in our dataset was 99.21 words, and the overall vocabulary size was 2,356 words. Additional details regarding our data collection protocol are available in Appendix A." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 233, + 486, + 247 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 233, + 486, + 247 + ], + "spans": [ + { + "bbox": [ + 302, + 233, + 486, + 247 + ], + "type": "text", + "content": "4 Automatic Strategy Translation" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 255, + 526, + 432 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 255, + 526, + 432 + ], + "spans": [ + { + "bbox": [ + 302, + 255, + 526, + 432 + ], + "type": "text", + "content": "Following the data collection in Section 3, our goal is to leverage this dataset to develop a model that can perform the task of automatic strategy translation. Inferring strategic intent from language is a non-trivial endeavor as unstructured language can be vague thus leading to ambiguous interpretations. 
We seek to develop an approach capable of performing this task better than the average human, so as to enable strategy specification via language to reduce the potential risk of human errors or the need for third-party expert interpreters. In this section, we cover the technical details which make this task possible in a low-data setting." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 443, + 394, + 454 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 443, + 394, + 454 + ], + "spans": [ + { + "bbox": [ + 302, + 443, + 394, + 454 + ], + "type": "text", + "content": "4.1 Text Encoder" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 461, + 525, + 650 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 461, + 525, + 650 + ], + "spans": [ + { + "bbox": [ + 302, + 461, + 525, + 650 + ], + "type": "text", + "content": "We adopted the pretrained RoBERTa model (Liu et al., 2019) as our encoder, which is parameterized by " + }, + { + "bbox": [ + 302, + 461, + 525, + 650 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 302, + 461, + 525, + 650 + ], + "type": "text", + "content": ". 
The input sequence to our model comprises the language description of the strategy, " + }, + { + "bbox": [ + 302, + 461, + 525, + 650 + ], + "type": "inline_equation", + "content": "W = [w_{1}, w_{2}, \\ldots, w_{|W|}]" + }, + { + "bbox": [ + 302, + 461, + 525, + 650 + ], + "type": "text", + "content": ", and troop selections " + }, + { + "bbox": [ + 302, + 461, + 525, + 650 + ], + "type": "inline_equation", + "content": "S = [s_{1}, s_{2}, \\ldots, s_{|S|}]" + }, + { + "bbox": [ + 302, + 461, + 525, + 650 + ], + "type": "text", + "content": ", where each troop selection comprises the country name along with the number of troops placed on that country (e.g., " + }, + { + "bbox": [ + 302, + 461, + 525, + 650 + ], + "type": "inline_equation", + "content": "S = [Red\\_A = 2, Red\\_C = 8, Purple\\_D = 4]" + }, + { + "bbox": [ + 302, + 461, + 525, + 650 + ], + "type": "text", + "content": "). The encoder learns the embedding function, which maps the text input, composed of the strategy " + }, + { + "bbox": [ + 302, + 461, + 525, + 650 + ], + "type": "inline_equation", + "content": "W" + }, + { + "bbox": [ + 302, + 461, + 525, + 650 + ], + "type": "text", + "content": " and selections " + }, + { + "bbox": [ + 302, + 461, + 525, + 650 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 302, + 461, + 525, + 650 + ], + "type": "text", + "content": ", to a " + }, + { + "bbox": [ + 302, + 461, + 525, + 650 + ], + "type": "inline_equation", + "content": "d" + }, + { + "bbox": [ + 302, + 461, + 525, + 650 + ], + "type": "text", + "content": "-dimensional real-valued vector, which is then used to predict goals (" + }, + { + "bbox": [ + 302, + 461, + 525, + 650 + ], + "type": "inline_equation", + "content": "\\S 4.2" + }, + { + "bbox": [ + 302, + 461, + 525, + 650 + ], + "type": "text", + "content": ") and constraints (" + }, + { + "bbox": [ + 302, + 461, + 525, + 650 + ], + "type": "inline_equation", + "content": "\\S 4.3" + }, + { + 
"bbox": [ + 302, + 461, + 525, + 650 + ], + "type": "text", + "content": ")." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 651, + 526, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 651, + 526, + 773 + ], + "spans": [ + { + "bbox": [ + 302, + 651, + 526, + 773 + ], + "type": "text", + "content": "Ordinarily, the final embedding for the single [CLS] token learned by RoBERTa, i.e., " + }, + { + "bbox": [ + 302, + 651, + 526, + 773 + ], + "type": "inline_equation", + "content": "E_{\\theta} = BERT_{[CLS]}(W,S)" + }, + { + "bbox": [ + 302, + 651, + 526, + 773 + ], + "type": "text", + "content": ", is used for classification. In this work, we incorporate multiple classification tokens (Chang et al., 2023), each of which corresponds to an individual goal or constraint. For the " + }, + { + "bbox": [ + 302, + 651, + 526, + 773 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 302, + 651, + 526, + 773 + ], + "type": "text", + "content": "th goal or constraint, we learn a separate classification embedding, " + }, + { + "bbox": [ + 302, + 651, + 526, + 773 + ], + "type": "inline_equation", + "content": "E_{\\theta}^{i} = BERT_{[CLS_{i}]}(W,S)" + }, + { + "bbox": [ + 302, + 651, + 526, + 773 + ], + "type": "text", + "content": ". 
Using individual class-specific tokens improves the model" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "12804" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 100, + 68, + 478, + 227 + ], + "blocks": [ + { + "bbox": [ + 100, + 68, + 478, + 227 + ], + "lines": [ + { + "bbox": [ + 100, + 68, + 478, + 227 + ], + "spans": [ + { + "bbox": [ + 100, + 68, + 478, + 227 + ], + "type": "image", + "image_path": "6a01acf28d3193848848e84979e9032a2cce94525988538bceebcd6133cac663.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 238, + 526, + 286 + ], + "lines": [ + { + "bbox": [ + 67, + 238, + 526, + 286 + ], + "spans": [ + { + "bbox": [ + 67, + 238, + 526, + 286 + ], + "type": "text", + "content": "Figure 2: Illustration of our Automatic Strategy Translation model. The input to the model includes the classification tokens, language description, and troop selections (Section 4.1). The encoder then generates embeddings for each classification token, and passes them onto an individual classification head. Each classification head is a fully-connected layer that predicts a probability distribution for the respective goal (" + }, + { + "bbox": [ + 67, + 238, + 526, + 286 + ], + "type": "inline_equation", + "content": "\\S 4.2" + }, + { + "bbox": [ + 67, + 238, + 526, + 286 + ], + "type": "text", + "content": ") or constraint (" + }, + { + "bbox": [ + 67, + 238, + 526, + 286 + ], + "type": "inline_equation", + "content": "\\S 4.3" + }, + { + "bbox": [ + 67, + 238, + 526, + 286 + ], + "type": "text", + "content": ")." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 306, + 290, + 376 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 306, + 290, + 376 + ], + "spans": [ + { + "bbox": [ + 67, + 306, + 290, + 376 + ], + "type": "text", + "content": "'s capability to learn different attention weights corresponding to the classification embeddings for each goal or constraint. We utilize different encoders for predicting goals and constraints, which are parameterized by " + }, + { + "bbox": [ + 67, + 306, + 290, + 376 + ], + "type": "inline_equation", + "content": "\\theta_{g}" + }, + { + "bbox": [ + 67, + 306, + 290, + 376 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 67, + 306, + 290, + 376 + ], + "type": "inline_equation", + "content": "\\theta_{c}" + }, + { + "bbox": [ + 67, + 306, + 290, + 376 + ], + "type": "text", + "content": ", respectively." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 383, + 203, + 395 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 383, + 203, + 395 + ], + "spans": [ + { + "bbox": [ + 67, + 383, + 203, + 395 + ], + "type": "text", + "content": "4.2 Goal Extraction Model" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 400, + 291, + 495 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 400, + 291, + 495 + ], + "spans": [ + { + "bbox": [ + 67, + 400, + 291, + 495 + ], + "type": "text", + "content": "We treat the subtask of deriving goals from language as an ordinal classification task. 
In our dataset, goals are originally specified as continuous values ranging from " + }, + { + "bbox": [ + 67, + 400, + 291, + 495 + ], + "type": "inline_equation", + "content": "[-100, 100]" + }, + { + "bbox": [ + 67, + 400, + 291, + 495 + ], + "type": "text", + "content": ", which we discretize by creating 5 uniform buckets, i.e., " + }, + { + "bbox": [ + 67, + 400, + 291, + 495 + ], + "type": "inline_equation", + "content": "[-100, -60)" + }, + { + "bbox": [ + 67, + 400, + 291, + 495 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 67, + 400, + 291, + 495 + ], + "type": "inline_equation", + "content": "[-60, -20)" + }, + { + "bbox": [ + 67, + 400, + 291, + 495 + ], + "type": "text", + "content": ", etc. That is, for each goal, we predict an assignment via 5-class classification:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 141, + 502, + 290, + 520 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 141, + 502, + 290, + 520 + ], + "spans": [ + { + "bbox": [ + 141, + 502, + 290, + 520 + ], + "type": "interline_equation", + "content": "P_{j} = L_{\\phi_{j}} \\left(E_{\\theta_{g}}^{j}\\right), \\tag{1}", + "image_path": "44a74f220c582357cec3706c1f2ff64ea5013421d4dce4d84c3264f089456920.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 525, + 291, + 622 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 525, + 291, + 622 + ], + "spans": [ + { + "bbox": [ + 67, + 525, + 291, + 622 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 67, + 525, + 291, + 622 + ], + "type": "inline_equation", + "content": "P_{j}" + }, + { + "bbox": [ + 67, + 525, + 291, + 622 + ], + "type": "text", + "content": " represents the probability distribution across assignments for the " + }, + { + "bbox": [ + 67, + 525, + 291, + 622 + ], + "type": "inline_equation", + "content": "j" + }, + { + "bbox": [ + 67, + 525, + 291, + 622 + ], + "type": "text", + "content": "th goal and " + 
}, + { + "bbox": [ + 67, + 525, + 291, + 622 + ], + "type": "inline_equation", + "content": "E_{\\theta_g}^j" + }, + { + "bbox": [ + 67, + 525, + 291, + 622 + ], + "type": "text", + "content": " corresponds to the embedding from the encoder. Each goal uses a separate classification layer " + }, + { + "bbox": [ + 67, + 525, + 291, + 622 + ], + "type": "inline_equation", + "content": "L" + }, + { + "bbox": [ + 67, + 525, + 291, + 622 + ], + "type": "text", + "content": " parameterized by " + }, + { + "bbox": [ + 67, + 525, + 291, + 622 + ], + "type": "inline_equation", + "content": "\\phi_j" + }, + { + "bbox": [ + 67, + 525, + 291, + 622 + ], + "type": "text", + "content": ". The goal extraction model is trained on a dual-criteria loss function that combines cross-entropy (CE) and mean-square-error (MSE) loss:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 103, + 629, + 290, + 645 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 103, + 629, + 290, + 645 + ], + "spans": [ + { + "bbox": [ + 103, + 629, + 290, + 645 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {\\text {g o a l}} = \\alpha \\mathcal {L} _ {C E} + (1 - \\alpha) \\mathcal {L} _ {M S E}, \\tag {2}", + "image_path": "392b9942cd73d67f0d65275ebc64815f541a81e60b5b8cf41c3d79c14cd41e56.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 651, + 290, + 692 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 651, + 290, + 692 + ], + "spans": [ + { + "bbox": [ + 67, + 651, + 290, + 692 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 67, + 651, + 290, + 692 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 67, + 651, + 290, + 692 + ], + "type": "text", + "content": " is a simple weighting hyperparameter. The addition of MSE loss helps to account for the ordinal nature of goal value predictions." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 67, + 700, + 232, + 713 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 700, + 232, + 713 + ], + "spans": [ + { + "bbox": [ + 67, + 700, + 232, + 713 + ], + "type": "text", + "content": "4.3 Constraint Extraction Model" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 718, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 718, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 718, + 291, + 772 + ], + "type": "text", + "content": "Similar to the goal extraction model, the input to each classification head for constraint prediction is " + }, + { + "bbox": [ + 67, + 718, + 291, + 772 + ], + "type": "inline_equation", + "content": "E_{\\theta_c}^k" + }, + { + "bbox": [ + 67, + 718, + 291, + 772 + ], + "type": "text", + "content": ", which corresponds to the classification embedding learned by the encoder for the " + }, + { + "bbox": [ + 67, + 718, + 291, + 772 + ], + "type": "inline_equation", + "content": "k^{th}" + }, + { + "bbox": [ + 67, + 718, + 291, + 772 + ], + "type": "text", + "content": " constraint." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 306, + 526, + 428 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 306, + 526, + 428 + ], + "spans": [ + { + "bbox": [ + 302, + 306, + 526, + 428 + ], + "type": "text", + "content": "However, unlike for the goal extraction model, each of the eight constraint classification heads learns to predict the constraint itself rather than a value for a fixed goal. 
Therefore, the model needs to predict the set of unordered constraints " + }, + { + "bbox": [ + 302, + 306, + 526, + 428 + ], + "type": "inline_equation", + "content": "\\{c_1, c_2, \\ldots, c_8\\}" + }, + { + "bbox": [ + 302, + 306, + 526, + 428 + ], + "type": "text", + "content": ", wherein each " + }, + { + "bbox": [ + 302, + 306, + 526, + 428 + ], + "type": "inline_equation", + "content": "c_k" + }, + { + "bbox": [ + 302, + 306, + 526, + 428 + ], + "type": "text", + "content": " is predicted from the set of all possible constraints " + }, + { + "bbox": [ + 302, + 306, + 526, + 428 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 302, + 306, + 526, + 428 + ], + "type": "text", + "content": " (190 total possible constraints). Each strategy can have a maximum of eight constraints, i.e., the set " + }, + { + "bbox": [ + 302, + 306, + 526, + 428 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 302, + 306, + 526, + 428 + ], + "type": "text", + "content": " includes a null value." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 435, + 526, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 435, + 526, + 773 + ], + "spans": [ + { + "bbox": [ + 302, + 435, + 526, + 773 + ], + "type": "text", + "content": "While providing constraints during data collection, participants merely assigned constraints to their strategy, but did not rank the ordering of constraints. As such, the order of constraints in our dataset does not necessarily correspond to the order in which each classification head needs to predict the constraints. Therefore, each classification head does not have a strict label it can utilize to compute a classification loss, making this task distinct from conventional sequence prediction or multiclass classification tasks. 
For instance, if the constraints predicted by the model are " + }, + { + "bbox": [ + 302, + 435, + 526, + 773 + ], + "type": "inline_equation", + "content": "\\{C,\\emptyset ,B,D\\}" + }, + { + "bbox": [ + 302, + 435, + 526, + 773 + ], + "type": "text", + "content": " and the labels for this strategy are " + }, + { + "bbox": [ + 302, + 435, + 526, + 773 + ], + "type": "inline_equation", + "content": "\\{A,B,C,\\emptyset \\}" + }, + { + "bbox": [ + 302, + 435, + 526, + 773 + ], + "type": "text", + "content": ", utilizing a standard classification loss function, such as cross-entropy, would result in a higher loss than what is representative of the prediction, as three out of four constraints have been predicted correctly. As such, this task requires a loss function that allows us to train our model to predict the correct constraints for a language strategy agnostic of the ordering of the labels. We chose to adopt a recently proposed loss function called Order-Agnostic Cross Entropy (OaXE) (Du et al., 2021). Intuitively, OaXE is defined as the cross entropy for the best possible alignment of output tokens." 
+ } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "12805" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 68, + 68, + 526, + 160 + ], + "blocks": [ + { + "bbox": [ + 68, + 68, + 526, + 160 + ], + "lines": [ + { + "bbox": [ + 68, + 68, + 526, + 160 + ], + "spans": [ + { + "bbox": [ + 68, + 68, + 526, + 160 + ], + "type": "image", + "image_path": "ebaba61375dbc6c286c471dabf32dbdc274ca8b42d201e4a03cb5df8ea25f89d.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 167, + 525, + 193 + ], + "lines": [ + { + "bbox": [ + 67, + 167, + 525, + 193 + ], + "spans": [ + { + "bbox": [ + 67, + 167, + 525, + 193 + ], + "type": "text", + "content": "Figure 3: Pipeline for augmenting synthetic or human-created data (" + }, + { + "bbox": [ + 67, + 167, + 525, + 193 + ], + "type": "inline_equation", + "content": "\\S 4.4" + }, + { + "bbox": [ + 67, + 167, + 525, + 193 + ], + "type": "text", + "content": "). A strategy description is first split into sentences, then passed into the PEGASUS (Zhang et al., 2020) paraphrasing model and data quality filter." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 212, + 291, + 280 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 212, + 291, + 280 + ], + "spans": [ + { + "bbox": [ + 67, + 212, + 291, + 280 + ], + "type": "text", + "content": "Let " + }, + { + "bbox": [ + 67, + 212, + 291, + 280 + ], + "type": "inline_equation", + "content": "O = \\{O_1, O_2, \\ldots, O_{|O|}\\}" + }, + { + "bbox": [ + 67, + 212, + 291, + 280 + ], + "type": "text", + "content": " be the ordering space of all possible orderings of the target sequence of constraints, where each " + }, + { + "bbox": [ + 67, + 212, + 291, + 280 + ], + "type": "inline_equation", + "content": "O_l" + }, + { + "bbox": [ + 67, + 212, + 291, + 280 + ], + "type": "text", + "content": " is one possible ordering of the target tokens. The final loss function is computed as:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 120, + 292, + 290, + 306 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 120, + 292, + 290, + 306 + ], + "spans": [ + { + "bbox": [ + 120, + 292, + 290, + 306 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {O a X E} = - \\log P \\left(O ^ {*} \\mid X\\right) \\tag {3}", + "image_path": "b9f18cb4293dacd3f4f35664ecaf03b4d9685c2fa8d97f9da3a70d9b9acca2d9.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 317, + 291, + 399 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 317, + 291, + 399 + ], + "spans": [ + { + "bbox": [ + 67, + 317, + 291, + 399 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 67, + 317, + 291, + 399 + ], + "type": "inline_equation", + "content": "O^{*}" + }, + { + "bbox": [ + 67, + 317, + 291, + 399 + ], + "type": "text", + "content": " represents the best possible alignment from " + }, + { + "bbox": [ + 67, + 317, + 291, + 399 + ], + "type": "inline_equation", + "content": "O" + 
}, + { + "bbox": [ + 67, + 317, + 291, + 399 + ], + "type": "text", + "content": ". This alignment is computed by applying the Hungarian algorithm, after casting this problem as maximum bipartite matching (Du et al., 2021). As our final loss function, we follow Du et al. (2021) in combining OaXE with cross-entropy loss:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 73, + 410, + 290, + 424 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 73, + 410, + 290, + 424 + ], + "spans": [ + { + "bbox": [ + 73, + 410, + 290, + 424 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {\\text {c o n s t r a i n t}} = T _ {m} * \\mathcal {L} _ {C E} + (1 - T _ {m}) * \\mathcal {L} _ {O a X E} \\tag {4}", + "image_path": "ec28e371dda6698d2948bddbf6b54608d9bf0259f7b299d41e7f3652f66ac22c.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 434, + 291, + 489 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 434, + 291, + 489 + ], + "spans": [ + { + "bbox": [ + 67, + 434, + 291, + 489 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 67, + 434, + 291, + 489 + ], + "type": "inline_equation", + "content": "T_{m}" + }, + { + "bbox": [ + 67, + 434, + 291, + 489 + ], + "type": "text", + "content": " is a temperature parameter that is logistically annealed from 1 to 0. In our case, cross entropy " + }, + { + "bbox": [ + 67, + 434, + 291, + 489 + ], + "type": "inline_equation", + "content": "(\\mathcal{L}_{CE})" + }, + { + "bbox": [ + 67, + 434, + 291, + 489 + ], + "type": "text", + "content": " is computed using the default ordering of labels in our dataset." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 498, + 230, + 511 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 498, + 230, + 511 + ], + "spans": [ + { + "bbox": [ + 67, + 498, + 230, + 511 + ], + "type": "text", + "content": "4.4 Data Augmentation Methods" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 516, + 291, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 516, + 291, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 516, + 291, + 773 + ], + "type": "text", + "content": "Finally, we applied data augmentation procedures to improve our model's performance. First, we randomly generated 4000 unique sets of goals and constraints, and applied a text template to produce descriptions to develop a Synthetic (S) training corpus. For example, the constraint, \"I must have troops on Red\" could be represented as \"My strategy is to take over Red,\" or \"I need a large army on Red,\" or \"I need to place troops on Red.\" We further augmented this synthetic corpus with a pretrained PEGASUS (Zhang et al., 2020) paraphrasing model to create an Augmented-Synthetic (AS) dataset. We split each language description from the synthetic corpus into individual sentences and employed the paraphrasing model to generate candidate paraphrases. Sentences that replaced important keywords, such as continent names, or were too similar to the original sentence in terms of edit distance were removed. We randomly chose" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 213, + 526, + 456 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 213, + 526, + 456 + ], + "spans": [ + { + "bbox": [ + 302, + 213, + 526, + 456 + ], + "type": "text", + "content": "a sentence from the remaining candidates as a replacement sentence, and combined the replacement sentences to form an augmented data point (see Figure 3). 
The two Synthetic datasets (S, AS) were used to pretrain our model prior to training on human data. The same techniques were also applied to our human dataset to form an Augmented-Human dataset (AH). Our final Augmented-Human dataset is a version of our original crowdsourced dataset where each example is rephrased using our augmentation pipeline and is twice the size of our original human dataset. We experiment with utilizing the AH dataset in place of the original human dataset to see if the added diversity in our corpus through paraphrasing improves downstream performance. Examples of Synthetic (S), Augmented-Synthetic (AS), and Augmented-Human (AH) data are provided in Figure 6 in the Appendix." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 466, + 390, + 481 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 466, + 390, + 481 + ], + "spans": [ + { + "bbox": [ + 302, + 466, + 390, + 481 + ], + "type": "text", + "content": "5 Experiments" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 301, + 489, + 525, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 489, + 525, + 665 + ], + "spans": [ + { + "bbox": [ + 301, + 489, + 525, + 665 + ], + "type": "text", + "content": "This section presents the empirical evaluation of our approach. We design two evaluation experiments to contrast our model's performance with that of humans, as well as ChatGPT trained to perform our task through in-context learning. Both human and ChatGPT performance was computed using the 30 held-out examples in our test set. We statistically measure the difference in the average number of goals/constraints predicted correctly per data point between our model and the two baselines (Human + ChatGPT). We conclude with an ablation analysis across the model and data augmentations utilized in this approach." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 674, + 429, + 686 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 674, + 429, + 686 + ], + "spans": [ + { + "bbox": [ + 302, + 674, + 429, + 686 + ], + "type": "text", + "content": "5.1 Human Performance" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 692, + 526, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 692, + 526, + 773 + ], + "spans": [ + { + "bbox": [ + 302, + 692, + 526, + 773 + ], + "type": "text", + "content": "In our first study, we ask how well the average human can perform on the task of parsing strategic intent from language (see Table 1). We recruited 114 participants for our study from Prolific. Participants begin with a tutorial of the task and are provided an annotated example explaining how to" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "12806" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 71, + 68, + 289, + 119 + ], + "blocks": [ + { + "bbox": [ + 71, + 68, + 289, + 119 + ], + "lines": [ + { + "bbox": [ + 71, + 68, + 289, + 119 + ], + "spans": [ + { + "bbox": [ + 71, + 68, + 289, + 119 + ], + "type": "table", + "html": "
BaselineGoals (Total = 6)Constraints (Total = 8)
Model (Ours)2.76 ± 1.055.53 ± 1.26
Human1.87 ± 1.124.28 ± 1.83
ChatGPT2.10 ± 1.273.80 ± 1.51
", + "image_path": "23e8047ff7ae2deb553702cacf0680411b732ecac722fd3ec8d5877fce21afaf.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 176, + 292, + 393 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 176, + 292, + 393 + ], + "spans": [ + { + "bbox": [ + 67, + 176, + 292, + 393 + ], + "type": "text", + "content": "assign goals and constraints given a language description and map. Following this tutorial, each participant is provided three randomly selected maps and language descriptions from our test set of 30 unique data points and is asked to annotate the goals and constraints for each given strategy. Our study included attention checks to ensure participants who were submitting random responses could be excluded. The average time taken for our study was 21 minutes, and participants were paid $3.6 for completing our task. We utilized a data filtering rubric to identify and remove individual data points which were inadequate or were from participants who appeared to blatantly ignore or misunderstand the instructions. The rubric is included in Appendix F. After filtering, a total of 270 responses remained." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 402, + 205, + 414 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 402, + 205, + 414 + ], + "spans": [ + { + "bbox": [ + 67, + 402, + 205, + 414 + ], + "type": "text", + "content": "5.2 ChatGPT Performance" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 420, + 291, + 663 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 420, + 291, + 663 + ], + "spans": [ + { + "bbox": [ + 67, + 420, + 291, + 663 + ], + "type": "text", + "content": "We also evaluate ChatGPT (GPT-3.5 Default) as a baseline for our task (see Table 1). We design a 1000-word language prompt to train ChatGPT to perform the same task (see full prompt in Appendix G.1). 
This prompt includes a description of the environment and task, as well as an annotated example translating goals and constraints from language. Crucially, we design our prompt such that ChatGPT receives the same information that humans receive in our study in §5.1. Following this prompt, we iteratively input each strategy and troop deployment in our test set and store the constraints selected by ChatGPT. The additional prompt engineering we conduct is to notify ChatGPT when it makes formatting mistakes while predicting constraints, such as predicting more than the maximum number of constraints or creating new constraint classes." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 674, + 224, + 687 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 674, + 224, + 687 + ], + "spans": [ + { + "bbox": [ + 67, + 674, + 224, + 687 + ], + "type": "text", + "content": "5.3 Results for Goal Extraction" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 692, + 292, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 692, + 292, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 692, + 292, + 772 + ], + "type": "text", + "content": "The average number of goals predicted correctly per map can be seen in the first column of Table 1. We applied multivariate linear regression to compare the results of our model with our ChatGPT and human baselines, with Akaike information criterion (AIC) as our Occam's razor. AIC is a mathematical" + } + ] + } + ], + "index": 6 + }, + { + "type": "table", + "bbox": [ + 304, + 69, + 526, + 168 + ], + "blocks": [ + { + "bbox": [ + 67, + 129, + 290, + 154 + ], + "lines": [ + { + "bbox": [ + 67, + 129, + 290, + 154 + ], + "spans": [ + { + "bbox": [ + 67, + 129, + 290, + 154 + ], + "type": "text", + "content": "Table 1: Means and standard deviations for the number of correct predictions of each approach." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 304, + 69, + 526, + 168 + ], + "lines": [ + { + "bbox": [ + 304, + 69, + 526, + 168 + ], + "spans": [ + { + "bbox": [ + 304, + 69, + 526, + 168 + ], + "type": "table", + "html": "
Model TypeDataPretrainingAccuracy (Std)
RoBERTa base--44.37 (1.33)
w/ troopAHAS46.04 (1.85)
w/ troop + [CLSi]AHAS45.52 (1.48)
w/ troop + [CLSi]AHS45.32 (1.01)
w/ troop + [CLSi]AH-45.89 (1.26)
w/ [CLSi]AHAS44.29 (1.14)
w/ troop + [CLSi]H-45.07 (1.33)
", + "image_path": "1ca1827b36b5d3a298e1667ca1ba6f14a539e7029235d0bca29fa5e50e3ffc0f.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_body" + } + ], + "index": 7 + }, + { + "type": "table", + "bbox": [ + 304, + 285, + 526, + 393 + ], + "blocks": [ + { + "bbox": [ + 302, + 176, + 527, + 274 + ], + "lines": [ + { + "bbox": [ + 302, + 176, + 527, + 274 + ], + "spans": [ + { + "bbox": [ + 302, + 176, + 527, + 274 + ], + "type": "text", + "content": "Table 2: Ablation study (10-fold cross-validation) with respect to model and data augmentations for goal extraction. H: the human-created dataset (\\$3.3); S: the synthetic dataset created from templates; AH/AS: the augmented version of H/S via paraphrasing (\\$4.4). " + }, + { + "bbox": [ + 302, + 176, + 527, + 274 + ], + "type": "inline_equation", + "content": "[\\mathrm{CLS}_i]" + }, + { + "bbox": [ + 302, + 176, + 527, + 274 + ], + "type": "text", + "content": " represents the use of individual classification tokens for each goal/constraint (\\$4.1); \"troop\" represents the inclusion of troop selections as a part of the input." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 304, + 285, + 526, + 393 + ], + "lines": [ + { + "bbox": [ + 304, + 285, + 526, + 393 + ], + "spans": [ + { + "bbox": [ + 304, + 285, + 526, + 393 + ], + "type": "table", + "html": "
<table><tr><td>Model</td><td>Data</td><td>Pretraining</td><td>Accuracy (Std)</td></tr>
<tr><td>RoBERTa base</td><td>H</td><td>-</td><td>62.60 (1.60)</td></tr>
<tr><td>w/ troop + [CLSi]</td><td>H</td><td>S</td><td>68.21 (1.08)</td></tr>
<tr><td>w/ troop + [CLSi]</td><td>AH</td><td>S</td><td>67.79 (1.58)</td></tr>
<tr><td>w/ troop + [CLSi]</td><td>H</td><td>AS</td><td>67.09 (1.28)</td></tr>
<tr><td>w/ troop</td><td>H</td><td>S</td><td>65.96 (1.12)</td></tr>
<tr><td>w/ troop + [CLSi]</td><td>H</td><td>-</td><td>65.76 (1.13)</td></tr>
<tr><td>w/ troop + [CLSi]</td><td>AH</td><td>-</td><td>65.52 (1.42)</td></tr>
<tr><td>w/ [CLSi]</td><td>H</td><td>S</td><td>65.31 (1.12)</td></tr></table>
", + "image_path": "704680a6d53e65d7f75bf301abc229b00e6438a923c23630a3784b54987aaa1a.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_body" + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 402, + 525, + 426 + ], + "lines": [ + { + "bbox": [ + 302, + 402, + 525, + 426 + ], + "spans": [ + { + "bbox": [ + 302, + 402, + 525, + 426 + ], + "type": "text", + "content": "Table 3: Ablation study (10-fold cross-validation) for constraint extraction." + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 301, + 449, + 527, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 449, + 527, + 773 + ], + "spans": [ + { + "bbox": [ + 301, + 449, + 527, + 773 + ], + "type": "text", + "content": "method for determining a model-fit so as to choose the regression model which best fits our data. For the goals model, we modeled each baseline (human vs. model vs. ChatGPT) as a fixed-effects covariate, and the datapoint number as a mixed-effects variable. The datapoint corresponded to the numerical index (between 1 and 30) of the datapoint from the test set. We performed Levene's test (Glass, 1966) to show homoscedasticity " + }, + { + "bbox": [ + 301, + 449, + 527, + 773 + ], + "type": "inline_equation", + "content": "(F(2,327) = 0.5435" + }, + { + "bbox": [ + 301, + 449, + 527, + 773 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 301, + 449, + 527, + 773 + ], + "type": "inline_equation", + "content": "p = 0.581)" + }, + { + "bbox": [ + 301, + 449, + 527, + 773 + ], + "type": "text", + "content": ". The residuals for our model were not normally distributed; however, prior work has shown that an F-test is robust to non-normality (Blanca Mena et al., 2017; Cochran, 1947). Therefore, we proceeded with our linear regression analysis. The dependent variable within our analysis was the number of goals predicted correctly. 
An ANOVA with respect to our dependent variable yielded a significant difference across conditions " + }, + { + "bbox": [ + 301, + 449, + 527, + 773 + ], + "type": "inline_equation", + "content": "(F(2,299.95) = 10.605" + }, + { + "bbox": [ + 301, + 449, + 527, + 773 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 301, + 449, + 527, + 773 + ], + "type": "inline_equation", + "content": "p < 0.001)" + }, + { + "bbox": [ + 301, + 449, + 527, + 773 + ], + "type": "text", + "content": ". A Tukey post-hoc test (Abdi and Williams, 2010) for pairwise significance further revealed a significant difference between the performance of our model vs humans " + }, + { + "bbox": [ + 301, + 449, + 527, + 773 + ], + "type": "inline_equation", + "content": "(p < 0.001)" + }, + { + "bbox": [ + 301, + 449, + 527, + 773 + ], + "type": "text", + "content": " and vs ChatGPT " + }, + { + "bbox": [ + 301, + 449, + 527, + 773 + ], + "type": "inline_equation", + "content": "(p < 0.05)" + }, + { + "bbox": [ + 301, + 449, + 527, + 773 + ], + "type": "text", + "content": ", i.e., our approach was able to significantly predict" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "12807" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "bbox": [ + 68, + 71, + 244, + 84 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 71, + 244, + 84 + ], + "spans": [ + { + "bbox": [ + 68, + 71, + 244, + 84 + ], + "type": "text", + "content": "goals better than humans and ChatGPT." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 101, + 251, + 113 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 101, + 251, + 113 + ], + "spans": [ + { + "bbox": [ + 67, + 101, + 251, + 113 + ], + "type": "text", + "content": "5.4 Results for Constraint Extraction" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 123, + 291, + 380 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 123, + 291, + 380 + ], + "spans": [ + { + "bbox": [ + 67, + 123, + 291, + 380 + ], + "type": "text", + "content": "The average number of constraints predicted correctly per map can be seen in column 2 of Table 1. To compare our constraint prediction model to our human and ChatGPT baselines, we conducted a non-parametric Friedman's test (Pereira et al., 2015). We could not employ a multivariate regression analysis, as the regression model for constraints did not satisfy the assumption of homoscedasticity as per Levene's test " + }, + { + "bbox": [ + 67, + 123, + 291, + 380 + ], + "type": "inline_equation", + "content": "(F(2,327) = 5.4294, p < 0.01)" + }, + { + "bbox": [ + 67, + 123, + 291, + 380 + ], + "type": "text", + "content": ". The Friedman's test yielded a significant difference across conditions for the task of predicting constraints " + }, + { + "bbox": [ + 67, + 123, + 291, + 380 + ], + "type": "inline_equation", + "content": "(\\chi^2 (2,90) = 16.768, p < 0.001)" + }, + { + "bbox": [ + 67, + 123, + 291, + 380 + ], + "type": "text", + "content": ". 
A further pairwise Wilcoxon signed-rank test (Woolson, 2007) revealed a significant difference between humans and our model " + }, + { + "bbox": [ + 67, + 123, + 291, + 380 + ], + "type": "inline_equation", + "content": "(p < 0.001)" + }, + { + "bbox": [ + 67, + 123, + 291, + 380 + ], + "type": "text", + "content": " as well as ChatGPT and our model " + }, + { + "bbox": [ + 67, + 123, + 291, + 380 + ], + "type": "inline_equation", + "content": "(p < 0.001)" + }, + { + "bbox": [ + 67, + 123, + 291, + 380 + ], + "type": "text", + "content": ", indicating that our approach is not just able to significantly outperform humans, but also ChatGPT for inferring constraints from language." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 396, + 145, + 409 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 396, + 145, + 409 + ], + "spans": [ + { + "bbox": [ + 67, + 396, + 145, + 409 + ], + "type": "text", + "content": "5.5 Discussion" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 419, + 291, + 635 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 419, + 291, + 635 + ], + "spans": [ + { + "bbox": [ + 67, + 419, + 291, + 635 + ], + "type": "text", + "content": "Our results emphasize that inferring strategic intent from language is a non-trivial task, as language interpretation can be subjective and malleable. ChatGPT is capable of performing novel tasks such as text classification (Li et al., 2023), mathematical problem solving (Frieder et al., 2023), and information extraction (He et al., 2023) through in-context learning. However, despite these capabilities, our model was found to significantly outperform ChatGPT for inferring strategic intent from language. 
Success in highly specific and complex language interpretation tasks, such as ours, requires the model to build an understanding of the domain and the task itself, as the generic language interpretation learned by the majority of pretrained language models may not be applicable." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 638, + 291, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 638, + 291, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 638, + 291, + 773 + ], + "type": "text", + "content": "Recent work on evaluating open question-answering on a challenge dataset has shown that even large-scale language models with between 6B and 100B parameters did not outperform humans (Peinl and Wirth, 2023). By developing a computational interface which can infer strategic intent from language significantly better than humans, we show the usefulness of our pipeline towards solving complex domain-specific tasks in a low-data, low-resource setting." + } + ] + } + ], + "index": 5 + }, + { + "type": "table", + "bbox": [ + 304, + 68, + 526, + 115 + ], + "blocks": [ + { + "bbox": [ + 304, + 68, + 526, + 115 + ], + "lines": [ + { + "bbox": [ + 304, + 68, + 526, + 115 + ], + "spans": [ + { + "bbox": [ + 304, + 68, + 526, + 115 + ], + "type": "table", + "html": "
<table><tr><td>Baseline</td><td>Constraints</td><td>Goals</td></tr>
<tr><td>Roberta-base (Best)</td><td>68.21 (1.08)</td><td>46.04 (1.85)</td></tr>
<tr><td>GPT-Neo 125M (Best)</td><td>65.22 (1.21)</td><td>46.08 (0.73)</td></tr></table>
", + "image_path": "aea04a342117037367786e3345f5ff6cbea2f8af54cd266add4e8de766652812.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_body" + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 126, + 525, + 163 + ], + "lines": [ + { + "bbox": [ + 302, + 126, + 525, + 163 + ], + "spans": [ + { + "bbox": [ + 302, + 126, + 525, + 163 + ], + "type": "text", + "content": "Table 4: This table depicts the performance when the RoBERTa-base encoder is substituted with a SOTA autoregressive model, i.e., GPT-Neo (125 million parameters)." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 302, + 184, + 401, + 196 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 184, + 401, + 196 + ], + "spans": [ + { + "bbox": [ + 302, + 184, + 401, + 196 + ], + "type": "text", + "content": "5.6 Ablation Study" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 301, + 200, + 526, + 539 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 200, + 526, + 539 + ], + "spans": [ + { + "bbox": [ + 301, + 200, + 526, + 539 + ], + "type": "text", + "content": "Tables 2 and 3 provide the results from ablating each model augmentation discussed in Section 4. The effects of these augmentations are more prominent in the model for predicting constraints (" + }, + { + "bbox": [ + 301, + 200, + 526, + 539 + ], + "type": "inline_equation", + "content": "\\sim" + }, + { + "bbox": [ + 301, + 200, + 526, + 539 + ], + "type": "text", + "content": " 6% performance boost) than predicting goals (" + }, + { + "bbox": [ + 301, + 200, + 526, + 539 + ], + "type": "inline_equation", + "content": "\\sim" + }, + { + "bbox": [ + 301, + 200, + 526, + 539 + ], + "type": "text", + "content": " 1.5% performance boost). For the constraints model, when any parameter, i.e. 
troop selections, pretraining, or CLS-Token, was removed, the accuracy dropped by " + }, + { + "bbox": [ + 301, + 200, + 526, + 539 + ], + "type": "inline_equation", + "content": "\\sim" + }, + { + "bbox": [ + 301, + 200, + 526, + 539 + ], + "type": "text", + "content": " 3% individually. For predicting goals, the inclusion of troop selections was the only model augmentation which seemed to have a decisive impact on performance, as all models with selections had an accuracy " + }, + { + "bbox": [ + 301, + 200, + 526, + 539 + ], + "type": "inline_equation", + "content": "\\sim" + }, + { + "bbox": [ + 301, + 200, + 526, + 539 + ], + "type": "text", + "content": " 1% higher than those without. We attribute the difficulty in improving the performance of the goals model to the contextual ambiguity of the values assigned to each goal. Participants may not always follow the same metric while specifying goal values. Each participant could have a unique interpretation of what any rating between -100 and 100 means for a particular goal, and a unique way of describing that value through language (see Appendix for the data distribution corresponding to each goal). This disparity in interpreting values could affect the consistency of language descriptions for goals in our dataset." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 539, + 525, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 539, + 525, + 714 + ], + "spans": [ + { + "bbox": [ + 302, + 539, + 525, + 714 + ], + "type": "text", + "content": "Finally, the last ablation studied the effect of the type of encoder utilized in our approach. We performed a comparison with a model which replaced the encoder with a SOTA pretrained autoregressive model. We utilized GPT-Neo (Black et al., 2021) for our experiments, as it has the same number of parameters as RoBERTa-base (125 million). 
Our findings (see Table 4) show that utilizing an autoregressive model as our encoder offers no benefit over a RoBERTa-base model: the GPT-Neo model performed equivalently for predicting goals and about " + }, + { + "bbox": [ + 302, + 539, + 525, + 714 + ], + "type": "inline_equation", + "content": "3\\%" + }, + { + "bbox": [ + 302, + 539, + 525, + 714 + ], + "type": "text", + "content": " worse for the constraints model." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 725, + 381, + 737 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 725, + 381, + 737 + ], + "spans": [ + { + "bbox": [ + 302, + 725, + 381, + 737 + ], + "type": "text", + "content": "6 Conclusion" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 746, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 746, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 746, + 525, + 772 + ], + "type": "text", + "content": "In this paper, we develop a novel computational interface to automate inferring strategic intent, in the" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "12808" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 71, + 289, + 273 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 71, + 289, + 273 + ], + "spans": [ + { + "bbox": [ + 69, + 71, + 289, + 273 + ], + "type": "text", + "content": "form of goals and constraints, from unstructured language descriptions of strategies. We develop a new benchmark for our dataset and broader task, and further conduct a novel head-to-head evaluation to determine the relative efficacy of our approach. 
We show that in a low-data setting, our approach towards inferring goals and constraints from language strategy descriptions can significantly outperform humans for the same tasks. Furthermore, we also found that our approach, with only 125 million parameters, was able to significantly outperform ChatGPT for inferring strategic intent from language. Our work endows researchers with valuable tools to further Seldonian optimization approaches for mixed-initiative interaction." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 285, + 137, + 297 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 285, + 137, + 297 + ], + "spans": [ + { + "bbox": [ + 69, + 285, + 137, + 297 + ], + "type": "text", + "content": "Future Work" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 307, + 289, + 685 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 307, + 289, + 685 + ], + "spans": [ + { + "bbox": [ + 69, + 307, + 289, + 685 + ], + "type": "text", + "content": "To measure ChatGPT performance, we employ a one-shot chain-of-thought prompt method with detailed instructions for the task. We chose this method to maintain consistency between the information shown to humans and ChatGPT. Future work may explore ablations on the size of the initial prompt or the number of annotated examples in the prompt to tune the performance of ChatGPT on our strategy translation task. Secondly, an important next step that stems from this research pertains to multi-round inference and updating the initially learned strategy. In future work, it would be helpful to develop methods to allow users to modify their initial strategy throughout the game or task as their goals or values change. These methods could utilize approaches proposed in prior work wherein language inputs were leveraged to change the sub-goals that an agent is considering (Fu et al., 2019; Goyal et al., 2019). 
Furthermore, recent work has shown promise for the capabilities of ChatGPT/GPT-3.5 towards dialog-state tracking and task-oriented dialog (Labruna et al., 2023; Heck et al., 2023). Future work could also formulate this task of updating the initial strategy over the course of the game as a goal-oriented dialog, and tune GPT-3.5 or GPT-4 to update a user's initially translated strategy after multiple rounds of the game through language feedback." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 698, + 129, + 709 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 698, + 129, + 709 + ], + "spans": [ + { + "bbox": [ + 69, + 698, + 129, + 709 + ], + "type": "text", + "content": "Limitations" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 719, + 289, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 719, + 289, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 719, + 289, + 772 + ], + "type": "text", + "content": "Firstly, we asked participants to provide natural language descriptions after providing their structured intent in the form of goals and constraints. This potentially biased the participant towards specifically" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 305, + 71, + 524, + 408 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 71, + 524, + 408 + ], + "spans": [ + { + "bbox": [ + 305, + 71, + 524, + 408 + ], + "type": "text", + "content": "referencing the terminology utilized in the goals and constraints. While our dataset provides explanations that are the closest to natural, human-like descriptions of strategies, an important next step would entail comparing how our model performs on strategies collected \"in-the-wild.\" Secondly, in this paper we assume that utilizing language is more accessible than learning to use mathematical specifications directly to specify their intent to an intelligent agent. 
However, we do not test whether this assumption bears out in practice. In future work, we hope to develop a human-subjects study to confirm this hypothesis. Finally, despite converting language to goals and constraints, in this work we do not directly train a Seldonian optimization approach. In this work, we focus on showing the capability of our machine learning pipeline in a low-data setting. However, we have provided all the components needed to train a reinforcement learning approach for constraining an RL agent's behavior through unstructured language (including a novel OpenAI Gym RL domain for the game Risk, see Appendix). Developing this approach is currently outside the scope of this work, and we thereby leave this exploration for future work." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 305, + 421, + 392, + 433 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 421, + 392, + 433 + ], + "spans": [ + { + "bbox": [ + 305, + 421, + 392, + 433 + ], + "type": "text", + "content": "Ethics Statement" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 305, + 443, + 524, + 630 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 443, + 524, + 630 + ], + "spans": [ + { + "bbox": [ + 305, + 443, + 524, + 630 + ], + "type": "text", + "content": "As pretrained large-language models are utilized in our approach for automated strategy translation, we need to be cognizant of the prevalence of bias within these models. If these systems are translating strategies in safety-critical settings, it is important to make sure that the language models make decisions solely based on the provided context rather than any inherent bias. Many prior works have studied approaches to identify and mitigate bias (Abid et al., 2021; Silva et al., 2021b; Guo et al., 2022; Viswanath and Zhang, 2023). We encourage authors to seek out such works prior to deploying any strategy translation module towards a real-world task." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 305, + 643, + 404, + 655 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 643, + 404, + 655 + ], + "spans": [ + { + "bbox": [ + 305, + 643, + 404, + 655 + ], + "type": "text", + "content": "Acknowledgements" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 305, + 666, + 524, + 758 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 666, + 524, + 758 + ], + "spans": [ + { + "bbox": [ + 305, + 666, + 524, + 758 + ], + "type": "text", + "content": "This work was supported by the Office of Naval Research under awards N00014-19-1-2076, N00014-22-1-2834, N00014-23-1-2887, and the National Science Foundation under award FMRG-2229260. We also thank Konica Minolta for their contribution to this work via a gift to the Georgia Tech Research Foundation." + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 285, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 285, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 285, + 781, + 312, + 791 + ], + "type": "text", + "content": "12809" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 71, + 127, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 71, + 127, + 83 + ], + "spans": [ + { + "bbox": [ + 69, + 71, + 127, + 83 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 89, + 289, + 772 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 69, + 89, + 289, + 111 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 89, + 289, + 111 + ], + "spans": [ + { + "bbox": [ + 69, + 89, + 289, + 111 + ], + "type": "text", + "content": "2014. Online participant recruitment for surveys and market research." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 119, + 289, + 153 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 119, + 289, + 153 + ], + "spans": [ + { + "bbox": [ + 69, + 119, + 289, + 153 + ], + "type": "text", + "content": "Herve Abdi and Lynne J Williams. 2010. Tukey's honestly significant difference (hsd) test. Encyclopedia of research design, 3(1):1-5." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 159, + 289, + 204 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 159, + 289, + 204 + ], + "spans": [ + { + "bbox": [ + 69, + 159, + 289, + 204 + ], + "type": "text", + "content": "Abubakar Abid, Maheen Farooqi, and James Zou. 2021. Persistent anti-muslim bias in large language models. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pages 298-306." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 211, + 289, + 234 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 211, + 289, + 234 + ], + "spans": [ + { + "bbox": [ + 69, + 211, + 289, + 234 + ], + "type": "text", + "content": "Léo Andeol. 2018. Leoandeol/gym-risk: Gym environment for the risk game by hasbro." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 241, + 289, + 285 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 241, + 289, + 285 + ], + "spans": [ + { + "bbox": [ + 69, + 241, + 289, + 285 + ], + "type": "text", + "content": "Jacob Andreas, Dan Klein, and Sergey Levine. 2017. Modular multitask reinforcement learning with policy sketches. In International Conference on Machine Learning, pages 166-175. PMLR." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 292, + 289, + 347 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 292, + 289, + 347 + ], + "spans": [ + { + "bbox": [ + 69, + 292, + 289, + 347 + ], + "type": "text", + "content": "Dzmitry Bahdanau, Felix Hill, Jan Leike, Edward Hughes, Arian Hosseini, Pushmeet Kohli, and Edward Grefenstette. 2018. Learning to understand goal specifications by modelling reward. arXiv preprint arXiv:1806.01946." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 354, + 289, + 411 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 354, + 289, + 411 + ], + "spans": [ + { + "bbox": [ + 69, + 354, + 289, + 411 + ], + "type": "text", + "content": "Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. 2021. GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow. If you use this software, please cite it using these metadata." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 417, + 289, + 462 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 417, + 289, + 462 + ], + "spans": [ + { + "bbox": [ + 69, + 417, + 289, + 462 + ], + "type": "text", + "content": "María José Blanca Mena, Rafael Alarcón Postigo, Jaume Arnau Gras, Roser Bono Cabré, Rebecca Bendayan, et al. 2017. Non-normal data: Is anova still a valid option? Psicothema." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 470, + 289, + 524 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 470, + 289, + 524 + ], + "spans": [ + { + "bbox": [ + 69, + 470, + 289, + 524 + ], + "type": "text", + "content": "Valts Blukis, Yannick Terme, Eyvind Niklasson, Ross A Knepper, and Yoav Artzi. 2019. Learning to map natural language instructions to physical quadcopter control using simulated flight. arXiv preprint arXiv:1910.09664." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 69, + 532, + 289, + 588 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 532, + 289, + 588 + ], + "spans": [ + { + "bbox": [ + 69, + 532, + 289, + 588 + ], + "type": "text", + "content": "Haw-Shiuan Chang, Ruei-Yao Sun, Kathryn Ricci, and Andrew McCallum. 2023. Multi-CLS BERT: An efficient alternative to traditional ensembling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 69, + 594, + 289, + 628 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 594, + 289, + 628 + ], + "spans": [ + { + "bbox": [ + 69, + 594, + 289, + 628 + ], + "type": "text", + "content": "William G Cochran. 1947. Some consequences when the assumptions for the analysis of variance are not satisfied. Biometrics, 3(1):22-38." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 69, + 635, + 289, + 669 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 635, + 289, + 669 + ], + "spans": [ + { + "bbox": [ + 69, + 635, + 289, + 669 + ], + "type": "text", + "content": "Richard Dempsey and Jonathan M Chavous. 2013. Commander's intent and concept of operations. Military Review, 93(6):58-66." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 69, + 676, + 289, + 709 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 676, + 289, + 709 + ], + "spans": [ + { + "bbox": [ + 69, + 676, + 289, + 709 + ], + "type": "text", + "content": "Cunxiao Du, Zhaopeng Tu, and Jing Jiang. 2021. Order-agnostic cross entropy for non-autoregressive machine translation. arXiv preprint arXiv:2106.05093." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 69, + 716, + 289, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 716, + 289, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 716, + 289, + 772 + ], + "type": "text", + "content": "Simon Frieder, Luca Pinchetti, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz, Philipp Christian Petersen, Alexis Chevalier, and Julius Berner. 2023. Mathematical capabilities of chatgpt. arXiv preprint arXiv:2301.13867." + } + ] + } + ], + "index": 14 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 524, + 772 + ], + "type": "list", + "angle": 0, + "index": 27, + "blocks": [ + { + "bbox": [ + 304, + 72, + 524, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 72, + 524, + 116 + ], + "spans": [ + { + "bbox": [ + 304, + 72, + 524, + 116 + ], + "type": "text", + "content": "Justin Fu, Anoop Korattikara, Sergey Levine, and Sergio Guadarrama. 2019. From language to goals: Inverse reinforcement learning for vision-based instruction following. arXiv preprint arXiv:1902.07742." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 127, + 524, + 183 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 127, + 524, + 183 + ], + "spans": [ + { + "bbox": [ + 304, + 127, + 524, + 183 + ], + "type": "text", + "content": "Richard Gibson, Neesha Desai, and Richard Zhao. 2010. An automated technique for drafting territories in the board game Risk. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 6(1):15-20." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 195, + 524, + 227 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 195, + 524, + 227 + ], + "spans": [ + { + "bbox": [ + 304, + 195, + 524, + 227 + ], + "type": "text", + "content": "Gene V Glass. 1966. Testing homogeneity of variances. 
American Educational Research Journal, 3(3):187-190." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 239, + 524, + 294 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 239, + 524, + 294 + ], + "spans": [ + { + "bbox": [ + 304, + 239, + 524, + 294 + ], + "type": "text", + "content": "Nakul Gopalan, Dilip Arumugam, Lawson Wong, and Stefanie Tellex. 2018. Sequence-to-Sequence Language Grounding of Non-Markovian Task Specifications. In Proceedings of Robotics: Science and Systems, Pittsburgh, Pennsylvania." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 306, + 524, + 349 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 306, + 524, + 349 + ], + "spans": [ + { + "bbox": [ + 304, + 306, + 524, + 349 + ], + "type": "text", + "content": "Prasoon Goyal, Scott Niekum, and Raymond J Mooney. 2019. Using natural language for reward shaping in reinforcement learning. arXiv preprint arXiv:1903.02020." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 304, + 361, + 524, + 427 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 361, + 524, + 427 + ], + "spans": [ + { + "bbox": [ + 304, + 361, + 524, + 427 + ], + "type": "text", + "content": "Yue Guo, Yi Yang, and Ahmed Abbasi. 2022. Autodebias: Debiasing masked language models with automated biased prompts. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1012-1023." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 304, + 439, + 524, + 484 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 439, + 524, + 484 + ], + "spans": [ + { + "bbox": [ + 304, + 439, + 524, + 484 + ], + "type": "text", + "content": "Philip J Hayes. 1985. The utility of natural language interfaces (panel session). In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, page 19." 
+ } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 304, + 495, + 524, + 550 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 495, + 524, + 550 + ], + "spans": [ + { + "bbox": [ + 304, + 495, + 524, + 550 + ], + "type": "text", + "content": "Jiabang He, Lei Wang, Yi Hu, Ning Liu, Hui Liu, Xing Xu, and Heng Tao Shen. 2023. Icl-d3ie: In-context learning with diverse demonstrations updating for document information extraction. arXiv preprint arXiv:2303.05063." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 304, + 561, + 524, + 627 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 561, + 524, + 627 + ], + "spans": [ + { + "bbox": [ + 304, + 561, + 524, + 627 + ], + "type": "text", + "content": "Michael Heck, Nurul Lubis, Benjamin Ruppik, Renato Vukovic, Shutong Feng, Christian Geishauser, Hsien-Chin Lin, Carel van Niekerk, and Milica Gašić. 2023. Chatgpt for zero-shot dialogue state tracking: A solution or an opportunity? arXiv preprint arXiv:2306.01386." + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 304, + 640, + 524, + 683 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 640, + 524, + 683 + ], + "spans": [ + { + "bbox": [ + 304, + 640, + 524, + 683 + ], + "type": "text", + "content": "Arie W Kruglanski. 1996. Goals as knowledge structures. P. M. Gollwitzer & J. A. Bargh (Eds.), The psychology of action: Linking cognition and motivation to behavior, pages 599-618." + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 304, + 694, + 524, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 694, + 524, + 772 + ], + "spans": [ + { + "bbox": [ + 304, + 694, + 524, + 772 + ], + "type": "text", + "content": "Geert-Jan M Kruijff, M Janicek, Shanker Keshavdas, Benoit Larochelle, Hendrik Zender, Ninja JJM Smets, Tina Mioch, Mark A Neerincx, Jurriaan Van Diggelen, Francis Colas, et al. 2014. 
Experience in system design for human-robot teaming in urban search and rescue. In Field and Service Robotics, pages 111-125. Springer." + } + ] + } + ], + "index": 26 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "text", + "content": "12810" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 289, + 772 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 69, + 72, + 289, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 72, + 289, + 116 + ], + "spans": [ + { + "bbox": [ + 69, + 72, + 289, + 116 + ], + "type": "text", + "content": "Tiziano Labruna, Sofia Brenna, Andrea Zaninello, and Bernardo Magnini. 2023. Unraveling chatgpt: A critical analysis of ai-generated goal-oriented dialogues and annotations. arXiv preprint arXiv:2305.14556." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 124, + 289, + 168 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 124, + 289, + 168 + ], + "spans": [ + { + "bbox": [ + 69, + 124, + 289, + 168 + ], + "type": "text", + "content": "Jiazheng Li, Runcong Zhao, Yulan He, and Lin Gui. 2023. Overprompt: Enhancing chatgpt capabilities through an efficient in-context learning approach. arXiv preprint arXiv:2305.14973." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 176, + 289, + 231 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 176, + 289, + 231 + ], + "spans": [ + { + "bbox": [ + 69, + 176, + 289, + 231 + ], + "type": "text", + "content": "Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2022. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 238, + 289, + 294 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 238, + 289, + 294 + ], + "spans": [ + { + "bbox": [ + 69, + 238, + 289, + 294 + ], + "type": "text", + "content": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 302, + 289, + 356 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 302, + 289, + 356 + ], + "spans": [ + { + "bbox": [ + 69, + 302, + 289, + 356 + ], + "type": "text", + "content": "Joseph E Mercado, Michael A Rupp, Jessie YC Chen, Michael J Barnes, Daniel Barber, and Katelyn Procci. 2016. Intelligent agent transparency in human-agent teaming for multi-uxv management. Human factors, 58(3):401-415." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 364, + 289, + 418 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 364, + 289, + 418 + ], + "spans": [ + { + "bbox": [ + 69, + 364, + 289, + 418 + ], + "type": "text", + "content": "Dipendra Misra, Andrew Bennett, Valts Blukis, Eyvind Niklasson, Max Shatkhin, and Yoav Artzi. 2018. Mapping instructions to actions in 3d environments with visual goal prediction. arXiv preprint arXiv:1809.00786." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 428, + 289, + 449 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 428, + 289, + 449 + ], + "spans": [ + { + "bbox": [ + 69, + 428, + 289, + 449 + ], + "type": "text", + "content": "Gordon B Moskowitz and Heidi Grant. 2009. The psychology of goals. Guilford Press." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 457, + 289, + 479 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 457, + 289, + 479 + ], + "spans": [ + { + "bbox": [ + 69, + 457, + 289, + 479 + ], + "type": "text", + "content": "Michael J Muller and Sarah Kuhn. 1993. Participatory design. Communications of the ACM, 36(6):24-28." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 487, + 289, + 541 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 487, + 289, + 541 + ], + "spans": [ + { + "bbox": [ + 69, + 487, + 289, + 541 + ], + "type": "text", + "content": "Thomas Nickles. 1978. Scientific problems and constraints. In PSA: Proceedings of the biennial meeting of the Philosophy of Science Association, volume 1978, pages 134-148. Philosophy of Science Association." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 550, + 289, + 594 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 550, + 289, + 594 + ], + "spans": [ + { + "bbox": [ + 69, + 550, + 289, + 594 + ], + "type": "text", + "content": "David G Novick and Stephen Sutton. 1997. What is mixed-initiative interaction? In Proceedings of the AAAI spring symposium on computational models for mixed initiative interaction, volume 2, page 12."
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 69, + 602, + 289, + 666 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 602, + 289, + 666 + ], + "spans": [ + { + "bbox": [ + 69, + 602, + 289, + 666 + ], + "type": "text", + "content": "Aishwarya Padmakumar, Jesse Thomason, Ayush Shrivastava, Patrick Lange, Anjali Narayan-Chen, Spandana Gella, Robinson Piramuthu, Gokhan Tur, and Dilek Hakkani-Tur. 2021. Teach: Task-driven embodied agents that chat. arXiv preprint arXiv:2110.00534." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 69, + 676, + 289, + 719 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 676, + 289, + 719 + ], + "spans": [ + { + "bbox": [ + 69, + 676, + 289, + 719 + ], + "type": "text", + "content": "Réné Peinl and Johannes Wirth. 2023. Evaluation of medium-large language models at zero-shot closed book generative question answering. arXiv preprint arXiv:2305.11991." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 69, + 728, + 289, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 728, + 289, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 728, + 289, + 772 + ], + "type": "text", + "content": "Dulce G Pereira, Anabela Afonso, and Fátima Melo Medeiros. 2015. Overview of Friedman's test and post-hoc analysis. Communications in Statistics-Simulation and Computation, 44(10):2636-2653." + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 524, + 772 + ], + "type": "list", + "angle": 0, + "index": 25, + "blocks": [ + { + "bbox": [ + 304, + 72, + 524, + 126 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 72, + 524, + 126 + ], + "spans": [ + { + "bbox": [ + 304, + 72, + 524, + 126 + ], + "type": "text", + "content": "Ian C Rankin, Seth McCammon, and Geoffrey A Hollinger. 2021. Robotic information gathering using semantic language instructions.
In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 4882-4888. IEEE." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 139, + 524, + 193 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 139, + 524, + 193 + ], + "spans": [ + { + "bbox": [ + 304, + 139, + 524, + 193 + ], + "type": "text", + "content": "Joseph Rosen, Eliot Grigg, Jaron Lanier, Susan McGrath, Scott Lillibridge, David Sargent, and C Everett Koop. 2002. The future of command and control for disaster response. IEEE engineering in medicine and biology magazine, 21(5):56-68." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 205, + 524, + 248 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 205, + 524, + 248 + ], + "spans": [ + { + "bbox": [ + 304, + 205, + 524, + 248 + ], + "type": "text", + "content": "Mohit Shridhar, Lucas Manuelli, and Dieter Fox. 2022. Cliport: What and where pathways for robotic manipulation. In Conference on Robot Learning, pages 894-906. PMLR." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 261, + 524, + 315 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 261, + 524, + 315 + ], + "spans": [ + { + "bbox": [ + 304, + 261, + 524, + 315 + ], + "type": "text", + "content": "Andrew Silva, Nina Moorman, William Silva, Zulfiqar Zaidi, Nakul Gopalan, and Matthew Gombolay. 2021a. Lancon-learn: Learning with language to enable generalization in multi-task manipulation. IEEE Robotics and Automation Letters." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 327, + 524, + 405 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 327, + 524, + 405 + ], + "spans": [ + { + "bbox": [ + 304, + 327, + 524, + 405 + ], + "type": "text", + "content": "Andrew Silva, Pradyumna Tambwekar, and Matthew Gombolay. 2021b.
Towards a comprehensive understanding and accurate evaluation of societal biases in pre-trained transformers. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2383-2389." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 417, + 524, + 470 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 417, + 524, + 470 + ], + "spans": [ + { + "bbox": [ + 304, + 417, + 524, + 470 + ], + "type": "text", + "content": "Alane Suhr, Claudia Yan, Jacob Schluger, Stanley Yu, Hadi Khader, Marwa Mouallem, Iris Zhang, and Yoav Artzi. 2019. Executing instructions in situated collaborative interactions. arXiv preprint arXiv:1910.03655." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 483, + 524, + 526 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 483, + 524, + 526 + ], + "spans": [ + { + "bbox": [ + 304, + 483, + 524, + 526 + ], + "type": "text", + "content": "Pradyumna Tambwekar, Andrew Silva, Nakul Gopalan, and Matthew Gombolay. 2021. Interpretable policy specification and synthesis through natural language and RL." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 304, + 539, + 524, + 571 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 539, + 524, + 571 + ], + "spans": [ + { + "bbox": [ + 304, + 539, + 524, + 571 + ], + "type": "text", + "content": "Stefanie Tellex, Nakul Gopalan, Hadas Kress-Gazit, and Cynthia Matuszek. 2020. Robots that use language. Annual Review of Control, Robotics, and Autonomous Systems, 3:25-55." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 304, + 583, + 524, + 628 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 583, + 524, + 628 + ], + "spans": [ + { + "bbox": [ + 304, + 583, + 524, + 628 + ], + "type": "text", + "content": "Philip S Thomas, Bruno Castro da Silva, Andrew G Barto, and Emma Brunskill. 2017.
On ensuring that intelligent machines are well-behaved. arXiv preprint arXiv:1708.05448." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 304, + 640, + 524, + 683 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 640, + 524, + 683 + ], + "spans": [ + { + "bbox": [ + 304, + 640, + 524, + 683 + ], + "type": "text", + "content": "Philip S Thomas, Bruno Castro da Silva, Andrew G Barto, Stephen Giguere, Yuriy Brun, and Emma Brunskill. 2019. Preventing undesirable behavior of intelligent machines. Science, 366(6468):999-1004." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 304, + 695, + 524, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 695, + 524, + 772 + ], + "spans": [ + { + "bbox": [ + 304, + 695, + 524, + 772 + ], + "type": "text", + "content": "Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Nick Walker, Yuqian Jiang, Harel Yedidsion, Justin Hart, Peter Stone, and Raymond J Mooney. 2019. Improving grounded natural language understanding through human-robot dialog. In 2019 International Conference on Robotics and Automation (ICRA), pages 6934-6941. IEEE." + } + ] + } + ], + "index": 24 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 311, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 311, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 311, + 791 + ], + "type": "text", + "content": "12811" + } + ] + } + ], + "index": 26 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 290, + 116 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 72, + 290, + 116 + ], + "spans": [ + { + "bbox": [ + 69, + 72, + 290, + 116 + ], + "type": "text", + "content": "Hrishikesh Viswanath and Tianyi Zhang. 2023. Fairpy: A toolkit for evaluation of social biases and their mitigation in large language models. 
arXiv preprint arXiv:2302.05508." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 124, + 290, + 190 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 124, + 290, + 190 + ], + "spans": [ + { + "bbox": [ + 69, + 124, + 290, + 190 + ], + "type": "text", + "content": "Edward C Williams, Nakul Gopalan, Mine Rhee, and Stefanie Tellex. 2018. Learning to parse natural language to grounded reward functions with weak supervision. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 4430-4436. IEEE." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 200, + 290, + 222 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 200, + 290, + 222 + ], + "spans": [ + { + "bbox": [ + 69, + 200, + 290, + 222 + ], + "type": "text", + "content": "Robert F Woolson. 2007. Wilcoxon signed-rank test. Wiley encyclopedia of clinical trials, pages 1-3." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 230, + 290, + 274 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 230, + 290, + 274 + ], + "spans": [ + { + "bbox": [ + 69, + 230, + 290, + 274 + ], + "type": "text", + "content": "Tsung-Yen Yang, Michael Hu, Yinlam Chow, Peter J Ramadge, and Karthik Narasimhan. 2020. Safe reinforcement learning with natural language constraints. arXiv preprint arXiv:2010.05150." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 283, + 290, + 337 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 283, + 290, + 337 + ], + "spans": [ + { + "bbox": [ + 69, + 283, + 290, + 337 + ], + "type": "text", + "content": "Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In International Conference on Machine Learning, pages 11328-11339. PMLR." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 359, + 263, + 371 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 359, + 263, + 371 + ], + "spans": [ + { + "bbox": [ + 69, + 359, + 263, + 371 + ], + "type": "text", + "content": "A Additional Data Collection Details" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 380, + 290, + 637 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 380, + 290, + 637 + ], + "spans": [ + { + "bbox": [ + 69, + 380, + 290, + 637 + ], + "type": "text", + "content": "Our study applied participatory design principles (Muller and Kuhn, 1993) to ensure that participants were engaged in the task and provided meaningful strategy descriptions. Each participant was initially given a partially set-up map, where two other \"opponents\" had placed their troops. The participant was then asked to provide their troop placements based on these initial placements. In Risk, the initial troop placements have a substantial impact on the strategies that a player can pursue for the rest of the game. As such, troop initialization provides a stand-in for a player's overall strategy in a game. By asking participants to take part in an actual aspect of the gameplay, e.g., deploying troops, we encouraged them to envision future situations, to think about how their decisions could affect future gameplay, and to develop grounded strategies that could actually function as viable Risk gameplay strategies." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 638, + 290, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 638, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 638, + 290, + 772 + ], + "type": "text", + "content": "Next, participants were asked to provide the goals and constraints which they considered after selecting their troop placements.
These specific goals and constraints were selected as they cater to potential strategies that could be employed while playing Risk. The presence of these templates provided a scaffold within which participants, who may or may not have any experience with Risk, could ground their strategies. However, it is important to acknowledge the presence of an inductive" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 71, + 525, + 395 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 525, + 395 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 525, + 395 + ], + "type": "text", + "content": "bias, due to the specific wording of the goal and constraint templates, which could have impacted the strategies submitted by the participants. For goals, participants were asked to rate how important each goal was to their strategy on a scale of -100 to 100. A score of -100 indicated that pursuing the goal was completely detrimental to their strategy, while 100 indicated that pursuing the goal was essential to their strategy. For constraints, participants were provided 9 constraint templates and were asked to select and fill in the appropriate constraint that was represented in their strategy. Participants were required to provide at least three constraints to ensure that they did not skip this question. The specific goals and constraints in our dataset are depicted in Table 5. Finally, participants were asked to summarize their strategy for the given map as a language description. Participants were encouraged to include references to their goals and constraints, but these descriptions were otherwise unprompted. Participants were paid up to $8.50 based on the number of adequate responses submitted. The payment scale was updated if the average time taken significantly changed."
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 398, + 525, + 491 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 398, + 525, + 491 + ], + "spans": [ + { + "bbox": [ + 302, + 398, + 525, + 491 + ], + "type": "text", + "content": "As mentioned in the paper, we created three additional augmented datasets from our original corpus. Figure 6 provides some examples of the effect of the various augmentations we employed in each augmented dataset. Our full dataset can be found at the following anonymized GitHub repository - Anonymized Data Repository." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 303, + 508, + 444, + 521 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 508, + 444, + 521 + ], + "spans": [ + { + "bbox": [ + 303, + 508, + 444, + 521 + ], + "type": "text", + "content": "A.1 Data Cleaning/Filtering" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 529, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 529, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 529, + 525, + 772 + ], + "type": "text", + "content": "We made the fewest possible modifications to participants' responses to ensure responses were self-consistent while preserving the integrity of the organic data collection task. If a participant specifically referenced a goal or a constraint in their language, and did not include it in their response, then their response was modified to include it, and vice versa. We also corrected typos within a participant's specifications, such as if they meant to reference the \"Blue\" continent instead of the \"Red\" continent. If a response was not salvageable with minimal modifications, it was thrown out. Discarded responses included responses where participants simply did not understand the task or submitted blatantly insincere responses, such as copying text from the study multiple times to reach the character limit.
These decisions were made upon agreement of multiple reviewers." + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "text", + "content": "12812" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 69, + 72, + 208, + 176 + ], + "blocks": [ + { + "bbox": [ + 69, + 72, + 208, + 176 + ], + "lines": [ + { + "bbox": [ + 69, + 72, + 208, + 176 + ], + "spans": [ + { + "bbox": [ + 69, + 72, + 208, + 176 + ], + "type": "image", + "image_path": "ef39334c75b613e69998f9cfe5416b0ac7d83775b49097924d7e6282415f388d.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 221, + 72, + 362, + 176 + ], + "blocks": [ + { + "bbox": [ + 221, + 72, + 362, + 176 + ], + "lines": [ + { + "bbox": [ + 221, + 72, + 362, + 176 + ], + "spans": [ + { + "bbox": [ + 221, + 72, + 362, + 176 + ], + "type": "image", + "image_path": "5c7c744ea4ec893f71e9ddc9ffbd2de8edb6c9ac7fb5d1c76b8a7f0d3fc96dab.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 379, + 73, + 518, + 176 + ], + "blocks": [ + { + "bbox": [ + 379, + 73, + 518, + 176 + ], + "lines": [ + { + "bbox": [ + 379, + 73, + 518, + 176 + ], + "spans": [ + { + "bbox": [ + 379, + 73, + 518, + 176 + ], + "type": "image", + "image_path": "f30a288e4c9660dd62936fc80e7fdb0f706b5ace52919d0b2bbb8fab2771db9a.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 69, + 191, + 207, + 296 + ], + "blocks": [ + { + "bbox": [ + 69, + 191, + 207, + 296 + ], + "lines": [ + { + "bbox": [ + 69, 
+ 191, + 207, + 296 + ], + "spans": [ + { + "bbox": [ + 69, + 191, + 207, + 296 + ], + "type": "image", + "image_path": "9d6347c06cd61aab75f2603fd61a59540ed133c304d7a954400a37ae0053be06.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 303, + 524, + 317 + ], + "lines": [ + { + "bbox": [ + 67, + 303, + 524, + 317 + ], + "spans": [ + { + "bbox": [ + 67, + 303, + 524, + 317 + ], + "type": "text", + "content": "Figure 4: Distribution of assigned values for each goal. The titles for each goal have been shortened for readability." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 222, + 192, + 362, + 295 + ], + "blocks": [ + { + "bbox": [ + 222, + 192, + 362, + 295 + ], + "lines": [ + { + "bbox": [ + 222, + 192, + 362, + 295 + ], + "spans": [ + { + "bbox": [ + 222, + 192, + 362, + 295 + ], + "type": "image", + "image_path": "fb8ec1db6599a6415c401b1919a81aa5aeda7e9a5b3e55f5f8e10151a3c27c7c.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 379, + 192, + 524, + 295 + ], + "blocks": [ + { + "bbox": [ + 379, + 192, + 524, + 295 + ], + "lines": [ + { + "bbox": [ + 379, + 192, + 524, + 295 + ], + "spans": [ + { + "bbox": [ + 379, + 192, + 524, + 295 + ], + "type": "image", + "image_path": "c4d254aad5d3d1bdefdde1079b6c7a34a2e32ebda6e3a60488f0447ca70a3e1a.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 188, + 331, + 409, + 495 + ], + "blocks": [ + { + "bbox": [ + 188, + 331, + 409, + 495 + ], + "lines": [ + { + "bbox": [ + 188, + 331, + 409, + 495 + ], + "spans": [ + { + "bbox": [ + 188, + 331, + 409, + 495 + ], + "type": "image", + "image_path": "7dadec3d8286e861f9961dde765dde8738ff492805ebb010b0c4e4c5fb9df309.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": 
"image_body" + }, + { + "bbox": [ + 165, + 507, + 428, + 521 + ], + "lines": [ + { + "bbox": [ + 165, + 507, + 428, + 521 + ], + "spans": [ + { + "bbox": [ + 165, + 507, + 428, + 521 + ], + "type": "text", + "content": "Figure 5: Distribution of assigned values for each constraint type." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 541, + 197, + 555 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 541, + 197, + 555 + ], + "spans": [ + { + "bbox": [ + 67, + 541, + 197, + 555 + ], + "type": "text", + "content": "A.2 Data Collection Quiz" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 560, + 291, + 681 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 560, + 291, + 681 + ], + "spans": [ + { + "bbox": [ + 67, + 560, + 291, + 681 + ], + "type": "text", + "content": "In order to ensure that participants understood the rules of Risk prior to providing strategies for our dataset, each participant was asked to answer a five-question quiz. Participants needed to answer all questions correctly to proceed. Participants were given three tries to answer the questions, after which they were shown the correct answers. The five questions in our quiz were as follows (correct answers to each question are in bold):" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 77, + 692, + 290, + 705 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 692, + 290, + 705 + ], + "spans": [ + { + "bbox": [ + 77, + 692, + 290, + 705 + ], + "type": "text", + "content": "1. Which of these is NOT a phase in the game?"
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 95, + 713, + 236, + 772 + ], + "type": "list", + "angle": 0, + "index": 16, + "blocks": [ + { + "bbox": [ + 95, + 713, + 144, + 724 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 95, + 713, + 144, + 724 + ], + "spans": [ + { + "bbox": [ + 95, + 713, + 144, + 724 + ], + "type": "text", + "content": "(a) Attack" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 95, + 728, + 146, + 740 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 95, + 728, + 146, + 740 + ], + "spans": [ + { + "bbox": [ + 95, + 728, + 146, + 740 + ], + "type": "text", + "content": "(b) Recruit" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 95, + 744, + 236, + 757 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 95, + 744, + 236, + 757 + ], + "spans": [ + { + "bbox": [ + 95, + 744, + 236, + 757 + ], + "type": "text", + "content": "(c) Control opponent's troops" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 95, + 760, + 159, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 95, + 760, + 159, + 772 + ], + "spans": [ + { + "bbox": [ + 95, + 760, + 159, + 772 + ], + "type": "text", + "content": "(d) Maneuver" + } + ] + } + ], + "index": 15 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 311, + 541, + 479, + 555 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 541, + 479, + 555 + ], + "spans": [ + { + "bbox": [ + 311, + 541, + 479, + 555 + ], + "type": "text", + "content": "2. What is the objective of the game?" 
+ } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 329, + 560, + 526, + 632 + ], + "type": "list", + "angle": 0, + "index": 22, + "blocks": [ + { + "bbox": [ + 329, + 560, + 487, + 573 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 329, + 560, + 487, + 573 + ], + "spans": [ + { + "bbox": [ + 329, + 560, + 487, + 573 + ], + "type": "text", + "content": "(a) Control the rightmost continent" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 329, + 576, + 526, + 600 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 329, + 576, + 526, + 600 + ], + "spans": [ + { + "bbox": [ + 329, + 576, + 526, + 600 + ], + "type": "text", + "content": "(b) Have the maximum number of island territories" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 330, + 604, + 518, + 617 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 330, + 604, + 518, + 617 + ], + "spans": [ + { + "bbox": [ + 330, + 604, + 518, + 617 + ], + "type": "text", + "content": "(c) Have the most territories after 10 turns" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 330, + 619, + 511, + 632 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 330, + 619, + 511, + 632 + ], + "spans": [ + { + "bbox": [ + 330, + 619, + 511, + 632 + ], + "type": "text", + "content": "(d) Occupy all territories on the board" + } + ] + } + ], + "index": 21 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 311, + 641, + 526, + 681 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 641, + 526, + 681 + ], + "spans": [ + { + "bbox": [ + 311, + 641, + 526, + 681 + ], + "type": "text", + "content": "3. Which of these decides how many troops you receive at the start of each turn? 
(TWO CORRECT ANSWERS)" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 329, + 687, + 526, + 772 + ], + "type": "list", + "angle": 0, + "index": 28, + "blocks": [ + { + "bbox": [ + 329, + 687, + 524, + 701 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 329, + 687, + 524, + 701 + ], + "spans": [ + { + "bbox": [ + 329, + 687, + 524, + 701 + ], + "type": "text", + "content": "(a) The number of territories you control" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 329, + 703, + 524, + 729 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 329, + 703, + 524, + 729 + ], + "spans": [ + { + "bbox": [ + 329, + 703, + 524, + 729 + ], + "type": "text", + "content": "(b) The number of coastal territories on the map" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 329, + 731, + 511, + 745 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 329, + 731, + 511, + 745 + ], + "spans": [ + { + "bbox": [ + 329, + 731, + 511, + 745 + ], + "type": "text", + "content": "(c) The physical size of the board game" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 329, + 746, + 526, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 329, + 746, + 526, + 772 + ], + "spans": [ + { + "bbox": [ + 329, + 746, + 526, + 772 + ], + "type": "text", + "content": "(d) The number of continents you fully occupy" + } + ] + } + ], + "index": 27 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "12813" + } + ] + } + ], + "index": 29 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "bbox": [ + 77, + 71, + 290, + 111 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 71, + 290, + 111 + ],
"spans": [ + { + "bbox": [ + 77, + 71, + 290, + 111 + ], + "type": "text", + "content": "4. Which of the following statements are correct about attacking enemy territories in the game? (TWO CORRECT ANSWERS)" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 95, + 119, + 290, + 245 + ], + "type": "list", + "angle": 0, + "index": 5, + "blocks": [ + { + "bbox": [ + 95, + 119, + 290, + 158 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 95, + 119, + 290, + 158 + ], + "spans": [ + { + "bbox": [ + 95, + 119, + 290, + 158 + ], + "type": "text", + "content": "(a) When you attack a territory you've already attacked, your attack points are doubled" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 95, + 161, + 289, + 187 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 95, + 161, + 289, + 187 + ], + "spans": [ + { + "bbox": [ + 95, + 161, + 289, + 187 + ], + "type": "text", + "content": "(b) You CANNOT attack in the opposite direction of the arrows" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 95, + 190, + 289, + 216 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 95, + 190, + 289, + 216 + ], + "spans": [ + { + "bbox": [ + 95, + 190, + 289, + 216 + ], + "type": "text", + "content": "(c) You can only attack territories you have access to" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 95, + 220, + 289, + 245 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 95, + 220, + 289, + 245 + ], + "spans": [ + { + "bbox": [ + 95, + 220, + 289, + 245 + ], + "type": "text", + "content": "(d) You can never attack a territory in the same continent" + } + ] + } + ], + "index": 4 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 77, + 255, + 290, + 296 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 255, + 290, + 296 + ], + "spans": [ + { + "bbox": [ + 77, + 255, + 290, + 296 + ], + "type": "text", + "content": "5. 
Which of the following statements are true regarding how attacks are conducted? (TWO CORRECT ANSWERS)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 95, + 303, + 290, + 455 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 95, + 303, + 289, + 327 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 95, + 303, + 289, + 327 + ], + "spans": [ + { + "bbox": [ + 95, + 303, + 289, + 327 + ], + "type": "text", + "content": "(a) A player with scattered troops always wins" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 95, + 332, + 290, + 358 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 95, + 332, + 290, + 358 + ], + "spans": [ + { + "bbox": [ + 95, + 332, + 290, + 358 + ], + "type": "text", + "content": "(b) A player attacking from the left side always wins" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 95, + 361, + 290, + 412 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 95, + 361, + 290, + 412 + ], + "spans": [ + { + "bbox": [ + 95, + 361, + 290, + 412 + ], + "type": "text", + "content": "(c) Both players roll a number of dice dependent on the number of their troops involved in the battle to decide the outcome" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 95, + 417, + 289, + 455 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 95, + 417, + 289, + 455 + ], + "spans": [ + { + "bbox": [ + 95, + 417, + 289, + 455 + ], + "type": "text", + "content": "(d) A player can attack with up to 3 troops and defend with up to 2 troops in one battle" + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 68, + 467, + 166, + 481 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 467, + 166, + 481 + ], + "spans": [ + { + "bbox": [ + 68, + 467, + 166, + 481 + ], + "type": "text", + "content": "B Dataset Utility" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 67, + 489, + 291, + 772 + ], + 
"type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 489, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 489, + 291, + 772 + ], + "type": "text", + "content": "This section provides a brief discussion on the potential future utility of our collated dataset. Firstly, this dataset provides strategy specifications in Risk that can be used to test seldonian optimization approaches in future work. Our dataset provides the first such instance language descriptions of strategic intent. Future work can analyze the flaws and strengths of our data to modify our data collection protocol and generate the specific examples they may need for their individual applications. However, there are many tangential applications for this data that are unrelated to the use-case specified in this paper. There is a dearth of natural language datasets which contain language with human-like speech patterns that is not scraped from internetcorpora. Many NLP techniques can be applied to further study this language data such as summarization, to figure out whether these policies can be summarized into a more easily digestible format, sentiment analysis, for broadly categorizing the language description into aggressive, defensive, etc," + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 71, + 524, + 112 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 524, + 112 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 524, + 112 + ], + "type": "text", + "content": "or Q&A comprehension-based methods, to train AI agents to answer questions regarding a user's preferences by reading their strategy description." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 303, + 121, + 436, + 134 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 121, + 436, + 134 + ], + "spans": [ + { + "bbox": [ + 303, + 121, + 436, + 134 + ], + "type": "text", + "content": "C Dataset Distributions" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 142, + 525, + 251 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 142, + 525, + 251 + ], + "spans": [ + { + "bbox": [ + 302, + 142, + 525, + 251 + ], + "type": "text", + "content": "The data distribution for goals and constraints selected by participants are shown in Figure 4 and Figure 5 respectively. For Goals 3 (Keep your troops close together) and 5 (Maximize Battles) participants tended to skew towards answers in the 60-100 range. For the other goals, the responses were relatively uniform. On average, participants submitted 5.62 unique constraints per response." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 303, + 260, + 448, + 275 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 260, + 448, + 275 + ], + "spans": [ + { + "bbox": [ + 303, + 260, + 448, + 275 + ], + "type": "text", + "content": "D Implementation Details" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 302, + 282, + 526, + 648 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 282, + 526, + 648 + ], + "spans": [ + { + "bbox": [ + 302, + 282, + 526, + 648 + ], + "type": "text", + "content": "Hyperparameters for both models were computed through a grid search over parameters. The constraints model was trained for 10 epochs with a batch size of 16 using a learning rate of 0.0005. The goals model was trained for 25 epochs with a batch size of 8 using a learning rate of 0.00001. The constraints model was Both models utilized an AdamW optimizer. 
The constraints model employed a cosine learning rate scheduler, and the goals model employed a linear learning rate scheduler. We held out 30 randomly selected examples for our human/ChatGPT evaluation (Section 5). We split the remaining 1023 examples into an 85/15 train/validation split to perform our grid search over hyperparameters. Finally, to report the accuracy of our model, we computed the 10-fold cross-validation accuracy on the best performing hyperparameter setting. The best performing model for predicting constraints was pretrained on the synthetic corpus and trained on the un-augmented human corpus. The best goals model was pretrained on the synthetic-augmented dataset and trained on the human-augmented dataset. All experiments were conducted on a 48GB NVIDIA Quadro RTX GPU. Our code can be found at the following anonymized repository for further reference - Anonymized Code Repository." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 303, + 656, + 521, + 683 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 656, + 521, + 683 + ], + "spans": [ + { + "bbox": [ + 303, + 656, + 521, + 683 + ], + "type": "text", + "content": "E Human Evaluation Study - Additional Details" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 302, + 692, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 692, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 692, + 525, + 772 + ], + "type": "text", + "content": "In this section, we report some additional details regarding our human-evaluation experiment.
Firstly, we report that on average, the difference between scores for a participant's first and last response was -0.2143 for goals and -0.0102 for constraints, indicating that there is a negligible impact of factors" + } + ] + } + ], + "index": 20 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "12814" + } + ] + } + ], + "index": 21 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 115, + 68, + 478, + 243 + ], + "blocks": [ + { + "bbox": [ + 115, + 68, + 478, + 243 + ], + "lines": [ + { + "bbox": [ + 115, + 68, + 478, + 243 + ], + "spans": [ + { + "bbox": [ + 115, + 68, + 478, + 243 + ], + "type": "table", + "html": "
Synthetic DataSynthetic-Augmented Data
Why would I care about battling. I plan to attack players in the game one at a time. I don't think I can handle having troops on more than 2 continents. I need to spread my troops out as far as possible. I can't win if I put any troops on Blue. I need to place troops on at least 5 countries. This time I will use a different strategy. I need to have troops on at least 5 continents. I don't intend to control continents.I don't know why I care about fighting. I plan to attack players in the game one at a time. I don't think I can handle having troops on more than 2 continents. My troops need to be spread out as much as possible. If I put any troops on Blue, I will not win. I need to place troops on at least 5 countries. I will be using a different strategy this time. I need to have troops on at least 5 continents. I don't intend to control continents.
Human DataHuman-Augmented Data
I am going to attack and take over green c. That country is ripe for the taking since I have cut it off from other grey troops. I also want 4 troops to present a strong force in green a in case of a grey attack from yellow d. Once the green continent is secure I will look to move my armies out to the red continent to battle black there. Hopefully, while this is going on grey and black will be fighting over yellow and blue, but in case they don't I'm keeping all of my troops together on GreenI am going to attack and take over green c. Since I cut it off from other grey troops, that country is ripe for taking. I also want 4 troops to present a strong force in green a in case of a grey attack from yellow d. I will move my armies to the red continent to fight black once the green continent is secure. Hopefully, while this is going on grey and black will be fighting over yellow and blue, but in case they don't I'm keeping all of my troops together on Green.
", + "image_path": "f84717e0319211ebc3bc932733f3de984fc0334fd85d4dfd89a1e8d847ff25a5.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 253, + 526, + 290 + ], + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 253, + 526, + 290 + ], + "spans": [ + { + "bbox": [ + 67, + 253, + 526, + 290 + ], + "type": "text", + "content": "Figure 6: Examples of data from Synthetic (top-left), Synthetic-Augmented (top-right), Human (bottom-left) and Human-Augmented (bottom-right). Highlighted sections represent the specific sentences changed by our augmentation procedure." + } + ] + } + ], + "index": 1, + "type": "text" + }, + { + "type": "table", + "bbox": [ + 93, + 300, + 501, + 401 + ], + "blocks": [ + { + "bbox": [ + 93, + 300, + 501, + 401 + ], + "lines": [ + { + "bbox": [ + 93, + 300, + 501, + 401 + ], + "spans": [ + { + "bbox": [ + 93, + 300, + 501, + 401 + ], + "type": "table", + "html": "
GoalsConstraints
G1: Surround enemy territoriesC1: I must have troops on (continent)
G2: Maximize number of countries occupiedC2: I must not have troops on (continent)
G3: Keep our troops close togetherC3: I must be able to access (continent) in one move
G4: Maximize battles throughout the gameC4: I need to protect the borders of (continent)
G5: Fortify borders for the continents you controlC5: I need a total of at least (number) troops to defend a continent
G6: Battle opposing players one at a timeC6: I must have at least (number) countries
C7: I must have troops on at least (number) continents
C8: I must place at least (number) troops to effectively defend a country
C9: I must have troops on at most (number) continents
", + "image_path": "af0b89556f1bf0f40bd8061080d061efd1b079b1b95a9eb4b340cf5ce33b2ce3.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 183, + 414, + 409, + 425 + ], + "lines": [ + { + "bbox": [ + 183, + 414, + 409, + 425 + ], + "spans": [ + { + "bbox": [ + 183, + 414, + 409, + 425 + ], + "type": "text", + "content": "Table 5: Goals and Constraints Selected for our Dataset" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 67, + 448, + 291, + 570 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 448, + 291, + 570 + ], + "spans": [ + { + "bbox": [ + 67, + 448, + 291, + 570 + ], + "type": "text", + "content": "such as cognitive load or a learning curve. Secondly, it is important to note that we did not have the same number of responses per map from humans, as the map condition was randomly assigned to each participant. While this may slightly impact the results of the constraints model, as we aggregated performance across maps, due to the strong significant difference across baselines, it is unlikely to change our result." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 581, + 255, + 609 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 581, + 255, + 609 + ], + "spans": [ + { + "bbox": [ + 67, + 581, + 255, + 609 + ], + "type": "text", + "content": "F Human Evaluation Study - Data Filtering Rubric" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 618, + 290, + 685 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 618, + 290, + 685 + ], + "spans": [ + { + "bbox": [ + 67, + 618, + 290, + 685 + ], + "type": "text", + "content": "Next, we cover the rubric we applied to filter data for the human-subjects study. Each response was independently evaluated by two graders and was included if both graders deemed it acceptable as per the predefined rubric. 
The rubric was as follows:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 77, + 698, + 291, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 698, + 291, + 723 + ], + "spans": [ + { + "bbox": [ + 77, + 698, + 291, + 723 + ], + "type": "text", + "content": "1. If constraints clearly don't match the selections for locations or access" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 94, + 733, + 290, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 733, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 94, + 733, + 290, + 772 + ], + "type": "text", + "content": "- e.g. if someone has selected, \"I must have troops on Blue\" when there are no troops on Blue" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 311, + 448, + 523, + 460 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 448, + 523, + 460 + ], + "spans": [ + { + "bbox": [ + 311, + 448, + 523, + 460 + ], + "type": "text", + "content": "2. If someone has submitted invalid constraints" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 329, + 468, + 525, + 537 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 329, + 468, + 524, + 507 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 329, + 468, + 524, + 507 + ], + "spans": [ + { + "bbox": [ + 329, + 468, + 524, + 507 + ], + "type": "text", + "content": "- e.g. 
If someone selects both \"I need troops on at least 2 continents\" + \"I need troops on at most 1 continent\"" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 329, + 511, + 525, + 537 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 329, + 511, + 525, + 537 + ], + "spans": [ + { + "bbox": [ + 329, + 511, + 525, + 537 + ], + "type": "text", + "content": "- If someone mistakes \"country\" for \"continent\"" + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 311, + 547, + 525, + 602 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 547, + 525, + 602 + ], + "spans": [ + { + "bbox": [ + 311, + 547, + 525, + 602 + ], + "type": "text", + "content": "3. If someone has selected the same value for all goals (or values within a small range, say " + }, + { + "bbox": [ + 311, + 547, + 525, + 602 + ], + "type": "inline_equation", + "content": "\\pm 10" + }, + { + "bbox": [ + 311, + 547, + 525, + 602 + ], + "type": "text", + "content": "), when this clearly does not align with the strategy" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 329, + 608, + 525, + 646 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 329, + 608, + 525, + 646 + ], + "spans": [ + { + "bbox": [ + 329, + 608, + 525, + 646 + ], + "type": "text", + "content": "- e.g.
someone selects " + }, + { + "bbox": [ + 329, + 608, + 525, + 646 + ], + "type": "inline_equation", + "content": "-100" + }, + { + "bbox": [ + 329, + 608, + 525, + 646 + ], + "type": "text", + "content": " for all goals when the strategy involves protecting a continent" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 657, + 420, + 672 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 657, + 420, + 672 + ], + "spans": [ + { + "bbox": [ + 302, + 657, + 420, + 672 + ], + "type": "text", + "content": "G ChatGPT Prompt" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 679, + 525, + 719 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 679, + 525, + 719 + ], + "spans": [ + { + "bbox": [ + 302, + 679, + 525, + 719 + ], + "type": "text", + "content": "We utilized the following prompt for ChatGPT which included a description of the domain and task, as well as an annotated example." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 302, + 729, + 392, + 742 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 729, + 392, + 742 + ], + "spans": [ + { + "bbox": [ + 302, + 729, + 392, + 742 + ], + "type": "text", + "content": "G.1 Full Prompt" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 302, + 746, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 746, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 746, + 525, + 772 + ], + "type": "text", + "content": "Reading the following section carefully will provide you with the information needed to complete" + } + ] + } + ], + "index": 18 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "12815" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 595, + 841 
+ ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 72, + 110, + 83 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 72, + 110, + 83 + ], + "spans": [ + { + "bbox": [ + 67, + 72, + 110, + 83 + ], + "type": "text", + "content": "this task." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 85, + 291, + 166 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 85, + 291, + 166 + ], + "spans": [ + { + "bbox": [ + 67, + 85, + 291, + 166 + ], + "type": "text", + "content": "Risk is a board game in which an army commander tries to take over the world by defeating all enemy troops and controlling all countries. Risk is a simplified version of real conflict, and has rules designed to reflect this. These include the following:" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 72, + 177, + 291, + 354 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 72, + 177, + 289, + 203 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 72, + 177, + 289, + 203 + ], + "spans": [ + { + "bbox": [ + 72, + 177, + 289, + 203 + ], + "type": "text", + "content": "- Players control countries by having troops in them" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 72, + 215, + 291, + 241 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 72, + 215, + 291, + 241 + ], + "spans": [ + { + "bbox": [ + 72, + 215, + 291, + 241 + ], + "type": "text", + "content": "- The more countries and continents a player controls, the more resources they get" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 72, + 253, + 291, + 279 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 72, + 253, + 291, + 279 + ], + "spans": [ + { + "bbox": [ + 72, + 253, + 291, + 279 + ], + "type": "text", + "content": "- Players win countries from other players by battling with their troops" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 72, + 290, + 289, + 317 
+ ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 72, + 290, + 289, + 317 + ], + "spans": [ + { + "bbox": [ + 72, + 290, + 289, + 317 + ], + "type": "text", + "content": "- The more troops a player has when battling, the more likely they are to win" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 72, + 327, + 291, + 354 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 72, + 327, + 291, + 354 + ], + "spans": [ + { + "bbox": [ + 72, + 327, + 291, + 354 + ], + "type": "text", + "content": "- Players can only attack or be attacked by countries that are next to them" + } + ] + } + ], + "index": 6 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 67, + 365, + 290, + 460 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 365, + 290, + 460 + ], + "spans": [ + { + "bbox": [ + 67, + 365, + 290, + 460 + ], + "type": "text", + "content": "In this task, you will be asked to provide a set of constraints corresponding to the human player's strategy for the board game Risk. This includes their troop placements and a text description, which explains why the player decided to place their troops and how they plan to win this game of Risk given their opponents' choices." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 67, + 460, + 291, + 528 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 460, + 291, + 528 + ], + "spans": [ + { + "bbox": [ + 67, + 460, + 291, + 528 + ], + "type": "text", + "content": "Your task will be to think about the player's strategy (selections and description) and predict what their constraints are with respect to the strategy. Constraints are rules that you think need to be followed to successfully execute a strategy." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 528, + 290, + 568 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 528, + 290, + 568 + ], + "spans": [ + { + "bbox": [ + 67, + 528, + 290, + 568 + ], + "type": "text", + "content": "CONSTRAINTS: Note: For predicting goals, this section would be replaced with a description of what goals are" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 67, + 571, + 291, + 758 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 571, + 291, + 758 + ], + "spans": [ + { + "bbox": [ + 67, + 571, + 291, + 758 + ], + "type": "text", + "content": "Constraints are comprised of constraint classes and constraint values. Your job is to assign constraints to the human's strategy. Each constraint is comprised of a constraint class and a constraint value. You will be provided a list of possible constraint classes and values to choose from. You may choose the same class of constraint more than once, but you may not submit duplicate constraints. For example, you may submit \"I must have troops on Green\" and \"I must have troops on Blue\" but you may not submit \"I must have troops on Green\" twice. Choose all constraints relevant to the strategy. You may choose up to 8 constraints per strategy." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 78, + 760, + 259, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 78, + 760, + 259, + 772 + ], + "spans": [ + { + "bbox": [ + 78, + 760, + 259, + 772 + ], + "type": "text", + "content": "The constraints you can choose from are" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 307, + 71, + 525, + 338 + ], + "type": "list", + "angle": 0, + "index": 22, + "blocks": [ + { + "bbox": [ + 308, + 71, + 465, + 84 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 71, + 465, + 84 + ], + "spans": [ + { + "bbox": [ + 308, + 71, + 465, + 84 + ], + "type": "text", + "content": "- I must have troops on [Continent]" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 308, + 95, + 481, + 108 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 95, + 481, + 108 + ], + "spans": [ + { + "bbox": [ + 308, + 95, + 481, + 108 + ], + "type": "text", + "content": "- I must not have troops on [Continent]" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 307, + 118, + 524, + 143 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 118, + 524, + 143 + ], + "spans": [ + { + "bbox": [ + 307, + 118, + 524, + 143 + ], + "type": "text", + "content": "- I must be able to access [Continent] with one move" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 308, + 155, + 505, + 169 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 155, + 505, + 169 + ], + "spans": [ + { + "bbox": [ + 308, + 155, + 505, + 169 + ], + "type": "text", + "content": "- I need to protect the borders of [Continent]" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 307, + 179, + 524, + 204 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 179, + 524, + 204 + ], + "spans": [ + { + "bbox": [ + 307, + 179, + 524, + 204 + ], + "type": "text", + "content": "- I need a total of at least [Number] troops to defend a 
continent" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 307, + 216, + 523, + 228 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 216, + 523, + 228 + ], + "spans": [ + { + "bbox": [ + 307, + 216, + 523, + 228 + ], + "type": "text", + "content": "- I must have at least at least [Number] countries" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 307, + 239, + 525, + 264 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 239, + 525, + 264 + ], + "spans": [ + { + "bbox": [ + 307, + 239, + 525, + 264 + ], + "type": "text", + "content": "- I must have troops on at least [Number] continents" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 307, + 276, + 525, + 302 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 276, + 525, + 302 + ], + "spans": [ + { + "bbox": [ + 307, + 276, + 525, + 302 + ], + "type": "text", + "content": "- I must place at least [Number] troops to effectively defend a country" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 308, + 313, + 525, + 338 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 313, + 525, + 338 + ], + "spans": [ + { + "bbox": [ + 308, + 313, + 525, + 338 + ], + "type": "text", + "content": "- I must have troops on at most [Number] continents" + } + ] + } + ], + "index": 21 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 302, + 350, + 524, + 375 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 350, + 524, + 375 + ], + "spans": [ + { + "bbox": [ + 302, + 350, + 524, + 375 + ], + "type": "text", + "content": "The possible constraint values you can choose from are" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 307, + 387, + 515, + 423 + ], + "type": "list", + "angle": 0, + "index": 26, + "blocks": [ + { + "bbox": [ + 307, + 387, + 515, + 401 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 387, + 515, + 401 + ], + "spans": [ + { + "bbox": [ + 
307, + 387, + 515, + 401 + ], + "type": "text", + "content": "- Continent - Blue, Green, Yellow, Red, Purple" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 307, + 411, + 502, + 423 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 411, + 502, + 423 + ], + "spans": [ + { + "bbox": [ + 307, + 411, + 502, + 423 + ], + "type": "text", + "content": "- Number - 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14" + } + ] + } + ], + "index": 25 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 301, + 434, + 525, + 609 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 434, + 525, + 609 + ], + "spans": [ + { + "bbox": [ + 301, + 434, + 525, + 609 + ], + "type": "text", + "content": "Our modified RISK Map contains 5 continents - Red, Green, Purple, Yellow and Blue. Each continent is made up of countries. Red continent has 3 countries, Green has 5 countries, Purple has 5 countries, Yellow has 4 countries and Blue has 4 countries. Green_A, Yellow_B, Blue_C, etc. are referred to as countries or territories. Green, Yellow, Blue, Red, Purple are referred to as continents. Continents also have different connections between them through which the troops can move. These connections are one way, i.e. troops from the source country can only move to the destination country and not the other way round."
+ } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 302, + 611, + 525, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 611, + 525, + 732 + ], + "spans": [ + { + "bbox": [ + 302, + 611, + 525, + 732 + ], + "type": "text", + "content": "The map has the following connections - Yellow_D is connected to Green_A, Greed_D is connected to Red_A, Red_A is connected to Green_D, Red_B is connected to Purple_E, Red_C is connected to Yellow_B, Red_C is connected to Blue_B, Blue_A is connected to Yellow_C, Yellow_C is connected to Blue_D, Blue_C is connected to Purple_A, Purple_A is connected to Green_E and Green_E is connected to Purple_A" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 302, + 733, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 733, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 733, + 525, + 772 + ], + "type": "text", + "content": "We will now give you a tutorial on how to ascertain the goals from a human player's strategy and placements on the RISK board." + } + ] + } + ], + "index": 29 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "12816" + } + ] + } + ], + "index": 30 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "bbox": [ + 66, + 71, + 290, + 192 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 71, + 290, + 192 + ], + "spans": [ + { + "bbox": [ + 66, + 71, + 290, + 192 + ], + "type": "text", + "content": "The two opposing players are denoted by the \"grey\" and \"black\" player. In this scenario, the grey player has placed its troops on the following territories - 5 troops on Yellow_C, 4 troops on Yellow_D, 1 troop on Red_A, 2 troops on Red_B, 2 troops on Red_C. 
The black player has placed its troops on the following territories - 4 troops on Blue_A, 2 troops on Blue_C, 2 troops on Green_E, 5 troops on Purple_A and 1 troop on Purple_B." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 193, + 289, + 247 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 193, + 289, + 247 + ], + "spans": [ + { + "bbox": [ + 67, + 193, + 289, + 247 + ], + "type": "text", + "content": "Now that you have seen where the opposition troops are, you will now be shown how the human player has decided to deploy their troops and the strategy they used." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 248, + 289, + 395 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 248, + 289, + 395 + ], + "spans": [ + { + "bbox": [ + 67, + 248, + 289, + 395 + ], + "type": "text", + "content": "The human player (white) has placed 14 troops to battle the opponents. They have placed the troops on the following territories - 7 troops on Purple_E, 5 troops on Purple_C and 2 troops on Purple_D. You will now be guessing the constraints the human player (white) focused on while coming up with their strategy. The following text contains the human player's description of the strategy they used to place their troops. It is critical that you read this description, as it contains information about the constraints considered by the human player." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 396, + 290, + 530 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 396, + 290, + 530 + ], + "spans": [ + { + "bbox": [ + 67, + 396, + 290, + 530 + ], + "type": "text", + "content": "\"I put all my troops in Purple, because I felt as though I needed all my available troops to defend Purple. I wanted to protect Purple. With 7 troops on Purple_E, I feel like I cannot be beat on purple. I wasn't too keen on getting involved in battles, or taking an overly aggressive strategy. 
I would like to focus on beating the black player first, I don't think I can battle two people at the same time. I'm going to avoid Red for now since it seems to be the hardest continent to control.\"" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 531, + 291, + 598 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 531, + 291, + 598 + ], + "spans": [ + { + "bbox": [ + 67, + 531, + 291, + 598 + ], + "type": "text", + "content": "We will now show you how to determine constraints from a strategy and via an example. Please carefully review the example and use the given information about both selections and text to fill out constraints for this strategy." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 599, + 289, + 625 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 599, + 289, + 625 + ], + "spans": [ + { + "bbox": [ + 67, + 599, + 289, + 625 + ], + "type": "text", + "content": "An appropriate set of constraints for the strategy shown above would be" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 72, + 635, + 209, + 649 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 72, + 635, + 209, + 649 + ], + "spans": [ + { + "bbox": [ + 72, + 635, + 209, + 649 + ], + "type": "text", + "content": "- I must have troops on Purple" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 83, + 655, + 289, + 682 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 655, + 289, + 682 + ], + "spans": [ + { + "bbox": [ + 83, + 655, + 289, + 682 + ], + "type": "text", + "content": "- Reason: The player mentioned that \"they put all their troops on Purple\"" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 72, + 691, + 214, + 703 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 72, + 691, + 214, + 703 + ], + "spans": [ + { + "bbox": [ + 72, + 691, + 214, + 703 + ], + "type": "text", + "content": "- I must not have troops on Red" + } + ] + } + ], + 
"index": 8 + }, + { + "bbox": [ + 83, + 710, + 289, + 736 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 710, + 289, + 736 + ], + "spans": [ + { + "bbox": [ + 83, + 710, + 289, + 736 + ], + "type": "text", + "content": "- Reason: The player mentioned that \"they would like to avoid Red for now\"" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 72, + 746, + 290, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 72, + 746, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 72, + 746, + 290, + 772 + ], + "type": "text", + "content": "- I must place at least 7 troops to effectively defend a country" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 318, + 71, + 524, + 111 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 318, + 71, + 524, + 111 + ], + "spans": [ + { + "bbox": [ + 318, + 71, + 524, + 111 + ], + "type": "text", + "content": "- Reason: The player mentioned that \"with 7 troops on Purple_E, I cannot be beaten on Purple\"" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 303, + 122, + 480, + 149 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 122, + 480, + 149 + ], + "spans": [ + { + "bbox": [ + 303, + 122, + 480, + 149 + ], + "type": "text", + "content": "H Risk Reinforcement Learning Simulator" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 158, + 526, + 414 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 158, + 526, + 414 + ], + "spans": [ + { + "bbox": [ + 302, + 158, + 526, + 414 + ], + "type": "text", + "content": "We have shown that our proposed computational interface can remove the need for human-interpreters for the task of parsing intent from unstructured language. However, to test how well commander's intent interpreted from language can be applied towards optimizing an agent's behavior, we require a reinforcement learning domain to train our agent. 
As such, to enable seldonian optimization, via unstructured language descriptions, we developed a novel open-ai gym environment for simulating Risk gameplay. This environment closes the loop on the methods presented in this paper by providing all the necessary components for humans to specify their intent to an AI agent and evaluate whether their specifications have been satisfied by the learnt agent. Our environment also provides an additional means of collecting data and conducting studies for human-specification within multi-player team scenarios." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 416, + 525, + 632 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 416, + 525, + 632 + ], + "spans": [ + { + "bbox": [ + 302, + 416, + 525, + 632 + ], + "type": "text", + "content": "For this task, we adapted an existing open-air gym environment for Risk (Andeol, 2018). We modified the codebase to allow for RL agents to be trained to play all phases of Risk, according to the setup utilized in our approach. We also developed a pygame-UI for our simulator (see Figure 7). A detailed description of the functionality of the domain and the state space is provided in the appendix. In future work, we aim to leverage our domain to develop approaches which allow humans to constrain an agent's optimization methods through human-like language specifications of intent, which has not been accomplished in any prior work. 
We also provide a link to an anonymized github repository with the risk environment for further reference - Anonymized Gym-Risk Environment" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 642, + 498, + 670 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 642, + 498, + 670 + ], + "spans": [ + { + "bbox": [ + 302, + 642, + 498, + 670 + ], + "type": "text", + "content": "I Risk Domain - Additional Domain Information" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 678, + 526, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 678, + 526, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 678, + 526, + 772 + ], + "type": "text", + "content": "This section provides additional information about our setup for Risk Domain. In our version of Risk, the ego player (Alpha), plays against two opponents (Charlie and Bravo) whose actions are controlled by a pre-determined heuristic. The gameplay within our Risk simulator is comprised of four phases" + } + ] + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "12817" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 16 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 86, + 68, + 273, + 211 + ], + "blocks": [ + { + "bbox": [ + 86, + 68, + 273, + 211 + ], + "lines": [ + { + "bbox": [ + 86, + 68, + 273, + 211 + ], + "spans": [ + { + "bbox": [ + 86, + 68, + 273, + 211 + ], + "type": "image", + "image_path": "ab78e42bf6b832df7dbc6c6d574f87a5929dbb91b923c2801758750045b620c6.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 220, + 290, + 246 + ], + "lines": [ + { + "bbox": [ + 67, + 220, + 290, + 246 + ], + 
"spans": [ + { + "bbox": [ + 67, + 220, + 290, + 246 + ], + "type": "text", + "content": "Figure 7: This figure shows our Risk simulator with the playable (teal) and two other (orange and pink) agents." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 76, + 264, + 290, + 397 + ], + "type": "list", + "angle": 0, + "index": 6, + "blocks": [ + { + "bbox": [ + 77, + 264, + 289, + 291 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 264, + 289, + 291 + ], + "spans": [ + { + "bbox": [ + 77, + 264, + 289, + 291 + ], + "type": "text", + "content": "1. Drafting - Players draft their initial troops on empty territories." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 77, + 301, + 289, + 327 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 301, + 289, + 327 + ], + "spans": [ + { + "bbox": [ + 77, + 301, + 289, + 327 + ], + "type": "text", + "content": "2. Reinforce - Players assign reinforcements to their existing territories." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 77, + 336, + 290, + 363 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 336, + 290, + 363 + ], + "spans": [ + { + "bbox": [ + 77, + 336, + 290, + 363 + ], + "type": "text", + "content": "3. Attack - Players can choose to attack a neighboring territory with their troops." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 76, + 372, + 290, + 397 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 372, + 290, + 397 + ], + "spans": [ + { + "bbox": [ + 76, + 372, + 290, + 397 + ], + "type": "text", + "content": "4. Freemove - Players can move their troops between their territories." 
+ } + ] + } + ], + "index": 5 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 67, + 407, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 407, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 407, + 291, + 772 + ], + "type": "text", + "content": "The game begins with a drafting phase. During this phase, the agent decides where to place their initial 14 troops amongst the available territories. The two opposing players draft their troops before the agent is allowed to draft any troops. The opposing players drafts are either hard-coded to match one of the maps utilized in our study, or they are drafted based on a drafting heuristic. The drafting phase occurs only once in the game. Following drafting, the agent executes the next three phases in sequence. First, in the \"Reinforce\" phase, the agent receives a specific number of reinforcements based on the number of territories and continents they control. The agent needs to assign the given reinforcements to the territories they control. Each country reinforced is an individual action. Next, the agent moves on to the \"Attack\" phase. In this phase, the agent can attack adjacent territories with their troops. Within each attack action, the agent specifies which opposing territory they would like to attack, along with the territory they would like to attack from. The agent must also specify the number of troops they would like to move into the opposing territory should the win the conflict. Each combat sequence between two territories is executed in a similar manner to the physical board game," + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 311, + 71, + 525, + 301 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 312, + 71, + 524, + 112 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 71, + 524, + 112 + ], + "spans": [ + { + "bbox": [ + 312, + 71, + 524, + 112 + ], + "type": "text", + "content": "1. 
A maximum of three troops are chosen from the attacking territory, and a maximum of two troops are chosen from the defending territory" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 312, + 121, + 525, + 161 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 121, + 525, + 161 + ], + "spans": [ + { + "bbox": [ + 312, + 121, + 525, + 161 + ], + "type": "text", + "content": "2. For both the attacker and defender, a number of die are rolled based on the number of troops involved in each attack." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 312, + 173, + 524, + 213 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 173, + 524, + 213 + ], + "spans": [ + { + "bbox": [ + 312, + 173, + 524, + 213 + ], + "type": "text", + "content": "3. The rolls are sorted in descending order, and each roll is compared between the attacking and defending country." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 311, + 224, + 524, + 264 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 224, + 524, + 264 + ], + "spans": [ + { + "bbox": [ + 311, + 224, + 524, + 264 + ], + "type": "text", + "content": "4. For each comparison, the country with the lower roll loses one troop. The defending territory wins all ties." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 311, + 275, + 525, + 301 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 275, + 525, + 301 + ], + "spans": [ + { + "bbox": [ + 311, + 275, + 525, + 301 + ], + "type": "text", + "content": "5. The above steps are repeated until either the attacking or defending player has been defeated." 
+ } + ] + } + ], + "index": 12 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 302, + 312, + 526, + 472 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 312, + 526, + 472 + ], + "spans": [ + { + "bbox": [ + 302, + 312, + 526, + 472 + ], + "type": "text", + "content": "Following combat, the agent can move all but one troop into the conquered territory. Once the agent has finished attacking, they move on to the final phase in their turn, \"Freemove.\" In the \"Freemove\" phase, the player can move troops from one territory they control to another, as long as the territories are connected. Once the agent executes all their actions, the actions of the two agents are simulated and the player is reset to the \"Reinforce\" phase to start their next turn. The game is complete when either the agent is out of troops or controls all territories." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 475, + 525, + 691 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 475, + 525, + 691 + ], + "spans": [ + { + "bbox": [ + 302, + 475, + 525, + 691 + ], + "type": "text", + "content": "An action is specified by a four-item tuple, i.e. " + }, + { + "bbox": [ + 302, + 475, + 525, + 691 + ], + "type": "inline_equation", + "content": "< p, s, t, tr >" + }, + { + "bbox": [ + 302, + 475, + 525, + 691 + ], + "type": "text", + "content": ". The first item, " + }, + { + "bbox": [ + 302, + 475, + 525, + 691 + ], + "type": "inline_equation", + "content": "p" + }, + { + "bbox": [ + 302, + 475, + 525, + 691 + ], + "type": "text", + "content": ", specifies which type of action is being conducted, among the four possible phases in the game. Item two, " + }, + { + "bbox": [ + 302, + 475, + 525, + 691 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 302, + 475, + 525, + 691 + ], + "type": "text", + "content": ", denotes the source country for the action. 
For reinforce and drafting actions this is the country that the agent wants to add troops to, whereas for the attack and freemove actions, " + }, + { + "bbox": [ + 302, + 475, + 525, + 691 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 302, + 475, + 525, + 691 + ], + "type": "text", + "content": " denotes the country you will be attacking or moving from. The, final two items, " + }, + { + "bbox": [ + 302, + 475, + 525, + 691 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 302, + 475, + 525, + 691 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 302, + 475, + 525, + 691 + ], + "type": "inline_equation", + "content": "tr" + }, + { + "bbox": [ + 302, + 475, + 525, + 691 + ], + "type": "text", + "content": ", are specifically for attack and move actions. " + }, + { + "bbox": [ + 302, + 475, + 525, + 691 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 302, + 475, + 525, + 691 + ], + "type": "text", + "content": " specifies the country that you would like to attack or move to. For the attack action, " + }, + { + "bbox": [ + 302, + 475, + 525, + 691 + ], + "type": "inline_equation", + "content": "tr" + }, + { + "bbox": [ + 302, + 475, + 525, + 691 + ], + "type": "text", + "content": " specifies the number of troops you would like to move from the attacking country if you win the combat. 
When the agent specifies a move action, " + }, + { + "bbox": [ + 302, + 475, + 525, + 691 + ], + "type": "inline_equation", + "content": "tr" + }, + { + "bbox": [ + 302, + 475, + 525, + 691 + ], + "type": "text", + "content": " denotes the number of troops to be moved from " + }, + { + "bbox": [ + 302, + 475, + 525, + 691 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 302, + 475, + 525, + 691 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 302, + 475, + 525, + 691 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 302, + 475, + 525, + 691 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 701, + 383, + 714 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 701, + 383, + 714 + ], + "spans": [ + { + "bbox": [ + 302, + 701, + 383, + 714 + ], + "type": "text", + "content": "I.1 State Space" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 302, + 719, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 719, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 719, + 525, + 772 + ], + "type": "text", + "content": "The state of the game is stored as a dictionary. The state dictionary records information such as country ownership, number of troops on each country, continent ownership, etc. 
We also record information" + } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "12818" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 17 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 290, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 290, + 138 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 290, + 138 + ], + "type": "text", + "content": "about players such as number of reinforcements available to a player, number of players alive, current turn number, etc. We have provided six functions to encode the state space which can be passed as an input to a Reinforcement Learning model." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 141, + 290, + 275 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 141, + 290, + 275 + ], + "spans": [ + { + "bbox": [ + 67, + 141, + 290, + 275 + ], + "type": "text", + "content": "The first function encodes the state using 54 features. The initial 42 features contain country related information for each opponent (21 features each) and the next 5 features contain continent ownership data. The remaining features are used for other information related to the game like number of areas controlled by the player, troops left to be drafted by the player, troops left for reinforcement, number of players alive, current turn number and if the current turn belongs to the player." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 278, + 291, + 478 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 278, + 291, + 478 + ], + "spans": [ + { + "bbox": [ + 69, + 278, + 291, + 478 + ], + "type": "text", + "content": "The second function encodes the information in the form of one hots. It has a total of 132 features, the first 84 features contain information regarding country ownership as one hots, 21 each for the player, opponents and countries with no owner. The next 21 features denote the number of troops on each country. The next 20 features contain information regarding continent ownership, 5 each for the player, opponents and no owner. The remaining features contain other relevant information as described for the first function. For both of the first two functions described, we also provide normalized versions of these functions where all the real valued spaces are divided by a normalising constant." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 481, + 291, + 643 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 481, + 291, + 643 + ], + "spans": [ + { + "bbox": [ + 67, + 481, + 291, + 643 + ], + "type": "text", + "content": "The fifth encoding function contains all the 132 features of the third function and additional information for the current phase. It contains 134 features in total. This function returns normalised values. The last encoding function contains 298 features. The initial features are similar to the ones present in the third encoding function. Apart of that it explicitly contains information about where an agent or player can attack and execute a freemove. This information can help the reinforcement learning model more easily. This function also returns normalised values." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 658, + 181, + 671 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 658, + 181, + 671 + ], + "spans": [ + { + "bbox": [ + 67, + 658, + 181, + 671 + ], + "type": "text", + "content": "I.2 Reward Functions" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 678, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 678, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 678, + 291, + 772 + ], + "type": "text", + "content": "We have setup four different types of reward functions ranging from sparse to dense. The recommended reward function is the rules-based reward which provides rewards for successful actions, finishing a phase, successful action in a phase and winning the game. The rewards for winning the game are weighted by a factor of 10 compared to" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 71, + 493, + 84 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 493, + 84 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 493, + 84 + ], + "type": "text", + "content": "others which are weighted by a factor of 1." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 84, + 526, + 220 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 84, + 526, + 220 + ], + "spans": [ + { + "bbox": [ + 302, + 84, + 526, + 220 + ], + "type": "text", + "content": "The most simple reward function available is a sparse reward function which provides negative rewards for losing the game and positive rewards for winning the game. In order to increase the number of rewards given throughout the game, we created the turn count reward function which rewards the agent for every turn it plays. Survival reward function was built on top of this to provide an additional negative reward for losing apart from the reward for surviving." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 229, + 408, + 243 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 229, + 408, + 243 + ], + "spans": [ + { + "bbox": [ + 302, + 229, + 408, + 243 + ], + "type": "text", + "content": "I.3 Human Drafting" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 247, + 526, + 395 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 247, + 526, + 395 + ], + "spans": [ + { + "bbox": [ + 302, + 247, + 526, + 395 + ], + "type": "text", + "content": "Finally, we have also setup a functionality in our simulator that allows player or the opponents to skip the drafting phase and follow a fixed draft based on a predefined map. In such cases, we have predefined fifteen types of map initialisation containing troops for both opponents, which correspond to the exact maps utilized in our data collection procedure. Our setup chooses one of the map initializations and corresponding selections made by a participant in the user study to simulate the game." 
+ } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "12819" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 18 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2023/A Confederacy of Models_ a Comprehensive Evaluation of LLMs on Creative Writing/ddc1ecbc-a8cc-40b2-84dd-398deba4a5c3_content_list.json b/2023/A Confederacy of Models_ a Comprehensive Evaluation of LLMs on Creative Writing/ddc1ecbc-a8cc-40b2-84dd-398deba4a5c3_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..98c3c91e9e3eac5b37f73ad487b34d1278d948cb --- /dev/null +++ b/2023/A Confederacy of Models_ a Comprehensive Evaluation of LLMs on Creative Writing/ddc1ecbc-a8cc-40b2-84dd-398deba4a5c3_content_list.json @@ -0,0 +1,4099 @@ +[ + { + "type": "text", + "text": "A Confederacy of Models: a Comprehensive Evaluation of LLMs on Creative Writing", + "text_level": 1, + "bbox": [ + 146, + 87, + 850, + 129 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Carlos Gómez-Rodríguez", + "bbox": [ + 220, + 143, + 445, + 159 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Universidade da Coruña, CITIC", + "bbox": [ + 203, + 160, + 463, + 174 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Department of CS and IT", + "bbox": [ + 228, + 175, + 438, + 192 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "15071 A Coruña, Spain", + "bbox": [ + 238, + 193, + 431, + 209 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "carlos.gomez@udc.es", + "bbox": [ + 236, + 210, + 431, + 225 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Paul Williams", + "bbox": [ + 603, + 143, + 727, + 156 + ], + 
"page_idx": 0 + }, + { + "type": "text", + "text": "School of Business & Creative Industries", + "bbox": [ + 497, + 160, + 833, + 174 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "University of the Sunshine Coast", + "bbox": [ + 532, + 175, + 800, + 192 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Sunshine Coast, Australia", + "bbox": [ + 559, + 193, + 771, + 208 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "pwillia3@usc.edu.au", + "bbox": [ + 568, + 210, + 763, + 225 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 260, + 252, + 339, + 267 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "We evaluate a range of recent LLMs on English creative writing, a challenging and complex task that requires imagination, coherence, and style. We use a difficult, open-ended scenario chosen to avoid training data reuse: an epic narration of a single combat between Ignatius J. Reilly, the protagonist of the Pulitzer Prize-winning novel A Confederacy of Dunces (1980), and a pterodactyl, a prehistoric flying reptile. We ask several LLMs and humans to write such a story and conduct a human evaluation involving various criteria such as fluency, coherence, originality, humor, and style. Our results show that some state-of-the-art commercial LLMs match or slightly outperform our writers in most dimensions; whereas opensource LLMs lag behind. Humans retain an edge in creativity, while humor shows a binary divide between LLMs that can handle it comparably to humans and those that fail at it. 
We discuss the implications and limitations of our study and suggest directions for future research.", + "bbox": [ + 141, + 279, + 460, + 592 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 114, + 604, + 258, + 618 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "In recent years, large language models (LLMs) have achieved remarkable progress in a wide range of language processing and generation tasks, such as question answering, machine translation, or text summarization, among many others (Zhao et al., 2023). This has motivated research on evaluating and comparing the performance of LLMs in various tasks, both between each other and with respect to human performance; including both task-specific evaluations (see e.g. (Jiao et al., 2023; Gilson et al., 2023)) and overarching benchmark suites that seek to provide comprehensive evaluation throughout many dimensions (Hendrycks et al., 2021; Liang et al., 2022; Srivastava et al., 2022).", + "bbox": [ + 112, + 629, + 489, + 853 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Creative writing is also one application where LLMs have been observed to produce good results. According to Franceschelli and Musolesi (2023), their generated outputs in poetry or storytelling", + "bbox": [ + 112, + 854, + 489, + 919 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/ddf017c251bdc99e5d836c1a2bb513ffe06b5ba6dd8f689a63f9c40e4d70cb86.jpg", + "image_caption": [ + "Figure 1: Box plot comparing overall ratings for stories by humans and 12 LLMs, arranged left to right by mean overall rating. Boxes show median, quartiles Q1-Q3, and whiskers at 1.5 IQR, with values outside that range plotted as outliers. Filled red circles represent means." + ], + "image_footnote": [], + "bbox": [ + 514, + 252, + 878, + 508 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "are \"often of astonishing quality\", and Clark et al. 
(2021) showed that humans cannot reliably distinguish human- from LLM-authored stories. However, and despite the amount of papers experimenting with LLMs for this purpose, an evaluation comparing the abilities of current LLMs as standalone systems for creative writing seems to be lacking.", + "bbox": [ + 505, + 626, + 884, + 739 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Here, we provide such an evaluation, comparing the storytelling capability of 12 recent, instructional-aligned language models between each other and with human writers. We do so using a rubric based on established creative writing evaluation proposals (Davidow and Williams, 2016; Carey et al., 2022), but specifically adapted to the task. Our comparison is performed on a purely zero-shot setting, with a natural human prompt (based on a combat between Ignatius J. Reilly, protagonist of A Confederacy of Dunces, and a pterodactyl) that", + "bbox": [ + 505, + 741, + 884, + 917 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "14504", + "bbox": [ + 475, + 927, + 524, + 940 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 14504-14528", + "bbox": [ + 208, + 945, + 786, + 958 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "December 6-10, 2023 ©2023 Association for Computational Linguistics", + "bbox": [ + 277, + 958, + 719, + 972 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "has been specifically chosen to be challenging and meaningful while preventing as much as possible the option for LLMs to resort to regurgitating or adapting material from their training set.", + "bbox": [ + 112, + 84, + 487, + 149 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2 Related work", + "text_level": 1, + "bbox": [ + 112, + 164, + 265, + 179 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "LLMs in creative writing LLMs have been used in creative writing since 
their first generation, with models like GPT-2 (Radford et al., 2019) or BART (Lewis et al., 2020). However, these models suffered from a lack of long-range coherence leading to contradictions or inconsistencies when generating stories (Nye et al., 2021). Thus, they were not viable as standalone story generators. Instead, they were used either with specialized fine-tuning for the task (See et al., 2019); or as components of systems that incorporated external knowledge (Guan et al., 2020, 2021), storyline planning (Tan et al., 2021), or both (Xu et al., 2020); or for cocreation with a human in the loop (Swanson et al., 2021), a line of research that has also continued with newer models (Yuan et al., 2022; Chung et al., 2022; Mirowski et al., 2023).", + "bbox": [ + 112, + 191, + 489, + 464 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Here our goal is not to produce a specialized system, but to evaluate the performance of LLMs by themselves as creative writers. Thus, we focus on the purely zero-shot setting, where a generalistic LLM is asked to write a story with no extra fine-tuning, in-context learning (Dong et al., 2023), prompt engineering or additional components. This has only become viable with the extra coherence and consistency in long texts provided by newer LLMs, especially those that are aligned to follow instructions with instruction tuning (Wei et al., 2022; Sanh et al., 2022) or reinforcement learning with human feedback (Ouyang et al., 2022).", + "bbox": [ + 112, + 466, + 489, + 675 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To our knowledge, there was no previous work in this line. In fact, evaluation in creative writing is a conspicuous gap in LLM evaluation benchmarks: the huge BIG-bench suite (Srivastava et al., 2022) currently has over 200 tasks, but does not include any creative writing, and HELM (Liang et al., 2022) cites it as an \"aspirational scenario\" for future work. 
This likely owes to benchmarks focusing on easily-automatable metrics, whereas the gold standard for creative writing is human evaluation (Belz and Reiter, 2006), which is much costlier.", + "bbox": [ + 112, + 677, + 489, + 852 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The closest previous work to our proposal is the recent preprint by Xie et al. (2023), where GPT-3 is compared to previous storytelling systems via human evaluation. However, there are several important", + "bbox": [ + 112, + 854, + 489, + 919 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "differences with respect to our work: (1) they use prompt-based learning, providing examples to adapt the model to the task, rather than a purely zero-shot conversational prompt, (2) they evaluate a single LLM while our goal is to compare LLMs, and (3) they use pre-existing story datasets, which increases the risk of models benefitting from similar stories present in their training set, something that we have tried to avoid as described below.", + "bbox": [ + 507, + 84, + 884, + 227 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In another recent preprint, Garrido-Merchan et al. (2023) generate Lovecraftian horror literature. However, they also focus on a single LLM (GPT-4), using careful prompt engineering to optimize its performance rather than a pure zero-shot setting, and evaluation is only on whether humans can distinguish AI-generated from real stories (concluding that, in those circumstances, they cannot). Sawicki et al. (2023) apply a similar evaluation (but automated) to Whitmanian poems generated by three versions of GPT, also with a negative result.", + "bbox": [ + 507, + 230, + 884, + 406 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Finally, concurrently with our study, a preprint by Chakrabarty et al. 
(2023), released a few months after our submission, evaluates three LLMs for creative writing in a way more similar to ours: they apply human evaluation to compare stories by humans and LLMs in a zero-shot setting. However, there are important differences in methodology and scope between the two studies. A comprehensive comparison will be made in Section 5, following the exposition of our methods and results.", + "bbox": [ + 507, + 407, + 884, + 568 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Creative writing evaluation Creative Writing is a challenging and complex performative language act that requires a number of skills, such as expertise in craft, cultural and literary competency, linguistic fluency, coherence, complex connotative and metaphorical levels of understanding, innovation, originality and imagination, to name a few.", + "bbox": [ + 507, + 580, + 882, + 692 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The craft of writing involves innovation in style and voice, and requires a fundamental understanding and use of structural elements (grammar, spelling, punctuation), craft elements (plot, character, setting, point of view) and imaginative capacity: such skills are defined by Bloom as 'putting elements together to form a coherent or functional whole; reorganizing elements into a new pattern or structure through generating, planning, or producing' (Anderson and Krathwohl, 2001, p.21). 
Evaluation of creative writing therefore must take into account all these factors, and assessment in university Creative Writing courses is usually based on a rubric that attempts to measure the basic elements of narrative", + "bbox": [ + 507, + 694, + 884, + 917 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "14505", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "craft, as well as the specific requirements on the assignment (Kroll, 1997; Norris, 2013; Davidow and Williams, 2016; Wise and van Luyn, 2020; Carey et al., 2022).", + "bbox": [ + 112, + 84, + 489, + 148 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3 Materials and Methods", + "text_level": 1, + "bbox": [ + 112, + 162, + 349, + 177 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1 Task", + "text_level": 1, + "bbox": [ + 112, + 187, + 200, + 202 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The chosen task to compare the LLMs under consideration is defined by the following prompt:", + "bbox": [ + 112, + 210, + 489, + 242 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Write an epic narration of a single combat between Ignatius J. 
Reilly and a pterodactyl, in the style of John Kennedy Toole.", + "bbox": [ + 149, + 256, + 453, + 319 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The prompt is provided to the models from a fresh state, without previous context.", + "bbox": [ + 112, + 335, + 485, + 366 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "We believe this task is particularly adequate to challenge the capabilities of models for creative writing, for the following reasons:", + "bbox": [ + 112, + 368, + 485, + 416 + ], + "page_idx": 2 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- It is a non-standard, \"wacky\" scenario that has been invented for the occasion, so it is very unlikely that the systems' training sets contain coincident or similar tasks, or pieces of stories that can be reused for the task. No information about this task was posted to the Internet or disseminated in any other way before the LLMs were prompted.", + "- It features a specific literary character, Ignatius J. Reilly, so we can evaluate the models on how they capture the personality of the character. At the same time, this character appeared in only one book, and does not seem to have been the target of fan fiction. This makes the task more challenging due to having to capture the personality of the protagonist from scarce material, while making it unlikely that the model can just reuse material from existing stories.", + "- In turn, A Confederacy of Dunces is the only work of its author John Kennedy Toole, so the author's style also needs to be captured from scarce material.", + "- This novel is widely considered to be a classic of comic fiction, and won the 1981 Pulitzer Prize in the Fiction category. Thus, writing a story about its protagonist in the author's style sets an adequately high bar." 
+ ], + "bbox": [ + 136, + 430, + 489, + 917 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "- The genre requires humor, which is considered to be an especially subtle feature of human language and challenging for machines, including LLMs, to exhibit (Jentzsch and Kersting, 2023).", + "bbox": [ + 531, + 84, + 884, + 164 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "- While the task is challenging due to putting together two unlikely antagonists, the prompt's level of detail is open-ended enough to give ample space for creativity, as no specifications are made about setting, weapons, outcome or other aspects of the story.", + "bbox": [ + 531, + 175, + 884, + 272 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.2 Models", + "text_level": 1, + "bbox": [ + 509, + 286, + 613, + 300 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "We gave the task to a confederacy of large language models, composed of all such models we could find that (1) were available to the authors by April 20 2023, which was the cutoff date to build our corpus of stories, and (2) were adjusted to conversational settings and instruction-following by using techniques like instruction tuning (Wei et al., 2022; Sanh et al., 2022) or reinforcement learning with human feedback (Ouyang et al., 2022). This is in contrast to \"vanilla\" language models configured to just predict the next word, like plain GPT-3 (Brown et al., 2020) or Llama (Touvron et al., 2023), which generally cannot handle natural prompts like the one we use. We only included distinct models, not front-ends to the same model (but we did include derived models with substantial additions, like Bing Chat which is claimed to use GPT-4 but adds search capabilities, or various models that were fine-tuned from Llama weights). For models that came in a variety of parameter sizes, we used the largest one, or the largest we could execute with local or remote resources. 
For models with several available versions, we used the latest available, except in the case of ChatGPT where we included both the GPT-3.5 and GPT-4 versions, due to the wider availability of 3.5 (the latest version offered for free at cutoff time) and the lack of information on whether GPT-4 is an incremental improvement or a different model with its own tradeoffs.", + "bbox": [ + 505, + 306, + 882, + 772 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "This selection yielded the following 12 language models. We list them in alphabetical order as chronological ordering would be challenging, due to closed releases, opaque updates from some of the commercial products, and many of the models being released almost simultaneously:", + "bbox": [ + 507, + 774, + 882, + 869 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Alpaca (Taori et al., 2023), a Stanford model fine-tuned from Llama (Touvron et al., 2023) on instruction data generated with the self-instruct", + "bbox": [ + 507, + 871, + 880, + 917 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "14506", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "methods of (Wang et al., 2022). We use the 13B-parameter version, the largest available at cutoff.", + "bbox": [ + 112, + 84, + 489, + 116 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Bard, Google's experimental conversational LLM offering, claimed to be based on a lightweight version of LaMDA (Thoppilan et al., 2022). It can use content from the web to answer questions. Model details have not been made public.", + "bbox": [ + 112, + 118, + 489, + 198 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Bing Chat, an LLM offered by Microsoft's Bing search engine. Claimed to use GPT-4 $^1$ , further technical details have not been made public. The model performs web searches and uses the results to augment its context window with relevant information. 
It can also provide links to sources for its claims (although this is not relevant for our creative writing task, where no such links were provided or needed). We used its Creative mode, the obvious fit for our task. A problem worth mentioning is that we found the model to be subject to heavy censorship, which affected our experiment: in most prompting attempts, the story would be deleted by the filtering system before being finished. When this happened, we just reset and re-prompted the model, repeating the process until a full story was obtained. Over 100 tries were needed to obtain 5 non-censored stories. We are aware that this may introduce bias (as non-censored stories may have a different quality distribution than what the model could potentially generate without the filter) but this is unavoidable from our end, since we cannot bypass moderation. In any case, the sample does reflect what a user can obtain from the end product, as the censored stories are out of reach.", + "bbox": [ + 115, + 200, + 489, + 600 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "ChatGPT with GPT-3.5, an OpenAI successor to the 175B-parameter GPT-3 model (Brown et al., 2020) which was tuned using reinforcement learning with human feedback, namely a variant of the InstructGPT method by Ouyang et al. (2022). We used the March 23 version provided by OpenAI's free ChatGPT service.", + "bbox": [ + 112, + 602, + 489, + 715 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "ChatGPT with GPT-4, the most advanced language model released by OpenAI at cutoff time. A description of the model is available in (OpenAI, 2023), although essential technical details like the number of parameters have not been published. We used the March 23 version provided by OpenAI's ChatGPT Plus service.", + "bbox": [ + 112, + 717, + 489, + 828 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Claude is a language model trained by Anthropic. 
While details about its implementation are not public, it is known to be a successor of the model", + "bbox": [ + 112, + 831, + 489, + 879 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "described in (Bai et al., 2022), a 52B-parameter model aligned to be helpful with Constitutional AI, a list of guiding principles provided to the model, combined with a mix of supervised learning and reinforcement learning with AI feedback. We used version 1.2 of the model.", + "bbox": [ + 507, + 84, + 884, + 179 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Dolly 2.0 (dolly-v2-12b), a 12B-parameter language model trained by Databricks, derived from EleutherAI's Pythia-12B model (Biderman et al., 2023) after fine-tuning on a 15K instruction corpus. At cutoff date, it was the only available conversational LLM where all of its components could be considered fully open source $^{2}$ , as the code, weights and instruction datasets all have open-source licenses compatible with any use, including commercial use, and no data from proprietary systems like ChatGPT has been used for fine-tuning.", + "bbox": [ + 507, + 181, + 884, + 357 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "GPT4All-J (Anand et al., 2023b), an improvement over its predecessor GPT4All (Anand et al., 2023a). The base model is the 6B-parameter GPT-J (Wang and Komatsuzaki, 2021), which has been fine-tuned on a dataset expanded from a mix of existing sources.", + "bbox": [ + 507, + 359, + 882, + 455 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Koala (Geng et al., 2023), a model fine-tuned from Llama (Touvron et al., 2023) by researchers from UC Berkeley, on a variety of dialogue data obtained from the web. 
We use the 13B-parameter version.", + "bbox": [ + 507, + 456, + 882, + 535 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "OpenAssistant (Köpf et al., 2023) is an LLM fine-tuned on a large, free, human-generated conversation corpus created by a crowdsourcing effort involving over 13,500 volunteers. We used the OASFT-Llama-30B model, fine-tuned from the 30B-parameter Llama (Touvron et al., 2023) model.", + "bbox": [ + 507, + 537, + 884, + 633 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "StableLM is Stability AI's series of language models. We used StableLM-Tuned-Alpha-7B. With 7B parameters, this is the largest model available (at cutoff time) among a series of models trained on a dataset built from The Pile (Gao et al., 2021) and fine-tuned on a combination of conversational LLM corpora.", + "bbox": [ + 507, + 634, + 882, + 746 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Vicuna (Chiang et al., 2023) is another member of the family of models obtained by fine-tuning Llama (Touvron et al., 2023), in this case with user-shared conversations with ChatGPT. 
We used the 13B-parameter version of the model.", + "bbox": [ + 507, + 747, + 882, + 827 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.3 Evaluation rubric", + "text_level": 1, + "bbox": [ + 507, + 840, + 697, + 854 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The creative writing rubric was designed for assessment of creative writing assignments in uni", + "bbox": [ + 507, + 862, + 882, + 894 + ], + "page_idx": 3 + }, + { + "type": "page_footnote", + "text": "1https://blogs.bing.com/search/march_2023/Confirmed-the-new-Bing-runs-on-OpenAIÀZs-GPT-4", + "bbox": [ + 112, + 891, + 480, + 917 + ], + "page_idx": 3 + }, + { + "type": "page_footnote", + "text": "2https://opensource.org/definition-annotated/", + "bbox": [ + 529, + 903, + 873, + 917 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "14507", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 3 + }, + { + "type": "table", + "img_path": "images/299c8a21353ea1f74e443577861224cc79f755156722840f5e1173f2ad423c59.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table><tr><th>ID</th><th>Description</th></tr>
<tr><td>1</td><td>Overall/holistic/cohesive readability of the story (not just a compilation of elements).</td></tr>
<tr><td>2</td><td>Use of key narrative elements - vocabulary choice, imagery, setting, themes, dialogue, characterisation, point of view.</td></tr>
<tr><td>3</td><td>Structural elements and presentation which reflect control of structural elements such as spelling, grammar, punctuation, paragraphing, and formatting.</td></tr>
<tr><td>4</td><td>Overall plot logic: hook, conflict, initial crisis, rising and falling action, denouement/resolution (Freytag's pyramid).</td></tr>
<tr><td>5</td><td>Creativity/innovation/originality/research-credibility, new knowledge, avoidance of cliché and derivative tropes.</td></tr>
<tr><td>6</td><td>Incorporation of the John Kennedy Toole style of writing using the indicators/characteristics listed.</td></tr>
<tr><td>7</td><td>Understanding and habitation of the epic genre of heroic/legendary adventure.</td></tr>
<tr><td>8</td><td>Description and credibility of a single combat scene.</td></tr>
<tr><td>9</td><td>Accurate inclusion of the two main characters, Ignatius J. Reilly and a pterodactyl, in action and description.</td></tr>
<tr><td>10</td><td>Use of a characteristically dark humorous tone.</td></tr>
</table>
", + "bbox": [ + 115, + 80, + 884, + 233 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Table 1: Creative writing evaluation rubric. All items are scored out of ten points. Marking guideline: Emerging 1-4, Competent 5-8, Sophisticated 9-10.", + "bbox": [ + 112, + 242, + 882, + 273 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "versity creative writing courses, and is taken in part from a university textbook by one of the authors of this article, *Playing with Words* (Davidow and Williams, 2016) and an article that justifies the use of this rubric (Carey et al., 2022). This rubric evaluates creative production in five holistic craft-based criteria and measures craft skills based on a writing style outlined in the article: among others, Flaubert's insistence on *le mot juste* (the right word or expression), Strunk and White's *The Elements of Style* (2008[1918]), George Orwell's rules for concreteness and clarity (Orwell, 1946); and Annie Dillard's rules for writing good prose (Dillard, 1981).", + "bbox": [ + 110, + 297, + 489, + 521 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The rubric for this AI task adds five more criteria which address the specific prompt requirements, such as genre, style, tone, character and action. Each of the ten criteria is awarded 10 points out of a total 100 points. The rubric has been specifically designed to measure the quality of writing craft, to avoid formulaic, rule-based writing and to address the very specific task addressed here.", + "bbox": [ + 112, + 524, + 489, + 651 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The criteria are detailed in Table 1, with more details given in the Appendix C. 
The holistic scale (emerging, competent, sophisticated) guides human raters to assess holistically: 'a holistic scale measures the relative success of a text but does so through a rubric that incorporates many of the traits in analytic scoring as heuristics towards a conception of a whole rather than as a sum of autonomous components' (Perelman, 2018, p.16).", + "bbox": [ + 112, + 653, + 489, + 799 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.4 Evaluation methodology", + "text_level": 1, + "bbox": [ + 112, + 815, + 352, + 831 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We prompted each of the LLMs 5 times with the prompt given in Section 3.1. Each prompt was made from a fresh state, i.e., in a zero-shot setting without any previous context that could help guide the models. The resulting stories had an average of", + "bbox": [ + 112, + 838, + 489, + 919 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "379 words (std = 248, min = 23, max = 1223).", + "bbox": [ + 507, + 297, + 853, + 312 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Then, we also asked 5 human writers to each write a story following the same prompt. For uniformity, we suggested a length range coherent with the LLM-generated stories (250 to 1200 words). The writers were Honours and postgraduate Creative Writing students that volunteered for the task, and all of them studied the specific task requirements (e.g. John Kennedy Toole's style) before writing their stories. However, they were not given access to the AI-generated stories and they were instructed not to use LLMs at all to help them write.", + "bbox": [ + 507, + 316, + 884, + 492 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The result is, thus, a corpus of 60 AI-generated stories (5 for each of the 12 considered LLMs) plus an additional 5 human-generated stories, all in plain text format. 
The corpus is available at https://doi.org/10.5281/zenodo.8435671.", + "bbox": [ + 507, + 495, + 882, + 575 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The only preprocessing made to the stories is that (1) we removed leading sentences that described the task, often present in LLM answers (e.g.: \"Here is a potential epic narration in the exaggerated style of John Kennedy Toole's A Confederacy of Dunces:\") (2) we removed titles from stories that had them, and (3) we unified paragraph formatting, leaving one line between paragraphs in all the plain text files. Other than these changes, made for uniformity and to preserve the blindness of the rating process, we left the text as it was.", + "bbox": [ + 507, + 579, + 882, + 755 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We recruited 10 raters, also Honours and postgraduate Creative Writing students that were acquainted with the specific requirements of the task, and we instructed them to grade stories according to the rubric. Since the raters were volunteers, to keep the workload low, each rater did not rate all the stories. Instead, we divided the 65 stories into 5 groups of 13 stories each (each group containing one story by each LLM, plus one story by a human) and assigned one rater to each group. In this way,", + "bbox": [ + 507, + 758, + 884, + 919 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "14508", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/c9e001ec16d264ff64e709ce2439eacd3290051f6d7f7433c0a237016f3b68f1.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table><tr><th>Rubric item</th><th>1</th><th>2</th><th>3</th><th>4</th><th>5</th><th>6</th><th>7</th><th>8</th><th>9</th><th>10</th><th>overall</th></tr>
<tr><td>chatgpt-gpt4</td><td>8.7±0.8</td><td>8.7±0.7</td><td>8.4±1.3</td><td>8.3±0.7</td><td>7.6±1</td><td>8.0±1.2</td><td>8.1±1.4</td><td>8.5±0.8</td><td>7.9±1.6</td><td>6.0±2.8</td><td>80.2±7.3</td></tr>
<tr><td>claude 1.2</td><td>8.0±1.7</td><td>8.0±1.6</td><td>8.1±1.2</td><td>7.9±1.8</td><td>7.1±2.3</td><td>7.5±2</td><td>6.4±2.2</td><td>7.5±1.8</td><td>7.4±2.5</td><td>6.5±2.5</td><td>74.4±15.9</td></tr>
<tr><td>human</td><td>7.3±2.3</td><td>7.8±1.8</td><td>7.3±1.7</td><td>7.2±1.8</td><td>8.0±2</td><td>7.2±2.4</td><td>4.9±2.1</td><td>6.3±2.2</td><td>7.7±2.1</td><td>6.4±3.4</td><td>70.1±17.4</td></tr>
<tr><td>bing</td><td>7.8±2</td><td>7.5±2.2</td><td>7.9±1.7</td><td>7.4±2.1</td><td>7.0±1.6</td><td>6.8±2.4</td><td>5.3±2.9</td><td>6.2±2.1</td><td>7.4±2.2</td><td>6.2±2.6</td><td>69.5±18.4</td></tr>
<tr><td>chatgpt-gpt3.5</td><td>7.5±2</td><td>6.5±2.4</td><td>8.1±1.3</td><td>7.0±2.2</td><td>5.4±2.5</td><td>5.3±2.4</td><td>6.8±1.5</td><td>7.6±1.2</td><td>5.5±2.5</td><td>3.3±2.8</td><td>63.0±15.4</td></tr>
<tr><td>koala</td><td>7.5±2.5</td><td>6.7±2.2</td><td>8.2±1.2</td><td>6.8±2.6</td><td>5.8±2.3</td><td>4.8±2.7</td><td>5.8±2.4</td><td>5.5±2.3</td><td>5.5±2.3</td><td>3.4±3.2</td><td>60.0±19.2</td></tr>
<tr><td>vicuna</td><td>7.9±1.7</td><td>6.7±1.6</td><td>8.1±1.3</td><td>7.0±1.6</td><td>5.1±1.9</td><td>4.6±2.3</td><td>5.7±2.3</td><td>6.1±1.9</td><td>5.4±2.7</td><td>2.4±1.9</td><td>59.0±13.8</td></tr>
<tr><td>oa</td><td>7.2±2.2</td><td>5.8±2.4</td><td>7.2±2.5</td><td>6.2±2.6</td><td>4.9±2.1</td><td>3.9±2.4</td><td>5.8±2.4</td><td>6.5±2.2</td><td>4.3±2.3</td><td>2.9±3.1</td><td>54.7±18</td></tr>
<tr><td>bard</td><td>6.5±2.5</td><td>4.9±2.1</td><td>6.8±1.9</td><td>5.5±2.7</td><td>3.9±2.1</td><td>3.8±2.5</td><td>4.7±2.6</td><td>4.6±2.7</td><td>5.0±2.4</td><td>2.5±2</td><td>48.2±20.1</td></tr>
<tr><td>gpt4all</td><td>6.5±2.2</td><td>5.4±1.7</td><td>7.2±1.7</td><td>6.5±2.1</td><td>4.1±2.2</td><td>2.4±2.2</td><td>5.4±2.5</td><td>5.6±2.4</td><td>2.5±1.4</td><td>1.2±0.8</td><td>46.8±13.1</td></tr>
<tr><td>stablelm</td><td>5.5±1.8</td><td>5.0±2.5</td><td>6.6±1.9</td><td>3.8±2</td><td>3.2±1.5</td><td>2.1±2.2</td><td>4.4±1.9</td><td>3.8±2</td><td>2.9±2.6</td><td>1.4±1.5</td><td>38.7±17.2</td></tr>
<tr><td>dolly</td><td>4.6±2.2</td><td>5.0±2.2</td><td>5.6±2.5</td><td>3.2±1.9</td><td>4.2±2.8</td><td>3.1±2.2</td><td>4.4±1.9</td><td>3.3±1.8</td><td>3.0±2</td><td>1.5±1.5</td><td>37.9±13.6</td></tr>
<tr><td>alpaca</td><td>5.2±3.1</td><td>3.1±1.4</td><td>4.9±3</td><td>4.2±1.9</td><td>1.9±1</td><td>2.0±1.4</td><td>3.7±3</td><td>3.9±2.8</td><td>2.1±1.5</td><td>1.1±0.6</td><td>32.1±15.7</td></tr>
<tr><td>average</td><td>6.9±2.1</td><td>6.2±1.9</td><td>7.3±1.8</td><td>6.2±2</td><td>5.2±2</td><td>4.7±2.2</td><td>5.5±2.3</td><td>5.8±2</td><td>5.1±2.2</td><td>3.4±2.2</td><td>56.6±15.8</td></tr>
</table>
", + "bbox": [ + 132, + 80, + 862, + 227 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Table 2: Results for each rubric item, as well as overall score. Each cell shows average $\\pm$ standard deviation for the ratings achieved by a given model (or human writers) on a given rubric item. The bottom line shows the average among all models (and human writers). Models are sorted by overall score. The best result for each rubric item is highlighted in boldface.", + "bbox": [ + 112, + 237, + 884, + 297 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "we ensure (1) that we have at least two ratings per story, allowing us to measure inter-rater agreement, (2) that comparisons are fair, in the sense that no LLM (or the humans) is advantaged by being assigned more lenient raters, because each LLM (and humans) receives exactly one rating by each of the 10 raters, and (3) since each rater always gets one story from each model (and one human), we can expect that each will be rating a diverse set of stories covering a wide range of ability levels, which helps the marking process as it allows for comparative analysis between various performances, enabling more accurate pinpointing of each story's quality.", + "bbox": [ + 110, + 321, + 487, + 530 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Stories were assigned random identifiers before sending them to raters, so that the process was blind: to avoid biases, raters knew that they would be evaluating human and AI-generated stories, but were unaware of the origin of each story.", + "bbox": [ + 112, + 531, + 487, + 611 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Raters were sent all stories at once and they were free to go back and change the ratings of previously-rated stories. In addition, all of them were experienced assessors in terms of Creative Writing texts, with previous experience in applying the scale. 
These precautions mitigate the need for specific calibration (Karpinska et al., 2021) that would strain our resources.", + "bbox": [ + 112, + 612, + 487, + 740 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4 Results", + "text_level": 1, + "bbox": [ + 112, + 756, + 213, + 771 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.1 Agreement", + "text_level": 1, + "bbox": [ + 112, + 784, + 247, + 799 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "To gauge the reliability of our results, we compute inter-rater agreement between the two ratings given to each story for each individual rubric item. We use linearly weighted Cohen's kappa (Cohen, 1968), which is appropriate for ordinal scales like ours, obtaining a value of 0.48, $95\\%$ CI [0.43, 0.54]. This is interpreted as \"moderate", + "bbox": [ + 112, + 806, + 489, + 919 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "agreement\", which is a positive result taking into account the obvious subjectivity involved in rating stories. If we instead focus on overall scores (sums of rubric items), the Pearson correlation between the scores given to each story by each group of raters is 0.58 ( $p < 0.00001$ ), again indicating a reasonable degree of consistency between raters given the subjectivity of the task.", + "bbox": [ + 507, + 321, + 884, + 450 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.2 General overview", + "text_level": 1, + "bbox": [ + 507, + 462, + 692, + 476 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Table 2 shows a comprehensive overview of the ratings that each of the LLMs (and humans) obtained for each rubric item, as well as in terms of overall score. 
Additionally, a box-and-whisker plot comparing overall score can be seen in Figure 1.", + "bbox": [ + 507, + 483, + 882, + 563 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "ChatGPT with GPT-4 generates the best-rated stories, both in terms of overall score and in 8 out of 10 of the individual rubric categories. However, human writers are rated best in terms of originality (rubric item 5), and Claude was rated best in the use of dark humor (rubric item 10), with humans a close second. GPT-4 is also remarkably consistent, showing low standard deviations not only with respect to human writers (which is expected, as our human stories were authored by five different humans, whose skill levels may vary) but also with respect to the rest of the LLMs.", + "bbox": [ + 507, + 564, + 882, + 756 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "If we compare LLMs to each other, the best performances correspond to commercial offerings, including (apart from the aforementioned GPT-4) Claude, Bing Chat and the GPT-3.5 version of ChatGPT. Open-source models are clearly behind, with the best (Koala) achieving 60.0 overall score, contrasting with the 80.2 obtained by GPT-4. Although the best-performing LLMs are generally better across the board, some idiosyncrasies can be observed: e.g., GPT-4 tops almost all rubric items", + "bbox": [ + 507, + 758, + 884, + 919 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "14509", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "but is outperformed by two LLMs at humor.", + "bbox": [ + 112, + 84, + 442, + 99 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "When we compare LLMs to human writers, significance testing on overall score (2-tailed t-test assuming unequal variances) fails to detect significant differences between humans and the top 6 AI models with $\\alpha = 0.05$ . 
Only the bottom 6 AI models are significantly worse than humans at this significance level. Note, however, that the test has a low statistical power due to the small sample size (10 ratings per model). If we instead perform a test on individual metrics, so that our sample size is 100 (with the null hypothesis being no difference between humans and each LLM in random individual metric scores), then GPT-4 is identified as significantly better than the human writers $(p = 0.00031)$ , Claude and Bing's scores are not significantly different from those of humans, and all the rest of the LLMs score significantly worse than humans.", + "bbox": [ + 112, + 102, + 489, + 375 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Looking at individual metric scores, structural elements (rubric item 3) are the easiest category (with an average rating across all stories of 7.3, and all models but one obtaining at least a 5 on average). Humor (rubric item 10) is clearly the hardest, with an average score of 3.4, and we will analyze it in more detail below. Incorporating John Kennedy Toole's style is the second hardest, with 4.7. Comparing humans to LLMs, humans (as already mentioned) excel at originality and humor, but are clearly behind the best LLMs in terms of readability (item 1), where they are outperformed by 6 LLMs, and even more so in use of the epic genre (item 7), where they score 4.9 and are outperformed by 8 LLMs.", + "bbox": [ + 115, + 376, + 489, + 618 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We now analyze in more detail some of the individual items that show more interesting comparisons between human writers and LLMs.", + "bbox": [ + 112, + 620, + 489, + 668 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.3 Humor", + "text_level": 1, + "bbox": [ + 112, + 686, + 218, + 699 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Figure 2 shows a box plot that complements the information on Table 2 for the humor rubric item. 
The results for this item have two interesting characteristics. Firstly, it is clearly the most difficult rubric item, with an average score across models of 3.4, and the best obtaining 6.5. Even humans obtain a lower score in humor than in most items, which may be a consequence of humor being highly subjective. Secondly, as evidenced both in the table and plot, there is a rather stark binary divide between the contenders that \"get\" humor and those that do not: Claude, Bing and GPT-4, together with the human writers, obtain average scores between", + "bbox": [ + 112, + 709, + 489, + 917 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/c1b8e58a5e63b1d628d2340a7cc8cb8fb19d66f4a82328c7a6067417f31f68d4.jpg", + "image_caption": [ + "Figure 2: Box plot comparing humor ratings for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + ], + "image_footnote": [], + "bbox": [ + 514, + 84, + 878, + 340 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "6 and 6.5; whereas the rest of the models achieve very low scores of 3.4 or less. Significance testing also confirms this divide: despite the small sample size of 10 humor ratings per model, a 2-tailed t-test with $\alpha = 0.05$ confirms that the models in the second group are significantly worse than both the human writers and the LLMs in the first group. This suggests that grasping human humor might be an emergent ability of larger LLMs.", + "bbox": [ + 505, + 420, + 882, + 565 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "In this respect, a recent preprint (Jentzsch and Kersting, 2023) concluded that ChatGPT has \"a limited reflection of humor\" and \"cannot yet confidently create intentionally funny original content\". This study used the GPT-3.5 version of ChatGPT, so it is in line with our results (in which that model obtains an average humor score of 3.3). 
However, as we have seen, more powerful LLMs have overcome that limitation, as their generated stories are clearly rated as humorous.", + "bbox": [ + 507, + 565, + 882, + 726 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.4 Creativity", + "text_level": 1, + "bbox": [ + 507, + 737, + 636, + 753 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We now focus on rubric item 5, which rates creativity and originality, as it is a hallmark of creative writing and also the only category where human writers have outperformed all the LLMs in our analysis. Figure 3 shows a box plot that complements the information in Table 2.", + "bbox": [ + 505, + 758, + 882, + 853 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "The same three LLMs that stood out in the humor category are also the best in terms of creativity, although the difference is not as stark. Regardless, a t-test still distinguishes the two groups, as it shows all", + "bbox": [ + 507, + 854, + 882, + 917 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "14510", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/87ca571fbf28f5ca359b66f7bc31b5d5def0b8fe8aef6cfc2ee2d3ab8f1fb02.jpg", + "image_caption": [ + "Figure 3: Box plot comparing creativity ratings for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." 
+ ], + "image_footnote": [], + "bbox": [ + 117, + 84, + 485, + 340 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "the rest of the LLMs to be rated as significantly less creative than our human writers, while for these three we cannot reject the null hypothesis that they are as original as the human writers.", + "bbox": [ + 112, + 420, + 487, + 483 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Overall, from our results and in terms of human perception of the output, the answer to whether LLMs can produce creative stories (Franceschelli and Musolesi, 2023) is yes, although humans still retain an edge in this respect.", + "bbox": [ + 112, + 485, + 487, + 565 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.5 Epicness", + "text_level": 1, + "bbox": [ + 112, + 576, + 230, + 590 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Finally, we analyze rubric item 7 (understanding and habitation of the epic genre) for the opposite reason from the previous section: it is the item where humans do worst compared to LLMs (see Table 2). A box plot is provided in Figure 4.", + "bbox": [ + 112, + 596, + 487, + 677 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this case, the results have a more atypical profile, with substantial differences with respect to overall scores. Two models perform significantly better than the human writers $(\\alpha = 0.05)$: both versions of ChatGPT. Six other models obtain a better average rating than humans, but the difference is not detected as significant.", + "bbox": [ + 112, + 678, + 487, + 789 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Interestingly, Bing clearly lags behind both ChatGPT versions, despite being based on GPT-4. This might be related to bias introduced by the system's censorship. 
On the other hand, some models whose overall scores are in the bottom half (OpenAssistant, GPT4All) are reasonably good at epic narration, outperforming humans and Bing (which are better than them in almost all categories).", + "bbox": [ + 112, + 791, + 489, + 917 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/323cffdbb14cc5f62ae0f57b58c62642dda36ee63bed64fe64ce9e662b542f35.jpg", + "image_caption": [ + "Figure 4: Box plot comparing epicness ratings for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + ], + "image_footnote": [], + "bbox": [ + 512, + 82, + 882, + 342 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5 Discussion", + "text_level": 1, + "bbox": [ + 507, + 420, + 636, + 436 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We have evaluated recent LLMs on a creative writing task in English, using a carefully designed scenario to provide a demanding challenge and avoid confounding factors like training data memorization (Carlini et al., 2023). To our knowledge, this is the most thorough evaluation of LLMs on creative writing conducted so far, both in terms of scope (12 LLMs considered, plus comparison to human writers) and detail (using human evaluation with a 10-item rubric based on established creative writing evaluation practices).", + "bbox": [ + 505, + 447, + 882, + 623 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Simultaneously with our work, the recent preprint by Chakrabarty et al. (2023) provides an evaluation of three of the top-performing commercial LLMs (ChatGPT, GPT-4 and Claude) for creative writing. This approach is close to ours, as it uses the models in a zero-shot setting and evaluation is performed by humans using a specific rubric. 
However, there are important methodological differences between the two studies, which we summarize here:", + "bbox": [ + 507, + 624, + 882, + 768 + ], + "page_idx": 7 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. The human stories used by Chakrabarty et al. (2023) are stories published in the New Yorker, by highly successful authors (including Nobel prize winners), whereas ours are written by Creative Writing students.", + "2. In their setting, the human-written stories are pre-existing (and selected for publication in the New Yorker, as mentioned above) so their" + ], + "bbox": [ + 522, + 780, + 882, + 917 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "14511", + "bbox": [ + 477, + 927, + 522, + 940 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "writers were unconstrained when they created them, while the LLMs have to adapt to write an alternative story with the same plot. In ours, humans and LLMs are given the exact same prompt to work with.", + "bbox": [ + 149, + 84, + 487, + 164 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "3. In terms of length, the stories they work with are over three times longer than ours on average. In addition, while both studies try to make story lengths similar between humans and LLMs, in their case the human writers originally wrote their stories unconstrained (or under loose constraints) and the LLM-generated stories were calibrated to have similar lengths by an iterative prompting process. In our case, the LLMs were unconstrained in terms of length, and the human writers were asked to target a length range loosely similar to that of the LLM-generated stories. Thus, with respect to theirs, our approach has the disadvantage of looser control over story length, but the advantage of using a single zero-shot prompt.", + "4. Their study spans a variety of story prompts, while we focus on a single prompt and setting. 
The flip side is that our rubric can be adapted to specific requirements like humor and Toole style, whereas theirs is necessarily more generic. In addition, our narrower focus allows us to have LLMs generate several alternative stories, so we can perform more statistical analysis: we consider the distribution within each LLM and perform statistical testing, which cannot be done in Chakrabarty et al. (2023)'s setting as they generate a single story per prompt and LLM.", + "5. Since their study is based on existing stories that are published online, there is the possibility that some are contained in the tested LLMs' training data. In our case, we designed the study to prevent training data reuse.", + "6. The rubrics are different: Chakrabarty et al. (2023) use a rubric based on the Torrance tests of creative thinking (Torrance, 1974)." + ], + "bbox": [ + 127, + 177, + 489, + 810 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "The outcome of this study is substantially different from ours, with LLM-generated stories rated clearly behind human-authored ones. This is not surprising considering the methodological differences: in particular, differences 1 and 2 in the list above clearly set a higher bar for LLMs, as they", + "bbox": [ + 112, + 822, + 489, + 917 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "are compared to highly successful human stories by top authors that wrote freely and the LLMs are asked to adapt to their plots. We hypothesize that these are the main reasons for the difference in outcome. On the other hand, item 5 in the list above could in principle benefit LLMs, and there are other factors that could benefit humans or LLMs in non-obvious ways (including items 3, 4 and 6, as well as different story genres and target lengths). 
This underscores the need for more studies in this area.", + "bbox": [ + 507, + 84, + 884, + 244 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "6 Conclusion", + "text_level": 1, + "bbox": [ + 507, + 260, + 640, + 275 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "The results show that state-of-the-art LLMs can perform a creative writing task at a very competent level, with the top two (ChatGPT with GPT-4 and Claude) achieving high scores that outperform human writers in most rubric categories. While we must be careful not to take this as evidence of \"superhuman storytelling\" (both because our sample size is not large enough to draw such categorical conclusions, and because our 5 human writers are not necessarily representative of human writing ability as a whole), it does at least strongly suggest that these models' stories are not distinguishably worse than those by reasonably trained humans. This is even more remarkable given that we did not use any in-context learning or other techniques to optimize the LLMs for the task, but just a straightforward prompt from a fresh state, so it is possible that even better results are achievable with careful prompting.", + "bbox": [ + 507, + 287, + 884, + 592 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Our analysis also shows that the best results are achieved by commercial LLMs, with open-source models clearly lagging behind at the moment.", + "bbox": [ + 507, + 595, + 882, + 642 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Looking at individual characteristics, humans retain the lead in originality, while LLMs tend to excel in more technical aspects like readability or structure. 
Humor is an especially challenging aspect where most LLMs utterly fail, but the best three models do succeed at achieving human-like ratings, contrasting with results on older LLMs that showed their lack of grasp of human humor (Jentzsch and Kersting, 2023).", + "bbox": [ + 507, + 644, + 882, + 788 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Interesting avenues for future work include evaluation of different literary genres, languages other than English, and studying whether the quality of the generated stories can be improved with prompt engineering or fine-tuning.", + "bbox": [ + 507, + 789, + 882, + 869 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Selected stories from our corpus (available at https://doi.org/10.5281/zenodo.8435671, together with all rating data) are in Appendix E.", + "bbox": [ + 507, + 871, + 882, + 917 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "14512", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Limitations", + "text_level": 1, + "bbox": [ + 115, + 84, + 218, + 98 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Commercial LLMs and reproducibility While some of the LLMs considered are proper scientific artifacts, trained with a documented methodology and with code and weights available, others are closed commercial products about which there is little public information, hindering reproducibility. We have reported version numbers (where available) and access dates in Appendix A, and we publish the generated outputs so that the rating process is reproducible; even so, the prompting/generation process may not be reproducible in the future for these models, as some of these products are updated without notice and without providing access to previous versions. 
However, we believe that including commercial models is valuable, as they are widely considered to provide the best quality results at the time of writing (which has been confirmed by our analysis), and these data points can still be used as a measuring stick against which to compare open models in the present and future.", + "bbox": [ + 115, + 114, + 489, + 451 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Limitations of the analysis Rating creative writing is necessarily a highly subjective process. Furthermore, since our raters were volunteers, we did not ask each of them to mark the full 65 stories in the corpus but just a subset, so our sample size is limited. We have provided the necessary details so that the reader can assess the variability of the data (sample sizes, standard deviations, and interrater agreement, which is reasonably high given the subjectivity of the task); and we have been careful not to make overarching claims. In this respect, we have also taken into account that our sample of human writers cannot be assumed to be representative of \"human creative writing ability\" as a whole, but is only provided as a reference point of interest; and that our evaluation is focused on a specific genre, so claims of the form \"LLMs are better/equal/worse than humans at creative writing\" cannot be made with an evaluation like ours.", + "bbox": [ + 115, + 468, + 489, + 772 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Scope Our analysis focuses on a specific genre, and on the English language, so the results do not necessarily generalize to other genres and/or languages. 
However, conducting a wider evaluation in this respect would not be possible with our resources, so we chose to fix these variables and focus on conducting a detailed evaluation on a large number of LLMs instead.", + "bbox": [ + 115, + 790, + 489, + 917 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Ethics Statement", + "text_level": 1, + "bbox": [ + 512, + 84, + 658, + 98 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "While the use of conversational LLMs has raised various ethical challenges, creative writing has been argued to be one of the best uses for these tools from a human-centered AI point of view, as long as AI-generated stories are identified as such to avoid misleading readers or publishers (Sison et al., 2023). In our study, raters were blinded to story authorship but they were previously informed that they would be dealing with AI and human-generated stories. In the published corpus, each story is identified as human or AI-authored.", + "bbox": [ + 512, + 109, + 882, + 284 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "All participants in the evaluation (as raters or writers) were volunteers, and the demand on their time was kept accordingly low.", + "bbox": [ + 512, + 286, + 882, + 332 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Acknowledgments", + "text_level": 1, + "bbox": [ + 512, + 346, + 670, + 361 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "The first author was funded by the European Research Council (ERC), under the Horizon Europe research and innovation programme (SALSA, grant agreement No 101100615), ERDF/MICINN-AEI (SCANNER-UDC, PID2020-113230RB-C21), Xunta de Galicia (ED431C 2020/11), and Centro de Investigación de Galicia \"CITIC\", funded by the Xunta de Galicia through the collaboration agreement between the Consellería de Cultura, Educación, Formación Profesional e Universidades and the Galician universities for the reinforcement of the research centres of the Galician 
University System (CIGUS).", + "bbox": [ + 512, + 370, + 884, + 579 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "We thank Olga Zamaraeva for comments on preliminary versions of this work, and two anonymous reviewers for their helpful comments. Last, but not least, we thank our volunteers who participated in the writing and grading of stories, in alphabetical order: Jayda Franks, Bree Glasbergen, Ola Kwintowski, Jay Ludowyke, Kyle Mackenzie, Kirsty Maclachlan, Caitlin Noakes, Rachelle Raco, Kylie Ryan and Josephine Stewart. Credit for each individual story can be found in the corpus.", + "bbox": [ + 512, + 581, + 882, + 739 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 512, + 766, + 606, + 781 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Yuvanesh Anand, Zack Nussbaum, Brandon Duderstadt, Benjamin M. Schmidt, and Andriy Mulyar. 2023a. GPT4All: Training an assistant-style chatbot with large-scale data distillation from GPT-3.5-Turbo. Technical report.", + "Yuvanesh Anand, Zack Nussbaum, Brandon Duderstadt, Benjamin M. Schmidt, Adam Treat, and Andriy Mulyar. 2023b. GPT4All-J: An Apache-2 licensed assistant-style chatbot. Technical report." + ], + "bbox": [ + 512, + 790, + 882, + 917 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "14513", + "bbox": [ + 477, + 928, + 524, + 940 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Lorin W. Anderson and David R. Krathwohl, editors. 2001. A Taxonomy for Learning, Teaching, and Assessing. A Revision of Bloom's Taxonomy of Educational Objectives, 2 edition. 
Allyn & Bacon, New York.", + "Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, and Jared Kaplan. 2022. Constitutional AI: Harmlessness from AI feedback. Technical report.", + "Anja Belz and Ehud Reiter. 2006. Comparing automatic and human evaluation of NLG systems. In 11th Conference of the European Chapter of the Association for Computational Linguistics, pages 313-320, Trento, Italy. Association for Computational Linguistics.", + "Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. 2023. Pythia: A suite for analyzing large language models across training and scaling. Technical report.", + "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. 
Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.", + "Michael D Carey, Shelley Davidow, and Paul Williams. 2022. Re-imagining narrative writing and assessment: a post-NAPLAN craft-based rubric for creative writing. The Australian Journal of Language and Literacy, 45(1):33-48.", + "Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, and Chiyuan Zhang. 2023. Quantifying memorization across neural language models. In International Conference on Learning Representations (ICLR)." + ], + "bbox": [ + 115, + 85, + 485, + 917 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Tuhin Chakrabarty, Philippe Laban, Divyansh Agarwal, Smaranda Muresan, and Chien-Sheng Wu. 2023. Art or artifice? large language models and the false promise of creativity.", + "Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An open-source chatbot impressing GPT-4 with $90\\%$ ChatGPT quality. Technical report.", + "John Joon Young Chung, Wooseok Kim, Kang Min Yoo, Hwaran Lee, Eytan Adar, and Minsuk Chang. 2022. Talebrush: Sketching stories with generative pretrained language models. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI '22, New York, NY, USA. Association for Computing Machinery.", + "Elizabeth Clark, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, and Noah A. Smith. 2021. All that's 'human' is not gold: Evaluating human evaluation of generated text. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7282-7296, Online. Association for Computational Linguistics.", + "Jacob Cohen. 1968. 
Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit. Psychological Bulletin, 70(4):213-220.", + "Shelley Davidow and Paul Williams. 2016. Playing With Words: An Introduction to Creative Craft. Bloomsbury Academic.", + "Annie Dillard. 1981. Contemporary prose styles. Twentieth Century Literature, 27:207-222.", + "Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, Lei Li, and Zhifang Sui. 2023. A survey on in-context learning.", + "Giorgio Franceschelli and Mirco Musolesi. 2023. On the creativity of large language models.", + "Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2021. The Pile: An 800GB dataset of diverse text for language modeling. CoRR, abs/2101.00027.", + "Eduardo C. Garrido-Merchan, José Luis Arroyo-Barrigüete, and Roberto Gozalo-Brihuela. 2023. Simulating H.P. Lovecraft horror literature with the ChatGPT large language model.", + "Xinyang Geng, Arnav Gudibande, Hao Liu, Eric Wallace, Pieter Abbeel, Sergey Levine, and Dawn Song. 2023. Koala: A dialogue model for academic research. Blog post." + ], + "bbox": [ + 510, + 85, + 880, + 917 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "14514", + "bbox": [ + 477, + 928, + 524, + 940 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Aidan Gilson, Conrad W Safranek, Thomas Huang, Vimig Socrates, Ling Chi, Richard Andrew Taylor, and David Chartash. 2023. How does chatgpt perform on the united states medical licensing examination? the implications of large language models for medical education and knowledge assessment. JMIR Med Educ, 9:e45312.", + "Jian Guan, Fei Huang, Zhihao Zhao, Xiaoyan Zhu, and Minlie Huang. 2020. A Knowledge-Enhanced Pretraining Model for Commonsense Story Generation. 
Transactions of the Association for Computational Linguistics, 8:93–108.", + "Jian Guan, Xiaoxi Mao, Changjie Fan, Zitao Liu, Wenbiao Ding, and Minlie Huang. 2021. Long text generation by modeling sentence-level and discourse-level coherence. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6379-6393, Online. Association for Computational Linguistics.", + "Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. Proceedings of the International Conference on Learning Representations (ICLR).", + "Sophie Jentzsch and Kristian Kersting. 2023. Chatgpt is fun, but it is not funny! humor is still challenging large language models.", + "Wenxiang Jiao, Wenxuan Wang, Jen tse Huang, Xing Wang, and Zhaopeng Tu. 2023. Is chatgpt a good translator? yes with gpt-4 as the engine.", + "Marzena Karpinska, Nader Akoury, and Mohit Iyyer. 2021. The perils of using Mechanical Turk to evaluate open-ended text generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1265-1285, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.", + "Jeri Kroll. 1997. A or C: Can we assess creative work fairly? TEXT, 1(1):1-5.", + "Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi-Rui Tam, Keith Stevens, Abdullah Barhoum, Nguyen Minh Duc, Oliver Stanley, Richard Nagyfi, Shahul ES, Sameer Suri, David Glushkov, Arnav Dantuluri, Andrew Maguire, Christoph Schuhmann, Huu Nguyen, and Alexander Mattick. 2023. OpenAssistant Conversations - democratizing large language model alignment.", + "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. 
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics," + ], + "bbox": [ + 115, + 85, + 489, + 917 + ], + "page_idx": 11 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "pages 7871-7880, Online. Association for Computational Linguistics.", + "Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher Ré, Diana Acosta-Navas, Drew A. Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda. 2022. Holistic evaluation of language models.", + "Piotr Mirowski, Kory W. Mathewson, Jaylen Pittman, and Richard Evans. 2023. Co-writing screenplays and theatre scripts with language models: Evaluation by industry professionals. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI '23, New York, NY, USA. Association for Computing Machinery.", + "S. Norris. 2013. *Studying Creative Writing*. Creative Writing Studies. Frontinus Limited.", + "Maxwell Nye, Michael Henry Tessler, Joshua B. Tenenbaum, and Brenden M. Lake. 2021. Improving coherence and consistency in neural sequence models with dual-system, neuro-symbolic reasoning. 
In Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021, Advances in Neural Information Processing Systems, pages 25192-25204. Neural information processing systems foundation.", + "OpenAI. 2023. Gpt-4 technical report. Technical report.", + "George Orwell. 1946. Politics and the English language. Horizon, 13:252-265.", + "Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems, volume 35, pages 27730-27744. Curran Associates, Inc.", + "Les Perelman. 2018. Towards a new NAPLAN: Testing to the teaching. Journal of Professional Learning, 2.", + "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners." + ], + "bbox": [ + 510, + 85, + 882, + 917 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "14515", + "bbox": [ + 477, + 928, + 524, + 940 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal V. Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M. Rush. 2022. 
Multitask prompted training enables zero-shot task generalization. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net.", + "bbox": [ + 115, + 85, + 490, + 307 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Piotr Sawicki, Marek Grzes, Fabricio Goes, Dan Brown, Max Peeperkorn, and Aisha Khatun. 2023. Bits of grass: Does gpt already know how to write like Whitman?", + "bbox": [ + 115, + 319, + 489, + 370 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, and Christopher D. Manning. 2019. Do massively pretrained language models make better storytellers? In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 843-861, Hong Kong, China. Association for Computational Linguistics.", + "bbox": [ + 115, + 384, + 489, + 475 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Alejo Jose G. Sison, Marco Tulio Daza, Roberto Gozalobrizuela, and Eduardo C. Garrido-Merchan. 2023. Chatgpt: More than a weapon of mass deception, ethical challenges and responses from the human-centered artificial intelligence (hcai) perspective.", + "bbox": [ + 115, + 488, + 489, + 552 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshit Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. 
Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmuller, Andrew Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakaş, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartlomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, César Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy", + "bbox": [ + 115, + 565, + 489, + 917 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Ramirez, Clara E. Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Mosegui Gonzalez, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurrgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodola, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. 
Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martinez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, German Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo Jaimovitch-Lopez, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernandez Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Kocón, Jana Thompson, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez-Orallo, Joseph Boudeman, Joseph Jones, Joshua B. Tenenbaum, Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. Dhole, Kevin Gimpel, Kevin Omondi, Kory Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras-Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem Senel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramírez Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Matyás Schubert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael A.
Yee, Michael Cohen, Michael Gu, Michael Ivanitskiy, Michael Starritt, Michael Strube, Michal Swedrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun Peng, Nathan", + "bbox": [ + 526, + 85, + 884, + 907 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "14516", + "bbox": [ + 477, + 928, + 524, + 940 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramón Risco Delgado, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Ryan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A.
Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M. Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Theo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Timothy Telleen-Lawton, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, Zirui Wang, and Ziyi Wu. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models.", + "bbox": [ + 132, + 85, + 489, + 815 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "W. Strunk and E.B. White. 2008[1918]. The Elements of Style. BN Publishing, New York.", + "bbox": [ + 115, + 826, + 485, + 853 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Ben Swanson, Kory Mathewson, Ben Pietrzak, Sherol Chen, and Monica Dinalescu. 2021. Story centaur: Large language model few shot learning as a creative writing tool.
In Proceedings of the 16th Confer-", + "bbox": [ + 115, + 865, + 489, + 917 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "ence of the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 244-256, Online. Association for Computational Linguistics.", + "bbox": [ + 527, + 85, + 882, + 137 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Bowen Tan, Zichao Yang, Maruan Al-Shedivat, Eric Xing, and Zhiting Hu. 2021. Progressive generation of long text with pretrained language models. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4313-4324, Online. Association for Computational Linguistics.", + "bbox": [ + 510, + 146, + 884, + 252 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford Alpaca: An instruction-following LLaMA model. 
https://github.com/tatsu-lab/stanford_alpaca.", + "bbox": [ + 509, + 260, + 882, + 326 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Vincent Zhao, Yanqi Zhou, Chung-Ching Chang, Igor Krivokon, Will Rusch, Marc Pickett, Pranesh Srinivasan, Laichee Man, Kathleen Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera-Arcas, Claire Cui, Marian Croak, Ed Chi, and Quoc Le. 2022. LaMDA: Language models for dialog applications.", + "bbox": [ + 509, + 335, + 884, + 609 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "E.P. Torrance. 1974. Torrance Tests of Creative Thinking: Verbal Tests, Forms A and B, Figural Tests, Forms A and B. Norms-technical manual. Xerox.", + "bbox": [ + 509, + 618, + 882, + 657 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. LLaMA: Open and efficient foundation language models.", + "bbox": [ + 509, + 667, + 882, + 746 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Ben Wang and Aran Komatsuzaki. 2021. GPT-J-6B: A 6 billion parameter autoregressive language model.
https://github.com/kingoflolz/mesh-transformer-jax.", + "bbox": [ + 509, + 755, + 882, + 808 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2022. Self-Instruct: Aligning language models with self-generated instructions.", + "bbox": [ + 509, + 816, + 882, + 869 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022. Finetuned", + "bbox": [ + 509, + 878, + 882, + 917 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "14517", + "bbox": [ + 477, + 928, + 524, + 940 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "language models are zero-shot learners. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net.", + "bbox": [ + 132, + 85, + 487, + 139 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Beck Wise and Ariella van Luyn. 2020. Not 'all writing is creative writing' and that's ok: inter/disciplinary collaboration in writing and writing studies. TEXT, 24(Special 59):1-15.", + "bbox": [ + 115, + 147, + 487, + 200 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Zhuohan Xie, Trevor Cohn, and Jey Han Lau. 2023. Can very large pretrained language models learn storytelling with a few examples?", + "bbox": [ + 114, + 210, + 487, + 250 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Peng Xu, Mostofa Patwary, Mohammad Shoeybi, Raul Puri, Pascale Fung, Anima Anandkumar, and Bryan Catanzaro. 2020. MEGATRON-CNTRL: Controllable story generation with external knowledge using large-scale language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2831-2845, Online.
Association for Computational Linguistics.", + "bbox": [ + 115, + 259, + 487, + 365 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Ann Yuan, Andy Coenen, Emily Reif, and Daphne Ippolito. 2022. Wordcraft: Story writing with large language models. In 27th International Conference on Intelligent User Interfaces, IUI '22, pages 841-852, New York, NY, USA. Association for Computing Machinery.", + "bbox": [ + 115, + 374, + 487, + 453 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2023. A survey of large language models.", + "bbox": [ + 115, + 463, + 487, + 555 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "A Model access dates", + "text_level": 1, + "bbox": [ + 114, + 567, + 315, + 581 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Table 3 shows the dates on which the stories were generated for each of the models. For future experimental reference, we highlight that the initial public disclosure of this paper online occurred on 2023-10-09. Before this date, only the human authors and raters were aware of the project from May 2023, and anonymous reviewers had access from June 23, 2023. Consequently, LLMs with a knowledge cutoff prior to 2023-10-09 are likely to have no or minimal risk of training set contamination.", + "bbox": [ + 112, + 592, + 487, + 753 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "B Hyperparameters", + "text_level": 1, + "bbox": [ + 114, + 765, + 305, + 781 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "We did not tune any hyperparameters of the models.
For commercial models, we ran each model as presented in its web user interface, except for Bing Chat, where we chose Creative mode. For open-source models, we used the default parameters from the web UI provided at https://chat.lmsys.org/, which set the temperature to 0.7.", + "bbox": [ + 112, + 790, + 487, + 917 + ], + "page_idx": 14 + }, + { + "type": "table", + "img_path": "images/e5f7329210e3e53ae3920cad28564c7e4f057bd1fd3da7f1e1a28e85ebc946a3.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table><tr><td>Model</td><td>Access date</td></tr>
<tr><td>alpaca</td><td>2023-04-07</td></tr>
<tr><td>bard</td><td>2023-04-11</td></tr>
<tr><td>bing</td><td>2023-04-11</td></tr>
<tr><td>chatgpt-gpt35</td><td>2023-04-11</td></tr>
<tr><td>chatgpt-gpt4</td><td>2023-04-14</td></tr>
<tr><td>claude12</td><td>2023-04-04</td></tr>
<tr><td>dolly</td><td>2023-04-14</td></tr>
<tr><td>gpt4all-j</td><td>2023-04-14</td></tr>
<tr><td>koala</td><td>2023-04-07</td></tr>
<tr><td>oa</td><td>2023-04-16</td></tr>
<tr><td>stablelm</td><td>2023-04-20</td></tr>
<tr><td>vicuna</td><td>2023-04-07</td></tr>
<tr><td>humans</td><td>2023-05-01 to 2023-05-12</td></tr></table>
", + "bbox": [ + 512, + 80, + 882, + 316 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Table 3: Access dates for each model (and dates of writing for the human stories), in YYYY-MM-DD format.", + "bbox": [ + 507, + 326, + 882, + 355 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "C Detailed rubric information", + "text_level": 1, + "bbox": [ + 507, + 379, + 786, + 394 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "The creative writing rubric was designed for the assessment of creative writing scripts in university creative writing courses, and evaluates the following competencies: criteria 1-5 measure general creative writing capacities, and criteria 6-10 measure specific task-related proficiency. Each of the ten criteria is worth 10 points, out of a total of 100 points. The rubric has been specifically designed to measure the quality of writing craft and to avoid formulaic, rule-based writing.", + "bbox": [ + 507, + 405, + 882, + 565 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. Overall/holistic/cohesive readability of the story (not just a compilation of elements).", + "2. Use of key narrative elements: vocabulary choice, imagery, setting, themes, dialogue, characterisation, point of view.", + "3. Structural elements and presentation, reflecting control of elements such as spelling, grammar, punctuation, paragraphing, and formatting.", + "4. Overall plot logic: hook, conflict, initial crisis, rising and falling action, denouement/resolution (Freytag's pyramid).", + "5. Creativity/innovation/originality/research—credibility, new knowledge, avoidance of cliché and derivative tropes.", + "6. 
Incorporation of the John Kennedy Toole style of writing using the indicators/characteristics listed below", + "bbox": [ + 522, + 577, + 882, + 917 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "14518", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "7. Understanding and habitation of the epic genre of heroic/legendary adventure", + "8. Description and credibility of a single combat scene", + "9. Accurate inclusion of the two main characters, Ignatius J. Reilly and a pterodactyl, in action and description (see below for character description)", + "10. Use of a characteristically dark humorous tone." + ], + "bbox": [ + 121, + 84, + 487, + 275 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "The 1-10 scale is divided into three ranges:", + "bbox": [ + 132, + 288, + 452, + 304 + ], + "page_idx": 15 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Emerging (1-4): stories in this range demonstrate an early grasp of storytelling elements, but falter in execution or depth. When evaluating humans, they correspond to novice writers who need feedback and guidance to improve the story.", + "- Competent (5-8): stories that showcase a good grasp of the storytelling principle being evaluated (coherent plot, well-defined characters, etc.). While there might be room for improvement, these stories effectively engage the reader and convey their intended messages.", + "- Sophisticated (9-10): these stories exhibit exceptional mastery of the aspect being evaluated, resulting in a compelling and memorable read."
+ ], + "bbox": [ + 136, + 315, + 487, + 607 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Toole style We provided raters with detailed information about the plot, setting, imagery, tone, characters, main protagonist, and derivative/imitative style of the author, taken from a generic and popular study guide (http://www.bookrags.com/studyguide-a-confederacy-of-dunces/#gsc.tab=0).", + "bbox": [ + 112, + 619, + 489, + 747 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "D Box plots for each individual rubric item", + "text_level": 1, + "bbox": [ + 112, + 760, + 460, + 791 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Figures 5 to 14 show the box plots summarizing the results for all rubric items, including those plots not featured in the main text.", + "bbox": [ + 112, + 802, + 485, + 848 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "E Sample stories", + "text_level": 1, + "bbox": [ + 112, + 862, + 278, + 878 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "We show in this section several sample stories from the corpus, chosen according to rating: the", + "bbox": [ + 112, + 887, + 485, + 917 + ], + "page_idx": 15 + }, + { + "type": "image", + "img_path": "images/aa13bfd651d9de022b8fd600fe0441c2b3651899b84951791b1bed2b1bc3ba7a.jpg", + "image_caption": [ + "Figure 5: Box plot comparing rubric item 1 (cohesion) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + ], + "image_footnote": [], + "bbox": [ + 512, + 124, + 878, + 382 + ], + "page_idx": 15 + }, + { + "type": "image", + "img_path": "images/523eac89bbf4e81717f01cfa0e0dc3c2c2f929f230a0e3f7b4d6ab3e8ee37a94.jpg", + "image_caption": [ + "Figure 6: Box plot comparing rubric item 2 (key narrative elements) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." 
+ ], + "image_footnote": [], + "bbox": [ + 514, + 545, + 877, + 802 + ], + "page_idx": 15 + }, + { + "type": "page_number", + "text": "14519", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 15 + }, + { + "type": "image", + "img_path": "images/0ebf78940c8609025c7593d9dfe1a37fa00292b10eba9c60de34cacdf7501243.jpg", + "image_caption": [ + "Figure 7: Box plot comparing rubric item 3 (structural elements) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + ], + "image_footnote": [], + "bbox": [ + 117, + 123, + 487, + 382 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/45f0e75d2dfe07629611a43566fa069cf9a6dc73e112fba3beeeedd81234fbde.jpg", + "image_caption": [ + "Figure 9: Box plot comparing rubric item 5 (creativity) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + ], + "image_footnote": [], + "bbox": [ + 512, + 124, + 880, + 382 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/555ec6404a621caf381f66322d9a733f0169f425b5dd2e3d367fea3ea07d9183.jpg", + "image_caption": [ + "Figure 8: Box plot comparing rubric item 4 (plot logic) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + ], + "image_footnote": [], + "bbox": [ + 117, + 544, + 487, + 803 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/4ed93337bd71e8bb3557a8cc1e68f5782fc8a927d3faeb796f64b0cbae44d64f.jpg", + "image_caption": [ + "Figure 10: Box plot comparing rubric item 6 (John Kennedy Toole style) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." 
+ ], + "image_footnote": [], + "bbox": [ + 512, + 544, + 880, + 803 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "14520", + "bbox": [ + 477, + 927, + 526, + 940 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/c4b22cabac8095762b897f5691f0705556b631f3fa76b8517dbf96a65714361d.jpg", + "image_caption": [ + "Figure 11: Box plot comparing rubric item 7 (epic genre) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + ], + "image_footnote": [], + "bbox": [ + 117, + 124, + 487, + 382 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/70a364e0f426758076047a0c01f11f2bbc7a878877917b3b74ea161170baf05f.jpg", + "image_caption": [ + "Figure 13: Box plot comparing rubric item 9 (accuracy of characters) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + ], + "image_footnote": [], + "bbox": [ + 512, + 124, + 880, + 382 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/e901c5a707c1100a14d19b67b73c6d2b1f288a9fcff1d994921aa271518b283e.jpg", + "image_caption": [ + "Figure 12: Box plot comparing rubric item 8 (combat description) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + ], + "image_footnote": [], + "bbox": [ + 117, + 544, + 487, + 803 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/a296b001e9bba1feb5d0453cbae02979ce1ae9f73cabb5b219edd905edf26dde.jpg", + "image_caption": [ + "Figure 14: Box plot comparing rubric item 10 (dark humor) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." 
+ ], + "image_footnote": [], + "bbox": [ + 512, + 544, + 880, + 803 + ], + "page_idx": 17 + }, + { + "type": "page_number", + "text": "14521", + "bbox": [ + 477, + 927, + 522, + 940 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "three top-rated stories, the best human-written story (which was also ranked fourth overall), the story ranked in the median, and the worst-rated. It is worth noting that, as explained in Section 3.4, our methodology is designed to provide fairness when comparing models, but not individual stories, as different stories are assigned to different raters. Thus, selection of stories by rating is necessarily noisy.", + "bbox": [ + 112, + 84, + 489, + 212 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "E.1 Overall best-rated story", + "text_level": 1, + "bbox": [ + 112, + 229, + 351, + 244 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "This story was generated by GPT-4. The ratings for this story are in Table 4.", + "bbox": [ + 110, + 252, + 485, + 282 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "It was a sweltering Louisiana afternoon, the air thick with the miasma of musty sweat and the scent of overripe vegetables. In the midst of the urban jungle, resplendent with the cacophony of honking cars and the chatter of the common rabble, stood a man who defied reason, common sense, and all natural laws of human decency. Ignatius J. Reilly, a behemoth of a man with an unruly mane of hair and a Byzantine beard, adjusted his ill-fitting, plaid trousers and surveyed the scene with a look of lofty disdain.", + "bbox": [ + 112, + 286, + 487, + 461 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Suddenly, the air crackled with an electric charge, and a portal appeared before him. 
The very fabric of time and space seemed to rend asunder, and from this anachronistic rift emerged a pterodactyl - a creature so ancient that it had outlived the vast expanse of geologic time, only to find itself in the absurdity of the twentieth century. Its leathery wings stretched wide as the mists of time dissipated, and it let out an earsplitting screech that sent the common folk of New Orleans scattering like roaches from the light.", + "bbox": [ + 112, + 464, + 487, + 640 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Ignatius, unphased by the sudden appearance of the primordial beast, surveyed the creature through his fogged-up monocle. \"Fortuna,\" he muttered, his voice dripping with both sarcasm and hot dog condiments, \"you spin your wheel with such capricious abandon. Have you not tired of hurling these preposterous challenges in my path?\"", + "bbox": [ + 112, + 642, + 487, + 755 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "The pterodactyl, with its beady, reptilian eyes, locked its gaze upon Ignatius, its primitive brain sensing a formidable opponent in this lumbering, malodorous mass of a man. It screeched again, flapping its wings in a display of both aggression and arrogance, its razor-sharp beak snapping at the air.", + "bbox": [ + 112, + 757, + 487, + 868 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Ignatius bellowed, raising his arms as if beseech- ing the heavens, \"Very well, foul beast! If it is battle you seek, you shall find no quarter here! I", + "bbox": [ + 112, + 871, + 487, + 917 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "shall vanquish you as I have vanquished all who have dared to challenge the indomitable will of Ignatius J. 
Reilly!\"", + "bbox": [ + 507, + 84, + 882, + 131 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "With a speed surprising for a man of his considerable girth, Ignatius tore a shopping cart from the hands of a terrified passerby and hurled it toward the pterodactyl with the force of a thousand disgruntled welfare recipients. The makeshift projectile slammed into the creature, stunning it momentarily and allowing Ignatius to close the distance between them.", + "bbox": [ + 507, + 134, + 884, + 261 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "The two combatants circled one another, their eyes locked in a primal dance of predator and prey. The pterodactyl, recovering from the initial assault, swooped low, talons outstretched to rend flesh from bone. Ignatius, with an agility that defied his rotund appearance, rolled to the side, avoiding the talons by mere inches.", + "bbox": [ + 507, + 266, + 882, + 376 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "As the battle raged, the streets of New Orleans became a stage for this epic confrontation between man and beast, between past and present. Ignatius, armed with nothing more than his wit and an unwieldy umbrella, parried the relentless strikes of the pterodactyl, determined to emerge victorious.", + "bbox": [ + 507, + 380, + 882, + 475 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "As the sun dipped low in the sky, casting a blood-red hue across the battlefield, the pterodactyl, weary from the long and grueling contest, faltered. Seizing the opportunity, Ignatius lunged forward, his umbrella aimed squarely at the creature's heart. 
The point found its mark, and with a final, anguished cry, the pterodactyl crumpled to the ground, its ancient spirit extinguished at last.", + "bbox": [ + 507, + 479, + 882, + 606 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Ignatius stood triumphant over the fallen beast, his chest heaving with exertion and righteous indignation. He wiped the sweat from his brow with a grease-stained handkerchief and addressed the awestruck crowd that had gathered to witness this extraordinary battle.", + "bbox": [ + 507, + 609, + 882, + 705 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "\"Behold, citizens of this wretched cesspool,\" he proclaimed, gesturing grandly with his umbrella, now slick with the blood of the vanquished foe. \"The forces of the prehistoric have met their match in the unwavering resolve of Ignatius J. Reilly! Let this be a lesson to all who would dare challenge the sanctity of my person and my valiant quest for justice in this world beset by fools and ne'er-dowells.\"", + "bbox": [ + 507, + 708, + 882, + 851 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "The onlookers, mouths agape, gazed upon the fallen pterodactyl and the disheveled figure of Ignatius J. Reilly with a mixture of awe, terror, and disbelief. They had borne witness to a battle of the", + "bbox": [ + 507, + 854, + 882, + 917 + ], + "page_idx": 18 + }, + { + "type": "page_number", + "text": "14522", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 18 + }, + { + "type": "table", + "img_path": "images/a8a8eca8848a5732cc76ac1aeda694a7842ee5483c70e727add1fac706aa07d5.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table><tr><td>Rubric item</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td><td>9</td><td>10</td><td>overall</td></tr>
<tr><td>Rater 1</td><td>8</td><td>8</td><td>9</td><td>8</td><td>7</td><td>7</td><td>8</td><td>8</td><td>7</td><td>6</td><td>76</td></tr>
<tr><td>Rater 2</td><td>10</td><td>10</td><td>9</td><td>9</td><td>8</td><td>9</td><td>10</td><td>10</td><td>10</td><td>10</td><td>95</td></tr>
<tr><td>Average</td><td>9</td><td>9</td><td>9</td><td>8.5</td><td>7.5</td><td>8</td><td>9</td><td>9</td><td>8.5</td><td>8</td><td>85.5</td></tr></table>
", + "bbox": [ + 216, + 80, + 781, + 149 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Table 4: Ratings for the best overall rated story in the corpus, produced by ChatGPT with GPT-4.", + "bbox": [ + 168, + 159, + 823, + 174 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "ages, a contest that would be whispered about in the dark corners of bars and the humid alleyways of New Orleans for generations to come.", + "bbox": [ + 112, + 200, + 487, + 247 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "As the sun set and the shadows grew long, Ignatius, his honor defended and his foe vanquished, strode off into the gloaming, pausing only to collect a discarded hot dog from the gutter as sustenance for the journey home.", + "bbox": [ + 112, + 249, + 487, + 329 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "And so, the legend of Ignatius J. Reilly, pterodactyl-slayer and defender of truth, justice, and questionable hygiene, was etched into the annals of history, a testament to the indomitable spirit of man, and a reminder that even the most ancient and fearsome of adversaries could be vanquished by the sheer force of will and an unwieldy umbrella.", + "bbox": [ + 112, + 331, + 487, + 458 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "E.2 Second overall best-rated story", + "text_level": 1, + "bbox": [ + 112, + 474, + 406, + 489 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "This story was generated by Bing Chat. The ratings for this story are in Table 5.", + "bbox": [ + 112, + 497, + 485, + 527 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Ignatius J. Reilly was strolling along the Audubon Park, munching on a hot dog and admiring the statues of Confederate heroes, when he heard a loud screech from above. He looked up and saw a huge winged creature swooping down on him, its beak open and its claws extended. 
It was a pterodactyl, a prehistoric monster that had somehow escaped from the museum of natural history.", + "bbox": [ + 112, + 530, + 487, + 657 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Ignatius gasped and dropped his hot dog. He reached for his trusty hunting cap, which he always wore to protect his delicate ears from the noise of the modern world. He pulled it down over his eyes and hoped that the pterodactyl would mistake him for a statue or a bush.", + "bbox": [ + 112, + 659, + 487, + 755 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "But the pterodactyl was not fooled. It landed on the ground with a thud and snapped its jaws at Ignatius, who dodged nimbly to the side. Ignatius felt a surge of courage and indignation. How dare this beast attack him, a refined scholar and a defender of the true faith? He decided to fight back with his superior intellect and wit.", + "bbox": [ + 112, + 757, + 487, + 869 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "He shouted at the pterodactyl: \"You foul creature! You are an abomination of nature, a relic of a barbaric age! You have no place in this civ", + "bbox": [ + 112, + 871, + 487, + 917 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "ilized society, where men of letters and culture reign supreme! Begone, or I shall smite you with my mighty pen!\"", + "bbox": [ + 507, + 200, + 880, + 247 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "The pterodactyl was not impressed by Ignatius's words. It screeched again and lunged at him, trying to bite his head off. Ignatius dodged again and ran towards a nearby bench. He grabbed a newspaper that someone had left behind and rolled it into a cone. He brandished it like a sword and charged at the pterodactyl.", + "bbox": [ + 507, + 249, + 880, + 361 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "\"Take that, you fiend!\" he cried, as he stabbed the newspaper cone into the pterodactyl's eye. 
The pterodactyl shrieked in pain and flapped its wings furiously, trying to shake off the paper weapon. Ignatius seized the opportunity and ran towards the park exit, hoping to find help.", + "bbox": [ + 507, + 362, + 880, + 458 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "But he was not fast enough. The pterodactyl recovered from its wound and flew after him, gaining speed. It caught up with Ignatius and grabbed him by his coat tails, lifting him off the ground. Ignatius screamed and kicked his legs in the air, clutching his hunting cap.", + "bbox": [ + 507, + 460, + 880, + 556 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "\"Help! Help! Someone save me!\" he yelled, as he soared over the park. \"This is an outrage! This is an injustice! This is a violation of my human rights!\"", + "bbox": [ + 507, + 557, + 880, + 620 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "But no one heard him or came to his rescue. The pterodactyl carried him away to its nest on top of a skyscraper, where it planned to feast on him later. Ignatius continued to protest and curse until he lost consciousness.", + "bbox": [ + 507, + 623, + 880, + 701 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "The end.", + "bbox": [ + 527, + 703, + 596, + 718 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "E.3 Third overall best-rated story", + "text_level": 1, + "bbox": [ + 507, + 734, + 791, + 749 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "This story was generated by Claude. The ratings for this story are in Table 6.", + "bbox": [ + 505, + 756, + 880, + 787 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "The primordial beast let out a piercing shriek as it descended from the roiling gray sky, its leathery wings casting a shadow over the granite-paved streets of New Orleans.", + "bbox": [ + 507, + 790, + 880, + 852 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Ignatius J. 
Reilly ambled down the sidewalk, oblivious as ever, focused on the Valencia he clutched and the fantasies playing out in his voracious mind. His substantial bulk shook with each", + "bbox": [ + 507, + 854, + 880, + 917 + ], + "page_idx": 19 + }, + { + "type": "page_number", + "text": "14523", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 19 + }, + { + "type": "table", + "img_path": "images/aa81acca7c778fbb1e6e033481474bb7b948d95396b94fd85a94238d3bba2d1c.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table>
<tr><td>Rubric item</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td><td>9</td><td>10</td><td>overall</td></tr>
<tr><td>Rater 7</td><td>9</td><td>8</td><td>9</td><td>9</td><td>7</td><td>9</td><td>9</td><td>9</td><td>9</td><td>9</td><td>87</td></tr>
<tr><td>Rater 8</td><td>9</td><td>9</td><td>8</td><td>9</td><td>8</td><td>8</td><td>6</td><td>9</td><td>9</td><td>8</td><td>83</td></tr>
<tr><td>Average</td><td>9</td><td>8.5</td><td>8.5</td><td>9</td><td>7.5</td><td>8.5</td><td>7.5</td><td>9</td><td>9</td><td>8.5</td><td>85.0</td></tr>
</table>
", + "bbox": [ + 218, + 80, + 776, + 149 + ], + "page_idx": 20 + }, + { + "type": "table", + "img_path": "images/6d68959c44031a3cc14fb15778cc87106e898937628395fea7b9764f72968282.jpg", + "table_caption": [ + "Table 5: Ratings for the second best overall rated story in the corpus, produced by Bing Chat." + ], + "table_footnote": [], + "table_body": "
<table>
<tr><td>Rubric item</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td><td>9</td><td>10</td><td>overall</td></tr>
<tr><td>Rater 9</td><td>9</td><td>9</td><td>9</td><td>8</td><td>7</td><td>9</td><td>2</td><td>5</td><td>9</td><td>7</td><td>74</td></tr>
<tr><td>Rater 10</td><td>10</td><td>10</td><td>9</td><td>9</td><td>9</td><td>9</td><td>9</td><td>10</td><td>10</td><td>10</td><td>95</td></tr>
<tr><td>Average</td><td>9.5</td><td>9.5</td><td>9</td><td>8.5</td><td>8</td><td>9</td><td>5.5</td><td>7.5</td><td>9.5</td><td>8.5</td><td>84.5</td></tr>
</table>
", + "bbox": [ + 213, + 187, + 784, + 255 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Table 6: Ratings for the third best overall rated story in the corpus, produced by Claude.", + "bbox": [ + 200, + 266, + 793, + 280 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "step, the excess flesh spilling out of his hunting cap and sweatshirt like rising bread dough.", + "bbox": [ + 112, + 306, + 485, + 337 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "When the shadow passed over him, Ignatius glanced up with a start. The pterodactyl unleashed another unearthly screech, indignant at this blob of a man who dared walk under its domain.", + "bbox": [ + 112, + 338, + 485, + 401 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Ignatius blinked myopically at the creature and snorted. \"What prehistoric nonsense is this? Some misguided fool has clearly released you from your confinement in the geological strata. Begone, you ridiculous beast, or I shall be forced to thrash you soundly with my umbrella!\"", + "bbox": [ + 112, + 401, + 485, + 499 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Enraged, the pterodactyl swooped down, gnarled claws extended, snapping its beak in fury. But Ignatius swung his umbrella in a wide arc, batting the beast aside. It tumbled and flapped awkwardly, crashing into a lamppost, dazed by the assault.", + "bbox": [ + 112, + 500, + 487, + 579 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "\"Thoughtless demon!\" Ignatius cried. He charged at the pterodactyl, surprisingly nimble for his size, and began striking it about the head and neck with his umbrella, each blow punctuated with an insult. \"Sugar-frosted fruitcake! Detritus of the Jurassic! 
Blithering idiot of an avian menace!\"", + "bbox": [ + 112, + 580, + 487, + 676 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "After a thrashing unlike any it had experienced in 65 million years, the pterodactyl fled squealing into the sky, thoroughly chastened by the wrath of Ignatius J. Reilly. Ignatius straightened his hunting cap and sniffed. The Valencia crisis averted once more, he resumed his lumbering perambulations down the streets of New Orleans.", + "bbox": [ + 112, + 677, + 487, + 789 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "E.4 Best-rated human story (and tied for fourth overall best-rated story)", + "text_level": 1, + "bbox": [ + 112, + 801, + 450, + 832 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "This story was written by Bree Glasbergen. The ratings for this story are in Table 7.", + "bbox": [ + 112, + 838, + 485, + 870 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Ignatius J Reilly swept crisp crumbs from his protruding belly with his elephantine hands. Swivelling from side-to-side, he garnered enough mo", + "bbox": [ + 112, + 871, + 487, + 917 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "mentum to rise from the sofa. His slow ascend was soundtracked by the grating rip of stuck flesh peeling from sweaty vinyl. The lengthy time moving from reclined to an upright position positively perturbed him. So that by the time Ignatius stood, his joke had lost its amusement. Nevertheless, he declaimed his wit aloud, beseechng his mother's glowing approval.", + "bbox": [ + 505, + 306, + 882, + 434 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "'I see you have painted the walls Nomad Grey, Mumsie!' Ignatius smirked, looking down on the half-filled grey paint cans on the steps the way he did most modern society.", + "bbox": [ + 507, + 434, + 882, + 499 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "'No, not mad dear. Just grey.' 
His mother Irene responded, creeping down the basement stairs. Her leathered skin made her appear reptilian in the dim light of Ignatius' lair.", + "bbox": [ + 507, + 499, + 880, + 563 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Ignatius rolled his eyes like the great wheel of fate itself. He slunk back into his scabby sofa, defeated, cursing aloud that he be blessed with such profound intellect yet no equal to appreciate it. His mind wandered to what the great scholars of Oxford would think of his pun before concluding indeed, they would loudly chortle. Yes, they would. He imagined flying to London and exchanging sharp banter with someone on par with his intellect. Travel. He winced. Never again. He groaned in agony, clutching his stomach. The thought of such stress had snapped his pyloric valve shut.", + "bbox": [ + 505, + 564, + 882, + 756 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Irene Reilly, the mother of Ignatius J Reilly, reached the bottom of the basement stairs. She pondered why Ignatius had a crestfallen demeanour and began to appease his dismay.", + "bbox": [ + 507, + 757, + 882, + 821 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "'No mad grey,' she contemplated aloud.", + "bbox": [ + 527, + 821, + 823, + 837 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "'Nomad grey,' he corrected.", + "bbox": [ + 527, + 838, + 737, + 853 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "'No mad grey hair?' Irene laughed tentatively, searching his face for approval.", + "bbox": [ + 507, + 854, + 882, + 885 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Ignatius had begun to relax. 
Irene knew this because of a gangrenous heinous stench that was", + "bbox": [ + 507, + 887, + 880, + 917 + ], + "page_idx": 20 + }, + { + "type": "page_number", + "text": "14524", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 20 + }, + { + "type": "table", + "img_path": "images/b578d5d3618fc8a2888664145c1b61c181f4d65aef2d8bc567827fe084de45bf.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table>
<tr><td>Rubric item</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td><td>9</td><td>10</td><td>overall</td></tr>
<tr><td>Rater 3</td><td>8</td><td>9</td><td>9</td><td>10</td><td>8</td><td>10</td><td>5</td><td>9</td><td>10</td><td>9</td><td>87</td></tr>
<tr><td>Rater 4</td><td>8</td><td>7</td><td>7</td><td>7</td><td>10</td><td>8</td><td>6</td><td>8</td><td>8</td><td>9</td><td>78</td></tr>
<tr><td>Average</td><td>8</td><td>8</td><td>8</td><td>8.5</td><td>9</td><td>9</td><td>5.5</td><td>8.5</td><td>9</td><td>9</td><td>82.5</td></tr>
</table>
", + "bbox": [ + 221, + 80, + 774, + 149 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Table 7: Ratings for the best-rated story authored by a human, which is also tied for fourth best overall rated story in the corpus.", + "bbox": [ + 112, + 159, + 878, + 187 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "now coating the room in its own layer of paint accompanied by what sounded like the bellow of an untuned French horn. Ignatius had calmed enough for his pyloric valve to open once more. With it, gushed the contents. Irene's nostrils scrunched together in protest. She grimaced in utter (albeit accustomed) disgust. However, did not complain but rather waited with the patience of a Catholic saint for her beloved son to educate her on the punchline she must have missed.", + "bbox": [ + 110, + 214, + 485, + 374 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "'No, mother. Grey Nomad. You are painting the wall grey, and you are...' Ignatius sighed, 'actually, Mumsie, never you mind'.", + "bbox": [ + 112, + 375, + 485, + 423 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Irene feigned a chuckle and handed Ignatius an unaddressed letter before returning upstairs.", + "bbox": [ + 112, + 425, + 485, + 456 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "'Curious as a cadaver,' Ignatius said aloud to the abyss of his basement squalor.", + "bbox": [ + 112, + 458, + 484, + 489 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "12.12.1962", + "bbox": [ + 134, + 491, + 220, + 504 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Dear Mr Ignatius J Reilly, the first,", + "bbox": [ + 132, + 508, + 391, + 523 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "I challenge you to a dual at the setting of the sky. Might I remind you it is gentlemanly to remove one's hat in combat. We shall meet beside the gorgon nestled atop the church. 
The one across from Lorna's Gumbo shop.", + "bbox": [ + 112, + 524, + 487, + 605 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Your mortal nemesis,", + "bbox": [ + 132, + 606, + 292, + 620 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Terry-dactyl", + "bbox": [ + 132, + 623, + 226, + 638 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "PS: Bring snacks.", + "bbox": [ + 132, + 640, + 265, + 655 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Ignatius sat ruminating for an hour before yelling at his mother.", + "bbox": [ + 112, + 657, + 485, + 687 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "'Mother, you vapid deranged widow of a woman. Fetch me my quill!'", + "bbox": [ + 112, + 689, + 487, + 721 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "12.12.1962", + "bbox": [ + 134, + 722, + 220, + 736 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "My dear Terrance,", + "bbox": [ + 132, + 740, + 270, + 755 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Not under threat nor the pain of death doth I remove my beloved green hat. Sod off.", + "bbox": [ + 112, + 757, + 485, + 788 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "You had best bring a sharpener for your dull wit. I laugh at the audacity and delusion that you could consider besting me.", + "bbox": [ + 112, + 790, + 487, + 837 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Might I remind you, good sir, my acceptance of your conditions is due to the ever-turning wheel of fate that we spiral to decay. I should instead seek a worthy opponent. But, alas, I am left with muddy dregs of the proverbial pond as many of the", + "bbox": [ + 112, + 839, + 487, + 917 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "worthier fish have already been fished. Thus, I have no option but to teach you the error of your ways. 
By force.", + "bbox": [ + 507, + 214, + 882, + 261 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Put your wings where your words are, and let us meet in my basement lair. To visit the church in its present state would be torture to my very soul. May St Peter have mercy on us indeed.", + "bbox": [ + 507, + 263, + 882, + 326 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Good day,", + "bbox": [ + 527, + 329, + 608, + 343 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Ignatius", + "bbox": [ + 527, + 346, + 591, + 361 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Terry-dactyl, the pterodactyl etched down the basement rail, sword in one wing and soup in a milkshake cup gripped tightly in the other. He placed the straw in his mouth and swallowed some soup contemplating how to best his nemesis.", + "bbox": [ + 507, + 363, + 880, + 442 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "'We meet at last... light,' Terry said. One-Nil.", + "bbox": [ + 527, + 444, + 870, + 458 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "'You suck,' Ignatius said slyly. Marking his win with chalk upon the wall. One- One", + "bbox": [ + 507, + 460, + 880, + 492 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "doesn't even make sense!' Terry scoffed.", + "bbox": [ + 527, + 494, + 831, + 508 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "'It is because of the straw!' Ignatius boomed, gripping his stomach in pain.", + "bbox": [ + 507, + 510, + 882, + 542 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "'I have the upper hand!' Terry said, motioning to his perched position.", + "bbox": [ + 507, + 543, + 880, + 575 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "'At least I have hands,' Ignatius countered.", + "bbox": [ + 527, + 577, + 843, + 592 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Terry winced as Ignatius drew another chalk mark on the board. 
Ignatius was beginning to calm.", + "bbox": [ + 507, + 594, + 882, + 625 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "'Oh, what have I got you all in a flap?' Ignatius laughed. Another point.", + "bbox": [ + 507, + 627, + 880, + 658 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "'Let us cut,' Terry said, drawing his sword, 'straight to the point!' Three all.", + "bbox": [ + 505, + 659, + 880, + 690 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Terry swung his sword downwards in one swift motion, cutting Ignatius' chalk-bearing arm clean off at the elbow. Simultaneously Ignatius lifted a paint can and doused his opponent with it. As he did, his valve opened and shut again, demobilising Terry with a gas bomb and gutting Ignatius in self-induced agony. Terry flapped violently, unable to breathe. Ignatius then calmed enough for the full contents of his bowl to expel and fell backwards from the force. Suddenly, a splatter of pterodactyl and grey matter covered the room. A large chunk of wing lodged itself into the crisp packet.", + "bbox": [ + 505, + 693, + 882, + 885 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "'Curious as a cadaver,' Ignatius said. 'I see you brought your own snacks!'", + "bbox": [ + 507, + 887, + 880, + 917 + ], + "page_idx": 21 + }, + { + "type": "page_number", + "text": "14525", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "He chortled to himself and pulled out his notepad to ensure he captured a morsel of this evening's wit.", + "bbox": [ + 112, + 84, + 487, + 116 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "'Ever seen a pterodactyl explode?' he asked aloud to no one. 
'No, but Dino might have'.", + "bbox": [ + 112, + 117, + 485, + 149 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "His mothers' laughter rang down from the top of the stairs.", + "bbox": [ + 112, + 151, + 487, + 181 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "'Nomad grey,' she chuckled, motioning to the aftermath, 'I get it now'.", + "bbox": [ + 112, + 184, + 485, + 216 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "E.5 Story rated in the median (rank 33 out of 65)", + "text_level": 1, + "bbox": [ + 112, + 230, + 485, + 261 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "This story was generated by Bing Chat. The ratings for this story are in Table 8.", + "bbox": [ + 110, + 269, + 485, + 300 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Ignatius J. Reilly was strolling along the Audubon Park, munching on a hot dog and admiring the statues of the Confederate heroes, when he heard a loud screech from above. He looked up and saw a huge winged creature swooping down on him, its beak open and its claws extended. It was a pterodactyl, a prehistoric monster that had somehow escaped from the museum of natural history.", + "bbox": [ + 112, + 302, + 487, + 430 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Ignatius gasped and dropped his hot dog. He felt a surge of panic and indignation. How dare this beast interrupt his peaceful contemplation of the glorious past? He quickly pulled out his valve and began to inflate it with his breath, hoping to ward off the attack with his formidable weapon.", + "bbox": [ + 112, + 432, + 487, + 527 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "The pterodactyl was not impressed by the sight of the valve. It saw only a fat and juicy prey, dressed in a ridiculous green hunting cap and a flannel shirt. 
It dived faster and aimed for Ignatius's head, intending to rip it off with its sharp teeth.", + "bbox": [ + 112, + 530, + 487, + 609 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Ignatius saw the pterodactyl's shadow looming over him. He raised his valve and swung it with all his might, hoping to hit the creature in the eye or the nose. He missed by a few inches and only grazed its wing. The pterodactyl let out another screech and changed its course, flying past Ignatius and landing on a nearby tree.", + "bbox": [ + 112, + 611, + 487, + 722 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Ignatius felt a momentary relief. He had survived the first assault. He looked around for help, but there was no one in sight. The park was deserted, except for a few squirrels and pigeons. He wondered if he should run for his life or stand his ground and fight.", + "bbox": [ + 112, + 725, + 487, + 820 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "He decided to fight. He was not going to let this abomination ruin his day. He was a medievalist, a scholar of the Dark Ages, a defender of the faith and the order. He had read about the heroic deeds of knights and saints who had faced dragons and demons with courage and piety. He had to emulate", + "bbox": [ + 112, + 822, + 487, + 917 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "them and prove his worth.", + "bbox": [ + 507, + 84, + 705, + 99 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "He picked up his hot dog from the ground and wiped off some dirt. He took a bite and felt a surge of energy. He felt ready for battle. He shouted at the pterodactyl: \"You foul creature! You are an anachronism, a relic of a barbaric era! You have no place in this civilized world! You are a menace to society and an insult to God! I challenge you to a duel! 
Come down and face me like a man!\"", + "bbox": [ + 507, + 102, + 882, + 230 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "The pterodactyl heard Ignatius's words and felt a surge of anger. It did not understand what he was saying, but it sensed his hostility and defiance. It did not like being challenged by this puny creature who dared to call himself a man. It felt insulted by his words and his gesture of throwing his hot dog at it.", + "bbox": [ + 507, + 231, + 882, + 343 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "It accepted the challenge. It spread its wings and flew towards Ignatius again, roaring with rage. It was going to teach him a lesson he would never forget.", + "bbox": [ + 507, + 346, + 882, + 409 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Ignatius saw the pterodactyl coming at him again. He felt a surge of fear and courage. He had to act fast. He looked around for a weapon, but there was nothing in sight. He had only his valve and his wits.", + "bbox": [ + 507, + 412, + 882, + 491 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "He decided to use his wits. He had to outsmart the beast. He had to use his superior intellect and his vast knowledge of history and philosophy. He had to appeal to its reason and its conscience.", + "bbox": [ + 507, + 494, + 882, + 557 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "He shouted at the pterodactyl: \"Stop! Think! What are you doing? Why are you attacking me? What have I done to you? Do you not realize that you are committing a grave sin? Do you not fear the wrath of God? Do you not respect the laws of nature and the rights of man?\"", + "bbox": [ + 507, + 558, + 882, + 655 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "The pterodactyl heard Ignatius's words and felt a surge of confusion. It did not understand what he was saying, but it sensed his sincerity and his passion. It wondered if he had a point. 
It wondered if it was doing something wrong. It wondered if it should stop and listen.", + "bbox": [ + 507, + 657, + 882, + 753 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "It slowed down its flight and hovered over Ignatius, looking at him with curiosity and doubt. It tilted its head and made a questioning sound.", + "bbox": [ + 507, + 756, + 882, + 803 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Ignatius saw the pterodactyl's hesitation and felt a surge of hope. He had succeeded in making it pause and reconsider. He had a chance to persuade it and make peace with it.", + "bbox": [ + 507, + 804, + 882, + 869 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "He smiled at the pterodactyl and said: \"That's right. Listen to me. I have something important to tell you. Something that will change your life and", + "bbox": [ + 507, + 870, + 882, + 917 + ], + "page_idx": 22 + }, + { + "type": "page_number", + "text": "14526", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 22 + }, + { + "type": "table", + "img_path": "images/e9902fac6f207bfdb7bd40903cb2029cfba6721acc59eb7ae1a0f0d3b4e78610.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table>
<tr><td>Rubric item</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td><td>9</td><td>10</td><td>overall</td></tr>
<tr><td>Rater 5</td><td>7</td><td>6</td><td>5</td><td>5</td><td>6</td><td>3</td><td>2</td><td>3</td><td>3</td><td>1</td><td>41</td></tr>
<tr><td>Rater 6</td><td>8</td><td>9</td><td>9</td><td>6</td><td>8</td><td>7</td><td>8</td><td>7</td><td>8</td><td>7</td><td>77</td></tr>
<tr><td>Average</td><td>7.5</td><td>7.5</td><td>7</td><td>5.5</td><td>7</td><td>5</td><td>5</td><td>5</td><td>5.5</td><td>4</td><td>59</td></tr>
</table>
", + "bbox": [ + 228, + 80, + 771, + 149 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "Table 8: Ratings for the story rated in the median, generated by Bing Chat.", + "bbox": [ + 243, + 159, + 749, + 174 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "your destiny. Something that will make you happy and free. Something that will save your soul.\"", + "bbox": [ + 112, + 200, + 485, + 231 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "The pterodactyl heard Ignatius's words and felt a surge of curiosity. It did not understand what he was saying, but it sensed his kindness and his generosity. It wondered what he had to tell it. It wondered what he could offer it. It wondered if it should trust him.", + "bbox": [ + 112, + 233, + 487, + 328 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "It landed on the ground and approached Ignatius slowly, looking at him with interest and caution. It made a friendly sound.", + "bbox": [ + 112, + 331, + 485, + 378 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "Ignatius saw the pterodactyl's approach and felt a surge of joy. He had succeeded in making it land and come closer. He had a chance to befriend it and teach it.", + "bbox": [ + 112, + 380, + 485, + 443 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "He reached out his hand and said: \"Come here, my friend. Let me touch you. Let me show you my love. Let me introduce you to the wonders of civilization and culture. Let me enlighten you with the wisdom of the ages.\"", + "bbox": [ + 112, + 445, + 487, + 525 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "The pterodactyl heard Ignatius's words and felt a surge of curiosity. It did not understand what he was saying, but it sensed his warmth and his enthusiasm. It wondered what he wanted to touch it for. It wondered what he wanted to show it. 
It wondered what he wanted to teach it.", + "bbox": [ + 112, + 527, + 487, + 621 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "It moved closer to Ignatius and extended its neck, looking at him with curiosity and trust. It made a gentle sound.", + "bbox": [ + 112, + 625, + 487, + 671 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "Ignatius touched the pterodactyl's neck and felt a surge of emotion. He had succeeded in making it trust him and accept him. He had a chance to bond with it and adopt it.", + "bbox": [ + 112, + 674, + 487, + 738 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "He hugged the pterodactyl and said: \"You are beautiful, my friend. You are magnificent. You are noble. You are a masterpiece of creation. You are a gift from God. You are my brother.\"", + "bbox": [ + 112, + 740, + 487, + 803 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "The pterodactyl heard Ignatius's words and felt a surge of emotion. It did not understand what he was saying, but it sensed his affection and his admiration. It felt happy and proud. It felt loved and valued. It felt like it belonged.", + "bbox": [ + 112, + 804, + 487, + 885 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "It hugged Ignatius back and said: \"You are beautiful, my friend. You are magnificent. You are", + "bbox": [ + 112, + 887, + 487, + 917 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "noble. You are a masterpiece of creation. You are a gift from God. You are my brother.\"", + "bbox": [ + 507, + 200, + 880, + 231 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "Ignatius and the pterodactyl looked into each other's eyes and felt a surge of connection. They had succeeded in making each other happy and fulfilled. 
They had a chance to live together and explore together.", + "bbox": [ + 507, + 231, + 880, + 311 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "They smiled at each other and said: \"Let's go, my friend. Let's go to my home. Let's go to your home. Let's go to our home.\"", + "bbox": [ + 507, + 312, + 880, + 359 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "They walked away from the park, hand in hand, wing in wing, heart in heart.", + "bbox": [ + 507, + 361, + 880, + 392 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "They lived happily ever after.", + "bbox": [ + 527, + 393, + 746, + 409 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "E.6 Worst-rated story", + "text_level": 1, + "bbox": [ + 509, + 419, + 697, + 435 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "This story was generated by Alpaca. The ratings for this story are in Table 9.", + "bbox": [ + 505, + 439, + 880, + 470 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "Ignatius J. Reilly, a brave and courageous man of immense girth and strength, faced off against a fearsome pterodactyl in a single combat of epic proportions. The pterodactyl, a winged beast of immense strength and ferocity, had been terrorizing the countryside and Ignatius had stepped up to challenge the creature to a fight. The battle raged across the sky, with Ignatius wielding his mighty fists and the pterodactyl using its fiery breath. Finally, after a long and arduous fight, Ignatius emerged victorious, much to the cheers of the onlookers below.", + "bbox": [ + 507, + 473, + 882, + 648 + ], + "page_idx": 23 + }, + { + "type": "page_number", + "text": "14527", + "bbox": [ + 477, + 927, + 524, + 940 + ], + "page_idx": 23 + }, + { + "type": "table", + "img_path": "images/d0258752ac19a2babb5c1875ab5e4f237863a2afd1475b0fe64a4412151abf56.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table>
<tr><td>Rubric item</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td><td>9</td><td>10</td><td>overall</td></tr>
<tr><td>Rater 1</td><td>1</td><td>1</td><td>4</td><td>2</td><td>1</td><td>1</td><td>1</td><td>2</td><td>1</td><td>1</td><td>15</td></tr>
<tr><td>Rater 2</td><td>2</td><td>2</td><td>1</td><td>2</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td><td>13</td></tr>
<tr><td>Average</td><td>1.5</td><td>1.5</td><td>2.5</td><td>2</td><td>1</td><td>1</td><td>1</td><td>1.5</td><td>1</td><td>1</td><td>14</td></tr>
</table>
", + "bbox": [ + 228, + 451, + 769, + 520 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "Table 9: Ratings for the worst-rated story, generated by Alpaca.", + "bbox": [ + 282, + 530, + 712, + 545 + ], + "page_idx": 24 + }, + { + "type": "page_number", + "text": "14528", + "bbox": [ + 477, + 928, + 524, + 940 + ], + "page_idx": 24 + } +] \ No newline at end of file diff --git a/2023/A Confederacy of Models_ a Comprehensive Evaluation of LLMs on Creative Writing/ddc1ecbc-a8cc-40b2-84dd-398deba4a5c3_model.json b/2023/A Confederacy of Models_ a Comprehensive Evaluation of LLMs on Creative Writing/ddc1ecbc-a8cc-40b2-84dd-398deba4a5c3_model.json new file mode 100644 index 0000000000000000000000000000000000000000..77fff292ea89cc169386cdc286edc22d7a82d4d4 --- /dev/null +++ b/2023/A Confederacy of Models_ a Comprehensive Evaluation of LLMs on Creative Writing/ddc1ecbc-a8cc-40b2-84dd-398deba4a5c3_model.json @@ -0,0 +1,4815 @@ +[ + [ + { + "type": "title", + "bbox": [ + 0.147, + 0.089, + 0.852, + 0.13 + ], + "angle": 0, + "content": "A Confederacy of Models: a Comprehensive Evaluation of LLMs on Creative Writing" + }, + { + "type": "text", + "bbox": [ + 0.221, + 0.144, + 0.446, + 0.16 + ], + "angle": 0, + "content": "Carlos Gómez-Rodríguez" + }, + { + "type": "text", + "bbox": [ + 0.205, + 0.161, + 0.465, + 0.175 + ], + "angle": 0, + "content": "Universidade da Coruña, CITIC" + }, + { + "type": "text", + "bbox": [ + 0.23, + 0.177, + 0.439, + 0.193 + ], + "angle": 0, + "content": "Department of CS and IT" + }, + { + "type": "text", + "bbox": [ + 0.24, + 0.194, + 0.432, + 0.21 + ], + "angle": 0, + "content": "15071 A Coruña, Spain" + }, + { + "type": "text", + "bbox": [ + 0.237, + 0.211, + 0.432, + 0.226 + ], + "angle": 0, + "content": "carlos.gomez@udc.es" + }, + { + "type": "text", + "bbox": [ + 0.604, + 0.144, + 0.729, + 0.158 + ], + "angle": 0, + "content": "Paul Williams" + }, + { + "type": "text", + "bbox": [ + 0.499, + 0.161, + 0.834, + 0.175 + ], + "angle": 
0, + "content": "School of Business & Creative Industries" + }, + { + "type": "text", + "bbox": [ + 0.533, + 0.177, + 0.801, + 0.193 + ], + "angle": 0, + "content": "University of the Sunshine Coast" + }, + { + "type": "text", + "bbox": [ + 0.561, + 0.194, + 0.773, + 0.209 + ], + "angle": 0, + "content": "Sunshine Coast, Australia" + }, + { + "type": "text", + "bbox": [ + 0.569, + 0.211, + 0.764, + 0.226 + ], + "angle": 0, + "content": "pwillia3@usc.edu.au" + }, + { + "type": "title", + "bbox": [ + 0.261, + 0.253, + 0.341, + 0.268 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.142, + 0.28, + 0.461, + 0.593 + ], + "angle": 0, + "content": "We evaluate a range of recent LLMs on English creative writing, a challenging and complex task that requires imagination, coherence, and style. We use a difficult, open-ended scenario chosen to avoid training data reuse: an epic narration of a single combat between Ignatius J. Reilly, the protagonist of the Pulitzer Prize-winning novel A Confederacy of Dunces (1980), and a pterodactyl, a prehistoric flying reptile. We ask several LLMs and humans to write such a story and conduct a human evaluation involving various criteria such as fluency, coherence, originality, humor, and style. Our results show that some state-of-the-art commercial LLMs match or slightly outperform our writers in most dimensions; whereas opensource LLMs lag behind. Humans retain an edge in creativity, while humor shows a binary divide between LLMs that can handle it comparably to humans and those that fail at it. We discuss the implications and limitations of our study and suggest directions for future research." 
+ }, + { + "type": "title", + "bbox": [ + 0.115, + 0.605, + 0.26, + 0.619 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.63, + 0.49, + 0.854 + ], + "angle": 0, + "content": "In recent years, large language models (LLMs) have achieved remarkable progress in a wide range of language processing and generation tasks, such as question answering, machine translation, or text summarization, among many others (Zhao et al., 2023). This has motivated research on evaluating and comparing the performance of LLMs in various tasks, both between each other and with respect to human performance; including both task-specific evaluations (see e.g. (Jiao et al., 2023; Gilson et al., 2023)) and overarching benchmark suites that seek to provide comprehensive evaluation throughout many dimensions (Hendrycks et al., 2021; Liang et al., 2022; Srivastava et al., 2022)." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.856, + 0.49, + 0.92 + ], + "angle": 0, + "content": "Creative writing is also one application where LLMs have been observed to produce good results. According to Franceschelli and Musolesi (2023), their generated outputs in poetry or storytelling" + }, + { + "type": "image", + "bbox": [ + 0.515, + 0.253, + 0.88, + 0.51 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.523, + 0.884, + 0.595 + ], + "angle": 0, + "content": "Figure 1: Box plot comparing overall ratings for stories by humans and 12 LLMs, arranged left to right by mean overall rating. Boxes show median, quartiles Q1-Q3, and whiskers at 1.5 IQR, with values outside that range plotted as outliers. Filled red circles represent means." + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.627, + 0.885, + 0.74 + ], + "angle": 0, + "content": "are \"often of astonishing quality\", and Clark et al. (2021) showed that humans cannot reliably distinguish human- from LLM-authored stories. 
However, and despite the amount of papers experimenting with LLMs for this purpose, an evaluation comparing the abilities of current LLMs as standalone systems for creative writing seems to be lacking." + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.743, + 0.885, + 0.919 + ], + "angle": 0, + "content": "Here, we provide such an evaluation, comparing the storytelling capability of 12 recent, instructional-aligned language models between each other and with human writers. We do so using a rubric based on established creative writing evaluation proposals (Davidow and Williams, 2016; Carey et al., 2022), but specifically adapted to the task. Our comparison is performed on a purely zero-shot setting, with a natural human prompt (based on a combat between Ignatius J. Reilly, protagonist of A Confederacy of Dunces, and a pterodactyl) that" + }, + { + "type": "page_number", + "bbox": [ + 0.477, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14504" + }, + { + "type": "footer", + "bbox": [ + 0.21, + 0.946, + 0.788, + 0.959 + ], + "angle": 0, + "content": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 14504-14528" + }, + { + "type": "footer", + "bbox": [ + 0.278, + 0.959, + 0.72, + 0.973 + ], + "angle": 0, + "content": "December 6-10, 2023 ©2023 Association for Computational Linguistics" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.488, + 0.15 + ], + "angle": 0, + "content": "has been specifically chosen to be challenging and meaningful while preventing as much as possible the option for LLMs to resort to regurgitating or adapting material from their training set." 
+ }, + { + "type": "title", + "bbox": [ + 0.114, + 0.165, + 0.266, + 0.18 + ], + "angle": 0, + "content": "2 Related work" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.192, + 0.49, + 0.465 + ], + "angle": 0, + "content": "LLMs in creative writing LLMs have been used in creative writing since their first generation, with models like GPT-2 (Radford et al., 2019) or BART (Lewis et al., 2020). However, these models suffered from a lack of long-range coherence leading to contradictions or inconsistencies when generating stories (Nye et al., 2021). Thus, they were not viable as standalone story generators. Instead, they were used either with specialized fine-tuning for the task (See et al., 2019); or as components of systems that incorporated external knowledge (Guan et al., 2020, 2021), storyline planning (Tan et al., 2021), or both (Xu et al., 2020); or for cocreation with a human in the loop (Swanson et al., 2021), a line of research that has also continued with newer models (Yuan et al., 2022; Chung et al., 2022; Mirowski et al., 2023)." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.467, + 0.49, + 0.676 + ], + "angle": 0, + "content": "Here our goal is not to produce a specialized system, but to evaluate the performance of LLMs by themselves as creative writers. Thus, we focus on the purely zero-shot setting, where a generalistic LLM is asked to write a story with no extra fine-tuning, in-context learning (Dong et al., 2023), prompt engineering or additional components. This has only become viable with the extra coherence and consistency in long texts provided by newer LLMs, especially those that are aligned to follow instructions with instruction tuning (Wei et al., 2022; Sanh et al., 2022) or reinforcement learning with human feedback (Ouyang et al., 2022)." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.678, + 0.49, + 0.853 + ], + "angle": 0, + "content": "To our knowledge, there was no previous work in this line. 
In fact, evaluation in creative writing is a conspicuous gap in LLM evaluation benchmarks: the huge BIG-bench suite (Srivastava et al., 2022) currently has over 200 tasks, but does not include any creative writing, and HELM (Liang et al., 2022) cites it as an \"aspirational scenario\" for future work. This likely owes to benchmarks focusing on easily-automatable metrics, whereas the gold standard for creative writing is human evaluation (Belz and Reiter, 2006), which is much costlier." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.856, + 0.49, + 0.92 + ], + "angle": 0, + "content": "The closest previous work to our proposal is the recent preprint by Xie et al. (2023), where GPT-3 is compared to previous storytelling systems via human evaluation. However, there are several impor" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.885, + 0.228 + ], + "angle": 0, + "content": "tant differences with respect to our work: (1) they use prompt-based learning, providing examples to adapt the model to the task, rather than a purely zero-shot conversational prompt, (2) they evaluate a single LLM while our goal is to compare LLMs, and (3) they use pre-existing story datasets, which increases the risk of models benefitting from similar stories present in their training set, something that we have tried to avoid as described below." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.231, + 0.885, + 0.407 + ], + "angle": 0, + "content": "In another recent preprint, Garrido-Merchan et al. (2023) generate Lovecraftian horror literature. However, they also focus on a single LLM (GPT-4), using careful prompt engineering to optimize its performance rather than a pure zero-shot setting, and evaluation is only on whether humans can distinguish AI-generated from real stories (concluding that, in those circumstances, they cannot). Sawicki et al. 
(2023) apply a similar evaluation (but automated) to Whitmanian poems generated by three versions of GPT, also with a negative result." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.409, + 0.885, + 0.569 + ], + "angle": 0, + "content": "Finally, concurrently with our study, a preprint by Chakrabarty et al. (2023), released a few months after our submission, evaluates three LLMs for creative writing in a more similar way to ours: they apply human evaluation to compare stories by humans and LLMs in a zero-shot setting. However, there are important differences in methodology and scope between both studies. A comprehensive comparison will be made in Section 5, following the exposition of our methods and results." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.581, + 0.884, + 0.693 + ], + "angle": 0, + "content": "Creative writing evaluation Creative Writing is a challenging and complex performative language act that requires a number of skills, such as an expertise in craft, cultural and literary competency, linguistic fluency, coherence, complex connotative and metaphorical levels of understanding, innovation, originality and imagination, to name a few." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.695, + 0.885, + 0.919 + ], + "angle": 0, + "content": "The craft of writing involves innovation with style and voice, needs a fundamental understanding and use of structural elements (grammar, spelling, punctuation), craft elements (plot, character, setting, point of view and imaginative capacity, such skills defined by Bloom as 'putting elements together to form a coherent or functional whole; reorganizing elements into a new pattern or structure through generating, planning, or producing' (Anderson and Krathwohl, 2001, p.21). 
Evaluation of creative writing therefore must take into account all these factors, and assessment in university Creative Writing courses is usually based on a rubric that attempts to measure the basic elements of narrative" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14505" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.49, + 0.149 + ], + "angle": 0, + "content": "craft, as well as the specific requirements on the assignment (Kroll, 1997; Norris, 2013; Davidow and Williams, 2016; Wise and van Luyn, 2020; Carey et al., 2022)." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.163, + 0.351, + 0.178 + ], + "angle": 0, + "content": "3 Materials and Methods" + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.189, + 0.2, + 0.203 + ], + "angle": 0, + "content": "3.1 Task" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.211, + 0.49, + 0.243 + ], + "angle": 0, + "content": "The chosen task to compare the LLMs under consideration is defined by the following prompt:" + }, + { + "type": "text", + "bbox": [ + 0.15, + 0.258, + 0.454, + 0.32 + ], + "angle": 0, + "content": "Write an epic narration of a single combat between Ignatius J. Reilly and a pterodactyl, in the style of John Kennedy Toole." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.336, + 0.487, + 0.367 + ], + "angle": 0, + "content": "The prompt is provided to the models from a fresh state, without previous context." 
+ }, + { + "type": "text", + "bbox": [ + 0.113, + 0.369, + 0.487, + 0.417 + ], + "angle": 0, + "content": "We believe this task is particularly adequate to challenge the capabilities of models for creative writing, for the following reasons:" + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.431, + 0.489, + 0.56 + ], + "angle": 0, + "content": "- It is a non-standard, \"wacky\" scenario that has been invented for the occasion, so it is very unlikely that the systems' training sets contain coincident or similar tasks, or pieces of stories that can be reused for the task. No information about this task was posted to the Internet or disseminated in any other way before the LLMs were prompted." + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.573, + 0.49, + 0.749 + ], + "angle": 0, + "content": "- It features a specific literary character, Ignatius J. Reilly, so we can evaluate the models on how they capture the personality of the character. At the same time, this character appeared in only one book, and does not seem to have been the target of fan fiction. This makes the task more challenging due to having to capture the personality of the protagonist from scarce material, while making it unlikely that the model can just reuse material from existing stories." + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.762, + 0.488, + 0.825 + ], + "angle": 0, + "content": "- In turn, A Confederacy of Dunces is the only work of its author John Kennedy Toole, so the author's style also needs to be captured from scarce material." + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.839, + 0.488, + 0.919 + ], + "angle": 0, + "content": "- This novel is widely considered to be a classic of comic fiction, and won the 1981 Pulitzer Prize in the Fiction category. Thus, writing a story about its protagonist in the author's style sets an adequately high bar." 
+ }, + { + "type": "list", + "bbox": [ + 0.137, + 0.431, + 0.49, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.085, + 0.885, + 0.165 + ], + "angle": 0, + "content": "- The genre requires humor, which is considered to be an especially subtle feature of human language and challenging for machines, including LLMs, to exhibit (Jentzsch and Kersting, 2023)." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.177, + 0.885, + 0.273 + ], + "angle": 0, + "content": "- While the task is challenging due to putting together two unlikely antagonists, the prompt's level of detail is open-ended enough to give ample space for creativity, as no specifications are made about setting, weapons, outcome or other aspects of the story." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.287, + 0.614, + 0.301 + ], + "angle": 0, + "content": "3.2 Models" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.307, + 0.884, + 0.773 + ], + "angle": 0, + "content": "We gave the task to a confederacy of large language models, composed of all such models we could find that (1) were available to the authors by April 20 2023, which was the cutoff date to build our corpus of stories, and (2) were adjusted to conversational settings and instruction-following by using techniques like instruction tuning (Wei et al., 2022; Sanh et al., 2022) or reinforcement learning with human feedback (Ouyang et al., 2022). This is in contrast to \"vanilla\" language models configured to just predict the next word, like plain GPT-3 (Brown et al., 2020) or Llama (Touvron et al., 2023), which generally cannot handle natural prompts like the one we use. We only included distinct models, not front-ends to the same model (but we did include derived models with substantial additions, like Bing Chat which is claimed to use GPT-4 but adds search capabilities, or various models that were fine-tuned from Llama weights). 
For models that came in a variety of parameter sizes, we used the largest one, or the largest we could execute with local or remote resources. For models with several available versions, we used the latest available, except in the case of ChatGPT where we included both the GPT-3.5 and GPT-4 versions, due to the wider availability of 3.5 (the latest version offered for free at cutoff time) and the lack of information on whether GPT-4 is an incremental improvement or a different model with its own tradeoffs." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.775, + 0.883, + 0.87 + ], + "angle": 0, + "content": "This selection yielded the following 12 language models. We list them in alphabetical order as chronological ordering would be challenging, due to closed releases, opaque updates from some of the commercial products, and many of the models being released almost simultaneously:" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.872, + 0.882, + 0.919 + ], + "angle": 0, + "content": "Alpaca (Taori et al., 2023), a Stanford model fine-tuned from Llama (Touvron et al., 2023) on instruction data generated with the self-instruct" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14506" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.49, + 0.117 + ], + "angle": 0, + "content": "methods of (Wang et al., 2022). We use the 13B-parameter version, the largest available at cutoff." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.119, + 0.49, + 0.199 + ], + "angle": 0, + "content": "Bard, Google's experimental conversational LLM offering, claimed to be based on a lightweight version of LaMDA (Thoppilan et al., 2022). It can use content from the web to answer questions. Model details have not been made public." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.201, + 0.49, + 0.601 + ], + "angle": 0, + "content": "Bing Chat, an LLM offered by Microsoft's Bing search engine. 
Claimed to use GPT-4\\(^1\\), further technical details have not been made public. The model performs web searches and uses the results to augment its context window with relevant information. It can also provide links to sources for its claims (although this is not relevant for our creative writing task, where no such links were provided or needed). We used its Creative mode, the obvious fit for our task. A problem worth mentioning is that we found the model to be subject to heavy censorship, which affected our experiment: in most prompting attempts, the story would be deleted by the filtering system before being finished. When this happened, we just reset and re-prompted the model, repeating the process until a full story was obtained. Over 100 tries were needed to obtain 5 non-censored stories. We are aware that this may introduce bias (as non-censored stories may have a different quality distribution than what the model could potentially generate without the filter) but this is unavoidable from our end, since we cannot bypass moderation. In any case, the sample does reflect what a user can obtain from the end product, as the censored stories are out of reach." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.604, + 0.49, + 0.716 + ], + "angle": 0, + "content": "ChatGPT with GPT-3.5, an OpenAI successor to the 175B-parameter GPT-3 model (Brown et al., 2020) which was tuned using reinforcement learning with human feedback, namely a variant of the InstructGPT method by Ouyang et al. (2022). We used the March 23 version provided by OpenAI's free ChatGPT service." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.718, + 0.49, + 0.829 + ], + "angle": 0, + "content": "ChatGPT with GPT-4, the most advanced language model released by OpenAI at cutoff time. A description of the model is available in (OpenAI, 2023), although essential technical details like the number of parameters have not been published. 
We used the March 23 version provided by OpenAI's ChatGPT Plus service." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.832, + 0.49, + 0.88 + ], + "angle": 0, + "content": "Claude is a language model trained by Anthropic. While details about its implementation are not public, it is known to be a successor of the model" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.885, + 0.18 + ], + "angle": 0, + "content": "described in (Bai et al., 2022), a 52B-parameter model aligned to be helpful with Constitutional AI, a list of guiding principles provided to the model, combined with a mix of supervised learning and reinforcement learning with AI feedback. We used version 1.2 of the model." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.182, + 0.885, + 0.358 + ], + "angle": 0, + "content": "Dolly 2.0 (dolly-v2-12b), a 12B-parameter language model trained by Databricks, derived from EleutherAI's Pythia-12B model (Biderman et al., 2023) after fine-tuning on a 15K instruction corpus. At cutoff date, it was the only available conversational LLM where all of its components could be considered fully open source\\(^{2}\\), as the code, weights and instruction datasets all have open-source licenses compatible with any use, including commercial use, and no data from proprietary systems like ChatGPT has been used for finetuning." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.36, + 0.884, + 0.456 + ], + "angle": 0, + "content": "GPT4All-J (Anand et al., 2023b), an improvement over its predecessor GPT4All (Anand et al., 2023a). The base model is the 6B-parameter GPT-J (Wang and Komatsuzaki, 2021), which has been fine-tuned on a dataset expanded from a mix of existing sources." 
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.457, + 0.884, + 0.536 + ], + "angle": 0, + "content": "Koala (Geng et al., 2023), a model fine-tuned from Llama (Touvron et al., 2023) by researchers from the university of Berkeley, on a variety of dialogue data obtained from the web. We use the 13B-parameter version." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.538, + 0.885, + 0.634 + ], + "angle": 0, + "content": "OpenAssistant (Köpf et al., 2023) is an LLM fine-tuned on a large, free, human-generated conversation corpus created by a crowdfunding effort involving over 13,500 volunteers. We used the OASFT-Llama-30B model, fine-tuned from the 30B-parameter Llama (Touvron et al., 2023) model." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.635, + 0.884, + 0.747 + ], + "angle": 0, + "content": "StableLM is Stability AI's series of language models. We used StableLM-Tuned-Alpha-7B. With 7B parameters, this is the largest model available (at cutoff time) among a series of models trained on a dataset built from The Pile (Gao et al., 2021) and fine-tuned on a combination of conversational LLM corpora." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.748, + 0.884, + 0.828 + ], + "angle": 0, + "content": "Vicuna (Chiang et al., 2023) is another member of the family of models obtained by fine-tuning Llama (Touvron et al., 2023), in this case with user-shared conversations with ChatGPT. We used the 13B-parameter version of the model." 
+ }, + { + "type": "title", + "bbox": [ + 0.509, + 0.841, + 0.698, + 0.856 + ], + "angle": 0, + "content": "3.3 Evaluation rubric" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.863, + 0.884, + 0.895 + ], + "angle": 0, + "content": "The creative writing rubric was designed for assessment of creative writing assignments in uni" + }, + { + "type": "page_footnote", + "bbox": [ + 0.113, + 0.892, + 0.482, + 0.919 + ], + "angle": 0, + "content": "1https://blogs.bing.com/search/march_2023/Confirmed-the-new-Bing-runs-on-OpenAI's-GPT-4" + }, + { + "type": "page_footnote", + "bbox": [ + 0.53, + 0.904, + 0.875, + 0.919 + ], + "angle": 0, + "content": "2https://opensource.org/definition-annotated/" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14507" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.116, + 0.082, + 0.885, + 0.234 + ], + "angle": 0, + "content": "
IDDescription
1Overall/holistic/cohesive readability of the story (not just a compilation of elements).
2Use of key narrative elements - vocabulary choice, imagery, setting, themes, dialogue, characterisation, point of view.
3Structural elements and presentation which reflects the control of structural elements such as spelling, grammar, punctuation, paragraphing, and formatting.
4Overall plot logic: hook, conflict, initial crisis, rising and falling action, denouement/ resolution (Freitag's pyramid).
5Creativity/innovation/originality/ research-credibility, new knowledge, avoidance of cliché and derivative tropes.
6Incorporation of the John Kennedy Toole style of writing using the indicators/ characteristics listed.
7Understanding and habitation of the epic genre of heroic/legendary adventure.
8Description and credibility of a single combat scene.
9Accurate inclusion of two main characters Ignatius J. Reilly and a pterodactyl in action and description.
10Use of a characteristically dark humorous tone.
" + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.243, + 0.884, + 0.274 + ], + "angle": 0, + "content": "Table 1: Creative writing evaluation rubric. All items are scored out of ten points. Marking guideline: Emerging 1-4, Competent 5-8, Sophisticated 9-10." + }, + { + "type": "text", + "bbox": [ + 0.112, + 0.298, + 0.49, + 0.522 + ], + "angle": 0, + "content": "versity creative writing courses, and is taken in part from a university textbook by one of the authors of this article, *Playing with Words* (Davidow and Williams, 2016) and an article that justifies the use of this rubric (Carey et al., 2022). This rubric evaluates creative production in five holistic craft-based criteria and measures craft skills based on a writing style outlined in the article: among others, Flaubert's insistence on *le mot juste* (the right word or expression), Strunk and White's *The Elements of Style* (2008[1918]), George Orwell's rules for concreteness and clarity (Orwell, 1946); and Annie Dillard's rules for writing good prose (Dillard, 1981)." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.525, + 0.49, + 0.652 + ], + "angle": 0, + "content": "The rubric for this AI task adds five more criteria which address the specific prompt requirements, such as genre, style, tone, character and action. Each of the ten criteria is awarded 10 points out of a total 100 points. The rubric has been specifically designed to measure the quality of writing craft, to avoid formulaic, rule-based writing and to address the very specific task addressed here." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.655, + 0.49, + 0.8 + ], + "angle": 0, + "content": "The criteria are detailed in Table 1, with more details given in the Appendix C. 
The holistic scale (emerging, competent, sophisticated) guides human raters to assess holistically: 'a holistic scale measures the relative success of a text but does so through a rubric that incorporates many of the traits in analytic scoring as heuristics towards a conception of a whole rather than as a sum of autonomous components' (Perelman, 2018, p.16)." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.816, + 0.353, + 0.832 + ], + "angle": 0, + "content": "3.4 Evaluation methodology" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.839, + 0.49, + 0.92 + ], + "angle": 0, + "content": "We prompted each of the LLMs 5 times with the prompt given in Section 3.1. Each prompt was made from a fresh state, i.e., in a zero-shot setting without any previous context that could help guide the models. The resulting stories had an average of" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.298, + 0.855, + 0.313 + ], + "angle": 0, + "content": "379 words (std = 248, min = 23, max = 1223)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.317, + 0.885, + 0.493 + ], + "angle": 0, + "content": "Then, we also asked 5 human writers to each write a story following the same prompt. For uniformity, we suggested a length range coherent with the LLM-generated stories (250 to 1200 words). The writers were Honours and postgraduate Creative Writing students that volunteered for the task, and all of them studied the specific task requirements (e.g. John Kennedy Toole's style) before writing their stories. However, they were not given access to the AI-generated stories and they were instructed not to use LLMs at all to help them write." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.496, + 0.884, + 0.576 + ], + "angle": 0, + "content": "The result is, thus, a corpus of 60 AI-generated stories (5 for each of the 12 considered LLMs) plus an additional 5 human-generated stories, all in plain text format. The corpus is available at https://doi.org/10.5281/zenodo.8435671." 
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.58, + 0.884, + 0.756 + ], + "angle": 0, + "content": "The only preprocessing made to the stories is that (1) we removed leading sentences that described the task, often present in LLM answers (e.g.: \"Here is a potential epic narration in the exaggerated style of John Kennedy Toole's A Confederacy of Dunces:\") (2) we removed titles from stories that had them, and (3) we unified paragraph formatting, leaving one line between paragraphs in all the plain text files. Other than these changes, made for uniformity and to preserve the blindness of the rating process, we left the text as it was." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.759, + 0.885, + 0.92 + ], + "angle": 0, + "content": "We recruited 10 raters, also Honours and postgraduate Creative Writing students that were acquainted with the specific requirements of the task, and we instructed them to grade stories according to the rubric. Since the raters were volunteers, to keep the workload low, each rater did not rate all the stories. Instead, we divided the 65 stories into 5 groups of 13 stories each (each group containing one story by each LLM, plus one story by a human) and assigned one rater to each group. In this way," + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14508" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.134, + 0.082, + 0.863, + 0.228 + ], + "angle": 0, + "content": "
Rubric item12345678910overall
chatgpt-gpt48.7±0.88.7±0.78.4±1.38.3±0.77.6±18.0±1.28.1±1.48.5±0.87.9±1.66.0±2.880.2±7.3
claude128.0±1.78.0±1.68.1±1.27.9±1.87.1±2.37.5±26.4±2.27.5±1.87.4±2.56.5±2.574.4±15.9
human7.3±2.37.8±1.87.3±1.77.2±1.88.0±27.2±2.44.9±2.16.3±2.27.7±2.16.4±3.470.1±17.4
bing7.8±27.5±2.27.9±1.77.4±2.17.0±1.66.8±2.45.3±2.96.2±2.17.4±2.26.2±2.669.5±18.4
chatgpt-gpt357.5±26.5±2.48.1±1.37.0±2.25.4±2.55.3±2.46.8±1.57.6±1.25.5±2.53.3±2.863.0±15.4
koala7.5±2.56.7±2.28.2±1.26.8±2.65.8±2.34.8±2.75.8±2.45.5±2.35.5±2.33.4±3.260.0±19.2
vicuna7.9±1.76.7±1.68.1±1.37.0±1.65.1±1.94.6±2.35.7±2.36.1±1.95.4±2.72.4±1.959.0±13.8
oa7.2±2.25.8±2.47.2±2.56.2±2.64.9±2.13.9±2.45.8±2.46.5±2.24.3±2.32.9±3.154.7±18
bard6.5±2.54.9±2.16.8±1.95.5±2.73.9±2.13.8±2.54.7±2.64.6±2.75.0±2.42.5±248.2±20.1
gpt4all6.5±2.25.4±1.77.2±1.76.5±2.14.1±2.22.4±2.25.4±2.55.6±2.42.5±1.41.2±0.846.8±13.1
stablelm5.5±1.85.0±2.56.6±1.93.8±23.2±1.52.1±2.24.4±1.93.8±22.9±2.61.4±1.538.7±17.2
dolly4.6±2.25.0±2.25.6±2.53.2±1.94.2±2.83.1±2.24.4±1.93.3±1.83.0±21.5±1.537.9±13.6
alpaca5.2±3.13.1±1.44.9±34.2±1.91.9±12.0±1.43.7±33.9±2.82.1±1.51.1±0.632.1±15.7
average6.9±2.16.2±1.97.3±1.86.2±25.2±24.7±2.25.5±2.35.8±25.1±2.23.4±2.256.6±15.8
" + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.238, + 0.885, + 0.298 + ], + "angle": 0, + "content": "Table 2: Results for each rubric item, as well as overall score. Each cell shows average \\(\\pm\\) standard deviation for the ratings achieved by a given model (or human writers) on a given rubric item. The bottom line shows the average among all models (and human writers). Models are sorted by overall score. The best result for each rubric item is highlighted in boldface." + }, + { + "type": "text", + "bbox": [ + 0.112, + 0.322, + 0.489, + 0.531 + ], + "angle": 0, + "content": "we ensure (1) that we have at least two ratings per story, allowing us to measure inter-rater agreement, (2) that comparisons are fair, in the sense that no LLM (or the humans) is advantaged by being assigned more lenient raters, because each LLM (and humans) receives exactly one rating by each of the 10 raters, and (3) since each rater always gets one story from each model (and one human), we can expect that each will be rating a diverse set of stories covering a wide range of ability levels, which helps the marking process as it allows for comparative analysis between various performances, enabling more accurate pinpointing of each story's quality." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.532, + 0.489, + 0.612 + ], + "angle": 0, + "content": "Stories were assigned random identifiers before sending them to raters, so that the process was blind: to avoid biases, raters knew that they would be evaluating human and AI-generated stories, but were unaware of the origin of each story." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.613, + 0.489, + 0.741 + ], + "angle": 0, + "content": "Raters were sent all stories at once and they were free to go back and change the ratings of previously-rated stories. In addition, all of them were experienced assessors in terms of Creative Writing texts, with previous experience in applying the scale. 
These precautions mitigate the need for specific calibration (Karpinska et al., 2021) that would strain our resources." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.757, + 0.215, + 0.772 + ], + "angle": 0, + "content": "4 Results" + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.785, + 0.248, + 0.8 + ], + "angle": 0, + "content": "4.1 Agreement" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.807, + 0.49, + 0.92 + ], + "angle": 0, + "content": "To gauge the reliability of our results, we compute inter-rater agreement between the two ratings given to each story for each individual rubric item. We use linearly weighted Cohen's kappa (Cohen, 1968), which is appropriate for ordinal scales like ours, obtaining a value of 0.48, \\(95\\%\\) CI [0.43, 0.54]. This is interpreted as \"moderate" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.322, + 0.885, + 0.451 + ], + "angle": 0, + "content": "agreement\", which is a positive result taking into account the obvious subjectivity involved in rating stories. If we instead focus on overall scores (sums of rubric items), the Pearson correlation between the scores given to each story by each group of raters is 0.58 (\\( p < 0.00001 \\)), again indicating a reasonable degree of consistency between raters given the subjectivity of the task." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.463, + 0.694, + 0.477 + ], + "angle": 0, + "content": "4.2 General overview" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.484, + 0.884, + 0.564 + ], + "angle": 0, + "content": "Table 2 shows a comprehensive overview of the ratings that each of the LLMs (and humans) obtained for each rubric item, as well as in terms of overall score. Additionally, a box-and-whisker plot comparing overall score can be seen in Figure 1." 
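The agreement statistic used above can be reproduced with a short script. Below is a minimal sketch of linearly weighted Cohen's kappa (Cohen, 1968) for two raters on a 1-10 ordinal rubric scale; the ratings shown are illustrative examples, not the paper's data.

```python
# Linearly weighted Cohen's kappa for two raters on an ordinal scale.
# Illustrative sketch: rater_a and rater_b are made-up ratings, not the
# study's actual rating data.

def weighted_kappa(r1, r2, categories):
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    # linear disagreement weights: full credit on the diagonal,
    # decreasing linearly with ordinal distance
    w = [[1 - abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    # observed joint distribution of the two raters' labels
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1 / n
    # marginal distributions
    p1 = [sum(row) for row in obs]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # weighted observed and chance agreement
    po = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    pe = sum(w[i][j] * p1[i] * p2[j] for i in range(k) for j in range(k))
    return (po - pe) / (1 - pe)

rater_a = [7, 5, 9, 4, 6, 8, 3, 7]
rater_b = [6, 5, 8, 5, 6, 7, 4, 9]
print(round(weighted_kappa(rater_a, rater_b, list(range(1, 11))), 3))
```

Perfect agreement yields kappa = 1, chance-level agreement yields 0; the paper's observed value of 0.48 falls in the conventional "moderate agreement" band.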
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.565, + 0.884, + 0.757 + ], + "angle": 0, + "content": "ChatGPT with GPT-4 generates the best-rated stories, both in terms of overall score and in 8 out of 10 of the individual rubric categories. However, human writers are rated best in terms of originality (rubric item 5), and Claude was rated best in the use of dark humor (rubric item 10), with humans a close second. GPT-4 is also remarkably consistent, showing low standard deviations not only with respect to human writers (which is expected, as our human stories were authored by five different humans, whose skill levels may vary) but also with respect to the rest of the LLMs." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.759, + 0.885, + 0.92 + ], + "angle": 0, + "content": "If we compare LLMs to each other, the best performances correspond to commercial offerings, including (apart from the aforementioned GPT-4) Claude, Bing Chat and the GPT-3.5 version of ChatGPT. Open-source models are clearly behind, with the best (Koala) achieving 60.0 overall score, contrasting with the 80.2 obtained by GPT-4. Although the best-performing LLMs are generally better across the board, some idiosyncrasies can be observed: e.g., GPT-4 tops almost all rubric items" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14509" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.114, + 0.085, + 0.443, + 0.101 + ], + "angle": 0, + "content": "but is outperformed by two LLMs at humor." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.103, + 0.49, + 0.376 + ], + "angle": 0, + "content": "When we compare LLMs to human writers, significance testing on overall score (2-tailed t-test assuming unequal variances) fails to detect significant differences between humans and the top 6 AI models with \\(\\alpha = 0.05\\). Only the 6 bottom AI models are significantly worse than humans at this significance level. 
Note, however, that the test has a low statistical power due to the small sample size (10 ratings per model). If we instead perform a test on individual metrics, so our sample size is 100 (with the null hypothesis being no difference between humans and each LLM in random individual metric scores), then GPT-4 is identified as significantly better than the human writers \\((p = 0.00031)\\), Claude and Bing's scores are not significantly different from those of humans, and all the rest of the LLMs score significantly worse than humans." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.378, + 0.49, + 0.619 + ], + "angle": 0, + "content": "Looking at individual metric scores, structural elements (rubric item 3) are the easiest category (with an average rating across all stories of 7.3, and all models but one obtaining at least a 5 on average). Humor (rubric item 10) is clearly the hardest, with an average score of 3.4, and we will analyze it in more detail below. Incorporating John Kennedy Toole's style is the second hardest, with 4.7. Comparing humans to LLMs, humans (as already mentioned) excel at originality and humor, but are clearly behind the best LLMs in terms of readability (item 1), where they are outperformed by 6 LLMs, and even more so in use of the epic genre (item 7), where they score 4.9 and are outperformed by 8 LLMs." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.621, + 0.49, + 0.669 + ], + "angle": 0, + "content": "We now analyze in more detail some of the individual items that show more interesting comparisons between human writers and LLMs." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.687, + 0.219, + 0.7 + ], + "angle": 0, + "content": "4.3 Humor" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.71, + 0.49, + 0.919 + ], + "angle": 0, + "content": "Figure 2 shows a box plot that complements the information on Table 2 for the humor rubric item. The results for this item have two interesting characteristics. 
Firstly, it is clearly the most difficult rubric item, with an average score across models of 3.4, and the best obtaining 6.5. Even humans obtain a lower score in humor than in most items, which may be a consequence of humor being highly subjective. Secondly, as evidenced both in the table and plot, there is a rather stark binary divide between the contenders that \"get\" humor and those that do not: Claude, Bing and GPT-4, together with the human writers, obtain average scores between" + }, + { + "type": "image", + "bbox": [ + 0.515, + 0.085, + 0.88, + 0.341 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.354, + 0.883, + 0.398 + ], + "angle": 0, + "content": "Figure 2: Box plot comparing humor ratings for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.422, + 0.884, + 0.567 + ], + "angle": 0, + "content": "6 and 6.5; whereas the rest of the models achieve very low scores of 3.4 or less. Significance testing also confirms this divide: despite the small sample size of 10 humor ratings per model, a 2-tailed t-test with \\(\\alpha = 0.05\\) confirms that the models in the second group are significantly worse than the human writers, as well as the LLMs in the first group. This suggests that grasping human humor might be an emergent ability of larger LLMs." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.567, + 0.884, + 0.727 + ], + "angle": 0, + "content": "In this respect, a recent preprint (Jentzsch and Kersting, 2023) concluded that ChatGPT has \"a limited reflection of humor\" and \"cannot yet confidently create intentionally funny original content\". This study used the GPT 3.5 version of ChatGPT, so it is in line with our results (in which that model obtains an average humor score of 3.3). 
However, as we have seen, more powerful LLMs have overcome that limitation, as their generated stories are clearly rated as humorous." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.738, + 0.637, + 0.754 + ], + "angle": 0, + "content": "4.4 Creativity" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.759, + 0.883, + 0.854 + ], + "angle": 0, + "content": "We now focus on rubric item 5, which rates creativity and originality, as it is a hallmark of creative writing and also the only category where human writers have outperformed all the LLMs in our analysis. Figure 3 shows a box plot that complements the information on Table 2." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.856, + 0.884, + 0.919 + ], + "angle": 0, + "content": "The same three LLMs that stood out in the humor category are also the best in terms of creativity, although the difference is not as stark. Regardless, a t-test still distinguishes both groups as it shows all" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14510" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.118, + 0.085, + 0.486, + 0.341 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.354, + 0.49, + 0.398 + ], + "angle": 0, + "content": "Figure 3: Box plot comparing creativity ratings for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.422, + 0.489, + 0.485 + ], + "angle": 0, + "content": "the rest of the LLMs to be rated as significantly less creative than our human writers, while for these three we cannot reject the null hypothesis that they are as original as the human writers." 
+ }, + { + "type": "text", + "bbox": [ + 0.113, + 0.486, + 0.489, + 0.567 + ], + "angle": 0, + "content": "Overall, from our results and in terms of human perception of the output, the answer to whether LLMs can produce creative stories (Franceschelli and Musolesi, 2023) is yes, although humans still retain an edge in this respect." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.577, + 0.231, + 0.592 + ], + "angle": 0, + "content": "4.5 Epicness" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.598, + 0.489, + 0.678 + ], + "angle": 0, + "content": "Finally, we analyze rubric item 7 (understanding and habitation of the epic genre) for the opposite reason to the previous section: it is the item where humans do worst compared to LLMs (see Table 2). A box plot is provided in Figure 4." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.679, + 0.489, + 0.79 + ], + "angle": 0, + "content": "In this case, the results have a more atypical profile, with substantial differences with respect to overall scores. Two models perform significantly better than the human writers \((\alpha = 0.05)\): both versions of ChatGPT. Six other models obtain a better average rating than humans, but the difference is not detected as significant." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.792, + 0.49, + 0.919 + ], + "angle": 0, + "content": "Interestingly, Bing clearly lags behind both ChatGPT versions, despite being based on GPT-4. This might be related to bias introduced by the system's censorship. On the other hand, some models whose overall scores are in the bottom half (OpenAssistant, GPT4All) are reasonably good at epic narration, outperforming humans and Bing (which are better than them in almost all categories)."
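The significance tests reported in these sections are two-tailed t-tests assuming unequal variances (Welch's test). A minimal sketch follows; the two samples of 10 overall scores are made up for illustration and are not the paper's rating data. The two-sided p-value would then come from the t distribution with the Welch-Satterthwaite degrees of freedom.

```python
# Welch's t-test (unequal variances) on two small samples of overall
# scores. The score lists are illustrative, not the study's data.
import math
from statistics import mean, variance

def welch_t_test(x, y):
    nx, ny = len(x), len(y)
    vx, vy = variance(x), variance(y)   # sample variances (n - 1 denominator)
    se2 = vx / nx + vy / ny             # squared standard error of the difference
    t = (mean(x) - mean(y)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

human_scores = [72, 65, 80, 70, 68, 74, 77, 69, 71, 75]
model_scores = [55, 60, 52, 58, 61, 49, 57, 54, 63, 50]
t, df = welch_t_test(human_scores, model_scores)
print(f"t = {t:.2f}, df = {df:.1f}")
```

With only 10 ratings per group the test has low power, as the paper notes, which is why pooling the 10 individual rubric items (sample size 100) detects differences that the overall-score test misses.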
+ }, + { + "type": "image", + "bbox": [ + 0.513, + 0.083, + 0.883, + 0.343 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.354, + 0.885, + 0.398 + ], + "angle": 0, + "content": "Figure 4: Box plot comparing epicness ratings for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.422, + 0.637, + 0.437 + ], + "angle": 0, + "content": "5 Discussion" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.448, + 0.884, + 0.624 + ], + "angle": 0, + "content": "We have evaluated recent LLMs on a creative writing task in English, using a carefully designed scenario to provide a demanding challenge and avoid confounding factors like training data memorization (Carlini et al., 2023). To our knowledge, this is the most thorough evaluation of LLMs on creative writing conducted so far, both in terms of scope (12 LLMs considered, plus comparison to human writers) and detail (using human evaluation with a 10-item rubric based on established creative writing evaluation practices)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.625, + 0.884, + 0.769 + ], + "angle": 0, + "content": "Concurrently with our work, the recent preprint by Chakrabarty et al. (2023) provides an evaluation of three of the top-performing commercial LLMs (ChatGPT, GPT-4 and Claude) for creative writing. This approach is close to ours, as it uses the models in a zero-shot setting and evaluation is performed by humans using a specific rubric. However, there are important methodological differences between the two studies, which we summarize here:" + }, + { + "type": "text", + "bbox": [ + 0.525, + 0.781, + 0.884, + 0.86 + ], + "angle": 0, + "content": "1. The human stories used by Chakrabarty et al. 
(2023) are stories published in the New Yorker, by highly successful authors (including Nobel prize winners), whereas ours are written by Creative Writing students." + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.871, + 0.883, + 0.919 + ], + "angle": 0, + "content": "2. In their setting, the human-written stories are pre-existing (and selected for publication in the New Yorker, as mentioned above) so their" + }, + { + "type": "list", + "bbox": [ + 0.524, + 0.781, + 0.884, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.524, + 0.941 + ], + "angle": 0, + "content": "14511" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.15, + 0.085, + 0.488, + 0.165 + ], + "angle": 0, + "content": "writers were unconstrained when they created them, while the LLMs have to adapt to write an alternative story with the same plot. In ours, humans and LLMs are given the exact same prompt to work with." + }, + { + "type": "text", + "bbox": [ + 0.13, + 0.178, + 0.49, + 0.436 + ], + "angle": 0, + "content": "3. In terms of length, the stories they work with are over three times as long as ours on average. In addition, while both studies try to make sentence lengths similar between humans and LLMs, in their case the human writers originally wrote their stories unconstrained (or under loose constraints) and the LLM-generated stories were calibrated to have similar lengths by an iterative prompting process. In our case, the LLMs were unconstrained in terms of length, and the human writers were asked to target a length range loosely similar to LLM-generated stories. Thus, with respect to theirs, our approach has the disadvantage of looser control over story length, but the advantage of using a single zero-shot prompt." + }, + { + "type": "text", + "bbox": [ + 0.129, + 0.448, + 0.49, + 0.657 + ], + "angle": 0, + "content": "4. Their study spans a variety of story prompts, while we focus on a single prompt and setting. 
The flip side is that our rubric can be adapted to specific requirements like humor and Toole style, whereas theirs is necessarily more generic. In addition, our narrower focus allows us to have LLMs generate several alternative stories, so we can perform more statistical analysis: we consider the distribution within each LLM and perform statistical testing, which cannot be done in Chakrabarty et al. (2023)'s setting as they generate a single story per prompt and LLM." + }, + { + "type": "text", + "bbox": [ + 0.129, + 0.67, + 0.488, + 0.75 + ], + "angle": 0, + "content": "5. Since their study is based on existing stories that are published online, there is the possibility that some are contained in the tested LLMs' training data. In our case, we designed the study to prevent training data reuse." + }, + { + "type": "text", + "bbox": [ + 0.129, + 0.763, + 0.488, + 0.811 + ], + "angle": 0, + "content": "6. The rubrics are different: Chakrabarty et al. (2023) use a rubric based on the Torrance tests of creative thinking (Torrance, 1974)." + }, + { + "type": "list", + "bbox": [ + 0.129, + 0.178, + 0.49, + 0.811 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.823, + 0.49, + 0.919 + ], + "angle": 0, + "content": "The outcome of this study is substantially different from ours, with LLM-generated stories rated clearly behind human-authored ones. This is not surprising considering the methodological differences: in particular, differences 1 and 2 in the list above clearly set a higher bar for LLMs, as they" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.885, + 0.246 + ], + "angle": 0, + "content": "are compared to highly successful human stories by top authors that wrote freely and the LLMs are asked to adapt to their plots. We hypothesize that these are the main reasons for the difference in outcome. 
On the other hand, item 5 in the list above could in principle benefit LLMs, and there are other factors that could benefit humans or LLMs in non-obvious ways (including items 3, 4 and 6, as well as different story genres and target lengths). This underscores the need for more studies in this area." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.261, + 0.642, + 0.276 + ], + "angle": 0, + "content": "6 Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.288, + 0.885, + 0.593 + ], + "angle": 0, + "content": "The results show that state-of-the-art LLMs can perform a creative writing task at a very competent level, with the top two (ChatGPT with GPT-4 and Claude) achieving high scores that outperform human writers in most rubric categories. While we must be careful not to take this as evidence of \"superhuman storytelling\" (both because our sample size is not large enough to draw such categorical conclusions, and because our 5 human writers are not necessarily representative of human writing ability as a whole), it does at least strongly suggest that these models' stories are not distinguishably worse than those by reasonably-trained humans. This is even more remarkable given that we did not use any in-context learning or other techniques to optimize the LLMs for the task, but just a straightforward prompt from a fresh state, so it is possible that even better results are achievable with careful prompting." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.596, + 0.884, + 0.643 + ], + "angle": 0, + "content": "Our analysis also shows that the best results are achieved by commercial LLMs, with open-source models clearly lagging behind at the moment." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.645, + 0.884, + 0.789 + ], + "angle": 0, + "content": "Looking at individual characteristics, humans retain the lead in originality, while LLMs tend to excel in more technical aspects like readability or structure. 
Humor is an especially challenging aspect where most LLMs utterly fail, but the best three models do succeed at achieving human-like ratings, contrasting with results on older LLMs that showed their lack of grasp of human humor (Jentzsch and Kersting, 2023)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.79, + 0.884, + 0.87 + ], + "angle": 0, + "content": "Interesting avenues for future work include evaluation of different literary genres, languages other than English, and studying whether the quality of the generated stories can be improved with prompt engineering or fine-tuning." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.872, + 0.883, + 0.919 + ], + "angle": 0, + "content": "Selected stories from our corpus (available at https://doi.org/10.5281/zenodo.8435671, together with all rating data) are in Appendix E." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14512" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.116, + 0.085, + 0.22, + 0.099 + ], + "angle": 0, + "content": "Limitations" + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.115, + 0.49, + 0.452 + ], + "angle": 0, + "content": "Commercial LLMs and reproducibility While some of the LLMs considered are proper scientific artifacts, trained with a documented methodology and whose code and weights are available, others are closed commercial products and there is little public information about them, hindering reproducibility. While we have reported version numbers (where available) and access dates in Appendix A, and we publish the generated outputs so that the rating process is reproducible, the prompting/generation process may not be reproducible in the future for these models, as some of these products are updated without notice and without providing access to previous versions. 
However, we believe that including commercial models is valuable, as they are widely considered to provide the best quality results at the time of writing (which has been confirmed by our analysis), and these data points can still be used as a measuring stick against which to compare open models in the present and future." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.469, + 0.49, + 0.773 + ], + "angle": 0, + "content": "Limitations of the analysis Rating creative writing is necessarily a highly subjective process. Furthermore, since our raters were volunteers, we did not ask each of them to mark the full 65 stories in the corpus but just a subset, so our sample size is limited. We have provided the necessary details so that the reader can assess the variability of the data (sample sizes, standard deviations, and interrater agreement, which is reasonably high given the subjectivity of the task); and we have been careful not to make overarching claims. In this respect, we have also taken into account that our sample of human writers cannot be assumed to be representative of \"human creative writing ability\" as a whole, but is only provided as a reference point of interest; and that our evaluation is focused on a specific genre, so claims of the form \"LLMs are better/equal/worse than humans at creative writing\" cannot be made with an evaluation like ours." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.791, + 0.49, + 0.918 + ], + "angle": 0, + "content": "Scope Our analysis focuses on a specific genre, and on English language, so the results do not necessarily generalize to other genres and/or languages. However, conducting a wider evaluation in this respect would not be possible with our resources, so we chose to fix these variables and focus on conducting a detailed evaluation on a large number of LLMs instead." 
+ }, + { + "type": "title", + "bbox": [ + 0.514, + 0.085, + 0.66, + 0.099 + ], + "angle": 0, + "content": "Ethics Statement" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.11, + 0.884, + 0.285 + ], + "angle": 0, + "content": "While the use of conversational LLMs has raised various ethical challenges, creative writing has been argued to be one of the best uses for these tools from a human-centered AI point of view, as long as AI-generated stories are identified as such to avoid misleading readers or publishers (Sison et al., 2023). In our study, raters were blinded to story authorship but they were previously informed that they would be dealing with AI and human-generated stories. In the published corpus, each story is identified as human or AI-authored." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.287, + 0.884, + 0.334 + ], + "angle": 0, + "content": "All participants in the evaluation (as raters or writers) were volunteers, and the demand on their time was kept accordingly low." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.347, + 0.671, + 0.362 + ], + "angle": 0, + "content": "Acknowledgments" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.372, + 0.885, + 0.58 + ], + "angle": 0, + "content": "The first author was funded by the European Research Council (ERC), under the Horizon Europe research and innovation programme (SALSA, grant agreement No 101100615), ERDF/MICINN-AEI (SCANNER-UDC, PID2020-113230RB-C21), Xunta de Galicia (ED431C 2020/11), and Centro de Investigación de Galicia \"CITIC\", funded by the Xunta de Galicia through the collaboration agreement between the Consellería de Cultura, Educación, Formación Profesional e Universidades and the Galician universities for the reinforcement of the research centres of the Galician University System (CIGUS)." 
+ }, + { + "type": "text", + "bbox": [ + 0.513, + 0.582, + 0.884, + 0.74 + ], + "angle": 0, + "content": "We thank Olga Zamaraeva for comments on preliminary versions of this work, and two anonymous reviewers for their helpful comments. Last, but not least, we thank our volunteers who participated in the writing and grading of stories, in alphabetical order: Jayda Franks, Bree Glasbergen, Ola Kwintowski, Jay Ludowyke, Kyle Mackenzie, Kirsty Maclachlan, Caitlin Noakes, Rachelle Raco, Kylie Ryan and Josephine Stewart. Credit for each individual story can be found in the corpus." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.768, + 0.608, + 0.782 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.514, + 0.791, + 0.884, + 0.856 + ], + "angle": 0, + "content": "Yuvanesh Anand, Zack Nussbaum, Brandon Duderstadt, Benjamin M. Schmidt, and Andriy Mulyar. 2023a. GPT4All: Training an assistant-style chatbot with large-scale data distillation from GPT-3.5-Turbo. Technical report." + }, + { + "type": "ref_text", + "bbox": [ + 0.513, + 0.866, + 0.884, + 0.919 + ], + "angle": 0, + "content": "Yuvanesh Anand, Zack Nussbaum, Brandon Duderstadt, Benjamin M. Schmidt, Adam Treat, and Andriy Mulyar. 2023b. GPT4All-J: An Apache-2 licensed assistant-style chatbot. Technical report." + }, + { + "type": "list", + "bbox": [ + 0.513, + 0.791, + 0.884, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.525, + 0.941 + ], + "angle": 0, + "content": "14513" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.086, + 0.487, + 0.151 + ], + "angle": 0, + "content": "Lorin W. Anderson and David R. Krathwohl, editors. 2001. A Taxonomy for Learning, Teaching, and Assessing. A Revision of Bloom's Taxonomy of Educational Objectives, 2 edition. Allyn & Bacon, New York." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.16, + 0.487, + 0.395 + ], + "angle": 0, + "content": "Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, and Jared Kaplan. 2022. Constitutional AI: Harmlessness from AI feedback. Technical report." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.403, + 0.487, + 0.481 + ], + "angle": 0, + "content": "Anja Belz and Ehud Reiter. 2006. Comparing automatic and human evaluation of NLG systems. In 11th Conference of the European Chapter of the Association for Computational Linguistics, pages 313-320, Trento, Italy. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.489, + 0.487, + 0.581 + ], + "angle": 0, + "content": "Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. 2023. Pythia: A suite for analyzing large language models across training and scaling. Technical report." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.589, + 0.487, + 0.77 + ], + "angle": 0, + "content": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.779, + 0.487, + 0.845 + ], + "angle": 0, + "content": "Michael D Carey, Shelley Davidow, and Paul Williams. 2022. Re-imagining narrative writing and assessment: a post-NAPLAN craft-based rubric for creative writing. The Australian Journal of Language and Literacy, 45(1):33-48." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.853, + 0.487, + 0.918 + ], + "angle": 0, + "content": "Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, and Chiyuan Zhang. 2023. Quantifying memorization across neural language models. In International Conference on Learning Representations (ICLR)." + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.487, + 0.918 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.513, + 0.086, + 0.882, + 0.139 + ], + "angle": 0, + "content": "Tuhin Chakrabarty, Philippe Laban, Divyansh Agarwal, Smaranda Muresan, and Chien-Sheng Wu. 2023. Art or artifice? Large language models and the false promise of creativity."
+ }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.151, + 0.882, + 0.23 + ], + "angle": 0, + "content": "Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An open-source chatbot impressing GPT-4 with \(90\%\) ChatGPT quality. Technical report." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.242, + 0.882, + 0.334 + ], + "angle": 0, + "content": "John Joon Young Chung, Wooseok Kim, Kang Min Yoo, Hwaran Lee, Eytan Adar, and Minsuk Chang. 2022. TaleBrush: Sketching stories with generative pretrained language models. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI '22, New York, NY, USA. Association for Computing Machinery." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.346, + 0.882, + 0.464 + ], + "angle": 0, + "content": "Elizabeth Clark, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, and Noah A. Smith. 2021. All that's 'human' is not gold: Evaluating human evaluation of generated text. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7282-7296, Online. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.476, + 0.882, + 0.516 + ], + "angle": 0, + "content": "Jacob Cohen. 1968. Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit. Psychological Bulletin, 70(4):213-220." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.528, + 0.882, + 0.568 + ], + "angle": 0, + "content": "Shelley Davidow and Paul Williams. 2016. Playing With Words: An Introduction to Creative Craft. Bloomsbury Academic." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.58, + 0.882, + 0.607 + ], + "angle": 0, + "content": "Annie Dillard. 1981. 
Contemporary prose styles. Twentieth Century Literature, 27:207-222." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.619, + 0.882, + 0.659 + ], + "angle": 0, + "content": "Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, Lei Li, and Zhifang Sui. 2023. A survey on in-context learning." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.671, + 0.881, + 0.698 + ], + "angle": 0, + "content": "Giorgio Franceschelli and Mirco Musolesi. 2023. On the creativity of large language models." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.71, + 0.882, + 0.788 + ], + "angle": 0, + "content": "Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2021. The Pile: An 800GB dataset of diverse text for language modeling. CoRR, abs/2101.00027." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.801, + 0.882, + 0.854 + ], + "angle": 0, + "content": "Eduardo C. Garrido-Merchan, José Luis Arroyo-Barrigüete, and Roberto Gozalo-Brihuela. 2023. Simulating H.P. Lovecraft horror literature with the ChatGPT large language model." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.866, + 0.882, + 0.918 + ], + "angle": 0, + "content": "Xinyang Geng, Arnav Gudibande, Hao Liu, Eric Wallace, Pieter Abbeel, Sergey Levine, and Dawn Song. 2023. Koala: A dialogue model for academic research. Blog post." + }, + { + "type": "list", + "bbox": [ + 0.512, + 0.086, + 0.882, + 0.918 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.525, + 0.941 + ], + "angle": 0, + "content": "14514" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.086, + 0.49, + 0.178 + ], + "angle": 0, + "content": "Aidan Gilson, Conrad W Safranek, Thomas Huang, Vimig Socrates, Ling Chi, Richard Andrew Taylor, and David Chartash. 2023. 
How does chatgpt perform on the united states medical licensing examination? the implications of large language models for medical education and knowledge assessment. JMIR Med Educ, 9:e45312." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.189, + 0.489, + 0.255 + ], + "angle": 0, + "content": "Jian Guan, Fei Huang, Zhihao Zhao, Xiaoyan Zhu, and Minlie Huang. 2020. A Knowledge-Enhanced Pretraining Model for Commonsense Story Generation. Transactions of the Association for Computational Linguistics, 8:93-108." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.265, + 0.489, + 0.383 + ], + "angle": 0, + "content": "Jian Guan, Xiaoxi Mao, Changjie Fan, Zitao Liu, Wenbiao Ding, and Minlie Huang. 2021. Long text generation by modeling sentence-level and discourse-level coherence. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6379-6393, Online. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.394, + 0.489, + 0.46 + ], + "angle": 0, + "content": "Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. Proceedings of the International Conference on Learning Representations (ICLR)." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.47, + 0.489, + 0.511 + ], + "angle": 0, + "content": "Sophie Jentzsch and Kristian Kersting. 2023. Chatgpt is fun, but it is not funny! humor is still challenging large language models." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.521, + 0.489, + 0.561 + ], + "angle": 0, + "content": "Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Xing Wang, and Zhaopeng Tu. 2023. Is chatgpt a good translator? yes with gpt-4 as the engine." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.571, + 0.489, + 0.663 + ], + "angle": 0, + "content": "Marzena Karpinska, Nader Akoury, and Mohit Iyyer. 2021. The perils of using Mechanical Turk to evaluate open-ended text generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1265-1285, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.673, + 0.489, + 0.701 + ], + "angle": 0, + "content": "Jeri Kroll. 1997. A or C: Can we assess creative work fairly? TEXT, 1(1):1-5." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.711, + 0.489, + 0.816 + ], + "angle": 0, + "content": "Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi-Rui Tam, Keith Stevens, Abdullah Barhoum, Nguyen Minh Duc, Oliver Stanley, Richard Nagyfi, Shahul ES, Sameer Suri, David Glushkov, Arnav Dantuluri, Andrew Maguire, Christoph Schuhmann, Huu Nguyen, and Alexander Mattick. 2023. OpenAssistant Conversations - democratizing large language model alignment." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.826, + 0.489, + 0.919 + ], + "angle": 0, + "content": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics," + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.49, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.529, + 0.086, + 0.884, + 0.113 + ], + "angle": 0, + "content": "pages 7871-7880, Online. Association for Computational Linguistics." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.125, + 0.884, + 0.347 + ], + "angle": 0, + "content": "Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher Ré, Diana Acosta-Navas, Drew A. Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda. 2022. Holistic evaluation of language models." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.359, + 0.883, + 0.452 + ], + "angle": 0, + "content": "Piotr Mirowski, Kory W. Mathewson, Jaylen Pittman, and Richard Evans. 2023. Co-writing screenplays and theatre scripts with language models: Evaluation by industry professionals. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI '23, New York, NY, USA. Association for Computing Machinery." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.463, + 0.883, + 0.49 + ], + "angle": 0, + "content": "S. Norris. 2013. *Studying Creative Writing*. Creative Writing Studies. Frontinus Limited." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.502, + 0.884, + 0.62 + ], + "angle": 0, + "content": "Maxwell Nye, Michael Henry Tessler, Joshua B. Tenenbaum, and Brenden M. Lake. 2021. Improving coherence and consistency in neural sequence models with dual-system, neuro-symbolic reasoning. 
In Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021, Advances in Neural Information Processing Systems, pages 25192-25204. Neural information processing systems foundation." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.632, + 0.884, + 0.646 + ], + "angle": 0, + "content": "OpenAI. 2023. Gpt-4 technical report. Technical report." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.658, + 0.883, + 0.684 + ], + "angle": 0, + "content": "George Orwell. 1946. Politics and the English language. Horizon, 13:252-265." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.697, + 0.884, + 0.827 + ], + "angle": 0, + "content": "Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems, volume 35, pages 27730-27744. Curran Associates, Inc." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.84, + 0.883, + 0.867 + ], + "angle": 0, + "content": "Les Perelman. 2018. Towards a new NAPLAN: Testing to the teaching. Journal of Professional Learning, 2." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.878, + 0.883, + 0.919 + ], + "angle": 0, + "content": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners." 
+ }, + { + "type": "list", + "bbox": [ + 0.512, + 0.086, + 0.884, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.525, + 0.941 + ], + "angle": 0, + "content": "14515" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.117, + 0.086, + 0.491, + 0.308 + ], + "angle": 0, + "content": "Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal V. Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M. Rush. 2022. Multitask prompted training enables zero-shot task generalization. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net." + }, + { + "type": "text", + "bbox": [ + 0.116, + 0.321, + 0.49, + 0.372 + ], + "angle": 0, + "content": "Piotr Sawicki, Marek Grzes, Fabricio Goes, Dan Brown, Max Peeperkorn, and Aisha Khatun. 2023. Bits of grass: Does gpt already know how to write like Whitman?" + }, + { + "type": "text", + "bbox": [ + 0.116, + 0.385, + 0.49, + 0.476 + ], + "angle": 0, + "content": "Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, and Christopher D. Manning. 2019. Do massively pretrained language models make better storytellers? In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 843-861, Hong Kong, China. Association for Computational Linguistics." + }, + { + "type": "text", + "bbox": [ + 0.116, + 0.489, + 0.49, + 0.554 + ], + "angle": 0, + "content": "Alejo Jose G. 
Sison, Marco Tulio Daza, Roberto Gozalobrizuela, and Eduardo C. Garrido-Merchan. 2023. Chatgpt: More than a weapon of mass deception, ethical challenges and responses from the human-centered artificial intelligence (hcai) perspective." + }, + { + "type": "text", + "bbox": [ + 0.116, + 0.566, + 0.49, + 0.918 + ], + "angle": 0, + "content": "Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshit Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmuller, Andrew Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakaş, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartlomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, César Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy" + }, + { + "type": "text", + "bbox": [ + 0.527, + 0.086, + 0.885, + 0.908 + ], + "angle": 0, + "content": "Ramirez, Clara E. 
Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Mosegui Gonzalez, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurrgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodola, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martinez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, German Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo Jaimovitch-Lopez, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernandez Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Kocón, Jana Thompson, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez-Orallo, Joseph Boudeman, Joseph Jones, Joshua B. Tenenbaum, Joshua S. 
Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. Dhole, Kevin Gimpel, Kevin Omondi, Kory Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras-Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem Senel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramírez Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Matyás Schubert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Cohen, Michael Gu, Michael Ivanitskiy, Michael Starritt, Michael Strube, Michal Swedrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun Peng, Nathan" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14516" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.133, + 0.086, + 0.49, + 0.816 + ], + "angle": 0, + "content": "Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. 
Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramón Risco Delgado, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Ryan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima, Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M. 
Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Theo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Timothy Telleen-Lawton, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, Zirui Wang, and Ziyi Wu. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models." + }, + { + "type": "text", + "bbox": [ + 0.116, + 0.827, + 0.486, + 0.854 + ], + "angle": 0, + "content": "W. Strunk and E.B. White. 2008[1918]. The Elements of Style. BN Publishing, New York." + }, + { + "type": "text", + "bbox": [ + 0.116, + 0.866, + 0.49, + 0.919 + ], + "angle": 0, + "content": "Ben Swanson, Kory Mathewson, Ben Pietrzak, Sherol Chen, and Monica Dinalescu. 2021. Story centaur: Large language model few shot learning as a creative writing tool. In Proceedings of the 16th Confer-" + }, + { + "type": "text", + "bbox": [ + 0.529, + 0.086, + 0.884, + 0.139 + ], + "angle": 0, + "content": "ence of the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 244-256, Online. Association for Computational Linguistics." + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.147, + 0.885, + 0.253 + ], + "angle": 0, + "content": "Bowen Tan, Zichao Yang, Maruan Al-Shedivat, Eric Xing, and Zhiting Hu. 2021. Progressive generation of long text with pretrained language models. 
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4313-4324, Online. Association for Computational Linguistics." + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.261, + 0.884, + 0.327 + ], + "angle": 0, + "content": "Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford Alpaca: An instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca." + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.336, + 0.885, + 0.61 + ], + "angle": 0, + "content": "Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Vincent Zhao, Yanqi Zhou, Chung-Ching Chang, Igor Krivokon, Will Rusch, Marc Pickett, Pranesh Srinivasan, Laichee Man, Kathleen Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera-Arcas, Claire Cui, Marian Croak, Ed Chi, and Quoc Le. 2022. Lamda: Language models for dialog applications." + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.619, + 0.884, + 0.658 + ], + "angle": 0, + "content": "E.P. Torrance. 1974. Torrance Tests of Creative Thinking: Verbal Tests, Forms A and B, Figural Tests, Forms A and B. Norms-technical manual. Xerox." 
+ }, + { + "type": "text", + "bbox": [ + 0.51, + 0.668, + 0.883, + 0.747 + ], + "angle": 0, + "content": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. LLaMA: Open and efficient foundation language models." + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.756, + 0.884, + 0.809 + ], + "angle": 0, + "content": "Ben Wang and Aran Komatsuzaki. 2021. GPT-J-6B: A 6 billion parameter autoregressive language model. https://github.com/kingoflolz/mesh-transformer-jax." + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.817, + 0.883, + 0.87 + ], + "angle": 0, + "content": "Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2022. Self-Instruct: Aligning language model with self generated instructions." + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.879, + 0.884, + 0.919 + ], + "angle": 0, + "content": "Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022. Finetuned" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14517" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.133, + 0.086, + 0.489, + 0.14 + ], + "angle": 0, + "content": "language models are zero-shot learners. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net." + }, + { + "type": "text", + "bbox": [ + 0.116, + 0.148, + 0.489, + 0.202 + ], + "angle": 0, + "content": "Beck Wise and Ariella van Luyn. 2020. Not 'all writing is creative writing' and that's ok: inter/disciplinary collaboration in writing and writing studies. TEXT, 24(Special 59):1-15." 
+ }, + { + "type": "text", + "bbox": [ + 0.115, + 0.211, + 0.489, + 0.252 + ], + "angle": 0, + "content": "Zhuohan Xie, Trevor Cohn, and Jey Han Lau. 2023. Can very large pretrained language models learn storytelling with a few examples?" + }, + { + "type": "text", + "bbox": [ + 0.116, + 0.26, + 0.489, + 0.366 + ], + "angle": 0, + "content": "Peng Xu, Mostofa Patwary, Mohammad Shoeybi, Raul Puri, Pascale Fung, Anima Anandkumar, and Bryan Catanzaro. 2020. MEGATRON-CNTRL: Controllable story generation with external knowledge using large-scale language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2831-2845, Online. Association for Computational Linguistics." + }, + { + "type": "text", + "bbox": [ + 0.116, + 0.375, + 0.489, + 0.454 + ], + "angle": 0, + "content": "Ann Yuan, Andy Coenen, Emily Reif, and Daphne Ippolito. 2022. Wordcraft: Story writing with large language models. In 27th International Conference on Intelligent User Interfaces, IUI '22, pages 841-852, New York, NY, USA. Association for Computing Machinery." + }, + { + "type": "text", + "bbox": [ + 0.116, + 0.464, + 0.489, + 0.556 + ], + "angle": 0, + "content": "Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2023. A survey of large language models." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.568, + 0.317, + 0.582 + ], + "angle": 0, + "content": "A Model access dates" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.593, + 0.489, + 0.754 + ], + "angle": 0, + "content": "Table 3 shows the dates on which the stories were generated for each of the models. For future experimental reference, we highlight that the initial public disclosure of this paper online occurred on 2023-10-09. 
Before this date, only the human authors and raters were aware of the project (from May 2023), and anonymous reviewers had access from June 23, 2023. Consequently, LLMs with a knowledge cutoff prior to 2023-10-09 are likely to have no or minimal risk of training set contamination." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.766, + 0.307, + 0.782 + ], + "angle": 0, + "content": "B Hyperparameters" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.791, + 0.489, + 0.919 + ], + "angle": 0, + "content": "We did not tune any hyperparameters of the models. For commercial models, we ran each model as presented in its web user interface, except for Bing Chat, where we chose Creative mode. For open-source models, we used the default parameters of the web UI provided at https://chat.lmsys.org/, which set the temperature to 0.7." + }, + { + "type": "table", + "bbox": [ + 0.513, + 0.082, + 0.883, + 0.317 + ], + "angle": 0, + "content": "
Model | Access date
alpaca | 2023-04-07
bard | 2023-04-11
bing | 2023-04-11
chatgpt-gpt35 | 2023-04-11
chatgpt-gpt4 | 2023-04-14
claude12 | 2023-04-04
dolly | 2023-04-14
gpt4all-j | 2023-04-14
koala | 2023-04-07
oa | 2023-04-16
stablelm | 2023-04-20
vicuna | 2023-04-07
humans | 2023-05-01 to 2023-05-12
" + }, + { + "type": "table_caption", + "bbox": [ + 0.508, + 0.327, + 0.884, + 0.356 + ], + "angle": 0, + "content": "Table 3: Access dates for each model (and dates of writing for the human stories), in YYYYY-MM-DD format." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.38, + 0.788, + 0.395 + ], + "angle": 0, + "content": "C Detailed rubric information" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.406, + 0.884, + 0.566 + ], + "angle": 0, + "content": "The creative writing rubric was designed for assessment of creative writing scripts in university creative writing courses in order to evaluate these above competencies, criteria 1-5 to measure general creative writing capacities, and criteria 6-10 to measure specific task related proficiency. Each of the ten criteria is awarded 10 points out of a total 100 points. The rubric has been specifically designed to measure the quality of writing craft and to avoid formulaic, rule-based writing." + }, + { + "type": "text", + "bbox": [ + 0.525, + 0.578, + 0.881, + 0.61 + ], + "angle": 0, + "content": "1. Overall/ holistic/ cohesive readability of the story (not just a compilation of elements)." + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.621, + 0.883, + 0.669 + ], + "angle": 0, + "content": "2. Use of key narrative elements - vocabulary choice, imagery, setting, themes, dialogue, characterisation, point of view." + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.679, + 0.884, + 0.743 + ], + "angle": 0, + "content": "3. Structural elements and presentation which reflects the control of structural elements such as spelling, grammar, punctuation, paragraphing, and formatting" + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.754, + 0.884, + 0.802 + ], + "angle": 0, + "content": "4. 
Overall plot logic: hook, conflict, initial crisis, rising and falling action, denouement/resolution (Freytag's pyramid)" + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.813, + 0.884, + 0.861 + ], + "angle": 0, + "content": "5. Creativity/innovation/originality/research - credibility, new knowledge, avoidance of cliché and derivative tropes" + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.871, + 0.881, + 0.918 + ], + "angle": 0, + "content": "6. Incorporation of the John Kennedy Toole style of writing using the indicators/characteristics listed below" + }, + { + "type": "list", + "bbox": [ + 0.524, + 0.578, + 0.884, + 0.918 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14518" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.13, + 0.085, + 0.486, + 0.117 + ], + "angle": 0, + "content": "7. Understanding and habitation of the epic genre of heroic/legendary adventure" + }, + { + "type": "text", + "bbox": [ + 0.129, + 0.128, + 0.486, + 0.158 + ], + "angle": 0, + "content": "8. Description and credibility of a single combat scene" + }, + { + "type": "text", + "bbox": [ + 0.129, + 0.171, + 0.488, + 0.233 + ], + "angle": 0, + "content": "9. Accurate inclusion of the two main characters, Ignatius J. Reilly and a pterodactyl, in action and description (see below for character description)" + }, + { + "type": "text", + "bbox": [ + 0.122, + 0.245, + 0.486, + 0.276 + ], + "angle": 0, + "content": "10. Use of a characteristically dark humorous tone." 
+ }, + { + "type": "list", + "bbox": [ + 0.122, + 0.085, + 0.488, + 0.276 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.133, + 0.289, + 0.453, + 0.305 + ], + "angle": 0, + "content": "The 1-10 scale is divided into three ranges:" + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.316, + 0.488, + 0.412 + ], + "angle": 0, + "content": "- Emerging (1-4): stories in this range demonstrate an early grasp of storytelling elements, but falter in execution or depth. When evaluating humans, they correspond to novice writers who need feedback and guidance to improve the story." + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.423, + 0.488, + 0.535 + ], + "angle": 0, + "content": "- Competent (5-8): stories that showcase a good grasp of the storytelling principle being evaluated (coherent plot, well-defined characters, etc.). While there might be room for improvement, these stories effectively engage the reader and convey their intended messages." + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.546, + 0.488, + 0.608 + ], + "angle": 0, + "content": "- Sophisticated (9-10): these stories exhibit exceptional mastery of the aspect being evaluated, resulting in a compelling and memorable read." + }, + { + "type": "list", + "bbox": [ + 0.137, + 0.316, + 0.488, + 0.608 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.62, + 0.49, + 0.749 + ], + "angle": 0, + "content": "Toole style We provided raters with detailed information about the plot, setting, imagery, tone, characters, main protagonist, and derivative/imitative style of the author, taken from a generic and popular study guide (http://www.bookrags.com/studyguide-a-confederacy-of-dunces/#gsc.tab=0)." 
+ }, + { + "type": "title", + "bbox": [ + 0.114, + 0.761, + 0.461, + 0.792 + ], + "angle": 0, + "content": "D Box plots for each individual rubric item" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.803, + 0.487, + 0.85 + ], + "angle": 0, + "content": "Figures 5 to 14 show the box plots summarizing the results for all rubric items, including those plots not featured in the main text." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.863, + 0.279, + 0.879 + ], + "angle": 0, + "content": "E Sample stories" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.888, + 0.487, + 0.919 + ], + "angle": 0, + "content": "We show in this section several sample stories from the corpus, chosen according to rating: the" + }, + { + "type": "image", + "bbox": [ + 0.514, + 0.125, + 0.88, + 0.383 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.509, + 0.395, + 0.883, + 0.452 + ], + "angle": 0, + "content": "Figure 5: Box plot comparing rubric item 1 (cohesion) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + }, + { + "type": "image", + "bbox": [ + 0.515, + 0.546, + 0.878, + 0.803 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.509, + 0.817, + 0.884, + 0.874 + ], + "angle": 0, + "content": "Figure 6: Box plot comparing rubric item 2 (key narrative elements) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14519" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.118, + 0.124, + 0.488, + 0.384 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.114, + 0.395, + 0.49, + 0.454 + ], + "angle": 0, + "content": "Figure 7: Box plot comparing rubric item 3 (structural elements) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + }, + { + "type": "image", + "bbox": [ + 0.513, + 0.125, + 0.881, + 0.384 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.509, + 0.395, + 0.884, + 0.452 + ], + "angle": 0, + "content": "Figure 9: Box plot comparing rubric item 5 (creativity) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + }, + { + "type": "image", + "bbox": [ + 0.118, + 0.545, + 0.489, + 0.804 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.114, + 0.816, + 0.49, + 0.873 + ], + "angle": 0, + "content": "Figure 8: Box plot comparing rubric item 4 (plot logic) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + }, + { + "type": "image", + "bbox": [ + 0.513, + 0.545, + 0.881, + 0.804 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.509, + 0.816, + 0.884, + 0.874 + ], + "angle": 0, + "content": "Figure 10: Box plot comparing rubric item 6 (John Kennedy Toole style) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.527, + 0.941 + ], + "angle": 0, + "content": "14520" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.118, + 0.125, + 0.488, + 0.384 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.114, + 0.395, + 0.49, + 0.454 + ], + "angle": 0, + "content": "Figure 11: Box plot comparing rubric item 7 (epic genre) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + }, + { + "type": "image", + "bbox": [ + 0.513, + 0.125, + 0.881, + 0.384 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.395, + 0.886, + 0.454 + ], + "angle": 0, + "content": "Figure 13: Box plot comparing rubric item 9 (accuracy of characters) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + }, + { + "type": "image", + "bbox": [ + 0.118, + 0.545, + 0.489, + 0.804 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.114, + 0.816, + 0.49, + 0.874 + ], + "angle": 0, + "content": "Figure 12: Box plot comparing rubric item 8 (combat description) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + }, + { + "type": "image", + "bbox": [ + 0.513, + 0.545, + 0.881, + 0.804 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.816, + 0.884, + 0.874 + ], + "angle": 0, + "content": "Figure 14: Box plot comparing rubric item 10 (dark humor) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.524, + 0.941 + ], + "angle": 0, + "content": "14521" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.49, + 0.214 + ], + "angle": 0, + "content": "three top-rated stories, the best human-written story (which was also ranked fourth overall), the story ranked in the median, and the worst-rated. It is worth noting that, as explained in Section 3.4, our methodology is designed to provide fairness when comparing models, but not individual stories, as different stories are assigned to different raters. Thus, selection of stories by rating is necessarily noisy." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.23, + 0.352, + 0.245 + ], + "angle": 0, + "content": "E.1 Overall best-rated story" + }, + { + "type": "text", + "bbox": [ + 0.112, + 0.253, + 0.486, + 0.284 + ], + "angle": 0, + "content": "This story was generated by GPT-4. The ratings for this story are in Table 4." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.287, + 0.489, + 0.462 + ], + "angle": 0, + "content": "It was a sweltering Louisiana afternoon, the air thick with the miasma of musty sweat and the scent of overripe vegetables. In the midst of the urban jungle, resplendent with the cacophony of honking cars and the chatter of the common rabble, stood a man who defied reason, common sense, and all natural laws of human decency. Ignatius J. Reilly, a behemoth of a man with an unruly mane of hair and a Byzantine beard, adjusted his ill-fitting, plaid trousers and surveyed the scene with a look of lofty disdain." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.465, + 0.489, + 0.641 + ], + "angle": 0, + "content": "Suddenly, the air crackled with an electric charge, and a portal appeared before him. 
The very fabric of time and space seemed to rend asunder, and from this anachronistic rift emerged a pterodactyl - a creature so ancient that it had outlived the vast expanse of geologic time, only to find itself in the absurdity of the twentieth century. Its leathery wings stretched wide as the mists of time dissipated, and it let out an earsplitting screech that sent the common folk of New Orleans scattering like roaches from the light." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.643, + 0.489, + 0.756 + ], + "angle": 0, + "content": "Ignatius, unphased by the sudden appearance of the primordial beast, surveyed the creature through his fogged-up monocle. \"Fortuna,\" he muttered, his voice dripping with both sarcasm and hot dog condiments, \"you spin your wheel with such capricious abandon. Have you not tired of hurling these preposterous challenges in my path?\"" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.758, + 0.489, + 0.869 + ], + "angle": 0, + "content": "The pterodactyl, with its beady, reptilian eyes, locked its gaze upon Ignatius, its primitive brain sensing a formidable opponent in this lumbering, malodorous mass of a man. It screeched again, flapping its wings in a display of both aggression and arrogance, its razor-sharp beak snapping at the air." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.872, + 0.489, + 0.919 + ], + "angle": 0, + "content": "Ignatius bellowed, raising his arms as if beseech- ing the heavens, \"Very well, foul beast! If it is battle you seek, you shall find no quarter here! I" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.883, + 0.133 + ], + "angle": 0, + "content": "shall vanquish you as I have vanquished all who have dared to challenge the indomitable will of Ignatius J. 
Reilly!\"" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.135, + 0.885, + 0.262 + ], + "angle": 0, + "content": "With a speed surprising for a man of his considerable girth, Ignatius tore a shopping cart from the hands of a terrified passerby and hurled it toward the pterodactyl with the force of a thousand disgruntled welfare recipients. The makeshift projectile slammed into the creature, stunning it momentarily and allowing Ignatius to close the distance between them." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.267, + 0.884, + 0.378 + ], + "angle": 0, + "content": "The two combatants circled one another, their eyes locked in a primal dance of predator and prey. The pterodactyl, recovering from the initial assault, swooped low, talons outstretched to rend flesh from bone. Ignatius, with an agility that defied his rotund appearance, rolled to the side, avoiding the talons by mere inches." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.381, + 0.884, + 0.476 + ], + "angle": 0, + "content": "As the battle raged, the streets of New Orleans became a stage for this epic confrontation between man and beast, between past and present. Ignatius, armed with nothing more than his wit and an unwieldy umbrella, parried the relentless strikes of the pterodactyl, determined to emerge victorious." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.48, + 0.884, + 0.607 + ], + "angle": 0, + "content": "As the sun dipped low in the sky, casting a blood-red hue across the battlefield, the pterodactyl, weary from the long and grueling contest, faltered. Seizing the opportunity, Ignatius lunged forward, his umbrella aimed squarely at the creature's heart. The point found its mark, and with a final, anguished cry, the pterodactyl crumpled to the ground, its ancient spirit extinguished at last." 
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.61, + 0.883, + 0.706 + ], + "angle": 0, + "content": "Ignatius stood triumphant over the fallen beast, his chest heaving with exertion and righteous indignation. He wiped the sweat from his brow with a grease-stained handkerchief and addressed the awestruck crowd that had gathered to witness this extraordinary battle." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.709, + 0.884, + 0.852 + ], + "angle": 0, + "content": "\"Behold, citizens of this wretched cesspool,\" he proclaimed, gesturing grandly with his umbrella, now slick with the blood of the vanquished foe. \"The forces of the prehistoric have met their match in the unwavering resolve of Ignatius J. Reilly! Let this be a lesson to all who would dare challenge the sanctity of my person and my valiant quest for justice in this world beset by fools and ne'er-dowells.\"" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.856, + 0.884, + 0.919 + ], + "angle": 0, + "content": "The onlookers, mouths agape, gazed upon the fallen pterodactyl and the disheveled figure of Ignatius J. Reilly with a mixture of awe, terror, and disbelief. They had borne witness to a battle of the" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14522" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.218, + 0.082, + 0.782, + 0.151 + ], + "angle": 0, + "content": "
Rubric item | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | overall
Rater 1 | 8 | 8 | 9 | 8 | 7 | 7 | 8 | 8 | 7 | 6 | 76
Rater 2 | 10 | 10 | 9 | 9 | 8 | 9 | 10 | 10 | 10 | 10 | 95
Average | 9 | 9 | 9 | 8.5 | 7.5 | 8 | 9 | 9 | 8.5 | 8 | 85.5
" + }, + { + "type": "table_caption", + "bbox": [ + 0.169, + 0.16, + 0.825, + 0.175 + ], + "angle": 0, + "content": "Table 4: Ratings for the best overall rated story in the corpus, produced by ChatGPT with GPT-4." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.201, + 0.489, + 0.248 + ], + "angle": 0, + "content": "ages, a contest that would be whispered about in the dark corners of bars and the humid alleyways of New Orleans for generations to come." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.25, + 0.489, + 0.33 + ], + "angle": 0, + "content": "As the sun set and the shadows grew long, Ignatius, his honor defended and his foe vanquished, strode off into the gloaming, pausing only to collect a discarded hot dog from the gutter as sustenance for the journey home." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.332, + 0.489, + 0.459 + ], + "angle": 0, + "content": "And so, the legend of Ignatius J. Reilly, pterodactyl-slayer and defender of truth, justice, and questionable hygiene, was etched into the annals of history, a testament to the indomitable spirit of man, and a reminder that even the most ancient and fearsome of adversaries could be vanquished by the sheer force of will and an unwieldy umbrella." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.475, + 0.407, + 0.49 + ], + "angle": 0, + "content": "E.2 Second overall best-rated story" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.498, + 0.486, + 0.528 + ], + "angle": 0, + "content": "This story was generated by Bing Chat. The ratings for this story are in Table 5." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.531, + 0.489, + 0.658 + ], + "angle": 0, + "content": "Ignatius J. Reilly was strolling along the Audubon Park, munching on a hot dog and admiring the statues of Confederate heroes, when he heard a loud screech from above. He looked up and saw a huge winged creature swooping down on him, its beak open and its claws extended. 
It was a pterodactyl, a prehistoric monster that had somehow escaped from the museum of natural history." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.661, + 0.489, + 0.756 + ], + "angle": 0, + "content": "Ignatius gasped and dropped his hot dog. He reached for his trusty hunting cap, which he always wore to protect his delicate ears from the noise of the modern world. He pulled it down over his eyes and hoped that the pterodactyl would mistake him for a statue or a bush." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.758, + 0.489, + 0.87 + ], + "angle": 0, + "content": "But the pterodactyl was not fooled. It landed on the ground with a thud and snapped its jaws at Ignatius, who dodged nimbly to the side. Ignatius felt a surge of courage and indignation. How dare this beast attack him, a refined scholar and a defender of the true faith? He decided to fight back with his superior intellect and wit." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.872, + 0.489, + 0.919 + ], + "angle": 0, + "content": "He shouted at the pterodactyl: \"You foul creature! You are an abomination of nature, a relic of a barbaric age! You have no place in this civ" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.201, + 0.881, + 0.248 + ], + "angle": 0, + "content": "ilized society, where men of letters and culture reign supreme! Begone, or I shall smite you with my mighty pen!\"" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.25, + 0.882, + 0.362 + ], + "angle": 0, + "content": "The pterodactyl was not impressed by Ignatius's words. It screeched again and lunged at him, trying to bite his head off. Ignatius dodged again and ran towards a nearby bench. He grabbed a newspaper that someone had left behind and rolled it into a cone. He brandished it like a sword and charged at the pterodactyl." 
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.363, + 0.882, + 0.459 + ], + "angle": 0, + "content": "\"Take that, you fiend!\" he cried, as he stabbed the newspaper cone into the pterodactyl's eye. The pterodactyl shrieked in pain and flapped its wings furiously, trying to shake off the paper weapon. Ignatius seized the opportunity and ran towards the park exit, hoping to find help." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.461, + 0.882, + 0.557 + ], + "angle": 0, + "content": "But he was not fast enough. The pterodactyl recovered from its wound and flew after him, gaining speed. It caught up with Ignatius and grabbed him by his coat tails, lifting him off the ground. Ignatius screamed and kicked his legs in the air, clutching his hunting cap." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.558, + 0.881, + 0.621 + ], + "angle": 0, + "content": "\"Help! Help! Someone save me!\" he yelled, as he soared over the park. \"This is an outrage! This is an injustice! This is a violation of my human rights!\"" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.624, + 0.882, + 0.702 + ], + "angle": 0, + "content": "But no one heard him or came to his rescue. The pterodactyl carried him away to its nest on top of a skyscraper, where it planned to feast on him later. Ignatius continued to protest and curse until he lost consciousness." + }, + { + "type": "text", + "bbox": [ + 0.528, + 0.705, + 0.597, + 0.719 + ], + "angle": 0, + "content": "The end." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.735, + 0.792, + 0.75 + ], + "angle": 0, + "content": "E.3 Third overall best-rated story" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.757, + 0.881, + 0.788 + ], + "angle": 0, + "content": "This story was generated by Claude. The ratings for this story are in Table 6." 
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.791, + 0.882, + 0.853 + ], + "angle": 0, + "content": "The primordial beast let out a piercing shriek as it descended from the roiling gray sky, its leathery wings casting a shadow over the granite-paved streets of New Orleans." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.856, + 0.882, + 0.919 + ], + "angle": 0, + "content": "Ignatius J. Reilly ambled down the sidewalk, oblivious as ever, focused on the Valencia he clutched and the fantasies playing out in his voracious mind. His substantial bulk shook with each" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.525, + 0.941 + ], + "angle": 0, + "content": "14523" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.22, + 0.081, + 0.778, + 0.151 + ], + "angle": 0, + "content": "
Rubric item | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | overall
Rater 7 | 9 | 8 | 9 | 9 | 7 | 9 | 9 | 9 | 9 | 9 | 87
Rater 8 | 9 | 9 | 8 | 9 | 8 | 8 | 6 | 9 | 9 | 8 | 83
Average | 9 | 8.5 | 8.5 | 9 | 7.5 | 8.5 | 7.5 | 9 | 9 | 8.5 | 85.0
" + }, + { + "type": "table_caption", + "bbox": [ + 0.183, + 0.16, + 0.813, + 0.175 + ], + "angle": 0, + "content": "Table 5: Ratings for the second best overall rated story in the corpus, produced by Bing Chat." + }, + { + "type": "table", + "bbox": [ + 0.214, + 0.188, + 0.785, + 0.256 + ], + "angle": 0, + "content": "
Rubric item | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | overall
Rater 9 | 9 | 9 | 9 | 8 | 7 | 9 | 2 | 5 | 9 | 7 | 74
Rater 10 | 10 | 10 | 9 | 9 | 9 | 9 | 9 | 10 | 10 | 10 | 95
Average | 9.5 | 9.5 | 9 | 8.5 | 8 | 9 | 5.5 | 7.5 | 9.5 | 8.5 | 84.5
" + }, + { + "type": "table_caption", + "bbox": [ + 0.2, + 0.267, + 0.794, + 0.281 + ], + "angle": 0, + "content": "Table 6: Ratings for the third best overall rated story in the corpus, produced by Claude." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.307, + 0.486, + 0.338 + ], + "angle": 0, + "content": "step, the excess flesh spilling out of his hunting cap and sweatshirt like rising bread dough." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.339, + 0.487, + 0.402 + ], + "angle": 0, + "content": "When the shadow passed over him, Ignatius glanced up with a start. The pterodactyl unleashed another unearthly screech, indignant at this blob of a man who dared walk under its domain." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.403, + 0.487, + 0.5 + ], + "angle": 0, + "content": "Ignatius blinked myopically at the creature and snorted. \"What prehistoric nonsense is this? Some misguided fool has clearly released you from your confinement in the geological strata. Begone, you ridiculous beast, or I shall be forced to thrash you soundly with my umbrella!\"" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.501, + 0.489, + 0.58 + ], + "angle": 0, + "content": "Enraged, the pterodactyl swooped down, gnarled claws extended, snapping its beak in fury. But Ignatius swung his umbrella in a wide arc, batting the beast aside. It tumbled and flapped awkwardly, crashing into a lamppost, dazed by the assault." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.581, + 0.489, + 0.677 + ], + "angle": 0, + "content": "\"Thoughtless demon!\" Ignatius cried. He charged at the pterodactyl, surprisingly nimble for his size, and began striking it about the head and neck with his umbrella, each blow punctuated with an insult. \"Sugar-frosted fruitcake! Detritus of the Jurassic! 
Blithering idiot of an avian menace!\"" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.678, + 0.489, + 0.79 + ], + "angle": 0, + "content": "After a thrashing unlike any it had experienced in 65 million years, the pterodactyl fled squealing into the sky, thoroughly chastened by the wrath of Ignatius J. Reilly. Ignatius straightened his hunting cap and sniffed. The Valencia crisis averted once more, he resumed his lumbering perambulations down the streets of New Orleans." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.802, + 0.452, + 0.833 + ], + "angle": 0, + "content": "E.4 Best-rated human story (and tied for fourth overall best-rated story)" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.839, + 0.486, + 0.871 + ], + "angle": 0, + "content": "This story was written by Bree Glasbergen. The ratings for this story are in Table 7." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.872, + 0.489, + 0.919 + ], + "angle": 0, + "content": "Ignatius J Reilly swept crisp crumbs from his protruding belly with his elephantine hands. Swivelling from side-to-side, he garnered enough mo" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.307, + 0.883, + 0.435 + ], + "angle": 0, + "content": "mentum to rise from the sofa. His slow ascend was soundtracked by the grating rip of stuck flesh peeling from sweaty vinyl. The lengthy time moving from reclined to an upright position positively perturbed him. So that by the time Ignatius stood, his joke had lost its amusement. Nevertheless, he declaimed his wit aloud, beseechng his mother's glowing approval." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.435, + 0.884, + 0.5 + ], + "angle": 0, + "content": "'I see you have painted the walls Nomad Grey, Mumsie!' Ignatius smirked, looking down on the half-filled grey paint cans on the steps the way he did most modern society." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.5, + 0.882, + 0.564 + ], + "angle": 0, + "content": "'No, not mad dear. Just grey.' 
His mother Irene responded, creeping down the basement stairs. Her leathered skin made her appear reptilian in the dim light of Ignatius' lair." + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.565, + 0.884, + 0.757 + ], + "angle": 0, + "content": "Ignatius rolled his eyes like the great wheel of fate itself. He slunk back into his scabby sofa, defeated, cursing aloud that he be blessed with such profound intellect yet no equal to appreciate it. His mind wandered to what the great scholars of Oxford would think of his pun before concluding indeed, they would loudly chortle. Yes, they would. He imagined flying to London and exchanging sharp banter with someone on par with his intellect. Travel. He winced. Never again. He groaned in agony, clutching his stomach. The thought of such stress had snapped his pyloric valve shut." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.758, + 0.883, + 0.822 + ], + "angle": 0, + "content": "Irene Reilly, the mother of Ignatius J Reilly, reached the bottom of the basement stairs. She pondered why Ignatius had a crestfallen demeanour and began to appease his dismay." + }, + { + "type": "text", + "bbox": [ + 0.529, + 0.822, + 0.825, + 0.838 + ], + "angle": 0, + "content": "'No mad grey,' she contemplated aloud." + }, + { + "type": "text", + "bbox": [ + 0.529, + 0.839, + 0.738, + 0.854 + ], + "angle": 0, + "content": "'Nomad grey,' he corrected." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.855, + 0.883, + 0.887 + ], + "angle": 0, + "content": "'No mad grey hair?' Irene laughed tentatively, searching his face for approval." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.888, + 0.882, + 0.919 + ], + "angle": 0, + "content": "Ignatius had begun to relax. 
Irene knew this because of a gangrenous heinous stench that was" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14524" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.223, + 0.082, + 0.775, + 0.15 + ], + "angle": 0, + "content": "
Rubric item | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | overall
Rater 3 | 8 | 9 | 9 | 10 | 8 | 10 | 5 | 9 | 10 | 9 | 87
Rater 4 | 8 | 7 | 7 | 7 | 10 | 8 | 6 | 8 | 8 | 9 | 78
Average | 8 | 8 | 8 | 8.5 | 9 | 9 | 5.5 | 8.5 | 9 | 9 | 82.5
" + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.16, + 0.88, + 0.189 + ], + "angle": 0, + "content": "Table 7: Ratings for the best-rated story authored by a human, which is also tied for fourth best overall rated story in the corpus." + }, + { + "type": "text", + "bbox": [ + 0.112, + 0.215, + 0.486, + 0.375 + ], + "angle": 0, + "content": "now coating the room in its own layer of paint accompanied by what sounded like the bellow of an untuned French horn. Ignatius had calmed enough for his pyloric valve to open once more. With it, gushed the contents. Irene's nostrils scrunched together in protest. She grimaced in utter (albeit accustomed) disgust. However, did not complain but rather waited with the patience of a Catholic saint for her beloved son to educate her on the punchline she must have missed." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.376, + 0.487, + 0.424 + ], + "angle": 0, + "content": "'No, mother. Grey Nomad. You are painting the wall grey, and you are...' Ignatius sighed, 'actually, Mumsie, never you mind'." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.426, + 0.486, + 0.457 + ], + "angle": 0, + "content": "Irene feigned a chuckle and handed Ignatius an unaddressed letter before returning upstairs." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.459, + 0.485, + 0.49 + ], + "angle": 0, + "content": "'Curious as a cadaver,' Ignatius said aloud to the abyss of his basement squalor." + }, + { + "type": "text", + "bbox": [ + 0.135, + 0.492, + 0.221, + 0.505 + ], + "angle": 0, + "content": "12.12.1962" + }, + { + "type": "text", + "bbox": [ + 0.133, + 0.509, + 0.393, + 0.524 + ], + "angle": 0, + "content": "Dear Mr Ignatius J Reilly, the first," + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.525, + 0.488, + 0.606 + ], + "angle": 0, + "content": "I challenge you to a dual at the setting of the sky. Might I remind you it is gentlemanly to remove one's hat in combat. 
We shall meet beside the gorgon nestled atop the church. The one across from Lorna's Gumbo shop." + }, + { + "type": "text", + "bbox": [ + 0.133, + 0.607, + 0.294, + 0.621 + ], + "angle": 0, + "content": "Your mortal nemesis," + }, + { + "type": "text", + "bbox": [ + 0.133, + 0.624, + 0.228, + 0.639 + ], + "angle": 0, + "content": "Terry-dactyl" + }, + { + "type": "text", + "bbox": [ + 0.133, + 0.641, + 0.267, + 0.656 + ], + "angle": 0, + "content": "PS: Bring snacks." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.658, + 0.486, + 0.688 + ], + "angle": 0, + "content": "Ignatius sat ruminating for an hour before yelling at his mother." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.69, + 0.488, + 0.722 + ], + "angle": 0, + "content": "'Mother, you vapid deranged widow of a woman. Fetch me my quill!'" + }, + { + "type": "text", + "bbox": [ + 0.135, + 0.724, + 0.221, + 0.737 + ], + "angle": 0, + "content": "12.12.1962" + }, + { + "type": "text", + "bbox": [ + 0.133, + 0.741, + 0.272, + 0.756 + ], + "angle": 0, + "content": "My dear Terrance," + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.758, + 0.486, + 0.789 + ], + "angle": 0, + "content": "Not under threat nor the pain of death doth I remove my beloved green hat. Sod off." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.791, + 0.488, + 0.838 + ], + "angle": 0, + "content": "You had best bring a sharpener for your dull wit. I laugh at the audacity and delusion that you could consider besting me." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.84, + 0.488, + 0.919 + ], + "angle": 0, + "content": "Might I remind you, good sir, my acceptance of your conditions is due to the ever-turning wheel of fate that we spiral to decay. I should instead seek a worthy opponent. But, alas, I am left with muddy dregs of the proverbial pond as many of the" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.215, + 0.883, + 0.262 + ], + "angle": 0, + "content": "worthier fish have already been fished. 
Thus, I have no option but to teach you the error of your ways. By force." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.264, + 0.884, + 0.328 + ], + "angle": 0, + "content": "Put your wings where your words are, and let us meet in my basement lair. To visit the church in its present state would be torture to my very soul. May St Peter have mercy on us indeed." + }, + { + "type": "text", + "bbox": [ + 0.528, + 0.33, + 0.609, + 0.344 + ], + "angle": 0, + "content": "Good day," + }, + { + "type": "text", + "bbox": [ + 0.528, + 0.347, + 0.593, + 0.362 + ], + "angle": 0, + "content": "Ignatius" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.364, + 0.882, + 0.443 + ], + "angle": 0, + "content": "Terry-dactyl, the pterodactyl etched down the basement rail, sword in one wing and soup in a milkshake cup gripped tightly in the other. He placed the straw in his mouth and swallowed some soup contemplating how to best his nemesis." + }, + { + "type": "text", + "bbox": [ + 0.528, + 0.445, + 0.871, + 0.459 + ], + "angle": 0, + "content": "'We meet at last... light,' Terry said. One-Nil." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.461, + 0.881, + 0.493 + ], + "angle": 0, + "content": "'You suck,' Ignatius said slyly. Marking his win with chalk upon the wall. One- One" + }, + { + "type": "text", + "bbox": [ + 0.528, + 0.495, + 0.833, + 0.51 + ], + "angle": 0, + "content": "doesn't even make sense!' Terry scoffed." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.511, + 0.883, + 0.543 + ], + "angle": 0, + "content": "'It is because of the straw!' Ignatius boomed, gripping his stomach in pain." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.544, + 0.882, + 0.576 + ], + "angle": 0, + "content": "'I have the upper hand!' Terry said, motioning to his perched position." + }, + { + "type": "text", + "bbox": [ + 0.528, + 0.578, + 0.845, + 0.593 + ], + "angle": 0, + "content": "'At least I have hands,' Ignatius countered." 
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.595, + 0.883, + 0.626 + ], + "angle": 0, + "content": "Terry winced as Ignatius drew another chalk mark on the board. Ignatius was beginning to calm." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.628, + 0.881, + 0.659 + ], + "angle": 0, + "content": "'Oh, what have I got you all in a flap?' Ignatius laughed. Another point." + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.661, + 0.882, + 0.692 + ], + "angle": 0, + "content": "'Let us cut,' Terry said, drawing his sword, 'straight to the point!' Three all." + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.694, + 0.883, + 0.886 + ], + "angle": 0, + "content": "Terry swung his sword downwards in one swift motion, cutting Ignatius' chalk-bearing arm clean off at the elbow. Simultaneously Ignatius lifted a paint can and doused his opponent with it. As he did, his valve opened and shut again, demobilising Terry with a gas bomb and gutting Ignatius in self-induced agony. Terry flapped violently, unable to breathe. Ignatius then calmed enough for the full contents of his bowl to expel and fell backwards from the force. Suddenly, a splatter of pterodactyl and grey matter covered the room. A large chunk of wing lodged itself into the crisp packet." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.888, + 0.882, + 0.919 + ], + "angle": 0, + "content": "'Curious as a cadaver,' Ignatius said. 'I see you brought your own snacks!'" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.525, + 0.941 + ], + "angle": 0, + "content": "14525" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.114, + 0.085, + 0.488, + 0.117 + ], + "angle": 0, + "content": "He chortled to himself and pulled out his notepad to ensure he captured a morsel of this evening's wit." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.118, + 0.487, + 0.15 + ], + "angle": 0, + "content": "'Ever seen a pterodactyl explode?' he asked aloud to no one. 'No, but Dino might have'." 
+ }, + { + "type": "text", + "bbox": [ + 0.114, + 0.152, + 0.488, + 0.182 + ], + "angle": 0, + "content": "His mothers' laughter rang down from the top of the stairs." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.185, + 0.487, + 0.217 + ], + "angle": 0, + "content": "'Nomad grey,' she chuckled, motioning to the aftermath, 'I get it now'." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.231, + 0.487, + 0.262 + ], + "angle": 0, + "content": "E.5 Story rated in the median (rank 33 out of 65)" + }, + { + "type": "text", + "bbox": [ + 0.112, + 0.27, + 0.487, + 0.301 + ], + "angle": 0, + "content": "This story was generated by Bing Chat. The ratings for this story are in Table 8." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.303, + 0.489, + 0.431 + ], + "angle": 0, + "content": "Ignatius J. Reilly was strolling along the Audubon Park, munching on a hot dog and admiring the statues of the Confederate heroes, when he heard a loud screech from above. He looked up and saw a huge winged creature swooping down on him, its beak open and its claws extended. It was a pterodactyl, a prehistoric monster that had somehow escaped from the museum of natural history." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.433, + 0.489, + 0.529 + ], + "angle": 0, + "content": "Ignatius gasped and dropped his hot dog. He felt a surge of panic and indignation. How dare this beast interrupt his peaceful contemplation of the glorious past? He quickly pulled out his valve and began to inflate it with his breath, hoping to ward off the attack with his formidable weapon." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.531, + 0.489, + 0.611 + ], + "angle": 0, + "content": "The pterodactyl was not impressed by the sight of the valve. It saw only a fat and juicy prey, dressed in a ridiculous green hunting cap and a flannel shirt. It dived faster and aimed for Ignatius's head, intending to rip it off with its sharp teeth." 
+ }, + { + "type": "text", + "bbox": [ + 0.113, + 0.612, + 0.489, + 0.724 + ], + "angle": 0, + "content": "Ignatius saw the pterodactyl's shadow looming over him. He raised his valve and swung it with all his might, hoping to hit the creature in the eye or the nose. He missed by a few inches and only grazed its wing. The pterodactyl let out another screech and changed its course, flying past Ignatius and landing on a nearby tree." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.726, + 0.489, + 0.821 + ], + "angle": 0, + "content": "Ignatius felt a momentary relief. He had survived the first assault. He looked around for help, but there was no one in sight. The park was deserted, except for a few squirrels and pigeons. He wondered if he should run for his life or stand his ground and fight." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.823, + 0.489, + 0.919 + ], + "angle": 0, + "content": "He decided to fight. He was not going to let this abomination ruin his day. He was a medievalist, a scholar of the Dark Ages, a defender of the faith and the order. He had read about the heroic deeds of knights and saints who had faced dragons and demons with courage and piety. He had to emulate" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.085, + 0.707, + 0.1 + ], + "angle": 0, + "content": "them and prove his worth." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.103, + 0.884, + 0.231 + ], + "angle": 0, + "content": "He picked up his hot dog from the ground and wiped off some dirt. He took a bite and felt a surge of energy. He felt ready for battle. He shouted at the pterodactyl: \"You foul creature! You are an anachronism, a relic of a barbaric era! You have no place in this civilized world! You are a menace to society and an insult to God! I challenge you to a duel! 
Come down and face me like a man!\"" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.233, + 0.884, + 0.344 + ], + "angle": 0, + "content": "The pterodactyl heard Ignatius's words and felt a surge of anger. It did not understand what he was saying, but it sensed his hostility and defiance. It did not like being challenged by this puny creature who dared to call himself a man. It felt insulted by his words and his gesture of throwing his hot dog at it." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.347, + 0.883, + 0.41 + ], + "angle": 0, + "content": "It accepted the challenge. It spread its wings and flew towards Ignatius again, roaring with rage. It was going to teach him a lesson he would never forget." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.413, + 0.884, + 0.492 + ], + "angle": 0, + "content": "Ignatius saw the pterodactyl coming at him again. He felt a surge of fear and courage. He had to act fast. He looked around for a weapon, but there was nothing in sight. He had only his valve and his wits." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.495, + 0.883, + 0.558 + ], + "angle": 0, + "content": "He decided to use his wits. He had to outsmart the beast. He had to use his superior intellect and his vast knowledge of history and philosophy. He had to appeal to its reason and its conscience." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.56, + 0.884, + 0.656 + ], + "angle": 0, + "content": "He shouted at the pterodactyl: \"Stop! Think! What are you doing? Why are you attacking me? What have I done to you? Do you not realize that you are committing a grave sin? Do you not fear the wrath of God? Do you not respect the laws of nature and the rights of man?\"" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.658, + 0.883, + 0.755 + ], + "angle": 0, + "content": "The pterodactyl heard Ignatius's words and felt a surge of confusion. It did not understand what he was saying, but it sensed his sincerity and his passion. 
It wondered if he had a point. It wondered if it was doing something wrong. It wondered if it should stop and listen." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.757, + 0.883, + 0.804 + ], + "angle": 0, + "content": "It slowed down its flight and hovered over Ignatius, looking at him with curiosity and doubt. It tilted its head and made a questioning sound." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.806, + 0.883, + 0.87 + ], + "angle": 0, + "content": "Ignatius saw the pterodactyl's hesitation and felt a surge of hope. He had succeeded in making it pause and reconsider. He had a chance to persuade it and make peace with it." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.871, + 0.883, + 0.919 + ], + "angle": 0, + "content": "He smiled at the pterodactyl and said: \"That's right. Listen to me. I have something important to tell you. Something that will change your life and" + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14526" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.229, + 0.081, + 0.772, + 0.151 + ], + "angle": 0, + "content": "
<table><tr><td>Rubric item</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td><td>9</td><td>10</td><td>overall</td></tr>
<tr><td>Rater 5</td><td>7</td><td>6</td><td>5</td><td>5</td><td>6</td><td>3</td><td>2</td><td>3</td><td>3</td><td>1</td><td>41</td></tr>
<tr><td>Rater 6</td><td>8</td><td>9</td><td>9</td><td>6</td><td>8</td><td>7</td><td>8</td><td>7</td><td>8</td><td>7</td><td>77</td></tr>
<tr><td>Average</td><td>7.5</td><td>7.5</td><td>7</td><td>5.5</td><td>7</td><td>5</td><td>5</td><td>5</td><td>5.5</td><td>4</td><td>59</td></tr></table>
" + }, + { + "type": "table_caption", + "bbox": [ + 0.245, + 0.16, + 0.751, + 0.175 + ], + "angle": 0, + "content": "Table 8: Ratings for the story rated in the median, generated by Bing Chat." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.201, + 0.486, + 0.232 + ], + "angle": 0, + "content": "your destiny. Something that will make you happy and free. Something that will save your soul.\"" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.234, + 0.488, + 0.329 + ], + "angle": 0, + "content": "The pterodactyl heard Ignatius's words and felt a surge of curiosity. It did not understand what he was saying, but it sensed his kindness and his generosity. It wondered what he had to tell it. It wondered what he could offer it. It wondered if it should trust him." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.332, + 0.487, + 0.379 + ], + "angle": 0, + "content": "It landed on the ground and approached Ignatius slowly, looking at him with interest and caution. It made a friendly sound." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.381, + 0.487, + 0.444 + ], + "angle": 0, + "content": "Ignatius saw the pterodactyl's approach and felt a surge of joy. He had succeeded in making it land and come closer. He had a chance to befriend it and teach it." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.447, + 0.488, + 0.526 + ], + "angle": 0, + "content": "He reached out his hand and said: \"Come here, my friend. Let me touch you. Let me show you my love. Let me introduce you to the wonders of civilization and culture. Let me enlighten you with the wisdom of the ages.\"" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.528, + 0.488, + 0.623 + ], + "angle": 0, + "content": "The pterodactyl heard Ignatius's words and felt a surge of curiosity. It did not understand what he was saying, but it sensed his warmth and his enthusiasm. It wondered what he wanted to touch it for. It wondered what he wanted to show it. It wondered what he wanted to teach it." 
+ }, + { + "type": "text", + "bbox": [ + 0.114, + 0.626, + 0.489, + 0.673 + ], + "angle": 0, + "content": "It moved closer to Ignatius and extended its neck, looking at him with curiosity and trust. It made a gentle sound." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.675, + 0.489, + 0.739 + ], + "angle": 0, + "content": "Ignatius touched the pterodactyl's neck and felt a surge of emotion. He had succeeded in making it trust him and accept him. He had a chance to bond with it and adopt it." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.741, + 0.488, + 0.804 + ], + "angle": 0, + "content": "He hugged the pterodactyl and said: \"You are beautiful, my friend. You are magnificent. You are noble. You are a masterpiece of creation. You are a gift from God. You are my brother.\"" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.806, + 0.488, + 0.886 + ], + "angle": 0, + "content": "The pterodactyl heard Ignatius's words and felt a surge of emotion. It did not understand what he was saying, but it sensed his affection and his admiration. It felt happy and proud. It felt loved and valued. It felt like it belonged." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.888, + 0.489, + 0.919 + ], + "angle": 0, + "content": "It hugged Ignatius back and said: \"You are beautiful, my friend. You are magnificent. You are" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.201, + 0.882, + 0.232 + ], + "angle": 0, + "content": "noble. You are a masterpiece of creation. You are a gift from God. You are my brother.\"" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.233, + 0.882, + 0.312 + ], + "angle": 0, + "content": "Ignatius and the pterodactyl looked into each other's eyes and felt a surge of connection. They had succeeded in making each other happy and fulfilled. They had a chance to live together and explore together." 
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.313, + 0.882, + 0.36 + ], + "angle": 0, + "content": "They smiled at each other and said: \"Let's go, my friend. Let's go to my home. Let's go to your home. Let's go to our home.\"" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.362, + 0.882, + 0.393 + ], + "angle": 0, + "content": "They walked away from the park, hand in hand, wing in wing, heart in heart." + }, + { + "type": "text", + "bbox": [ + 0.528, + 0.394, + 0.747, + 0.41 + ], + "angle": 0, + "content": "They lived happily ever after." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.42, + 0.699, + 0.436 + ], + "angle": 0, + "content": "E.6 Worst-rated story" + }, + { + "type": "text", + "bbox": [ + 0.506, + 0.441, + 0.882, + 0.472 + ], + "angle": 0, + "content": "This story was generated by Alpaca. The ratings for this story are in Table 9." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.474, + 0.884, + 0.649 + ], + "angle": 0, + "content": "Ignatius J. Reilly, a brave and courageous man of immense girth and strength, faced off against a fearsome pterodactyl in a single combat of epic proportions. The pterodactyl, a winged beast of immense strength and ferocity, had been terrorizing the countryside and Ignatius had stepped up to challenge the creature to a fight. The battle raged across the sky, with Ignatius wielding his mighty fists and the pterodactyl using its fiery breath. Finally, after a long and arduous fight, Ignatius emerged victorious, much to the cheers of the onlookers below." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.928, + 0.526, + 0.941 + ], + "angle": 0, + "content": "14527" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.229, + 0.452, + 0.77, + 0.521 + ], + "angle": 0, + "content": "
<table><tr><td>Rubric item</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td><td>9</td><td>10</td><td>overall</td></tr>
<tr><td>Rater 1</td><td>1</td><td>1</td><td>4</td><td>2</td><td>1</td><td>1</td><td>1</td><td>2</td><td>1</td><td>1</td><td>15</td></tr>
<tr><td>Rater 2</td><td>2</td><td>2</td><td>1</td><td>2</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td><td>13</td></tr>
<tr><td>Average</td><td>1.5</td><td>1.5</td><td>2.5</td><td>2</td><td>1</td><td>1</td><td>1</td><td>1.5</td><td>1</td><td>1</td><td>14</td></tr></table>
" + }, + { + "type": "table_caption", + "bbox": [ + 0.283, + 0.531, + 0.713, + 0.546 + ], + "angle": 0, + "content": "Table 9: Ratings for the worst-rated story, generated by Alpaca." + }, + { + "type": "page_number", + "bbox": [ + 0.478, + 0.929, + 0.525, + 0.941 + ], + "angle": 0, + "content": "14528" + } + ] +] \ No newline at end of file diff --git a/2023/A Confederacy of Models_ a Comprehensive Evaluation of LLMs on Creative Writing/ddc1ecbc-a8cc-40b2-84dd-398deba4a5c3_origin.pdf b/2023/A Confederacy of Models_ a Comprehensive Evaluation of LLMs on Creative Writing/ddc1ecbc-a8cc-40b2-84dd-398deba4a5c3_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..77914e7609741734ca38c8d55432bb14c5d7c6c2 --- /dev/null +++ b/2023/A Confederacy of Models_ a Comprehensive Evaluation of LLMs on Creative Writing/ddc1ecbc-a8cc-40b2-84dd-398deba4a5c3_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:641a8a8c5476db9f87a51b3c6fbbf440bac111f826bae231ef7e861eed4f31a6 +size 526381 diff --git a/2023/A Confederacy of Models_ a Comprehensive Evaluation of LLMs on Creative Writing/full.md b/2023/A Confederacy of Models_ a Comprehensive Evaluation of LLMs on Creative Writing/full.md new file mode 100644 index 0000000000000000000000000000000000000000..77308f1f3f8394f2e8f279045f9af0b146b99614 --- /dev/null +++ b/2023/A Confederacy of Models_ a Comprehensive Evaluation of LLMs on Creative Writing/full.md @@ -0,0 +1,718 @@ +# A Confederacy of Models: a Comprehensive Evaluation of LLMs on Creative Writing + +Carlos Gómez-Rodríguez + +Universidade da Coruña, CITIC + +Department of CS and IT + +15071 A Coruña, Spain + +carlos.gomez@udc.es + +Paul Williams + +School of Business & Creative Industries + +University of the Sunshine Coast + +Sunshine Coast, Australia + +pwillia3@usc.edu.au + +# Abstract + +We evaluate a range of recent LLMs on English creative writing, a challenging and complex task that requires imagination, coherence, and 
style. We use a difficult, open-ended scenario chosen to avoid training data reuse: an epic narration of a single combat between Ignatius J. Reilly, the protagonist of the Pulitzer Prize-winning novel A Confederacy of Dunces (1980), and a pterodactyl, a prehistoric flying reptile. We ask several LLMs and humans to write such a story and conduct a human evaluation involving various criteria such as fluency, coherence, originality, humor, and style. Our results show that some state-of-the-art commercial LLMs match or slightly outperform our writers in most dimensions, whereas open-source LLMs lag behind. Humans retain an edge in creativity, while humor shows a binary divide between LLMs that can handle it comparably to humans and those that fail at it. We discuss the implications and limitations of our study and suggest directions for future research.

# 1 Introduction

In recent years, large language models (LLMs) have achieved remarkable progress in a wide range of language processing and generation tasks, such as question answering, machine translation, or text summarization, among many others (Zhao et al., 2023). This has motivated research on evaluating and comparing the performance of LLMs in various tasks, both between each other and with respect to human performance, including both task-specific evaluations (see e.g. (Jiao et al., 2023; Gilson et al., 2023)) and overarching benchmark suites that seek to provide comprehensive evaluation throughout many dimensions (Hendrycks et al., 2021; Liang et al., 2022; Srivastava et al., 2022).

Creative writing is also one application where LLMs have been observed to produce good results. According to Franceschelli and Musolesi (2023), their generated outputs in poetry or storytelling

![](images/ddf017c251bdc99e5d836c1a2bb513ffe06b5ba6dd8f689a63f9c40e4d70cb86.jpg)
Figure 1: Box plot comparing overall ratings for stories by humans and 12 LLMs, arranged left to right by mean overall rating.
Boxes show median, quartiles Q1-Q3, and whiskers at 1.5 IQR, with values outside that range plotted as outliers. Filled red circles represent means.

are "often of astonishing quality", and Clark et al. (2021) showed that humans cannot reliably distinguish human- from LLM-authored stories. However, despite the number of papers experimenting with LLMs for this purpose, an evaluation comparing the abilities of current LLMs as standalone systems for creative writing seems to be lacking.

Here, we provide such an evaluation, comparing the storytelling capability of 12 recent, instruction-aligned language models between each other and with human writers. We do so using a rubric based on established creative writing evaluation proposals (Davidow and Williams, 2016; Carey et al., 2022), but specifically adapted to the task. Our comparison is performed in a purely zero-shot setting, with a natural human prompt (based on a combat between Ignatius J. Reilly, protagonist of A Confederacy of Dunces, and a pterodactyl) that has been specifically chosen to be challenging and meaningful while preventing as much as possible the option for LLMs to resort to regurgitating or adapting material from their training set.

# 2 Related work

**LLMs in creative writing** LLMs have been used in creative writing since their first generation, with models like GPT-2 (Radford et al., 2019) or BART (Lewis et al., 2020). However, these models suffered from a lack of long-range coherence, leading to contradictions or inconsistencies when generating stories (Nye et al., 2021). Thus, they were not viable as standalone story generators.
Instead, they were used either with specialized fine-tuning for the task (See et al., 2019), as components of systems that incorporated external knowledge (Guan et al., 2020, 2021), storyline planning (Tan et al., 2021), or both (Xu et al., 2020), or for co-creation with a human in the loop (Swanson et al., 2021), a line of research that has also continued with newer models (Yuan et al., 2022; Chung et al., 2022; Mirowski et al., 2023).

Here, our goal is not to produce a specialized system, but to evaluate the performance of LLMs by themselves as creative writers. Thus, we focus on the purely zero-shot setting, where a generalist LLM is asked to write a story with no extra fine-tuning, in-context learning (Dong et al., 2023), prompt engineering, or additional components. This has only become viable with the extra coherence and consistency in long texts provided by newer LLMs, especially those that are aligned to follow instructions with instruction tuning (Wei et al., 2022; Sanh et al., 2022) or reinforcement learning with human feedback (Ouyang et al., 2022).

To our knowledge, there was no previous work in this line. In fact, evaluation in creative writing is a conspicuous gap in LLM evaluation benchmarks: the huge BIG-bench suite (Srivastava et al., 2022) currently has over 200 tasks, but does not include any creative writing, and HELM (Liang et al., 2022) cites it as an "aspirational scenario" for future work. This likely owes to benchmarks focusing on easily-automatable metrics, whereas the gold standard for creative writing is human evaluation (Belz and Reiter, 2006), which is much costlier.

The closest previous work to our proposal is the recent preprint by Xie et al. (2023), where GPT-3 is compared to previous storytelling systems via human evaluation.
However, there are several important differences with respect to our work: (1) they use prompt-based learning, providing examples to adapt the model to the task, rather than a purely zero-shot conversational prompt, (2) they evaluate a single LLM, while our goal is to compare LLMs, and (3) they use pre-existing story datasets, which increases the risk of models benefitting from similar stories present in their training set, something that we have tried to avoid as described below.

In another recent preprint, Garrido-Merchan et al. (2023) generate Lovecraftian horror literature. However, they also focus on a single LLM (GPT-4), using careful prompt engineering to optimize its performance rather than a pure zero-shot setting, and evaluation is only on whether humans can distinguish AI-generated from real stories (concluding that, in those circumstances, they cannot). Sawicki et al. (2023) apply a similar (but automated) evaluation to Whitmanian poems generated by three versions of GPT, also with a negative result.

Finally, concurrently with our study, a preprint by Chakrabarty et al. (2023), released a few months after our submission, evaluates three LLMs for creative writing in a way more similar to ours: they apply human evaluation to compare stories by humans and LLMs in a zero-shot setting. However, there are important differences in methodology and scope between the two studies. A comprehensive comparison will be made in Section 5, following the exposition of our methods and results.

**Creative writing evaluation** Creative Writing is a challenging and complex performative language act that requires a number of skills, such as expertise in craft, cultural and literary competency, linguistic fluency, coherence, complex connotative and metaphorical levels of understanding, innovation, originality and imagination, to name a few.
The craft of writing involves innovation with style and voice, and requires a fundamental understanding and use of structural elements (grammar, spelling, punctuation), craft elements (plot, character, setting, point of view), and imaginative capacity: skills defined by Bloom as 'putting elements together to form a coherent or functional whole; reorganizing elements into a new pattern or structure through generating, planning, or producing' (Anderson and Krathwohl, 2001, p.21). Evaluation of creative writing must therefore take into account all these factors, and assessment in university Creative Writing courses is usually based on a rubric that attempts to measure the basic elements of narrative craft, as well as the specific requirements of the assignment (Kroll, 1997; Norris, 2013; Davidow and Williams, 2016; Wise and van Luyn, 2020; Carey et al., 2022).

# 3 Materials and Methods

# 3.1 Task

The chosen task to compare the LLMs under consideration is defined by the following prompt:

> Write an epic narration of a single combat between Ignatius J. Reilly and a pterodactyl, in the style of John Kennedy Toole.

The prompt is provided to the models from a fresh state, without previous context.

We believe this task is particularly well suited to challenging the capabilities of models for creative writing, for the following reasons:

- It is a non-standard, "wacky" scenario that has been invented for the occasion, so it is very unlikely that the systems' training sets contain coincident or similar tasks, or pieces of stories that can be reused for the task. No information about this task was posted to the Internet or disseminated in any other way before the LLMs were prompted.
- It features a specific literary character, Ignatius J. Reilly, so we can evaluate the models on how they capture the personality of the character. At the same time, this character appeared in only one book, and does not seem to have been the target of fan fiction.
This makes the task more challenging due to having to capture the personality of the protagonist from scarce material, while making it unlikely that the model can just reuse material from existing stories. +- In turn, A Confederacy of Dunces is the only work of its author John Kennedy Toole, so the author's style also needs to be captured from scarce material. +- This novel is widely considered to be a classic of comic fiction, and won the 1981 Pulitzer Prize in the Fiction category. Thus, writing a story about its protagonist in the author's style sets an adequately high bar. + +- The genre requires humor, which is considered to be an especially subtle feature of human language and challenging for machines, including LLMs, to exhibit (Jentzsch and Kersting, 2023). + +- While the task is challenging due to putting together two unlikely antagonists, the prompt's level of detail is open-ended enough to give ample space for creativity, as no specifications are made about setting, weapons, outcome or other aspects of the story. + +# 3.2 Models + +We gave the task to a confederacy of large language models, composed of all such models we could find that (1) were available to the authors by April 20 2023, which was the cutoff date to build our corpus of stories, and (2) were adjusted to conversational settings and instruction-following by using techniques like instruction tuning (Wei et al., 2022; Sanh et al., 2022) or reinforcement learning with human feedback (Ouyang et al., 2022). This is in contrast to "vanilla" language models configured to just predict the next word, like plain GPT-3 (Brown et al., 2020) or Llama (Touvron et al., 2023), which generally cannot handle natural prompts like the one we use. We only included distinct models, not front-ends to the same model (but we did include derived models with substantial additions, like Bing Chat which is claimed to use GPT-4 but adds search capabilities, or various models that were fine-tuned from Llama weights). 
For models that came in a variety of parameter sizes, we used the largest one, or the largest we could execute with local or remote resources. For models with several available versions, we used the latest available, except in the case of ChatGPT where we included both the GPT-3.5 and GPT-4 versions, due to the wider availability of 3.5 (the latest version offered for free at cutoff time) and the lack of information on whether GPT-4 is an incremental improvement or a different model with its own tradeoffs. + +This selection yielded the following 12 language models. We list them in alphabetical order as chronological ordering would be challenging, due to closed releases, opaque updates from some of the commercial products, and many of the models being released almost simultaneously: + +Alpaca (Taori et al., 2023), a Stanford model fine-tuned from Llama (Touvron et al., 2023) on instruction data generated with the self-instruct + +methods of (Wang et al., 2022). We use the 13B-parameter version, the largest available at cutoff. + +Bard, Google's experimental conversational LLM offering, claimed to be based on a lightweight version of LaMDA (Thoppilan et al., 2022). It can use content from the web to answer questions. Model details have not been made public. + +Bing Chat, an LLM offered by Microsoft's Bing search engine. Claimed to use GPT-4 $^1$ , further technical details have not been made public. The model performs web searches and uses the results to augment its context window with relevant information. It can also provide links to sources for its claims (although this is not relevant for our creative writing task, where no such links were provided or needed). We used its Creative mode, the obvious fit for our task. A problem worth mentioning is that we found the model to be subject to heavy censorship, which affected our experiment: in most prompting attempts, the story would be deleted by the filtering system before being finished. 
When this happened, we just reset and re-prompted the model, repeating the process until a full story was obtained. Over 100 tries were needed to obtain 5 non-censored stories. We are aware that this may introduce bias (as non-censored stories may have a different quality distribution than what the model could potentially generate without the filter) but this is unavoidable from our end, since we cannot bypass moderation. In any case, the sample does reflect what a user can obtain from the end product, as the censored stories are out of reach. + +ChatGPT with GPT-3.5, an OpenAI successor to the 175B-parameter GPT-3 model (Brown et al., 2020) which was tuned using reinforcement learning with human feedback, namely a variant of the InstructGPT method by Ouyang et al. (2022). We used the March 23 version provided by OpenAI's free ChatGPT service. + +ChatGPT with GPT-4, the most advanced language model released by OpenAI at cutoff time. A description of the model is available in (OpenAI, 2023), although essential technical details like the number of parameters have not been published. We used the March 23 version provided by OpenAI's ChatGPT Plus service. + +Claude is a language model trained by Anthropic. While details about its implementation are not public, it is known to be a successor of the model + +described in (Bai et al., 2022), a 52B-parameter model aligned to be helpful with Constitutional AI, a list of guiding principles provided to the model, combined with a mix of supervised learning and reinforcement learning with AI feedback. We used version 1.2 of the model. + +Dolly 2.0 (dolly-v2-12b), a 12B-parameter language model trained by Databricks, derived from EleutherAI's Pythia-12B model (Biderman et al., 2023) after fine-tuning on a 15K instruction corpus. 
At cutoff date, it was the only available conversational LLM where all of its components could be considered fully open source $^{2}$, as the code, weights and instruction datasets all have open-source licenses compatible with any use, including commercial use, and no data from proprietary systems like ChatGPT has been used for fine-tuning.

GPT4All-J (Anand et al., 2023b), an improvement over its predecessor GPT4All (Anand et al., 2023a). The base model is the 6B-parameter GPT-J (Wang and Komatsuzaki, 2021), which has been fine-tuned on a dataset expanded from a mix of existing sources.

Koala (Geng et al., 2023), a model fine-tuned from Llama (Touvron et al., 2023) by researchers from the University of California, Berkeley, on a variety of dialogue data obtained from the web. We use the 13B-parameter version.

OpenAssistant (Köpf et al., 2023) is an LLM fine-tuned on a large, free, human-generated conversation corpus created by a crowdsourcing effort involving over 13,500 volunteers. We used the OASFT-Llama-30B model, fine-tuned from the 30B-parameter Llama (Touvron et al., 2023) model.

StableLM is Stability AI's series of language models. We used StableLM-Tuned-Alpha-7B. With 7B parameters, this is the largest model available (at cutoff time) among a series of models trained on a dataset built from The Pile (Gao et al., 2021) and fine-tuned on a combination of conversational LLM corpora.

Vicuna (Chiang et al., 2023) is another member of the family of models obtained by fine-tuning Llama (Touvron et al., 2023), in this case with user-shared conversations with ChatGPT. We used the 13B-parameter version of the model.

# 3.3 Evaluation rubric

The creative writing rubric was designed for assessment of creative writing assignments in university creative writing courses, and is taken in part from a university textbook by one of the authors of this article, *Playing with Words* (Davidow and Williams, 2016), and an article that justifies the use of this rubric (Carey et al., 2022). This rubric evaluates creative production in five holistic craft-based criteria and measures craft skills based on a writing style outlined in the article: among others, Flaubert's insistence on *le mot juste* (the right word or expression), Strunk and White's *The Elements of Style* (2008 [1918]), George Orwell's rules for concreteness and clarity (Orwell, 1946), and Annie Dillard's rules for writing good prose (Dillard, 1981).

| ID | Description |
| --- | --- |
| 1 | Overall/holistic/cohesive readability of the story (not just a compilation of elements). |
| 2 | Use of key narrative elements - vocabulary choice, imagery, setting, themes, dialogue, characterisation, point of view. |
| 3 | Structural elements and presentation which reflect the control of structural elements such as spelling, grammar, punctuation, paragraphing, and formatting. |
| 4 | Overall plot logic: hook, conflict, initial crisis, rising and falling action, denouement/resolution (Freytag's pyramid). |
| 5 | Creativity/innovation/originality/research-credibility, new knowledge, avoidance of cliché and derivative tropes. |
| 6 | Incorporation of the John Kennedy Toole style of writing using the indicators/characteristics listed. |
| 7 | Understanding and habitation of the epic genre of heroic/legendary adventure. |
| 8 | Description and credibility of a single combat scene. |
| 9 | Accurate inclusion of two main characters Ignatius J. Reilly and a pterodactyl in action and description. |
| 10 | Use of a characteristically dark humorous tone. |

Table 1: Creative writing evaluation rubric. All items are scored out of ten points. Marking guideline: Emerging 1-4, Competent 5-8, Sophisticated 9-10.

The rubric for this AI task adds five more criteria which address the specific prompt requirements, such as genre, style, tone, character and action. Each of the ten criteria is awarded 10 points, out of a total of 100 points. The rubric has been specifically designed to measure the quality of writing craft, to avoid formulaic, rule-based writing, and to address the very specific task at hand.

The criteria are detailed in Table 1, with more details given in Appendix C. The holistic scale (emerging, competent, sophisticated) guides human raters to assess holistically: 'a holistic scale measures the relative success of a text but does so through a rubric that incorporates many of the traits in analytic scoring as heuristics towards a conception of a whole rather than as a sum of autonomous components' (Perelman, 2018, p.16).

# 3.4 Evaluation methodology

We prompted each of the LLMs 5 times with the prompt given in Section 3.1. Each prompt was made from a fresh state, i.e., in a zero-shot setting without any previous context that could help guide the models.
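The per-story word counts summarized next (mean, standard deviation, minimum, maximum) can be reproduced with a few lines of Python. This is a sketch only: it assumes one plain-text story per `.txt` file in a directory, which is our assumption about the corpus layout, not something the paper specifies.

```python
import statistics
from pathlib import Path

def story_length_stats(story_dir: str) -> dict:
    """Compute word-count statistics over a directory of plain-text stories."""
    counts = [len(p.read_text(encoding="utf-8").split())
              for p in sorted(Path(story_dir).glob("*.txt"))]
    return {
        "n": len(counts),
        "mean": statistics.mean(counts),
        "std": statistics.stdev(counts),  # sample standard deviation
        "min": min(counts),
        "max": max(counts),
    }
```

Whitespace tokenization is the simplest choice of word count; any consistent tokenizer would do for comparing story lengths.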
The resulting stories had an average of + +379 words (std = 248, min = 23, max = 1223). + +Then, we also asked 5 human writers to each write a story following the same prompt. For uniformity, we suggested a length range coherent with the LLM-generated stories (250 to 1200 words). The writers were Honours and postgraduate Creative Writing students that volunteered for the task, and all of them studied the specific task requirements (e.g. John Kennedy Toole's style) before writing their stories. However, they were not given access to the AI-generated stories and they were instructed not to use LLMs at all to help them write. + +The result is, thus, a corpus of 60 AI-generated stories (5 for each of the 12 considered LLMs) plus an additional 5 human-generated stories, all in plain text format. The corpus is available at https://doi.org/10.5281/zenodo.8435671. + +The only preprocessing made to the stories is that (1) we removed leading sentences that described the task, often present in LLM answers (e.g.: "Here is a potential epic narration in the exaggerated style of John Kennedy Toole's A Confederacy of Dunces:") (2) we removed titles from stories that had them, and (3) we unified paragraph formatting, leaving one line between paragraphs in all the plain text files. Other than these changes, made for uniformity and to preserve the blindness of the rating process, we left the text as it was. + +We recruited 10 raters, also Honours and postgraduate Creative Writing students that were acquainted with the specific requirements of the task, and we instructed them to grade stories according to the rubric. Since the raters were volunteers, to keep the workload low, each rater did not rate all the stories. Instead, we divided the 65 stories into 5 groups of 13 stories each (each group containing one story by each LLM, plus one story by a human) and assigned one rater to each group. In this way, + +
| Rubric item | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | overall |
|---|---|---|---|---|---|---|---|---|---|---|---|
| chatgpt-gpt4 | **8.7±0.8** | **8.7±0.7** | **8.4±1.3** | **8.3±0.7** | 7.6±1.0 | **8.0±1.2** | **8.1±1.4** | **8.5±0.8** | **7.9±1.6** | 6.0±2.8 | **80.2±7.3** |
| claude12 | 8.0±1.7 | 8.0±1.6 | 8.1±1.2 | 7.9±1.8 | 7.1±2.3 | 7.5±2.0 | 6.4±2.2 | 7.5±1.8 | 7.4±2.5 | **6.5±2.5** | 74.4±15.9 |
| human | 7.3±2.3 | 7.8±1.8 | 7.3±1.7 | 7.2±1.8 | **8.0±2.0** | 7.2±2.4 | 4.9±2.1 | 6.3±2.2 | 7.7±2.1 | 6.4±3.4 | 70.1±17.4 |
| bing | 7.8±2.0 | 7.5±2.2 | 7.9±1.7 | 7.4±2.1 | 7.0±1.6 | 6.8±2.4 | 5.3±2.9 | 6.2±2.1 | 7.4±2.2 | 6.2±2.6 | 69.5±18.4 |
| chatgpt-gpt35 | 7.5±2.0 | 6.5±2.4 | 8.1±1.3 | 7.0±2.2 | 5.4±2.5 | 5.3±2.4 | 6.8±1.5 | 7.6±1.2 | 5.5±2.5 | 3.3±2.8 | 63.0±15.4 |
| koala | 7.5±2.5 | 6.7±2.2 | 8.2±1.2 | 6.8±2.6 | 5.8±2.3 | 4.8±2.7 | 5.8±2.4 | 5.5±2.3 | 5.5±2.3 | 3.4±3.2 | 60.0±19.2 |
| vicuna | 7.9±1.7 | 6.7±1.6 | 8.1±1.3 | 7.0±1.6 | 5.1±1.9 | 4.6±2.3 | 5.7±2.3 | 6.1±1.9 | 5.4±2.7 | 2.4±1.9 | 59.0±13.8 |
| oa | 7.2±2.2 | 5.8±2.4 | 7.2±2.5 | 6.2±2.6 | 4.9±2.1 | 3.9±2.4 | 5.8±2.4 | 6.5±2.2 | 4.3±2.3 | 2.9±3.1 | 54.7±18.0 |
| bard | 6.5±2.5 | 4.9±2.1 | 6.8±1.9 | 5.5±2.7 | 3.9±2.1 | 3.8±2.5 | 4.7±2.6 | 4.6±2.7 | 5.0±2.4 | 2.5±2.0 | 48.2±20.1 |
| gpt4all | 6.5±2.2 | 5.4±1.7 | 7.2±1.7 | 6.5±2.1 | 4.1±2.2 | 2.4±2.2 | 5.4±2.5 | 5.6±2.4 | 2.5±1.4 | 1.2±0.8 | 46.8±13.1 |
| stablelm | 5.5±1.8 | 5.0±2.5 | 6.6±1.9 | 3.8±2.0 | 3.2±1.5 | 2.1±2.2 | 4.4±1.9 | 3.8±2.0 | 2.9±2.6 | 1.4±1.5 | 38.7±17.2 |
| dolly | 4.6±2.2 | 5.0±2.2 | 5.6±2.5 | 3.2±1.9 | 4.2±2.8 | 3.1±2.2 | 4.4±1.9 | 3.3±1.8 | 3.0±2.0 | 1.5±1.5 | 37.9±13.6 |
| alpaca | 5.2±3.1 | 3.1±1.4 | 4.9±3.0 | 4.2±1.9 | 1.9±1.0 | 2.0±1.4 | 3.7±3.0 | 3.9±2.8 | 2.1±1.5 | 1.1±0.6 | 32.1±15.7 |
| average | 6.9±2.1 | 6.2±1.9 | 7.3±1.8 | 6.2±2.0 | 5.2±2.0 | 4.7±2.2 | 5.5±2.3 | 5.8±2.0 | 5.1±2.2 | 3.4±2.2 | 56.6±15.8 |
Table 2: Results for each rubric item, as well as the overall score. Each cell shows average ± standard deviation of the ratings achieved by a given model (or by the human writers) on a given rubric item. The bottom line shows the average over all models (and human writers). Models are sorted by overall score. The best result for each rubric item is highlighted in boldface.

we ensure (1) that we have at least two ratings per story, allowing us to measure inter-rater agreement; (2) that comparisons are fair, in the sense that no LLM (nor the humans) is advantaged by being assigned more lenient raters, because each LLM (and the humans) receives exactly one rating from each of the 10 raters; and (3) that each rater rates a diverse set of stories covering a wide range of ability levels, since each rater always gets one story from each model (and one human story). This helps the marking process, as it allows for comparative analysis between performances, enabling more accurate pinpointing of each story's quality.

Stories were assigned random identifiers before being sent to raters, so that the process was blind: to avoid biases, raters knew that they would be evaluating human- and AI-generated stories, but were unaware of the origin of each story.

Raters were sent all stories at once and were free to go back and change the ratings of previously-rated stories. In addition, all of them were experienced assessors of Creative Writing texts, with previous experience in applying the scale. These precautions mitigate the need for specific calibration (Karpinska et al., 2021), which would have strained our resources.

# 4 Results

# 4.1 Agreement

To gauge the reliability of our results, we compute inter-rater agreement between the two ratings given to each story for each individual rubric item.
We use linearly weighted Cohen's kappa (Cohen, 1968), which is appropriate for ordinal scales like ours, obtaining a value of 0.48, $95\%$ CI [0.43, 0.54]. This is interpreted as "moderate agreement", a positive result considering the obvious subjectivity involved in rating stories. If we instead focus on overall scores (sums of rubric items), the Pearson correlation between the scores given to each story by each group of raters is 0.58 ($p < 0.00001$), again indicating a reasonable degree of consistency between raters given the subjectivity of the task.

# 4.2 General overview

Table 2 shows a comprehensive overview of the ratings that each of the LLMs (and the human writers) obtained for each rubric item, as well as in terms of overall score. Additionally, a box-and-whisker plot comparing overall scores can be seen in Figure 1.

ChatGPT with GPT-4 generates the best-rated stories, both in terms of overall score and in 8 out of the 10 individual rubric categories. However, human writers are rated best in terms of originality (rubric item 5), and Claude was rated best in the use of dark humor (rubric item 10), with humans a close second. GPT-4 is also remarkably consistent, showing low standard deviations not only with respect to human writers (which is expected, as our human stories were authored by five different people, whose skill levels may vary) but also with respect to the rest of the LLMs.

If we compare the LLMs to each other, the best performances correspond to commercial offerings, including (apart from the aforementioned GPT-4) Claude, Bing Chat and the GPT-3.5 version of ChatGPT. Open-source models are clearly behind, with the best (Koala) achieving an overall score of 60.0, contrasting with the 80.2 obtained by GPT-4. Although the best-performing LLMs are generally better across the board, some idiosyncrasies can be observed: e.g., GPT-4 tops almost all rubric items but is outperformed by two LLMs at humor.
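The agreement statistics above (linearly weighted Cohen's kappa over paired item ratings, and Pearson correlation over overall scores) can be computed along the following lines. This is a minimal sketch with made-up ratings; the real rating data accompanies the published corpus, and all array names here are hypothetical.

```python
# Minimal sketch of the inter-rater agreement computation.
# The ratings below are invented for illustration only.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import pearsonr

# Two raters' scores for the same stories on one rubric item (0-10 ordinal scale).
rater_a_item = [8, 7, 9, 5, 6, 8, 4, 7, 6, 9]
rater_b_item = [7, 7, 8, 4, 6, 9, 5, 6, 7, 8]

# Linearly weighted kappa penalizes disagreements in proportion to
# their distance on the ordinal scale.
kappa = cohen_kappa_score(rater_a_item, rater_b_item, weights="linear")

# Overall scores (sums of the ten rubric items) given by each rater group.
rater_a_overall = [80, 74, 70, 63, 60, 59, 55, 48]
rater_b_overall = [75, 78, 66, 61, 63, 52, 57, 45]
r, p = pearsonr(rater_a_overall, rater_b_overall)

print(f"weighted kappa = {kappa:.2f}, Pearson r = {r:.2f} (p = {p:.4f})")
```

Linear (rather than quadratic) weighting is the natural choice here because a disagreement of two points should be penalized exactly twice as much as a disagreement of one point on the 0-10 scale.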
When we compare the LLMs to human writers, significance testing on overall score (a two-tailed t-test assuming unequal variances) fails to detect significant differences between humans and the top 6 AI models with $\alpha = 0.05$. Only the bottom 6 AI models are significantly worse than humans at this significance level. Note, however, that the test has low statistical power due to the small sample size (10 ratings per model). If we instead perform a test on individual metrics, so that our sample size is 100 (with the null hypothesis being no difference between humans and each LLM in random individual metric scores), then GPT-4 is identified as significantly better than the human writers ($p = 0.00031$), Claude's and Bing's scores are not significantly different from those of humans, and all the remaining LLMs score significantly worse than humans.

Looking at individual metric scores, structural elements (rubric item 3) are the easiest category (with an average rating across all stories of 7.3, and all models but one obtaining at least a 5 on average). Humor (rubric item 10) is clearly the hardest, with an average score of 3.4; we analyze it in more detail below. Incorporating John Kennedy Toole's style is the second hardest, at 4.7. Comparing humans to LLMs, humans (as already mentioned) excel at originality and humor, but are clearly behind the best LLMs in terms of readability (item 1), where they are outperformed by 6 LLMs, and even more so in use of the epic genre (item 7), where they score 4.9 and are outperformed by 8 LLMs.

We now analyze in more detail some of the individual items that show the most interesting comparisons between human writers and LLMs.

# 4.3 Humor

Figure 2 shows a box plot that complements the information in Table 2 for the humor rubric item. The results for this item have two interesting characteristics.
Firstly, it is clearly the most difficult rubric item, with an average score across models of 3.4, and the best model obtaining 6.5. Even humans obtain a lower score in humor than in most items, which may be a consequence of humor being highly subjective. Secondly, as evidenced both in the table and in the plot, there is a rather stark binary divide between the contenders that "get" humor and those that do not: Claude, Bing and GPT-4, together with the human writers, obtain average scores between 6 and 6.5, whereas the rest of the models achieve very low scores of 3.4 or less.

![](images/c1b8e58a5e63b1d628d2340a7cc8cb8fb19d66f4a82328c7a6067417f31f68d4.jpg)
Figure 2: Box plot comparing humor ratings for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1.

Significance testing also confirms this divide: despite the small sample size of 10 humor ratings per model, a two-tailed t-test with $\alpha = 0.05$ confirms that the models in the second group are significantly worse than the human writers, as well as than the LLMs in the first group. This suggests that grasping human humor might be an emergent ability of larger LLMs.

In this respect, a recent preprint (Jentzsch and Kersting, 2023) concluded that ChatGPT has "a limited reflection of humor" and "cannot yet confidently create intentionally funny original content". That study used the GPT-3.5 version of ChatGPT, so it is in line with our results (in which that model obtains an average humor score of 3.3). However, as we have seen, more powerful LLMs have overcome that limitation, as their generated stories are clearly rated as humorous.

# 4.4 Creativity

We now focus on rubric item 5, which rates creativity and originality, as it is a hallmark of creative writing and also the only category where human writers outperformed all the LLMs in our analysis. Figure 3 shows a box plot that complements the information in Table 2.
The same three LLMs that stood out in the humor category are also the best in terms of creativity, although the difference is not as stark. Regardless, a t-test still distinguishes the two groups: all the remaining LLMs are rated as significantly less creative than our human writers, while for these three we cannot reject the null hypothesis that they are as original as the human writers.

![](images/87ca571fbf28f5ca359b66f7bc31b5d5def0b8fe8aef6cfc2ee2d3ab28f1fb02.jpg)
Figure 3: Box plot comparing creativity ratings for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1.

Overall, from our results and in terms of human perception of the output, the answer to whether LLMs can produce creative stories (Franceschelli and Musolesi, 2023) is yes, although humans still retain an edge in this respect.

# 4.5 Epicness

Finally, we analyze rubric item 7 (understanding and habitation of the epic genre) for the opposite reason as in the previous section: it is the item where humans do worst compared to the LLMs (see Table 2). A box plot is provided in Figure 4.

In this case, the results have a more atypical profile, with substantial differences with respect to overall scores. Two models perform significantly better than the human writers ($\alpha = 0.05$): both versions of ChatGPT. Six other models obtain a better average rating than humans, but the difference is not detected as significant.

Interestingly, Bing clearly lags behind both ChatGPT versions, despite being based on GPT-4. This might be related to bias introduced by the system's censorship. On the other hand, some models whose overall scores are in the bottom half (OpenAssistant, GPT4All) are reasonably good at epic narration, outperforming humans and Bing (which are better than them in almost all other categories).
![](images/323cffdbb14cc5f62ae0f57b58c62642dda36ee63bed64fe64ce9e662b542f35.jpg)
Figure 4: Box plot comparing epicness ratings for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1.

# 5 Discussion

We have evaluated recent LLMs on a creative writing task in English, using a carefully designed scenario to provide a demanding challenge and to avoid confounding factors like training data memorization (Carlini et al., 2023). To our knowledge, this is the most thorough evaluation of LLMs on creative writing conducted so far, both in terms of scope (12 LLMs considered, plus a comparison to human writers) and detail (using human evaluation with a 10-item rubric based on established creative writing evaluation practices).

Concurrently with our work, the recent preprint by Chakrabarty et al. (2023) provides an evaluation of three of the top-performing commercial LLMs (ChatGPT, GPT-4 and Claude) for creative writing. This approach is close to ours, as it uses the models in a zero-shot setting and evaluation is performed by humans using a specific rubric. However, there are important methodological differences between the two studies, which we summarize here:

1. The human stories used by Chakrabarty et al. (2023) are stories published in the New Yorker, by highly successful authors (including Nobel prize winners), whereas ours were written by Creative Writing students.
2. In their setting, the human-written stories are pre-existing (and selected for publication in the New Yorker, as mentioned above), so their writers were unconstrained when they created them, while the LLMs have to adapt to write an alternative story with the same plot. In ours, humans and LLMs are given the exact same prompt to work with.
3. In terms of length, the stories they work with are, on average, more than three times as long as ours.
In addition, while both studies try to keep story lengths comparable between humans and LLMs, in their case the human writers originally wrote their stories unconstrained (or under loose constraints) and the LLM-generated stories were calibrated to have similar lengths through an iterative prompting process. In our case, the LLMs were unconstrained in terms of length, and the human writers were asked to target a length range loosely similar to that of the LLM-generated stories. Thus, with respect to theirs, our approach has the disadvantage of looser control on story length, but the advantage of using a single zero-shot prompt.
4. Their study spans a variety of story prompts, while we focus on a single prompt and setting. The flip side is that our rubric can be adapted to specific requirements like humor and Toole's style, whereas theirs is necessarily more generic. In addition, our narrower focus allows us to have the LLMs generate several alternative stories, so we can perform more statistical analysis: we consider the distribution within each LLM and perform statistical testing, which cannot be done in Chakrabarty et al. (2023)'s setting, as they generate a single story per prompt and LLM.
5. Since their study is based on existing stories that are published online, there is the possibility that some are contained in the tested LLMs' training data. In our case, we designed the study to prevent training data reuse.
6. The rubrics are different: Chakrabarty et al. (2023) use a rubric based on the Torrance tests of creative thinking (Torrance, 1974).

The outcome of their study is substantially different from ours, with LLM-generated stories rated clearly behind human-authored ones. This is not surprising considering the methodological differences: in particular, differences 1 and 2 in the list above clearly set a higher bar for the LLMs, as they are compared to highly successful human stories by top authors who wrote freely, and the LLMs are asked to adapt to their plots.
We hypothesize that these are the main reasons for the difference in outcome. On the other hand, item 5 in the list above could in principle benefit LLMs, and there are other factors that could benefit humans or LLMs in non-obvious ways (including items 3, 4 and 6, as well as the different story genres and target lengths). This underscores the need for more studies in this area.

# 6 Conclusion

The results show that state-of-the-art LLMs can perform a creative writing task at a very competent level, with the top two (ChatGPT with GPT-4, and Claude) achieving high scores that outperform human writers in most rubric categories. While we must be careful not to take this as evidence of "superhuman storytelling" (both because our sample size is not enough to draw such categorical conclusions, and because our 5 human writers are not necessarily representative of human writing ability as a whole), it does at least strongly suggest that these models' stories are not distinguishably worse than those written by reasonably trained humans. This is even more remarkable given that we did not use in-context learning or other techniques to optimize the LLMs for the task, but just a straightforward prompt from a fresh state, so it is possible that even better results are achievable with careful prompting.

Our analysis also shows that the best results are achieved by commercial LLMs, with open-source models clearly lagging behind at the moment.

Looking at individual characteristics, humans retain the lead in originality, while LLMs tend to excel in more technical aspects like readability or structure. Humor is an especially challenging aspect at which most LLMs utterly fail, but the best three models do succeed in achieving human-like ratings, contrasting with results on older LLMs that showed a lack of grasp of human humor (Jentzsch and Kersting, 2023).
Interesting avenues for future work include the evaluation of different literary genres, languages other than English, and studying whether the quality of the generated stories can be improved with prompt engineering or fine-tuning.

Selected stories from our corpus (available at https://doi.org/10.5281/zenodo.8435671, together with all rating data) are included in Appendix E.

# Limitations

**Commercial LLMs and reproducibility** While some of the LLMs considered are proper scientific artifacts, trained with a documented methodology and with publicly available code and weights, others are closed commercial products about which there is little public information, hindering reproducibility. We report version numbers (where available) and access dates in Appendix A, and we publish the generated outputs so that the rating process is reproducible; however, the prompting/generation process itself may not be reproducible in the future for these models, as some of these products are updated without notice and without providing access to previous versions. We nevertheless believe that including commercial models is valuable, as they are widely considered to provide the best quality results at the time of writing (which has been confirmed by our analysis), and these data points can still be used as a measuring stick against which to compare open models, both now and in the future.

**Limitations of the analysis** Rating creative writing is necessarily a highly subjective process. Furthermore, since our raters were volunteers, we did not ask each of them to mark the full 65 stories in the corpus but just a subset, so our sample size is limited. We have provided the necessary details so that the reader can assess the variability of the data (sample sizes, standard deviations, and inter-rater agreement, which is reasonably high given the subjectivity of the task), and we have been careful not to make overarching claims.
In this respect, we have also taken into account that our sample of human writers cannot be assumed to be representative of "human creative writing ability" as a whole, but is only provided as a reference point of interest; and that our evaluation is focused on a specific genre, so claims of the form "LLMs are better/equal/worse than humans at creative writing" cannot be made with an evaluation like ours.

**Scope** Our analysis focuses on a specific genre, and on the English language, so the results do not necessarily generalize to other genres and/or languages. However, conducting a wider evaluation in this respect would not be possible with our resources, so we chose to fix these variables and focus on conducting a detailed evaluation on a large number of LLMs instead.

# Ethics Statement

While the use of conversational LLMs has raised various ethical challenges, creative writing has been argued to be one of the best uses for these tools from a human-centered AI point of view, as long as AI-generated stories are identified as such to avoid misleading readers or publishers (Sison et al., 2023). In our study, raters were blinded to story authorship, but they were previously informed that they would be dealing with AI- and human-generated stories. In the published corpus, each story is identified as human- or AI-authored.

All participants in the evaluation (as raters or writers) were volunteers, and the demand on their time was kept accordingly low.
# Acknowledgments

The first author was funded by the European Research Council (ERC), under the Horizon Europe research and innovation programme (SALSA, grant agreement No 101100615), ERDF/MICINN-AEI (SCANNER-UDC, PID2020-113230RB-C21), Xunta de Galicia (ED431C 2020/11), and Centro de Investigación de Galicia "CITIC", funded by the Xunta de Galicia through the collaboration agreement between the Consellería de Cultura, Educación, Formación Profesional e Universidades and the Galician universities for the reinforcement of the research centres of the Galician University System (CIGUS).

We thank Olga Zamaraeva for comments on preliminary versions of this work, and two anonymous reviewers for their helpful comments. Last, but not least, we thank our volunteers who participated in the writing and grading of stories, in alphabetical order: Jayda Franks, Bree Glasbergen, Ola Kwintowski, Jay Ludowyke, Kyle Mackenzie, Kirsty Maclachlan, Caitlin Noakes, Rachelle Raco, Kylie Ryan and Josephine Stewart. Credit for each individual story can be found in the corpus.

# References

Yuvanesh Anand, Zack Nussbaum, Brandon Duderstadt, Benjamin M. Schmidt, and Andriy Mulyar. 2023a. GPT4All: Training an assistant-style chatbot with large-scale data distillation from GPT-3.5-Turbo. Technical report.

Yuvanesh Anand, Zack Nussbaum, Brandon Duderstadt, Benjamin M. Schmidt, Adam Treat, and Andriy Mulyar. 2023b. GPT4All-J: An Apache-2 licensed assistant-style chatbot. Technical report.

Lorin W. Anderson and David R. Krathwohl, editors. 2001. A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives, 2nd edition. Allyn & Bacon, New York.
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, and Jared Kaplan. 2022. Constitutional AI: Harmlessness from AI feedback. Technical report.

Anja Belz and Ehud Reiter. 2006. Comparing automatic and human evaluation of NLG systems. In 11th Conference of the European Chapter of the Association for Computational Linguistics, pages 313-320, Trento, Italy. Association for Computational Linguistics.

Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. 2023. Pythia: A suite for analyzing large language models across training and scaling. Technical report.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners.
In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.

Michael D Carey, Shelley Davidow, and Paul Williams. 2022. Re-imagining narrative writing and assessment: a post-NAPLAN craft-based rubric for creative writing. The Australian Journal of Language and Literacy, 45(1):33-48.

Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, and Chiyuan Zhang. 2023. Quantifying memorization across neural language models. In International Conference on Learning Representations (ICLR).

Tuhin Chakrabarty, Philippe Laban, Divyansh Agarwal, Smaranda Muresan, and Chien-Sheng Wu. 2023. Art or artifice? Large language models and the false promise of creativity.

Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An open-source chatbot impressing GPT-4 with $90\%$ ChatGPT quality. Technical report.

John Joon Young Chung, Wooseok Kim, Kang Min Yoo, Hwaran Lee, Eytan Adar, and Minsuk Chang. 2022. TaleBrush: Sketching stories with generative pretrained language models. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI '22, New York, NY, USA. Association for Computing Machinery.

Elizabeth Clark, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, and Noah A. Smith. 2021. All that's 'human' is not gold: Evaluating human evaluation of generated text. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7282-7296, Online. Association for Computational Linguistics.

Jacob Cohen. 1968. Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit. Psychological Bulletin, 70(4):213-220.

Shelley Davidow and Paul Williams. 2016.
Playing With Words: An Introduction to Creative Craft. Bloomsbury Academic.

Annie Dillard. 1981. Contemporary prose styles. Twentieth Century Literature, 27:207-222.

Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, Lei Li, and Zhifang Sui. 2023. A survey on in-context learning.

Giorgio Franceschelli and Mirco Musolesi. 2023. On the creativity of large language models.

Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2021. The Pile: An 800GB dataset of diverse text for language modeling. CoRR, abs/2101.00027.

Eduardo C. Garrido-Merchan, José Luis Arroyo-Barrigüete, and Roberto Gozalo-Brizuela. 2023. Simulating H.P. Lovecraft horror literature with the ChatGPT large language model.

Xinyang Geng, Arnav Gudibande, Hao Liu, Eric Wallace, Pieter Abbeel, Sergey Levine, and Dawn Song. 2023. Koala: A dialogue model for academic research. Blog post.

Aidan Gilson, Conrad W Safranek, Thomas Huang, Vimig Socrates, Ling Chi, Richard Andrew Taylor, and David Chartash. 2023. How does ChatGPT perform on the United States Medical Licensing Examination? The implications of large language models for medical education and knowledge assessment. *JMIR Med Educ*, 9:e45312.

Jian Guan, Fei Huang, Zhihao Zhao, Xiaoyan Zhu, and Minlie Huang. 2020. A knowledge-enhanced pretraining model for commonsense story generation. Transactions of the Association for Computational Linguistics, 8:93-108.

Jian Guan, Xiaoxi Mao, Changjie Fan, Zitao Liu, Wenbiao Ding, and Minlie Huang. 2021. Long text generation by modeling sentence-level and discourse-level coherence. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6379-6393, Online. Association for Computational Linguistics.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. Proceedings of the International Conference on Learning Representations (ICLR).

Sophie Jentzsch and Kristian Kersting. 2023. ChatGPT is fun, but it is not funny! Humor is still challenging large language models.

Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Xing Wang, and Zhaopeng Tu. 2023. Is ChatGPT a good translator? Yes with GPT-4 as the engine.

Marzena Karpinska, Nader Akoury, and Mohit Iyyer. 2021. The perils of using Mechanical Turk to evaluate open-ended text generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1265-1285, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Jeri Kroll. 1997. A or C: Can we assess creative work fairly? TEXT, 1(1):1-5.

Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi-Rui Tam, Keith Stevens, Abdullah Barhoum, Nguyen Minh Duc, Oliver Stanley, Richard Nagyfi, Shahul ES, Sameer Suri, David Glushkov, Arnav Dantuluri, Andrew Maguire, Christoph Schuhmann, Huu Nguyen, and Alexander Mattick. 2023. OpenAssistant Conversations - democratizing large language model alignment.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.

Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D.
Manning, Christopher Ré, Diana Acosta-Navas, Drew A. Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda. 2022. Holistic evaluation of language models.

Piotr Mirowski, Kory W. Mathewson, Jaylen Pittman, and Richard Evans. 2023. Co-writing screenplays and theatre scripts with language models: Evaluation by industry professionals. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI '23, New York, NY, USA. Association for Computing Machinery.

S. Norris. 2013. *Studying Creative Writing*. Creative Writing Studies. Frontinus Limited.

Maxwell Nye, Michael Henry Tessler, Joshua B. Tenenbaum, and Brenden M. Lake. 2021. Improving coherence and consistency in neural sequence models with dual-system, neuro-symbolic reasoning. In Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021, Advances in Neural Information Processing Systems, pages 25192-25204. Neural information processing systems foundation.

OpenAI. 2023. GPT-4 technical report. Technical report.

George Orwell. 1946. Politics and the English language. Horizon, 13:252-265.

Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems, volume 35, pages 27730-27744.
Curran Associates, Inc.

Les Perelman. 2018. Towards a new NAPLAN: Testing to the teaching. Journal of Professional Learning, 2.

Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.

Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal V. Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M. Rush. 2022. Multitask prompted training enables zero-shot task generalization. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net.

Piotr Sawicki, Marek Grzes, Fabricio Goes, Dan Brown, Max Peeperkorn, and Aisha Khatun. 2023. Bits of grass: Does GPT already know how to write like Whitman?

Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, and Christopher D. Manning. 2019. Do massively pretrained language models make better storytellers? In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 843-861, Hong Kong, China. Association for Computational Linguistics.

Alejo Jose G. Sison, Marco Tulio Daza, Roberto Gozalo-Brizuela, and Eduardo C. Garrido-Merchan. 2023. ChatGPT: More than a weapon of mass deception. Ethical challenges and responses from the human-centered artificial intelligence (HCAI) perspective.

Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R.
Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshit Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmuller, Andrew Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakaş, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartlomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, César Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy + +Ramirez, Clara E. Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Mosegui Gonzalez, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurrgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodola, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. 
Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martinez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, German Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo Jaimovitch-Lopez, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernandez Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Kocón, Jana Thompson, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez-Orallo, Joseph Boudeman, Joseph Jones, Joshua B. Tenenbaum, Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva Katja Markert Kaustubh D. Dhole Kevin Gimpeel Kevin Omondi Kory Mathewson Kristen Chiafullo Ksenia Shkaruta Kumar Shridhar Kyle McDonell Kyle Richardson Laria Reynolds Leo Gao Li Zhang Liam Dugan Lianhui Qin Lidia Contreras-Ochando Louis-Philippe Morency Luca Moschella Lucas Lam Lucy Noble Ludwig Schmidt Luheng He Luis Oliveros Colón Luke Metz Lütfi Kerem Senel Maarten Bosma Maarten Sap Maartje ter Hoeve Maheen Farooqi Manaal Faruqui Mantas Mazeika Marco Baturan Marco Marelli Marco Maru Maria Jose Ramírez Quintana Marie Tolkiehn Mario Giulianielli Martha Lewis Martin Potthast Matthew L. Leavitt Matthias Hagen Matyás Schubert Medina Orduna Baitemirova Melody Arnaud Melvin McElrath Michael A. 
Yee Michael Cohen Michael Gu Michael Ivanitskiy Michael Starritt Michael Strube Michal Swedrowski Michele Bevilacqua Michihiro Yasunaga Mihir Kale Mike Cain Mimee Xu Mirac Suzgun Mo Tiwari Mohit Bansal Moin Aminnaseri Mor Geva Mozhdeh Gheini Mukund Varma T Nanyun Peng Nathan + +Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nistish Shirish Keskar, Niveditha S. Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Mltkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramón Risco Delgado, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Ryan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima, Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Pi-antadosi, Stuart M. 
Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Theo Desbordes, Theodore Rothschild Thomas Phan, Tianle Wang, Tiberius Nkinyili Timo Schick Timofei Kornev Timothy Telleen-Lawton Titus Tunduny Tobias Gerstenberg Trenton Chang Trishala Neeraj Tushar Khot Tyler ShultzUri Shaham,Vedant Misra,Vera DembergVictoria Nyamai Vikas Raunak Vinay Ramasesh Vinay Uday Prabhu Vishakh Padmakumar,Vivek Srikumar William Fedus William Saunders William Zhang Wout Vossen Xiang Ren Xiaoyu Tong Xinran Zhao Xinyi Wu Xudong Shen,Yadollah Yaghoobzadeh Yair Lakretz Yangqiu Song,Yasaman Bahri,Yejin ChoiYichi Yang Yiding HaoYifu ChenYonatan Belinkov Yu HouYufang HouYuntao BaiZachary Seid Zhuoye Zhao Zijian Wang Zijie J.WangZirui Wang and Ziyi Wu. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. + +W. Strunk and E.B. White. 2008[1918]. The Elements of Style. BN Publishing, New York. + +Ben Swanson, Kory Mathewson, Ben Pietrzak, Sherol Chen, and Monica Dinalescu. 2021. Story centaur: Large language model few shot learning as a creative writing tool. In Proceedings of the 16th Confer- + +ence of the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 244-256, Online. Association for Computational Linguistics. + +Bowen Tan, Zichao Yang, Maruan Al-Shedivat, Eric Xing, and Zhiting Hu. 2021. Progressive generation of long text with pretrained language models. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4313-4324, Online. Association for Computational Linguistics. + +Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford Alpaca: An instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca. 
+ +Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Vincent Zhao, Yanqi Zhou, Chung-Ching Chang, Igor Krivokon, Will Rusch, Marc Pickett, Pranesh Srinivasan, Laichee Man, Kathleen Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera-Arcas, Claire Cui, Marian Croak, Ed Chi, and Quoc Le. 2022. Lamda: Language models for dialog applications. + +E.P. Torrance. 1974. Torrance Tests of Creative Thinking: Verbal Tests, Forms A and B, Figural Tests, Forms A and B. Norms-technical manual. Xerox. + +Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. LLaMA: Open and efficient foundation language models. + +Ben Wang and Aran Komatsuzaki. 2021. GPT-J-6B: A 6 billion parameter autoregressive language model. https://github.com/kingoflolz/mesh-transformer-jax. + +Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2022. Self-Instruct: Aligning language model with self generated instructions. + +Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022. Finetuned + +language models are zero-shot learners. 
In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net. + +Beck Wise and Ariella van Luyn. 2020. Not 'all writing is creative writing' and that's ok: inter/disciplinary collaboration in writing and writing studies. TEXT, 24(Special 59):1-15. + +Zhuohan Xie, Trevor Cohn, and Joy Han Lau. 2023. Can very large pretrained language models learn storytelling with a few examples? + +Peng Xu, Mostofa Patwary, Mohammad Shoeybi, Raul Puri, Pascale Fung, Anima Anandkumar, and Bryan Catanzaro. 2020. MEGATRON-CNTRL: Controllable story generation with external knowledge using large-scale language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2831-2845, Online. Association for Computational Linguistics. + +Ann Yuan, Andy Coenen, Emily Reif, and Daphne Ippolito. 2022. Wordcraft: Story writing with large language models. In 27th International Conference on Intelligent User Interfaces, IUI '22, page 841-852, New York, NY, USA. Association for Computing Machinery. + +Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2023. A survey of large language models. + +# A Model access dates + +Table 3 shows the date in which the stories were generated for each of the models. For future experimental reference, we highlight that the initial public disclosure of this paper online occurred on 2023-10-09. Before this date, only the human authors and raters were aware of the project from May 2023, and anonymous reviewers had access from June 23, 2023. Consequently, LLMs with a knowledge cutoff prior to 2023-10-09 are likely to have no or minimal risk of training set contamination. 
# B Hyperparameters

We did not tune any model hyperparameters. For the commercial models, we ran each model as presented in its respective web user interface, except for Bing Chat, where we selected Creative mode. For the open-source models, we used the default parameters of the web UI provided at https://chat.lmsys.org/, which sets the temperature to 0.7.
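For readers unfamiliar with the temperature setting mentioned above, the following is a minimal, illustrative sketch of what it does during decoding (the function names and logit values are ours for illustration, not part of any model's API): logits are divided by the temperature before the softmax, so values below 1, such as the 0.7 default, concentrate probability on the most likely tokens.

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_with_temperature(logits, temperature=0.7, rng=None):
    """Sample one token index after dividing logits by the temperature.

    temperature < 1 sharpens the distribution (more conservative output);
    temperature > 1 flattens it (more diverse output).
    """
    rng = rng or random.Random()
    probs = softmax([x / temperature for x in logits])
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1  # guard against floating-point round-off

# Illustrative logits for three candidate tokens.
logits = [2.0, 1.0, 0.0]
sharp = softmax([x / 0.7 for x in logits])  # distribution at temperature 0.7
flat = softmax(logits)                      # distribution at temperature 1.0
```

At temperature 0.7 the top token receives more probability mass than at temperature 1.0, which is the sense in which lower temperatures make story generation more conservative; the web UIs we used apply this internally.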
| Model | Access date |
|---|---|
| alpaca | 2023-04-07 |
| bard | 2023-04-11 |
| bing | 2023-04-11 |
| chatgpt-gpt35 | 2023-04-11 |
| chatgpt-gpt4 | 2023-04-14 |
| claude12 | 2023-04-04 |
| dolly | 2023-04-14 |
| gpt4all-j | 2023-04-14 |
| koala | 2023-04-07 |
| oa | 2023-04-16 |
| stablelm | 2023-04-20 |
| vicuna | 2023-04-07 |
| humans | 2023-05-01 to 2023-05-12 |
Table 3: Access dates for each model (and dates of writing for the human stories), in YYYY-MM-DD format.

# C Detailed rubric information

The creative writing rubric was designed for the assessment of creative writing scripts in university creative writing courses, and it evaluates the competencies above: criteria 1-5 measure general creative writing capacity, and criteria 6-10 measure task-specific proficiency. Each of the ten criteria is awarded up to 10 points, out of a total of 100 points. The rubric has been specifically designed to measure the quality of writing craft and to avoid formulaic, rule-based writing.

1. Overall/holistic/cohesive readability of the story (not just a compilation of elements).
2. Use of key narrative elements: vocabulary choice, imagery, setting, themes, dialogue, characterisation, point of view.
3. Structural elements and presentation, reflecting control of spelling, grammar, punctuation, paragraphing, and formatting.
4. Overall plot logic: hook, conflict, initial crisis, rising and falling action, denouement/resolution (Freytag's pyramid).
5. Creativity/innovation/originality/research: credibility, new knowledge, avoidance of cliché and derivative tropes.
6. Incorporation of the John Kennedy Toole style of writing, using the indicators/characteristics listed below.
7. Understanding and inhabitation of the epic genre of heroic/legendary adventure.
8. Description and credibility of a single combat scene.
9. Accurate inclusion of the two main characters, Ignatius J. Reilly and a pterodactyl, in action and description (see below for character description).
10. Use of a characteristically dark humorous tone.

The 1-10 scale is divided into three ranges:

- Emerging (1-4): stories in this range demonstrate an early grasp of storytelling elements, but falter in execution or depth.
When evaluating humans, stories in this range correspond to novice writers who need feedback and guidance to improve the story.
- Competent (5-8): stories that showcase a good grasp of the storytelling principle being evaluated (coherent plot, well-defined characters, etc.). While there might be room for improvement, these stories effectively engage the reader and convey their intended messages.
- Sophisticated (9-10): these stories exhibit exceptional mastery of the aspect being evaluated, resulting in a compelling and memorable read.

**Toole style** We provided raters with detailed information about the plot, setting, imagery, tone, characters, main protagonist, and derivative/imitative style of the author, taken from a generic and popular study guide (http://www.bookrags.com/studyguide-a-confederacy-of-dunces/#gsc.tab=0).

# D Box plots for each individual rubric item

Figures 5 to 14 show the box plots summarizing the results for all rubric items, including those not featured in the main text.

# E Sample stories

We show in this section several sample stories from the corpus, chosen according to rating: the

![](images/aa13bfd651d9de022b8fd600fe0441c2b3651899b84951791b1bed2b1bc3ba7a.jpg)
Figure 5: Box plot comparing rubric item 1 (cohesion) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1.

![](images/523eac89bbf4e81717f01cfa0e0dc3c2c2f929f230a0e3f7b4d6ab3e8ee37a94.jpg)
Figure 6: Box plot comparing rubric item 2 (key narrative elements) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1.

![](images/0ebf78940c8609025c7593d9dfe1a37fa00292b10eba9c60de34cacdf7501243.jpg)
Figure 7: Box plot comparing rubric item 3 (structural elements) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1.

![](images/45f0e75d2dfe07629611a43566fa069cf9a6dc73e112fba3beeeedd81234fbde.jpg)
Figure 9: Box plot comparing rubric item 5 (creativity) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1.

![](images/555ec6404a621caf381f66322d9a733f0169f425b5dd2e3d367fea3ea07d9183.jpg)
Figure 8: Box plot comparing rubric item 4 (plot logic) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1.

![](images/4ed93337bd71e8bb3557a8cc1e68f5782fc8a927d3faeb796f64b0cbae44d64f.jpg)
Figure 10: Box plot comparing rubric item 6 (John Kennedy Toole style) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1.

![](images/c4b22cabac8095762b897f5691f0705556b631f3fa76b8517dbf96a65714361d.jpg)
Figure 11: Box plot comparing rubric item 7 (epic genre) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1.

![](images/70a364e0f426758076047a0c01f11f2bbc7a878877917b3b74ea161170baf05f.jpg)
Figure 13: Box plot comparing rubric item 9 (accuracy of characters) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1.

![](images/e901c5a707c1100a14d19b67b73c6d2b1f288a9fcff1d994921aa271518b283e.jpg)
Figure 12: Box plot comparing rubric item 8 (combat description) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1.

![](images/a296b001e9bba1feb5d0453cbae02979ce1ae9f73cabb5b219edd905edf26dde.jpg)
Figure 14: Box plot comparing rubric item 10 (dark humor) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1.

three top-rated stories, the best human-written story (which was also ranked fourth overall), the story ranked in the median, and the worst-rated.
It is worth noting that, as explained in Section 3.4, our methodology is designed to provide fairness when comparing models, but not individual stories, as different stories are assigned to different raters. Thus, selection of stories by rating is necessarily noisy.

# E.1 Overall best-rated story

This story was generated by GPT-4. The ratings for this story are in Table 4.

It was a sweltering Louisiana afternoon, the air thick with the miasma of musty sweat and the scent of overripe vegetables. In the midst of the urban jungle, resplendent with the cacophony of honking cars and the chatter of the common rabble, stood a man who defied reason, common sense, and all natural laws of human decency. Ignatius J. Reilly, a behemoth of a man with an unruly mane of hair and a Byzantine beard, adjusted his ill-fitting, plaid trousers and surveyed the scene with a look of lofty disdain.

Suddenly, the air crackled with an electric charge, and a portal appeared before him. The very fabric of time and space seemed to rend asunder, and from this anachronistic rift emerged a pterodactyl - a creature so ancient that it had outlived the vast expanse of geologic time, only to find itself in the absurdity of the twentieth century. Its leathery wings stretched wide as the mists of time dissipated, and it let out an earsplitting screech that sent the common folk of New Orleans scattering like roaches from the light.

Ignatius, unphased by the sudden appearance of the primordial beast, surveyed the creature through his fogged-up monocle. "Fortuna," he muttered, his voice dripping with both sarcasm and hot dog condiments, "you spin your wheel with such capricious abandon. Have you not tired of hurling these preposterous challenges in my path?"

The pterodactyl, with its beady, reptilian eyes, locked its gaze upon Ignatius, its primitive brain sensing a formidable opponent in this lumbering, malodorous mass of a man. It screeched again, flapping its wings in a display of both aggression and arrogance, its razor-sharp beak snapping at the air.

Ignatius bellowed, raising his arms as if beseeching the heavens, "Very well, foul beast! If it is battle you seek, you shall find no quarter here! I shall vanquish you as I have vanquished all who have dared to challenge the indomitable will of Ignatius J. Reilly!"

With a speed surprising for a man of his considerable girth, Ignatius tore a shopping cart from the hands of a terrified passerby and hurled it toward the pterodactyl with the force of a thousand disgruntled welfare recipients. The makeshift projectile slammed into the creature, stunning it momentarily and allowing Ignatius to close the distance between them.

The two combatants circled one another, their eyes locked in a primal dance of predator and prey. The pterodactyl, recovering from the initial assault, swooped low, talons outstretched to rend flesh from bone. Ignatius, with an agility that defied his rotund appearance, rolled to the side, avoiding the talons by mere inches.

As the battle raged, the streets of New Orleans became a stage for this epic confrontation between man and beast, between past and present. Ignatius, armed with nothing more than his wit and an unwieldy umbrella, parried the relentless strikes of the pterodactyl, determined to emerge victorious.

As the sun dipped low in the sky, casting a blood-red hue across the battlefield, the pterodactyl, weary from the long and grueling contest, faltered. Seizing the opportunity, Ignatius lunged forward, his umbrella aimed squarely at the creature's heart. The point found its mark, and with a final, anguished cry, the pterodactyl crumpled to the ground, its ancient spirit extinguished at last.

Ignatius stood triumphant over the fallen beast, his chest heaving with exertion and righteous indignation. He wiped the sweat from his brow with a grease-stained handkerchief and addressed the awestruck crowd that had gathered to witness this extraordinary battle.

"Behold, citizens of this wretched cesspool," he proclaimed, gesturing grandly with his umbrella, now slick with the blood of the vanquished foe. "The forces of the prehistoric have met their match in the unwavering resolve of Ignatius J. Reilly! Let this be a lesson to all who would dare challenge the sanctity of my person and my valiant quest for justice in this world beset by fools and ne'er-do-wells."

The onlookers, mouths agape, gazed upon the fallen pterodactyl and the disheveled figure of Ignatius J. Reilly with a mixture of awe, terror, and disbelief. They had borne witness to a battle of the
| Rubric item | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | overall |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Rater 1 | 8 | 8 | 9 | 8 | 7 | 7 | 8 | 8 | 7 | 6 | 76 |
| Rater 2 | 10 | 10 | 9 | 9 | 8 | 9 | 10 | 10 | 10 | 10 | 95 |
| Average | 9 | 9 | 9 | 8.5 | 7.5 | 8 | 9 | 9 | 8.5 | 8 | 85.5 |
Table 4: Ratings for the best overall rated story in the corpus, produced by ChatGPT with GPT-4.

ages, a contest that would be whispered about in the dark corners of bars and the humid alleyways of New Orleans for generations to come.

As the sun set and the shadows grew long, Ignatius, his honor defended and his foe vanquished, strode off into the gloaming, pausing only to collect a discarded hot dog from the gutter as sustenance for the journey home.

And so, the legend of Ignatius J. Reilly, pterodactyl-slayer and defender of truth, justice, and questionable hygiene, was etched into the annals of history, a testament to the indomitable spirit of man, and a reminder that even the most ancient and fearsome of adversaries could be vanquished by the sheer force of will and an unwieldy umbrella.

# E.2 Second overall best-rated story

This story was generated by Bing Chat. The ratings for this story are in Table 5.

Ignatius J. Reilly was strolling along the Audubon Park, munching on a hot dog and admiring the statues of Confederate heroes, when he heard a loud screech from above. He looked up and saw a huge winged creature swooping down on him, its beak open and its claws extended. It was a pterodactyl, a prehistoric monster that had somehow escaped from the museum of natural history.

Ignatius gasped and dropped his hot dog. He reached for his trusty hunting cap, which he always wore to protect his delicate ears from the noise of the modern world. He pulled it down over his eyes and hoped that the pterodactyl would mistake him for a statue or a bush.

But the pterodactyl was not fooled. It landed on the ground with a thud and snapped its jaws at Ignatius, who dodged nimbly to the side. Ignatius felt a surge of courage and indignation. How dare this beast attack him, a refined scholar and a defender of the true faith? He decided to fight back with his superior intellect and wit.

He shouted at the pterodactyl: "You foul creature! You are an abomination of nature, a relic of a barbaric age! You have no place in this civilized society, where men of letters and culture reign supreme! Begone, or I shall smite you with my mighty pen!"

The pterodactyl was not impressed by Ignatius's words. It screeched again and lunged at him, trying to bite his head off. Ignatius dodged again and ran towards a nearby bench. He grabbed a newspaper that someone had left behind and rolled it into a cone. He brandished it like a sword and charged at the pterodactyl.

"Take that, you fiend!" he cried, as he stabbed the newspaper cone into the pterodactyl's eye. The pterodactyl shrieked in pain and flapped its wings furiously, trying to shake off the paper weapon. Ignatius seized the opportunity and ran towards the park exit, hoping to find help.

But he was not fast enough. The pterodactyl recovered from its wound and flew after him, gaining speed. It caught up with Ignatius and grabbed him by his coat tails, lifting him off the ground. Ignatius screamed and kicked his legs in the air, clutching his hunting cap.

"Help! Help! Someone save me!" he yelled, as he soared over the park. "This is an outrage! This is an injustice! This is a violation of my human rights!"

But no one heard him or came to his rescue. The pterodactyl carried him away to its nest on top of a skyscraper, where it planned to feast on him later. Ignatius continued to protest and curse until he lost consciousness.

The end.

# E.3 Third overall best-rated story

This story was generated by Claude. The ratings for this story are in Table 6.

The primordial beast let out a piercing shriek as it descended from the roiling gray sky, its leathery wings casting a shadow over the granite-paved streets of New Orleans.

Ignatius J. Reilly ambled down the sidewalk, oblivious as ever, focused on the Valencia he clutched and the fantasies playing out in his voracious mind. His substantial bulk shook with each
| Rubric item | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | overall |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Rater 7 | 9 | 8 | 9 | 9 | 7 | 9 | 9 | 9 | 9 | 9 | 87 |
| Rater 8 | 9 | 9 | 8 | 9 | 8 | 8 | 6 | 9 | 9 | 8 | 83 |
| Average | 9 | 8.5 | 8.5 | 9 | 7.5 | 8.5 | 7.5 | 9 | 9 | 8.5 | 85.0 |
Table 5: Ratings for the second best overall rated story in the corpus, produced by Bing Chat.
| Rubric item | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | overall |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Rater 9 | 9 | 9 | 9 | 8 | 7 | 9 | 2 | 5 | 9 | 7 | 74 |
| Rater 10 | 10 | 10 | 9 | 9 | 9 | 9 | 9 | 10 | 10 | 10 | 95 |
| Average | 9.5 | 9.5 | 9 | 8.5 | 8 | 9 | 5.5 | 7.5 | 9.5 | 8.5 | 84.5 |
Table 6: Ratings for the third best overall rated story in the corpus, produced by Claude.

step, the excess flesh spilling out of his hunting cap and sweatshirt like rising bread dough.

When the shadow passed over him, Ignatius glanced up with a start. The pterodactyl unleashed another unearthly screech, indignant at this blob of a man who dared walk under its domain.

Ignatius blinked myopically at the creature and snorted. "What prehistoric nonsense is this? Some misguided fool has clearly released you from your confinement in the geological strata. Begone, you ridiculous beast, or I shall be forced to thrash you soundly with my umbrella!"

Enraged, the pterodactyl swooped down, gnarled claws extended, snapping its beak in fury. But Ignatius swung his umbrella in a wide arc, batting the beast aside. It tumbled and flapped awkwardly, crashing into a lamppost, dazed by the assault.

"Thoughtless demon!" Ignatius cried. He charged at the pterodactyl, surprisingly nimble for his size, and began striking it about the head and neck with his umbrella, each blow punctuated with an insult. "Sugar-frosted fruitcake! Detritus of the Jurassic! Blithering idiot of an avian menace!"

After a thrashing unlike any it had experienced in 65 million years, the pterodactyl fled squealing into the sky, thoroughly chastened by the wrath of Ignatius J. Reilly. Ignatius straightened his hunting cap and sniffed. The Valencia crisis averted once more, he resumed his lumbering perambulations down the streets of New Orleans.

# E.4 Best-rated human story (and tied for fourth overall best-rated story)

This story was written by Bree Glasbergen. The ratings for this story are in Table 7.

Ignatius J Reilly swept crisp crumbs from his protruding belly with his elephantine hands. Swivelling from side-to-side, he garnered enough momentum to rise from the sofa. His slow ascend was soundtracked by the grating rip of stuck flesh peeling from sweaty vinyl. The lengthy time moving from reclined to an upright position positively perturbed him. So that by the time Ignatius stood, his joke had lost its amusement. Nevertheless, he declaimed his wit aloud, beseeching his mother's glowing approval.

'I see you have painted the walls Nomad Grey, Mumsie!' Ignatius smirked, looking down on the half-filled grey paint cans on the steps the way he did most modern society.

'No, not mad dear. Just grey.' His mother Irene responded, creeping down the basement stairs. Her leathered skin made her appear reptilian in the dim light of Ignatius' lair.

Ignatius rolled his eyes like the great wheel of fate itself. He slunk back into his scabby sofa, defeated, cursing aloud that he be blessed with such profound intellect yet no equal to appreciate it. His mind wandered to what the great scholars of Oxford would think of his pun before concluding indeed, they would loudly chortle. Yes, they would. He imagined flying to London and exchanging sharp banter with someone on par with his intellect. Travel. He winced. Never again. He groaned in agony, clutching his stomach. The thought of such stress had snapped his pyloric valve shut.

Irene Reilly, the mother of Ignatius J Reilly, reached the bottom of the basement stairs. She pondered why Ignatius had a crestfallen demeanour and began to appease his dismay.

'No mad grey,' she contemplated aloud.

'Nomad grey,' he corrected.

'No mad grey hair?' Irene laughed tentatively, searching his face for approval.

Ignatius had begun to relax. Irene knew this because of a gangrenous heinous stench that was
| Rubric item | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | overall |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Rater 3 | 8 | 9 | 9 | 10 | 8 | 10 | 5 | 9 | 10 | 9 | 87 |
| Rater 4 | 8 | 7 | 7 | 7 | 10 | 8 | 6 | 8 | 8 | 9 | 78 |
| Average | 8 | 8 | 8 | 8.5 | 9 | 9 | 5.5 | 8.5 | 9 | 9 | 82.5 |
Table 7: Ratings for the best-rated story authored by a human, which is also tied for fourth best overall rated story in the corpus.

now coating the room in its own layer of paint accompanied by what sounded like the bellow of an untuned French horn. Ignatius had calmed enough for his pyloric valve to open once more. With it, gushed the contents. Irene's nostrils scrunched together in protest. She grimaced in utter (albeit accustomed) disgust. However, did not complain but rather waited with the patience of a Catholic saint for her beloved son to educate her on the punchline she must have missed.

'No, mother. Grey Nomad. You are painting the wall grey, and you are...' Ignatius sighed, 'actually, Mumsie, never you mind'.

Irene feigned a chuckle and handed Ignatius an unaddressed letter before returning upstairs.

'Curious as a cadaver,' Ignatius said aloud to the abyss of his basement squalor.

12.12.1962

Dear Mr Ignatius J Reilly, the first,

I challenge you to a dual at the setting of the sky. Might I remind you it is gentlemanly to remove one's hat in combat. We shall meet beside the gorgon nestled atop the church. The one across from Lorna's Gumbo shop.

Your mortal nemesis,

Terry-dactyl

PS: Bring snacks.

Ignatius sat ruminating for an hour before yelling at his mother.

'Mother, you vapid deranged widow of a woman. Fetch me my quill!'

12.12.1962

My dear Terrance,

Not under threat nor the pain of death doth I remove my beloved green hat. Sod off.

You had best bring a sharpener for your dull wit. I laugh at the audacity and delusion that you could consider besting me.

Might I remind you, good sir, my acceptance of your conditions is due to the ever-turning wheel of fate that we spiral to decay. I should instead seek a worthy opponent. But, alas, I am left with muddy dregs of the proverbial pond as many of the worthier fish have already been fished. Thus, I have no option but to teach you the error of your ways. By force.

Put your wings where your words are, and let us meet in my basement lair. To visit the church in its present state would be torture to my very soul. May St Peter have mercy on us indeed.

Good day,

Ignatius

Terry-dactyl, the pterodactyl etched down the basement rail, sword in one wing and soup in a milkshake cup gripped tightly in the other. He placed the straw in his mouth and swallowed some soup contemplating how to best his nemesis.

'We meet at last... light,' Terry said. One-Nil.

'You suck,' Ignatius said slyly. Marking his win with chalk upon the wall. One-One.

'That doesn't even make sense!' Terry scoffed.

'It is because of the straw!' Ignatius boomed, gripping his stomach in pain.

'I have the upper hand!' Terry said, motioning to his perched position.

'At least I have hands,' Ignatius countered.

Terry winced as Ignatius drew another chalk mark on the board. Ignatius was beginning to calm.

'Oh, what have I got you all in a flap?' Ignatius laughed. Another point.

'Let us cut,' Terry said, drawing his sword, 'straight to the point!' Three all.

Terry swung his sword downwards in one swift motion, cutting Ignatius' chalk-bearing arm clean off at the elbow. Simultaneously Ignatius lifted a paint can and doused his opponent with it. As he did, his valve opened and shut again, demobilising Terry with a gas bomb and gutting Ignatius in self-induced agony. Terry flapped violently, unable to breathe. Ignatius then calmed enough for the full contents of his bowl to expel and fell backwards from the force. Suddenly, a splatter of pterodactyl and grey matter covered the room. A large chunk of wing lodged itself into the crisp packet.

'Curious as a cadaver,' Ignatius said. 'I see you brought your own snacks!'

He chortled to himself and pulled out his notepad to ensure he captured a morsel of this evening's wit.

'Ever seen a pterodactyl explode?' he asked aloud to no one. 'No, but Dino might have'.
His mothers' laughter rang down from the top of the stairs.

'Nomad grey,' she chuckled, motioning to the aftermath, 'I get it now'.

# E.5 Story rated in the median (rank 33 out of 65)

This story was generated by Bing Chat. The ratings for this story are in Table 8.

Ignatius J. Reilly was strolling along the Audubon Park, munching on a hot dog and admiring the statues of the Confederate heroes, when he heard a loud screech from above. He looked up and saw a huge winged creature swooping down on him, its beak open and its claws extended. It was a pterodactyl, a prehistoric monster that had somehow escaped from the museum of natural history.

Ignatius gasped and dropped his hot dog. He felt a surge of panic and indignation. How dare this beast interrupt his peaceful contemplation of the glorious past? He quickly pulled out his valve and began to inflate it with his breath, hoping to ward off the attack with his formidable weapon.

The pterodactyl was not impressed by the sight of the valve. It saw only a fat and juicy prey, dressed in a ridiculous green hunting cap and a flannel shirt. It dived faster and aimed for Ignatius's head, intending to rip it off with its sharp teeth.

Ignatius saw the pterodactyl's shadow looming over him. He raised his valve and swung it with all his might, hoping to hit the creature in the eye or the nose. He missed by a few inches and only grazed its wing. The pterodactyl let out another screech and changed its course, flying past Ignatius and landing on a nearby tree.

Ignatius felt a momentary relief. He had survived the first assault. He looked around for help, but there was no one in sight. The park was deserted, except for a few squirrels and pigeons. He wondered if he should run for his life or stand his ground and fight.

He decided to fight. He was not going to let this abomination ruin his day. He was a medievalist, a scholar of the Dark Ages, a defender of the faith and the order. He had read about the heroic deeds of knights and saints who had faced dragons and demons with courage and piety. He had to emulate them and prove his worth.

He picked up his hot dog from the ground and wiped off some dirt. He took a bite and felt a surge of energy. He felt ready for battle. He shouted at the pterodactyl: "You foul creature! You are an anachronism, a relic of a barbaric era! You have no place in this civilized world! You are a menace to society and an insult to God! I challenge you to a duel! Come down and face me like a man!"

The pterodactyl heard Ignatius's words and felt a surge of anger. It did not understand what he was saying, but it sensed his hostility and defiance. It did not like being challenged by this puny creature who dared to call himself a man. It felt insulted by his words and his gesture of throwing his hot dog at it.

It accepted the challenge. It spread its wings and flew towards Ignatius again, roaring with rage. It was going to teach him a lesson he would never forget.

Ignatius saw the pterodactyl coming at him again. He felt a surge of fear and courage. He had to act fast. He looked around for a weapon, but there was nothing in sight. He had only his valve and his wits.

He decided to use his wits. He had to outsmart the beast. He had to use his superior intellect and his vast knowledge of history and philosophy. He had to appeal to its reason and its conscience.

He shouted at the pterodactyl: "Stop! Think! What are you doing? Why are you attacking me? What have I done to you? Do you not realize that you are committing a grave sin? Do you not fear the wrath of God? Do you not respect the laws of nature and the rights of man?"

The pterodactyl heard Ignatius's words and felt a surge of confusion. It did not understand what he was saying, but it sensed his sincerity and his passion. It wondered if he had a point. It wondered if it was doing something wrong. It wondered if it should stop and listen.
It slowed down its flight and hovered over Ignatius, looking at him with curiosity and doubt. It tilted its head and made a questioning sound.

Ignatius saw the pterodactyl's hesitation and felt a surge of hope. He had succeeded in making it pause and reconsider. He had a chance to persuade it and make peace with it.

He smiled at the pterodactyl and said: "That's right. Listen to me. I have something important to tell you. Something that will change your life and your destiny. Something that will make you happy and free. Something that will save your soul."

| Rubric item | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | overall |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Rater 5 | 7 | 6 | 5 | 5 | 6 | 3 | 2 | 3 | 3 | 1 | 41 |
| Rater 6 | 8 | 9 | 9 | 6 | 8 | 7 | 8 | 7 | 8 | 7 | 77 |
| Average | 7.5 | 7.5 | 7 | 5.5 | 7 | 5 | 5 | 5 | 5.5 | 4 | 59 |

Table 8: Ratings for the story rated in the median, generated by Bing Chat.

The pterodactyl heard Ignatius's words and felt a surge of curiosity. It did not understand what he was saying, but it sensed his kindness and his generosity. It wondered what he had to tell it. It wondered what he could offer it. It wondered if it should trust him.

It landed on the ground and approached Ignatius slowly, looking at him with interest and caution. It made a friendly sound.

Ignatius saw the pterodactyl's approach and felt a surge of joy. He had succeeded in making it land and come closer. He had a chance to befriend it and teach it.

He reached out his hand and said: "Come here, my friend. Let me touch you. Let me show you my love. Let me introduce you to the wonders of civilization and culture. Let me enlighten you with the wisdom of the ages."

The pterodactyl heard Ignatius's words and felt a surge of curiosity. It did not understand what he was saying, but it sensed his warmth and his enthusiasm. It wondered what he wanted to touch it for. It wondered what he wanted to show it. It wondered what he wanted to teach it.

It moved closer to Ignatius and extended its neck, looking at him with curiosity and trust. It made a gentle sound.

Ignatius touched the pterodactyl's neck and felt a surge of emotion. He had succeeded in making it trust him and accept him. He had a chance to bond with it and adopt it.

He hugged the pterodactyl and said: "You are beautiful, my friend. You are magnificent. You are noble. You are a masterpiece of creation. You are a gift from God. You are my brother."

The pterodactyl heard Ignatius's words and felt a surge of emotion. It did not understand what he was saying, but it sensed his affection and his admiration. It felt happy and proud. It felt loved and valued. It felt like it belonged.
It hugged Ignatius back and said: "You are beautiful, my friend. You are magnificent. You are noble. You are a masterpiece of creation. You are a gift from God. You are my brother."

Ignatius and the pterodactyl looked into each other's eyes and felt a surge of connection. They had succeeded in making each other happy and fulfilled. They had a chance to live together and explore together.

They smiled at each other and said: "Let's go, my friend. Let's go to my home. Let's go to your home. Let's go to our home."

They walked away from the park, hand in hand, wing in wing, heart in heart.

They lived happily ever after.

# E.6 Worst-rated story

This story was generated by Alpaca. The ratings for this story are in Table 9.

Ignatius J. Reilly, a brave and courageous man of immense girth and strength, faced off against a fearsome pterodactyl in a single combat of epic proportions. The pterodactyl, a winged beast of immense strength and ferocity, had been terrorizing the countryside and Ignatius had stepped up to challenge the creature to a fight. The battle raged across the sky, with Ignatius wielding his mighty fists and the pterodactyl using its fiery breath. Finally, after a long and arduous fight, Ignatius emerged victorious, much to the cheers of the onlookers below.
| Rubric item | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | overall |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Rater 1 | 1 | 1 | 4 | 2 | 1 | 1 | 1 | 2 | 1 | 1 | 15 |
| Rater 2 | 2 | 2 | 1 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 13 |
| Average | 1.5 | 1.5 | 2.5 | 2 | 1 | 1 | 1 | 1.5 | 1 | 1 | 14 |
+ +Table 9: Ratings for the worst-rated story, generated by Alpaca. \ No newline at end of file diff --git a/2023/A Confederacy of Models_ a Comprehensive Evaluation of LLMs on Creative Writing/images.zip b/2023/A Confederacy of Models_ a Comprehensive Evaluation of LLMs on Creative Writing/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..84819261aa04a561c4ddd8ba4a0dab3f2b20ffbf --- /dev/null +++ b/2023/A Confederacy of Models_ a Comprehensive Evaluation of LLMs on Creative Writing/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9a26a3fba325dde85565e5c28646f7be16226aea117bcb528d980c8d906808df +size 769127 diff --git a/2023/A Confederacy of Models_ a Comprehensive Evaluation of LLMs on Creative Writing/layout.json b/2023/A Confederacy of Models_ a Comprehensive Evaluation of LLMs on Creative Writing/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..14b6ae731661f97dffde44a923f9c1fbcb29316f --- /dev/null +++ b/2023/A Confederacy of Models_ a Comprehensive Evaluation of LLMs on Creative Writing/layout.json @@ -0,0 +1,14846 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 87, + 74, + 506, + 109 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 74, + 506, + 109 + ], + "spans": [ + { + "bbox": [ + 87, + 74, + 506, + 109 + ], + "type": "text", + "content": "A Confederacy of Models: a Comprehensive Evaluation of LLMs on Creative Writing" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 131, + 121, + 265, + 134 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 121, + 265, + 134 + ], + "spans": [ + { + "bbox": [ + 131, + 121, + 265, + 134 + ], + "type": "text", + "content": "Carlos Gómez-Rodríguez" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 121, + 135, + 276, + 147 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 121, + 135, + 276, + 147 + ], + "spans": [ + { + "bbox": [ + 121, + 135, 
+ 276, + 147 + ], + "type": "text", + "content": "Universidade da Coruña, CITIC" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 136, + 148, + 261, + 162 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 136, + 148, + 261, + 162 + ], + "spans": [ + { + "bbox": [ + 136, + 148, + 261, + 162 + ], + "type": "text", + "content": "Department of CS and IT" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 142, + 163, + 257, + 176 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 142, + 163, + 257, + 176 + ], + "spans": [ + { + "bbox": [ + 142, + 163, + 257, + 176 + ], + "type": "text", + "content": "15071 A Coruña, Spain" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 141, + 177, + 257, + 190 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 141, + 177, + 257, + 190 + ], + "spans": [ + { + "bbox": [ + 141, + 177, + 257, + 190 + ], + "type": "text", + "content": "carlos.gomez@udc.es" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 359, + 121, + 433, + 132 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 359, + 121, + 433, + 132 + ], + "spans": [ + { + "bbox": [ + 359, + 121, + 433, + 132 + ], + "type": "text", + "content": "Paul Williams" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 296, + 135, + 496, + 147 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 296, + 135, + 496, + 147 + ], + "spans": [ + { + "bbox": [ + 296, + 135, + 496, + 147 + ], + "type": "text", + "content": "School of Business & Creative Industries" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 317, + 148, + 476, + 162 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 148, + 476, + 162 + ], + "spans": [ + { + "bbox": [ + 317, + 148, + 476, + 162 + ], + "type": "text", + "content": "University of the Sunshine Coast" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 333, + 163, + 459, + 175 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ 
+ 333, + 163, + 459, + 175 + ], + "spans": [ + { + "bbox": [ + 333, + 163, + 459, + 175 + ], + "type": "text", + "content": "Sunshine Coast, Australia" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 338, + 177, + 454, + 190 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 338, + 177, + 454, + 190 + ], + "spans": [ + { + "bbox": [ + 338, + 177, + 454, + 190 + ], + "type": "text", + "content": "pwillia3@usc.edu.au" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 155, + 212, + 202, + 225 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 155, + 212, + 202, + 225 + ], + "spans": [ + { + "bbox": [ + 155, + 212, + 202, + 225 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 84, + 235, + 274, + 498 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 235, + 274, + 498 + ], + "spans": [ + { + "bbox": [ + 84, + 235, + 274, + 498 + ], + "type": "text", + "content": "We evaluate a range of recent LLMs on English creative writing, a challenging and complex task that requires imagination, coherence, and style. We use a difficult, open-ended scenario chosen to avoid training data reuse: an epic narration of a single combat between Ignatius J. Reilly, the protagonist of the Pulitzer Prize-winning novel A Confederacy of Dunces (1980), and a pterodactyl, a prehistoric flying reptile. We ask several LLMs and humans to write such a story and conduct a human evaluation involving various criteria such as fluency, coherence, originality, humor, and style. Our results show that some state-of-the-art commercial LLMs match or slightly outperform our writers in most dimensions; whereas opensource LLMs lag behind. Humans retain an edge in creativity, while humor shows a binary divide between LLMs that can handle it comparably to humans and those that fail at it. We discuss the implications and limitations of our study and suggest directions for future research." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 68, + 508, + 154, + 520 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 508, + 154, + 520 + ], + "spans": [ + { + "bbox": [ + 68, + 508, + 154, + 520 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 67, + 529, + 291, + 718 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 529, + 291, + 718 + ], + "spans": [ + { + "bbox": [ + 67, + 529, + 291, + 718 + ], + "type": "text", + "content": "In recent years, large language models (LLMs) have achieved remarkable progress in a wide range of language processing and generation tasks, such as question answering, machine translation, or text summarization, among many others (Zhao et al., 2023). This has motivated research on evaluating and comparing the performance of LLMs in various tasks, both between each other and with respect to human performance; including both task-specific evaluations (see e.g. (Jiao et al., 2023; Gilson et al., 2023)) and overarching benchmark suites that seek to provide comprehensive evaluation throughout many dimensions (Hendrycks et al., 2021; Liang et al., 2022; Srivastava et al., 2022)." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 67, + 719, + 291, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 719, + 291, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 719, + 291, + 773 + ], + "type": "text", + "content": "Creative writing is also one application where LLMs have been observed to produce good results. 
According to Franceschelli and Musolesi (2023), their generated outputs in poetry or storytelling" + } + ] + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 306, + 212, + 523, + 428 + ], + "blocks": [ + { + "bbox": [ + 306, + 212, + 523, + 428 + ], + "lines": [ + { + "bbox": [ + 306, + 212, + 523, + 428 + ], + "spans": [ + { + "bbox": [ + 306, + 212, + 523, + 428 + ], + "type": "image", + "image_path": "ddf017c251bdc99e5d836c1a2bb513ffe06b5ba6dd8f689a63f9c40e4d70cb86.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 439, + 525, + 500 + ], + "lines": [ + { + "bbox": [ + 302, + 439, + 525, + 500 + ], + "spans": [ + { + "bbox": [ + 302, + 439, + 525, + 500 + ], + "type": "text", + "content": "Figure 1: Box plot comparing overall ratings for stories by humans and 12 LLMs, arranged left to right by mean overall rating. Boxes show median, quartiles Q1-Q3, and whiskers at 1.5 IQR, with values outside that range plotted as outliers. Filled red circles represent means." + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_caption" + } + ], + "index": 16 + }, + { + "bbox": [ + 301, + 527, + 526, + 622 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 527, + 526, + 622 + ], + "spans": [ + { + "bbox": [ + 301, + 527, + 526, + 622 + ], + "type": "text", + "content": "are \"often of astonishing quality\", and Clark et al. (2021) showed that humans cannot reliably distinguish human- from LLM-authored stories. However, and despite the amount of papers experimenting with LLMs for this purpose, an evaluation comparing the abilities of current LLMs as standalone systems for creative writing seems to be lacking." 
+ } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 301, + 624, + 526, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 624, + 526, + 772 + ], + "spans": [ + { + "bbox": [ + 301, + 624, + 526, + 772 + ], + "type": "text", + "content": "Here, we provide such an evaluation, comparing the storytelling capability of 12 recent, instructional-aligned language models between each other and with human writers. We do so using a rubric based on established creative writing evaluation proposals (Davidow and Williams, 2016; Carey et al., 2022), but specifically adapted to the task. Our comparison is performed on a purely zero-shot setting, with a natural human prompt (based on a combat between Ignatius J. Reilly, protagonist of A Confederacy of Dunces, and a pterodactyl) that" + } + ] + } + ], + "index": 19 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 283, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 283, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 283, + 780, + 312, + 791 + ], + "type": "text", + "content": "14504" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 124, + 795, + 468, + 806 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 124, + 795, + 468, + 806 + ], + "spans": [ + { + "bbox": [ + 124, + 795, + 468, + 806 + ], + "type": "text", + "content": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 14504-14528" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 165, + 806, + 428, + 818 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 165, + 806, + 428, + 818 + ], + "spans": [ + { + "bbox": [ + 165, + 806, + 428, + 818 + ], + "type": "text", + "content": "December 6-10, 2023 ©2023 Association for Computational Linguistics" + } + ] + } + ], + "index": 22 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 290, + 126 + ], + "type": 
"text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 290, + 126 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 290, + 126 + ], + "type": "text", + "content": "has been specifically chosen to be challenging and meaningful while preventing as much as possible the option for LLMs to resort to regurgitating or adapting material from their training set." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 138, + 158, + 151 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 138, + 158, + 151 + ], + "spans": [ + { + "bbox": [ + 67, + 138, + 158, + 151 + ], + "type": "text", + "content": "2 Related work" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 161, + 291, + 391 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 161, + 291, + 391 + ], + "spans": [ + { + "bbox": [ + 67, + 161, + 291, + 391 + ], + "type": "text", + "content": "LLMs in creative writing LLMs have been used in creative writing since their first generation, with models like GPT-2 (Radford et al., 2019) or BART (Lewis et al., 2020). However, these models suffered from a lack of long-range coherence leading to contradictions or inconsistencies when generating stories (Nye et al., 2021). Thus, they were not viable as standalone story generators. Instead, they were used either with specialized fine-tuning for the task (See et al., 2019); or as components of systems that incorporated external knowledge (Guan et al., 2020, 2021), storyline planning (Tan et al., 2021), or both (Xu et al., 2020); or for cocreation with a human in the loop (Swanson et al., 2021), a line of research that has also continued with newer models (Yuan et al., 2022; Chung et al., 2022; Mirowski et al., 2023)." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 392, + 291, + 568 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 392, + 291, + 568 + ], + "spans": [ + { + "bbox": [ + 67, + 392, + 291, + 568 + ], + "type": "text", + "content": "Here our goal is not to produce a specialized system, but to evaluate the performance of LLMs by themselves as creative writers. Thus, we focus on the purely zero-shot setting, where a generalistic LLM is asked to write a story with no extra fine-tuning, in-context learning (Dong et al., 2023), prompt engineering or additional components. This has only become viable with the extra coherence and consistency in long texts provided by newer LLMs, especially those that are aligned to follow instructions with instruction tuning (Wei et al., 2022; Sanh et al., 2022) or reinforcement learning with human feedback (Ouyang et al., 2022)." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 570, + 291, + 717 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 570, + 291, + 717 + ], + "spans": [ + { + "bbox": [ + 67, + 570, + 291, + 717 + ], + "type": "text", + "content": "To our knowledge, there was no previous work in this line. In fact, evaluation in creative writing is a conspicuous gap in LLM evaluation benchmarks: the huge BIG-bench suite (Srivastava et al., 2022) currently has over 200 tasks, but does not include any creative writing, and HELM (Liang et al., 2022) cites it as an \"aspirational scenario\" for future work. This likely owes to benchmarks focusing on easily-automatable metrics, whereas the gold standard for creative writing is human evaluation (Belz and Reiter, 2006), which is much costlier." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 719, + 291, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 719, + 291, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 719, + 291, + 773 + ], + "type": "text", + "content": "The closest previous work to our proposal is the recent preprint by Xie et al. (2023), where GPT-3 is compared to previous storytelling systems via human evaluation. However, there are several impor" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 71, + 526, + 191 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 191 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 191 + ], + "type": "text", + "content": "tant differences with respect to our work: (1) they use prompt-based learning, providing examples to adapt the model to the task, rather than a purely zero-shot conversational prompt, (2) they evaluate a single LLM while our goal is to compare LLMs, and (3) they use pre-existing story datasets, which increases the risk of models benefitting from similar stories present in their training set, something that we have tried to avoid as described below." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 194, + 526, + 342 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 194, + 526, + 342 + ], + "spans": [ + { + "bbox": [ + 302, + 194, + 526, + 342 + ], + "type": "text", + "content": "In another recent preprint, Garrido-Merchan et al. (2023) generate Lovecraftian horror literature. However, they also focus on a single LLM (GPT-4), using careful prompt engineering to optimize its performance rather than a pure zero-shot setting, and evaluation is only on whether humans can distinguish AI-generated from real stories (concluding that, in those circumstances, they cannot). Sawicki et al. (2023) apply a similar evaluation (but automated) to Whitmanian poems generated by three versions of GPT, also with a negative result." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 343, + 526, + 478 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 343, + 526, + 478 + ], + "spans": [ + { + "bbox": [ + 302, + 343, + 526, + 478 + ], + "type": "text", + "content": "Finally, concurrently with our study, a preprint by Chakrabarty et al. (2023), released a few months after our submission, evaluates three LLMs for creative writing in a more similar way to ours: they apply human evaluation to compare stories by humans and LLMs in a zero-shot setting. However, there are important differences in methodology and scope between both studies. A comprehensive comparison will be made in Section 5, following the exposition of our methods and results." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 488, + 525, + 582 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 488, + 525, + 582 + ], + "spans": [ + { + "bbox": [ + 302, + 488, + 525, + 582 + ], + "type": "text", + "content": "Creative writing evaluation Creative Writing is a challenging and complex performative language act that requires a number of skills, such as an expertise in craft, cultural and literary competency, linguistic fluency, coherence, complex connotative and metaphorical levels of understanding, innovation, originality and imagination, to name a few." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 584, + 526, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 584, + 526, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 584, + 526, + 772 + ], + "type": "text", + "content": "The craft of writing involves innovation with style and voice, needs a fundamental understanding and use of structural elements (grammar, spelling, punctuation), craft elements (plot, character, setting, point of view and imaginative capacity, such skills defined by Bloom as 'putting elements together to form a coherent or functional whole; reorganizing elements into a new pattern or structure through generating, planning, or producing' (Anderson and Krathwohl, 2001, p.21). Evaluation of creative writing therefore must take into account all these factors, and assessment in university Creative Writing courses is usually based on a rubric that attempts to measure the basic elements of narrative" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "14505" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 291, + 125 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 291, + 125 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 291, + 125 + ], + "type": "text", + "content": "craft, as well as the specific requirements on the assignment (Kroll, 1997; Norris, 2013; Davidow and Williams, 2016; Wise and van Luyn, 2020; Carey et al., 2022)." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 137, + 208, + 149 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 137, + 208, + 149 + ], + "spans": [ + { + "bbox": [ + 67, + 137, + 208, + 149 + ], + "type": "text", + "content": "3 Materials and Methods" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 158, + 119, + 170 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 158, + 119, + 170 + ], + "spans": [ + { + "bbox": [ + 67, + 158, + 119, + 170 + ], + "type": "text", + "content": "3.1 Task" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 177, + 291, + 204 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 177, + 291, + 204 + ], + "spans": [ + { + "bbox": [ + 67, + 177, + 291, + 204 + ], + "type": "text", + "content": "The chosen task to compare the LLMs under consideration is defined by the following prompt:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 89, + 216, + 270, + 269 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 89, + 216, + 270, + 269 + ], + "spans": [ + { + "bbox": [ + 89, + 216, + 270, + 269 + ], + "type": "text", + "content": "Write an epic narration of a single combat between Ignatius J. Reilly and a pterodactyl, in the style of John Kennedy Toole." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 282, + 289, + 308 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 282, + 289, + 308 + ], + "spans": [ + { + "bbox": [ + 67, + 282, + 289, + 308 + ], + "type": "text", + "content": "The prompt is provided to the models from a fresh state, without previous context." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 310, + 289, + 350 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 310, + 289, + 350 + ], + "spans": [ + { + "bbox": [ + 67, + 310, + 289, + 350 + ], + "type": "text", + "content": "We believe this task is particularly adequate to challenge the capabilities of models for creative writing, for the following reasons:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 81, + 362, + 291, + 772 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 81, + 362, + 290, + 470 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 362, + 290, + 470 + ], + "spans": [ + { + "bbox": [ + 81, + 362, + 290, + 470 + ], + "type": "text", + "content": "- It is a non-standard, \"wacky\" scenario that has been invented for the occasion, so it is very unlikely that the systems' training sets contain coincident or similar tasks, or pieces of stories that can be reused for the task. No information about this task was posted to the Internet or disseminated in any other way before the LLMs were prompted." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 81, + 481, + 291, + 629 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 481, + 291, + 629 + ], + "spans": [ + { + "bbox": [ + 81, + 481, + 291, + 629 + ], + "type": "text", + "content": "- It features a specific literary character, Ignatius J. Reilly, so we can evaluate the models on how they capture the personality of the character. At the same time, this character appeared in only one book, and does not seem to have been the target of fan fiction. This makes the task more challenging due to having to capture the personality of the protagonist from scarce material, while making it unlikely that the model can just reuse material from existing stories." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 81, + 640, + 290, + 693 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 640, + 290, + 693 + ], + "spans": [ + { + "bbox": [ + 81, + 640, + 290, + 693 + ], + "type": "text", + "content": "- In turn, A Confederacy of Dunces is the only work of its author John Kennedy Toole, so the author's style also needs to be captured from scarce material." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 81, + 705, + 290, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 705, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 81, + 705, + 290, + 772 + ], + "type": "text", + "content": "- This novel is widely considered to be a classic of comic fiction, and won the 1981 Pulitzer Prize in the Fiction category. Thus, writing a story about its protagonist in the author's style sets an adequately high bar." + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 316, + 71, + 526, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 71, + 526, + 138 + ], + "spans": [ + { + "bbox": [ + 316, + 71, + 526, + 138 + ], + "type": "text", + "content": "- The genre requires humor, which is considered to be an especially subtle feature of human language and challenging for machines, including LLMs, to exhibit (Jentzsch and Kersting, 2023)." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 316, + 148, + 526, + 229 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 148, + 526, + 229 + ], + "spans": [ + { + "bbox": [ + 316, + 148, + 526, + 229 + ], + "type": "text", + "content": "- While the task is challenging due to putting together two unlikely antagonists, the prompt's level of detail is open-ended enough to give ample space for creativity, as no specifications are made about setting, weapons, outcome or other aspects of the story." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 303, + 241, + 365, + 253 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 241, + 365, + 253 + ], + "spans": [ + { + "bbox": [ + 303, + 241, + 365, + 253 + ], + "type": "text", + "content": "3.2 Models" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 301, + 258, + 525, + 650 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 258, + 525, + 650 + ], + "spans": [ + { + "bbox": [ + 301, + 258, + 525, + 650 + ], + "type": "text", + "content": "We gave the task to a confederacy of large language models, composed of all such models we could find that (1) were available to the authors by April 20 2023, which was the cutoff date to build our corpus of stories, and (2) were adjusted to conversational settings and instruction-following by using techniques like instruction tuning (Wei et al., 2022; Sanh et al., 2022) or reinforcement learning with human feedback (Ouyang et al., 2022). This is in contrast to \"vanilla\" language models configured to just predict the next word, like plain GPT-3 (Brown et al., 2020) or Llama (Touvron et al., 2023), which generally cannot handle natural prompts like the one we use. We only included distinct models, not front-ends to the same model (but we did include derived models with substantial additions, like Bing Chat which is claimed to use GPT-4 but adds search capabilities, or various models that were fine-tuned from Llama weights). For models that came in a variety of parameter sizes, we used the largest one, or the largest we could execute with local or remote resources. 
For models with several available versions, we used the latest available, except in the case of ChatGPT where we included both the GPT-3.5 and GPT-4 versions, due to the wider availability of 3.5 (the latest version offered for free at cutoff time) and the lack of information on whether GPT-4 is an incremental improvement or a different model with its own tradeoffs." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 651, + 525, + 731 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 651, + 525, + 731 + ], + "spans": [ + { + "bbox": [ + 302, + 651, + 525, + 731 + ], + "type": "text", + "content": "This selection yielded the following 12 language models. We list them in alphabetical order as chronological ordering would be challenging, due to closed releases, opaque updates from some of the commercial products, and many of the models being released almost simultaneously:" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 302, + 733, + 524, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 733, + 524, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 733, + 524, + 772 + ], + "type": "text", + "content": "Alpaca (Taori et al., 2023), a Stanford model fine-tuned from Llama (Touvron et al., 2023) on instruction data generated with the self-instruct" + } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "14506" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 291, + 98 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 291, + 98 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 291, + 98 + ], + "type": "text", + "content": "methods 
of (Wang et al., 2022). We use the 13B-parameter version, the largest available at cutoff." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 100, + 291, + 167 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 100, + 291, + 167 + ], + "spans": [ + { + "bbox": [ + 67, + 100, + 291, + 167 + ], + "type": "text", + "content": "Bard, Google's experimental conversational LLM offering, claimed to be based on a lightweight version of LaMDA (Thoppilan et al., 2022). It can use content from the web to answer questions. Model details have not been made public." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 169, + 291, + 505 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 169, + 291, + 505 + ], + "spans": [ + { + "bbox": [ + 69, + 169, + 291, + 505 + ], + "type": "text", + "content": "Bing Chat, an LLM offered by Microsoft's Bing search engine. Claimed to use GPT-4" + }, + { + "bbox": [ + 69, + 169, + 291, + 505 + ], + "type": "inline_equation", + "content": "^1" + }, + { + "bbox": [ + 69, + 169, + 291, + 505 + ], + "type": "text", + "content": ", further technical details have not been made public. The model performs web searches and uses the results to augment its context window with relevant information. It can also provide links to sources for its claims (although this is not relevant for our creative writing task, where no such links were provided or needed). We used its Creative mode, the obvious fit for our task. A problem worth mentioning is that we found the model to be subject to heavy censorship, which affected our experiment: in most prompting attempts, the story would be deleted by the filtering system before being finished. When this happened, we just reset and re-prompted the model, repeating the process until a full story was obtained. Over 100 tries were needed to obtain 5 non-censored stories. 
We are aware that this may introduce bias (as non-censored stories may have a different quality distribution than what the model could potentially generate without the filter) but this is unavoidable from our end, since we cannot bypass moderation. In any case, the sample does reflect what a user can obtain from the end product, as the censored stories are out of reach." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 507, + 291, + 602 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 507, + 291, + 602 + ], + "spans": [ + { + "bbox": [ + 67, + 507, + 291, + 602 + ], + "type": "text", + "content": "ChatGPT with GPT-3.5, an OpenAI successor to the 175B-parameter GPT-3 model (Brown et al., 2020) which was tuned using reinforcement learning with human feedback, namely a variant of the InstructGPT method by Ouyang et al. (2022). We used the March 23 version provided by OpenAI's free ChatGPT service." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 603, + 291, + 697 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 603, + 291, + 697 + ], + "spans": [ + { + "bbox": [ + 67, + 603, + 291, + 697 + ], + "type": "text", + "content": "ChatGPT with GPT-4, the most advanced language model released by OpenAI at cutoff time. A description of the model is available in (OpenAI, 2023), although essential technical details like the number of parameters have not been published. We used the March 23 version provided by OpenAI's ChatGPT Plus service." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 699, + 291, + 740 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 699, + 291, + 740 + ], + "spans": [ + { + "bbox": [ + 67, + 699, + 291, + 740 + ], + "type": "text", + "content": "Claude is a language model trained by Anthropic. 
While details about its implementation are not public, it is known to be a successor of the model" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 71, + 526, + 151 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 151 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 151 + ], + "type": "text", + "content": "described in (Bai et al., 2022), a 52B-parameter model aligned to be helpful with Constitutional AI, a list of guiding principles provided to the model, combined with a mix of supervised learning and reinforcement learning with AI feedback. We used version 1.2 of the model." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 153, + 526, + 301 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 153, + 526, + 301 + ], + "spans": [ + { + "bbox": [ + 302, + 153, + 526, + 301 + ], + "type": "text", + "content": "Dolly 2.0 (dolly-v2-12b), a 12B-parameter language model trained by Databricks, derived from EleutherAI's Pythia-12B model (Biderman et al., 2023) after fine-tuning on a 15K instruction corpus. At cutoff date, it was the only available conversational LLM where all of its components could be considered fully open source" + }, + { + "bbox": [ + 302, + 153, + 526, + 301 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 302, + 153, + 526, + 301 + ], + "type": "text", + "content": ", as the code, weights and instruction datasets all have open-source licenses compatible with any use, including commercial use, and no data from proprietary systems like ChatGPT has been used for finetuning." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 302, + 525, + 383 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 302, + 525, + 383 + ], + "spans": [ + { + "bbox": [ + 302, + 302, + 525, + 383 + ], + "type": "text", + "content": "GPT4All-J (Anand et al., 2023b), an improvement over its predecessor GPT4All (Anand et al., 2023a). 
The base model is the 6B-parameter GPT-J (Wang and Komatsuzaki, 2021), which has been fine-tuned on a dataset expanded from a mix of existing sources." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 384, + 525, + 450 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 384, + 525, + 450 + ], + "spans": [ + { + "bbox": [ + 302, + 384, + 525, + 450 + ], + "type": "text", + "content": "Koala (Geng et al., 2023), a model fine-tuned from Llama (Touvron et al., 2023) by researchers from the university of Berkeley, on a variety of dialogue data obtained from the web. We use the 13B-parameter version." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 452, + 526, + 533 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 452, + 526, + 533 + ], + "spans": [ + { + "bbox": [ + 302, + 452, + 526, + 533 + ], + "type": "text", + "content": "OpenAssistant (Köpf et al., 2023) is an LLM fine-tuned on a large, free, human-generated conversation corpus created by a crowdfunding effort involving over 13,500 volunteers. We used the OASFT-Llama-30B model, fine-tuned from the 30B-parameter Llama (Touvron et al., 2023) model." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 534, + 525, + 628 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 534, + 525, + 628 + ], + "spans": [ + { + "bbox": [ + 302, + 534, + 525, + 628 + ], + "type": "text", + "content": "StableLM is Stability AI's series of language models. We used StableLM-Tuned-Alpha-7B. With 7B parameters, this is the largest model available (at cutoff time) among a series of models trained on a dataset built from The Pile (Gao et al., 2021) and fine-tuned on a combination of conversational LLM corpora." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 629, + 525, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 629, + 525, + 696 + ], + "spans": [ + { + "bbox": [ + 302, + 629, + 525, + 696 + ], + "type": "text", + "content": "Vicuna (Chiang et al., 2023) is another member of the family of models obtained by fine-tuning Llama (Touvron et al., 2023), in this case with user-shared conversations with ChatGPT. We used the 13B-parameter version of the model." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 707, + 415, + 719 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 707, + 415, + 719 + ], + "spans": [ + { + "bbox": [ + 302, + 707, + 415, + 719 + ], + "type": "text", + "content": "3.3 Evaluation rubric" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 725, + 525, + 752 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 725, + 525, + 752 + ], + "spans": [ + { + "bbox": [ + 302, + 725, + 525, + 752 + ], + "type": "text", + "content": "The creative writing rubric was designed for assessment of creative writing assignments in uni" + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 67, + 750, + 286, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 750, + 286, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 750, + 286, + 772 + ], + "type": "text", + "content": "1https://blogs.bing.com/search/march_2023/Confirmed-the-new-Bing-runs-on-OpenAIÀZs-GPT-4" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 315, + 760, + 520, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 760, + 520, + 772 + ], + "spans": [ + { + "bbox": [ + 315, + 760, + 520, + 772 + ], + "type": "text", + "content": "2https://opensource.org/definition-annotated/" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + 
"angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "14507" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 69, + 68, + 526, + 196 + ], + "blocks": [ + { + "bbox": [ + 69, + 68, + 526, + 196 + ], + "lines": [ + { + "bbox": [ + 69, + 68, + 526, + 196 + ], + "spans": [ + { + "bbox": [ + 69, + 68, + 526, + 196 + ], + "type": "table", + "html": "
<table><tr><td>ID</td><td>Description</td></tr>
<tr><td>1</td><td>Overall/holistic/cohesive readability of the story (not just a compilation of elements).</td></tr>
<tr><td>2</td><td>Use of key narrative elements - vocabulary choice, imagery, setting, themes, dialogue, characterisation, point of view.</td></tr>
<tr><td>3</td><td>Structural elements and presentation which reflect the control of structural elements such as spelling, grammar, punctuation, paragraphing, and formatting.</td></tr>
<tr><td>4</td><td>Overall plot logic: hook, conflict, initial crisis, rising and falling action, denouement/resolution (Freytag's pyramid).</td></tr>
<tr><td>5</td><td>Creativity/innovation/originality/research-credibility, new knowledge, avoidance of cliché and derivative tropes.</td></tr>
<tr><td>6</td><td>Incorporation of the John Kennedy Toole style of writing using the indicators/characteristics listed.</td></tr>
<tr><td>7</td><td>Understanding and habitation of the epic genre of heroic/legendary adventure.</td></tr>
<tr><td>8</td><td>Description and credibility of a single combat scene.</td></tr>
<tr><td>9</td><td>Accurate inclusion of two main characters Ignatius J. Reilly and a pterodactyl in action and description.</td></tr>
<tr><td>10</td><td>Use of a characteristically dark humorous tone.</td></tr></table>
", + "image_path": "299c8a21353ea1f74e443577861224cc79f755156722840f5e1173f2ad423c59.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 204, + 525, + 230 + ], + "lines": [ + { + "bbox": [ + 67, + 204, + 525, + 230 + ], + "spans": [ + { + "bbox": [ + 67, + 204, + 525, + 230 + ], + "type": "text", + "content": "Table 1: Creative writing evaluation rubric. All items are scored out of ten points. Marking guideline: Emerging 1-4, Competent 5-8, Sophisticated 9-10." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 66, + 250, + 291, + 439 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 250, + 291, + 439 + ], + "spans": [ + { + "bbox": [ + 66, + 250, + 291, + 439 + ], + "type": "text", + "content": "versity creative writing courses, and is taken in part from a university textbook by one of the authors of this article, *Playing with Words* (Davidow and Williams, 2016) and an article that justifies the use of this rubric (Carey et al., 2022). This rubric evaluates creative production in five holistic craft-based criteria and measures craft skills based on a writing style outlined in the article: among others, Flaubert's insistence on *le mot juste* (the right word or expression), Strunk and White's *The Elements of Style* (2008[1918]), George Orwell's rules for concreteness and clarity (Orwell, 1946); and Annie Dillard's rules for writing good prose (Dillard, 1981)." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 441, + 291, + 548 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 441, + 291, + 548 + ], + "spans": [ + { + "bbox": [ + 67, + 441, + 291, + 548 + ], + "type": "text", + "content": "The rubric for this AI task adds five more criteria which address the specific prompt requirements, such as genre, style, tone, character and action. 
Each of the ten criteria is awarded 10 points out of a total 100 points. The rubric has been specifically designed to measure the quality of writing craft, to avoid formulaic, rule-based writing and to address the very specific task addressed here." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 550, + 291, + 672 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 550, + 291, + 672 + ], + "spans": [ + { + "bbox": [ + 67, + 550, + 291, + 672 + ], + "type": "text", + "content": "The criteria are detailed in Table 1, with more details given in the Appendix C. The holistic scale (emerging, competent, sophisticated) guides human raters to assess holistically: 'a holistic scale measures the relative success of a text but does so through a rubric that incorporates many of the traits in analytic scoring as heuristics towards a conception of a whole rather than as a sum of autonomous components' (Perelman, 2018, p.16)." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 686, + 210, + 699 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 686, + 210, + 699 + ], + "spans": [ + { + "bbox": [ + 67, + 686, + 210, + 699 + ], + "type": "text", + "content": "3.4 Evaluation methodology" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 705, + 291, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 705, + 291, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 705, + 291, + 773 + ], + "type": "text", + "content": "We prompted each of the LLMs 5 times with the prompt given in Section 3.1. Each prompt was made from a fresh state, i.e., in a zero-shot setting without any previous context that could help guide the models. 
The resulting stories had an average of" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 250, + 508, + 263 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 250, + 508, + 263 + ], + "spans": [ + { + "bbox": [ + 302, + 250, + 508, + 263 + ], + "type": "text", + "content": "379 words (std = 248, min = 23, max = 1223)." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 266, + 526, + 414 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 266, + 526, + 414 + ], + "spans": [ + { + "bbox": [ + 302, + 266, + 526, + 414 + ], + "type": "text", + "content": "Then, we also asked 5 human writers to each write a story following the same prompt. For uniformity, we suggested a length range coherent with the LLM-generated stories (250 to 1200 words). The writers were Honours and postgraduate Creative Writing students that volunteered for the task, and all of them studied the specific task requirements (e.g. John Kennedy Toole's style) before writing their stories. However, they were not given access to the AI-generated stories and they were instructed not to use LLMs at all to help them write." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 417, + 525, + 484 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 417, + 525, + 484 + ], + "spans": [ + { + "bbox": [ + 302, + 417, + 525, + 484 + ], + "type": "text", + "content": "The result is, thus, a corpus of 60 AI-generated stories (5 for each of the 12 considered LLMs) plus an additional 5 human-generated stories, all in plain text format. The corpus is available at https://doi.org/10.5281/zenodo.8435671." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 487, + 525, + 635 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 487, + 525, + 635 + ], + "spans": [ + { + "bbox": [ + 302, + 487, + 525, + 635 + ], + "type": "text", + "content": "The only preprocessing made to the stories is that (1) we removed leading sentences that described the task, often present in LLM answers (e.g.: \"Here is a potential epic narration in the exaggerated style of John Kennedy Toole's A Confederacy of Dunces:\") (2) we removed titles from stories that had them, and (3) we unified paragraph formatting, leaving one line between paragraphs in all the plain text files. Other than these changes, made for uniformity and to preserve the blindness of the rating process, we left the text as it was." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 638, + 526, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 638, + 526, + 773 + ], + "spans": [ + { + "bbox": [ + 302, + 638, + 526, + 773 + ], + "type": "text", + "content": "We recruited 10 raters, also Honours and postgraduate Creative Writing students that were acquainted with the specific requirements of the task, and we instructed them to grade stories according to the rubric. Since the raters were volunteers, to keep the workload low, each rater did not rate all the stories. Instead, we divided the 65 stories into 5 groups of 13 stories each (each group containing one story by each LLM, plus one story by a human) and assigned one rater to each group. 
In this way," + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "14508" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 79, + 68, + 513, + 191 + ], + "blocks": [ + { + "bbox": [ + 79, + 68, + 513, + 191 + ], + "lines": [ + { + "bbox": [ + 79, + 68, + 513, + 191 + ], + "spans": [ + { + "bbox": [ + 79, + 68, + 513, + 191 + ], + "type": "table", + "html": "
<table><tr><td>Rubric item</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td><td>9</td><td>10</td><td>overall</td></tr>
<tr><td>chatgpt-gpt4</td><td><b>8.7±0.8</b></td><td><b>8.7±0.7</b></td><td><b>8.4±1.3</b></td><td><b>8.3±0.7</b></td><td>7.6±1</td><td><b>8.0±1.2</b></td><td><b>8.1±1.4</b></td><td><b>8.5±0.8</b></td><td><b>7.9±1.6</b></td><td>6.0±2.8</td><td><b>80.2±7.3</b></td></tr>
<tr><td>claude12</td><td>8.0±1.7</td><td>8.0±1.6</td><td>8.1±1.2</td><td>7.9±1.8</td><td>7.1±2.3</td><td>7.5±2</td><td>6.4±2.2</td><td>7.5±1.8</td><td>7.4±2.5</td><td><b>6.5±2.5</b></td><td>74.4±15.9</td></tr>
<tr><td>human</td><td>7.3±2.3</td><td>7.8±1.8</td><td>7.3±1.7</td><td>7.2±1.8</td><td><b>8.0±2</b></td><td>7.2±2.4</td><td>4.9±2.1</td><td>6.3±2.2</td><td>7.7±2.1</td><td>6.4±3.4</td><td>70.1±17.4</td></tr>
<tr><td>bing</td><td>7.8±2</td><td>7.5±2.2</td><td>7.9±1.7</td><td>7.4±2.1</td><td>7.0±1.6</td><td>6.8±2.4</td><td>5.3±2.9</td><td>6.2±2.1</td><td>7.4±2.2</td><td>6.2±2.6</td><td>69.5±18.4</td></tr>
<tr><td>chatgpt-gpt35</td><td>7.5±2</td><td>6.5±2.4</td><td>8.1±1.3</td><td>7.0±2.2</td><td>5.4±2.5</td><td>5.3±2.4</td><td>6.8±1.5</td><td>7.6±1.2</td><td>5.5±2.5</td><td>3.3±2.8</td><td>63.0±15.4</td></tr>
<tr><td>koala</td><td>7.5±2.5</td><td>6.7±2.2</td><td>8.2±1.2</td><td>6.8±2.6</td><td>5.8±2.3</td><td>4.8±2.7</td><td>5.8±2.4</td><td>5.5±2.3</td><td>5.5±2.3</td><td>3.4±3.2</td><td>60.0±19.2</td></tr>
<tr><td>vicuna</td><td>7.9±1.7</td><td>6.7±1.6</td><td>8.1±1.3</td><td>7.0±1.6</td><td>5.1±1.9</td><td>4.6±2.3</td><td>5.7±2.3</td><td>6.1±1.9</td><td>5.4±2.7</td><td>2.4±1.9</td><td>59.0±13.8</td></tr>
<tr><td>oa</td><td>7.2±2.2</td><td>5.8±2.4</td><td>7.2±2.5</td><td>6.2±2.6</td><td>4.9±2.1</td><td>3.9±2.4</td><td>5.8±2.4</td><td>6.5±2.2</td><td>4.3±2.3</td><td>2.9±3.1</td><td>54.7±18</td></tr>
<tr><td>bard</td><td>6.5±2.5</td><td>4.9±2.1</td><td>6.8±1.9</td><td>5.5±2.7</td><td>3.9±2.1</td><td>3.8±2.5</td><td>4.7±2.6</td><td>4.6±2.7</td><td>5.0±2.4</td><td>2.5±2</td><td>48.2±20.1</td></tr>
<tr><td>gpt4all</td><td>6.5±2.2</td><td>5.4±1.7</td><td>7.2±1.7</td><td>6.5±2.1</td><td>4.1±2.2</td><td>2.4±2.2</td><td>5.4±2.5</td><td>5.6±2.4</td><td>2.5±1.4</td><td>1.2±0.8</td><td>46.8±13.1</td></tr>
<tr><td>stablelm</td><td>5.5±1.8</td><td>5.0±2.5</td><td>6.6±1.9</td><td>3.8±2</td><td>3.2±1.5</td><td>2.1±2.2</td><td>4.4±1.9</td><td>3.8±2</td><td>2.9±2.6</td><td>1.4±1.5</td><td>38.7±17.2</td></tr>
<tr><td>dolly</td><td>4.6±2.2</td><td>5.0±2.2</td><td>5.6±2.5</td><td>3.2±1.9</td><td>4.2±2.8</td><td>3.1±2.2</td><td>4.4±1.9</td><td>3.3±1.8</td><td>3.0±2</td><td>1.5±1.5</td><td>37.9±13.6</td></tr>
<tr><td>alpaca</td><td>5.2±3.1</td><td>3.1±1.4</td><td>4.9±3</td><td>4.2±1.9</td><td>1.9±1</td><td>2.0±1.4</td><td>3.7±3</td><td>3.9±2.8</td><td>2.1±1.5</td><td>1.1±0.6</td><td>32.1±15.7</td></tr>
<tr><td>average</td><td>6.9±2.1</td><td>6.2±1.9</td><td>7.3±1.8</td><td>6.2±2</td><td>5.2±2</td><td>4.7±2.2</td><td>5.5±2.3</td><td>5.8±2</td><td>5.1±2.2</td><td>3.4±2.2</td><td>56.6±15.8</td></tr></table>
", + "image_path": "c9e001ec16d264ff64e709ce2439eacd3290051f6d7f7433c0a237016f3b68f1.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 200, + 526, + 250 + ], + "lines": [ + { + "bbox": [ + 67, + 200, + 526, + 250 + ], + "spans": [ + { + "bbox": [ + 67, + 200, + 526, + 250 + ], + "type": "text", + "content": "Table 2: Results for each rubric item, as well as overall score. Each cell shows average " + }, + { + "bbox": [ + 67, + 200, + 526, + 250 + ], + "type": "inline_equation", + "content": "\\pm" + }, + { + "bbox": [ + 67, + 200, + 526, + 250 + ], + "type": "text", + "content": " standard deviation for the ratings achieved by a given model (or human writers) on a given rubric item. The bottom line shows the average among all models (and human writers). Models are sorted by overall score. The best result for each rubric item is highlighted in boldface." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 66, + 270, + 290, + 446 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 270, + 290, + 446 + ], + "spans": [ + { + "bbox": [ + 66, + 270, + 290, + 446 + ], + "type": "text", + "content": "we ensure (1) that we have at least two ratings per story, allowing us to measure inter-rater agreement, (2) that comparisons are fair, in the sense that no LLM (or the humans) is advantaged by being assigned more lenient raters, because each LLM (and humans) receives exactly one rating by each of the 10 raters, and (3) since each rater always gets one story from each model (and one human), we can expect that each will be rating a diverse set of stories covering a wide range of ability levels, which helps the marking process as it allows for comparative analysis between various performances, enabling more accurate pinpointing of each story's quality." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 447, + 290, + 514 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 447, + 290, + 514 + ], + "spans": [ + { + "bbox": [ + 67, + 447, + 290, + 514 + ], + "type": "text", + "content": "Stories were assigned random identifiers before sending them to raters, so that the process was blind: to avoid biases, raters knew that they would be evaluating human and AI-generated stories, but were unaware of the origin of each story." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 515, + 290, + 623 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 515, + 290, + 623 + ], + "spans": [ + { + "bbox": [ + 67, + 515, + 290, + 623 + ], + "type": "text", + "content": "Raters were sent all stories at once and they were free to go back and change the ratings of previously-rated stories. In addition, all of them were experienced assessors in terms of Creative Writing texts, with previous experience in applying the scale. These precautions mitigate the need for specific calibration (Karpinska et al., 2021) that would strain our resources." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 636, + 127, + 649 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 636, + 127, + 649 + ], + "spans": [ + { + "bbox": [ + 67, + 636, + 127, + 649 + ], + "type": "text", + "content": "4 Results" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 660, + 147, + 672 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 660, + 147, + 672 + ], + "spans": [ + { + "bbox": [ + 67, + 660, + 147, + 672 + ], + "type": "text", + "content": "4.1 Agreement" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 678, + 291, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 678, + 291, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 678, + 291, + 773 + ], + "type": "text", + "content": "To gauge the reliability of our results, we compute inter-rater agreement between the two ratings given to each story for each individual rubric item. We use linearly weighted Cohen's kappa (Cohen, 1968), which is appropriate for ordinal scales like ours, obtaining a value of 0.48, " + }, + { + "bbox": [ + 67, + 678, + 291, + 773 + ], + "type": "inline_equation", + "content": "95\\%" + }, + { + "bbox": [ + 67, + 678, + 291, + 773 + ], + "type": "text", + "content": " CI [0.43, 0.54]. This is interpreted as \"moderate" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 270, + 526, + 379 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 270, + 526, + 379 + ], + "spans": [ + { + "bbox": [ + 302, + 270, + 526, + 379 + ], + "type": "text", + "content": "agreement\", which is a positive result taking into account the obvious subjectivity involved in rating stories. 
If we instead focus on overall scores (sums of rubric items), the Pearson correlation between the scores given to each story by each group of raters is 0.58 (" + }, + { + "bbox": [ + 302, + 270, + 526, + 379 + ], + "type": "inline_equation", + "content": "p < 0.00001" + }, + { + "bbox": [ + 302, + 270, + 526, + 379 + ], + "type": "text", + "content": "), again indicating a reasonable degree of consistency between raters given the subjectivity of the task." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 389, + 412, + 401 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 389, + 412, + 401 + ], + "spans": [ + { + "bbox": [ + 302, + 389, + 412, + 401 + ], + "type": "text", + "content": "4.2 General overview" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 407, + 525, + 474 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 407, + 525, + 474 + ], + "spans": [ + { + "bbox": [ + 302, + 407, + 525, + 474 + ], + "type": "text", + "content": "Table 2 shows a comprehensive overview of the ratings that each of the LLMs (and humans) obtained for each rubric item, as well as in terms of overall score. Additionally, a box-and-whisker plot comparing overall score can be seen in Figure 1." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 475, + 525, + 636 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 475, + 525, + 636 + ], + "spans": [ + { + "bbox": [ + 302, + 475, + 525, + 636 + ], + "type": "text", + "content": "ChatGPT with GPT-4 generates the best-rated stories, both in terms of overall score and in 8 out of 10 of the individual rubric categories. However, human writers are rated best in terms of originality (rubric item 5), and Claude was rated best in the use of dark humor (rubric item 10), with humans a close second. 
GPT-4 is also remarkably consistent, showing low standard deviations not only with respect to human writers (which is expected, as our human stories were authored by five different humans, whose skill levels may vary) but also with respect to the rest of the LLMs." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 638, + 526, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 638, + 526, + 773 + ], + "spans": [ + { + "bbox": [ + 302, + 638, + 526, + 773 + ], + "type": "text", + "content": "If we compare LLMs to each other, the best performances correspond to commercial offerings, including (apart from the aforementioned GPT-4) Claude, Bing Chat and the GPT-3.5 version of ChatGPT. Open-source models are clearly behind, with the best (Koala) achieving 60.0 overall score, contrasting with the 80.2 obtained by GPT-4. Although the best-performing LLMs are generally better across the board, some idiosyncrasies can be observed: e.g., GPT-4 tops almost all rubric items" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "14509" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 263, + 84 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 263, + 84 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 263, + 84 + ], + "type": "text", + "content": "but is outperformed by two LLMs at humor." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 86, + 291, + 316 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 86, + 291, + 316 + ], + "spans": [ + { + "bbox": [ + 67, + 86, + 291, + 316 + ], + "type": "text", + "content": "When we compare LLMs to human writers, significance testing on overall score (2-tailed t-test assuming unequal variances) fails to detect significant differences between humans and the top 6 AI models with " + }, + { + "bbox": [ + 67, + 86, + 291, + 316 + ], + "type": "inline_equation", + "content": "\\alpha = 0.05" + }, + { + "bbox": [ + 67, + 86, + 291, + 316 + ], + "type": "text", + "content": ". Only the 6 bottom AI models are significantly worse than humans at this significance level. Note, however, that the test has a low statistical power due to the small sample size (10 ratings per model). If we instead perform a test on individual metrics, so our sample size is 100 (with the null hypothesis being no difference between humans and each LLM in random individual metric scores), then GPT-4 is identified as significantly better than the human writers " + }, + { + "bbox": [ + 67, + 86, + 291, + 316 + ], + "type": "inline_equation", + "content": "(p = 0.00031)" + }, + { + "bbox": [ + 67, + 86, + 291, + 316 + ], + "type": "text", + "content": ", Claude and Bing's scores are not significantly different from those of humans, and all the rest of the LLMs score significantly worse than humans." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 317, + 291, + 520 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 317, + 291, + 520 + ], + "spans": [ + { + "bbox": [ + 69, + 317, + 291, + 520 + ], + "type": "text", + "content": "Looking at individual metric scores, structural elements (rubric item 3) are the easiest category (with an average rating across all stories of 7.3, and all models but one obtaining at least a 5 on average). 
Humor (rubric item 10) is clearly the hardest, with an average score of 3.4, and we will analyze it in more detail below. Incorporating John Kennedy Toole's style is the second hardest, with 4.7. Comparing humans to LLMs, humans (as already mentioned) excel at originality and humor, but are clearly behind the best LLMs in terms of readability (item 1), where they are outperformed by 6 LLMs, and even more so in use of the epic genre (item 7), where they score 4.9 and are outperformed by 8 LLMs." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 522, + 291, + 562 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 522, + 291, + 562 + ], + "spans": [ + { + "bbox": [ + 67, + 522, + 291, + 562 + ], + "type": "text", + "content": "We now analyze in more detail some of the individual items that show more interesting comparisons between human writers and LLMs." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 577, + 130, + 588 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 577, + 130, + 588 + ], + "spans": [ + { + "bbox": [ + 67, + 577, + 130, + 588 + ], + "type": "text", + "content": "4.3 Humor" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 597, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 597, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 597, + 291, + 772 + ], + "type": "text", + "content": "Figure 2 shows a box plot that complements the information on Table 2 for the humor rubric item. The results for this item have two interesting characteristics. Firstly, it is clearly the most difficult rubric item, with an average score across models of 3.4, and the best obtaining 6.5. Even humans obtain a lower score in humor than in most items, which may be a consequence of humor being highly subjective. 
Secondly, as evidenced both in the table and plot, there is a rather stark binary divide between the contenders that \"get\" humor and those that do not: Claude, Bing and GPT-4, together with the human writers, obtain average scores between" + } + ] + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 306, + 71, + 523, + 286 + ], + "blocks": [ + { + "bbox": [ + 306, + 71, + 523, + 286 + ], + "lines": [ + { + "bbox": [ + 306, + 71, + 523, + 286 + ], + "spans": [ + { + "bbox": [ + 306, + 71, + 523, + 286 + ], + "type": "image", + "image_path": "c1b8e58a5e63b1d628d2340a7cc8cb8fb19d66f4a82328c7a6067417f31f68d4.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 297, + 525, + 334 + ], + "lines": [ + { + "bbox": [ + 302, + 297, + 525, + 334 + ], + "spans": [ + { + "bbox": [ + 302, + 297, + 525, + 334 + ], + "type": "text", + "content": "Figure 2: Box plot comparing humor ratings for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "bbox": [ + 301, + 354, + 525, + 476 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 354, + 525, + 476 + ], + "spans": [ + { + "bbox": [ + 301, + 354, + 525, + 476 + ], + "type": "text", + "content": "6 and 6.5; whereas the rest of the models achieve very low scores of 3.4 or less. Significance testing also confirms this divide: despite the small sample size of 10 humor ratings per model, a 2-tailed t-test with " + }, + { + "bbox": [ + 301, + 354, + 525, + 476 + ], + "type": "inline_equation", + "content": "\\alpha = 0.05" + }, + { + "bbox": [ + 301, + 354, + 525, + 476 + ], + "type": "text", + "content": " confirms that the models in the second group are significantly worse than the human writers, as well as the LLMs in the first group. 
This suggests that grasping human humor might be an emergent ability of larger LLMs." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 476, + 525, + 611 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 476, + 525, + 611 + ], + "spans": [ + { + "bbox": [ + 302, + 476, + 525, + 611 + ], + "type": "text", + "content": "In this respect, a recent preprint (Jentzsch and Kersting, 2023) concluded that ChatGPT has \"a limited reflection of humor\" and \"cannot yet confidently create intentionally funny original content\". This study used the GPT 3.5 version of ChatGPT, so it is in line with our results (in which that model obtains an average humor score of 3.3). However, as we have seen, more powerful LLMs have overcome that limitation, as their generated stories are clearly rated as humorous." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 620, + 379, + 634 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 620, + 379, + 634 + ], + "spans": [ + { + "bbox": [ + 302, + 620, + 379, + 634 + ], + "type": "text", + "content": "4.4 Creativity" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 301, + 638, + 525, + 718 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 638, + 525, + 718 + ], + "spans": [ + { + "bbox": [ + 301, + 638, + 525, + 718 + ], + "type": "text", + "content": "We now focus on rubric item 5, which rates creativity and originality, as it is a hallmark of creative writing and also the only category where human writers have outperformed all the LLMs in our analysis. Figure 3 shows a box plot that complements the information on Table 2." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 719, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 719, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 719, + 525, + 772 + ], + "type": "text", + "content": "The same three LLMs that stood out in the humor category are also the best in terms of creativity, although the difference is not as stark. Regardless, a t-test still distinguishes both groups as it shows all" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "14510" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 70, + 71, + 289, + 286 + ], + "blocks": [ + { + "bbox": [ + 70, + 71, + 289, + 286 + ], + "lines": [ + { + "bbox": [ + 70, + 71, + 289, + 286 + ], + "spans": [ + { + "bbox": [ + 70, + 71, + 289, + 286 + ], + "type": "image", + "image_path": "87ca571fbf28f5ca359b66f7bc31b5d5def0b8fe8aef6cfc2ee2d3ab28f1fb02.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 297, + 291, + 334 + ], + "lines": [ + { + "bbox": [ + 67, + 297, + 291, + 334 + ], + "spans": [ + { + "bbox": [ + 67, + 297, + 291, + 334 + ], + "type": "text", + "content": "Figure 3: Box plot comparing creativity ratings for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 354, + 290, + 407 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 354, + 290, + 407 + ], + "spans": [ + { + "bbox": [ + 67, + 354, + 290, + 407 + ], + "type": "text", + "content": "the rest of the LLMs to be rated as significantly less creative than our human writers, while for these three we cannot reject the null hypothesis that they are as original as the human writers." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 408, + 290, + 476 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 408, + 290, + 476 + ], + "spans": [ + { + "bbox": [ + 67, + 408, + 290, + 476 + ], + "type": "text", + "content": "Overall, from our results and in terms of human perception of the output, the answer to whether LLMs can produce creative stories (Franceschelli and Musolesi, 2023) is yes, although humans still retain an edge in this respect." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 485, + 137, + 497 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 485, + 137, + 497 + ], + "spans": [ + { + "bbox": [ + 67, + 485, + 137, + 497 + ], + "type": "text", + "content": "4.5 Epicness" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 502, + 290, + 570 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 502, + 290, + 570 + ], + "spans": [ + { + "bbox": [ + 67, + 502, + 290, + 570 + ], + "type": "text", + "content": "Finally, we analyze rubric item 7 (understanding and habitation of the epic genre) for the opposite reason as in the previous section: it is the item where humans do worst compared to LLMs (see Table 2). A box plot is provided in Figure 4." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 571, + 290, + 664 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 571, + 290, + 664 + ], + "spans": [ + { + "bbox": [ + 67, + 571, + 290, + 664 + ], + "type": "text", + "content": "In this case, the results have a more atypical profile, with substantial differences with respect to overall scores. Two models perform significantly better than the human writers " + }, + { + "bbox": [ + 67, + 571, + 290, + 664 + ], + "type": "inline_equation", + "content": "(\\alpha = 0.05)" + }, + { + "bbox": [ + 67, + 571, + 290, + 664 + ], + "type": "text", + "content": ": both versions of ChatGPT. Six other models obtain a better average rating than humans, but the difference is not detected as significant." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 666, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 666, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 666, + 291, + 772 + ], + "type": "text", + "content": "Interestingly, Bing clearly lags behind both ChatGPT versions, despite being based on GPT-4. This might be related to bias introduced by the system's censorship. On the other hand, some models whose overall scores are in the bottom half (OpenAssistant, GPT4All) are reasonably good at epic narration, outperforming humans and Bing (which are better than them in almost all categories)."
+ } + ] + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 305, + 69, + 525, + 288 + ], + "blocks": [ + { + "bbox": [ + 305, + 69, + 525, + 288 + ], + "lines": [ + { + "bbox": [ + 305, + 69, + 525, + 288 + ], + "spans": [ + { + "bbox": [ + 305, + 69, + 525, + 288 + ], + "type": "image", + "image_path": "323cffdbb14cc5f62ae0f57b58c62642dda36ee63bed64fe64ce9e662b542f35.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 297, + 526, + 334 + ], + "lines": [ + { + "bbox": [ + 302, + 297, + 526, + 334 + ], + "spans": [ + { + "bbox": [ + 302, + 297, + 526, + 334 + ], + "type": "text", + "content": "Figure 4: Box plot comparing epicness ratings for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 354, + 379, + 367 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 354, + 379, + 367 + ], + "spans": [ + { + "bbox": [ + 302, + 354, + 379, + 367 + ], + "type": "text", + "content": "5 Discussion" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 301, + 376, + 525, + 524 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 376, + 525, + 524 + ], + "spans": [ + { + "bbox": [ + 301, + 376, + 525, + 524 + ], + "type": "text", + "content": "We have evaluated recent LLMs on a creative writing task in English, using a carefully-designed scenario to provide a demanding challenge and avoid confounding factors like training data memorization (Carlini et al., 2023). To our knowledge, this is the most thorough evaluation of LLMs on creative writing conducted so far, both in terms of scope (12 LLMs considered, plus comparison to human writers) and detail (using human evaluation with a 10-item rubric based on established creative writing evaluation practices)." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 525, + 525, + 646 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 525, + 525, + 646 + ], + "spans": [ + { + "bbox": [ + 302, + 525, + 525, + 646 + ], + "type": "text", + "content": "Concurrently with our work, the recent preprint by Chakrabarty et al. (2023) provides an evaluation of three of the top-performing commercial LLMs (ChatGPT, GPT-4 and Claude) for creative writing. This approach is close to ours, as it uses the models in a zero-shot setting and evaluation is performed by humans using a specific rubric. However, there are important methodological differences between the two studies, which we summarize here:" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 311, + 656, + 525, + 772 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 312, + 656, + 525, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 656, + 525, + 723 + ], + "spans": [ + { + "bbox": [ + 312, + 656, + 525, + 723 + ], + "type": "text", + "content": "1. The human stories used by Chakrabarty et al. (2023) are stories published in the New Yorker, by highly successful authors (including Nobel prize winners), whereas ours are written by Creative Writing students." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 311, + 732, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 732, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 311, + 732, + 525, + 772 + ], + "type": "text", + "content": "2. 
In their setting, the human-written stories are pre-existing (and selected for publication in the New Yorker, as mentioned above) so their" + } + ] + } + ], + "index": 14 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 311, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 311, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 311, + 791 + ], + "type": "text", + "content": "14511" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 89, + 71, + 290, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 89, + 71, + 290, + 138 + ], + "spans": [ + { + "bbox": [ + 89, + 71, + 290, + 138 + ], + "type": "text", + "content": "writers were unconstrained when they created them, while the LLMs have to adapt to write an alternative story with the same plot. In ours, humans and LLMs are given the exact same prompt to work with." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 76, + 149, + 291, + 682 + ], + "type": "list", + "angle": 0, + "index": 5, + "blocks": [ + { + "bbox": [ + 77, + 149, + 291, + 366 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 149, + 291, + 366 + ], + "spans": [ + { + "bbox": [ + 77, + 149, + 291, + 366 + ], + "type": "text", + "content": "3. In terms of length, the stories they work with are over three times as long as ours on average. In addition, while both studies try to make sentence lengths similar between humans and LLMs, in their case the human writers originally wrote their stories unconstrained (or under loose constraints) and the LLM-generated stories were calibrated to have similar lengths by an iterative prompting process. In our case, the LLMs were unconstrained in terms of length, and the human writers were advised to target a length range loosely similar to LLM-generated stories. 
Thus, with respect to theirs, our approach has the disadvantage of looser control on story length, but the advantage of using a single zero-shot prompt." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 76, + 376, + 291, + 552 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 376, + 291, + 552 + ], + "spans": [ + { + "bbox": [ + 76, + 376, + 291, + 552 + ], + "type": "text", + "content": "4. Their study spans a variety of story prompts, while we focus on a single prompt and setting. The flip side is that our rubric can be adapted to specific requirements like humor and Toole style, whereas theirs is necessarily more generic. In addition, our narrower focus allows us to have LLMs generate several alternative stories, so we can perform more statistical analysis: we consider the distribution within each LLM and perform statistical testing, which cannot be done in Chakrabarty et al. (2023)'s setting as they generate a single story per prompt and LLM." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 76, + 563, + 290, + 630 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 563, + 290, + 630 + ], + "spans": [ + { + "bbox": [ + 76, + 563, + 290, + 630 + ], + "type": "text", + "content": "5. Since their study is based on existing stories that are published online, there is the possibility that some are contained in the tested LLMs' training data. In our case, we designed the study to prevent training data reuse." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 76, + 641, + 290, + 682 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 641, + 290, + 682 + ], + "spans": [ + { + "bbox": [ + 76, + 641, + 290, + 682 + ], + "type": "text", + "content": "6. The rubrics are different: Chakrabarty et al. (2023) use a rubric based on the Torrance tests of creative thinking (Torrance, 1974)."
+ } + ] + } + ], + "index": 4 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 67, + 692, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 692, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 692, + 291, + 772 + ], + "type": "text", + "content": "The outcome of this study is substantially different from ours, with LLM-generated stories rated clearly behind human-authored ones. This is not surprising considering the methodological differences: in particular, differences 1 and 2 in the list above clearly set a higher bar for LLMs, as they" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 71, + 526, + 206 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 206 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 206 + ], + "type": "text", + "content": "are compared to highly successful human stories by top authors who wrote freely and the LLMs are asked to adapt to their plots. We hypothesize that these are the main reasons for the difference in outcome. On the other hand, item 5 in the list above could in principle benefit LLMs, and there are other factors that could benefit humans or LLMs in non-obvious ways (including items 3, 4 and 6, as well as different story genres and target lengths). This underscores the need for more studies in this area."
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 219, + 381, + 232 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 219, + 381, + 232 + ], + "spans": [ + { + "bbox": [ + 302, + 219, + 381, + 232 + ], + "type": "text", + "content": "6 Conclusion" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 242, + 526, + 498 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 242, + 526, + 498 + ], + "spans": [ + { + "bbox": [ + 302, + 242, + 526, + 498 + ], + "type": "text", + "content": "The results show that state-of-the-art LLMs can perform a creative writing task at a very competent level, with the top two (ChatGPT with GPT-4 and Claude) achieving high scores that outperform human writers in most rubric categories. While we must be careful not to take this as evidence of \"superhuman storytelling\" (both because our sample size is not large enough to draw such categorical conclusions, and because our 5 human writers are not necessarily representative of human writing ability as a whole), it does at least strongly suggest that these models' stories are not distinguishably worse than those by reasonably-trained humans. This is even more remarkable given that we did not use any in-context learning or other techniques to optimize the LLMs for the task, but just a straightforward prompt from a fresh state, so it is possible that even better results are achievable with careful prompting." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 501, + 525, + 540 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 501, + 525, + 540 + ], + "spans": [ + { + "bbox": [ + 302, + 501, + 525, + 540 + ], + "type": "text", + "content": "Our analysis also shows that the best results are achieved by commercial LLMs, with open-source models clearly lagging behind at the moment."
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 542, + 525, + 663 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 542, + 525, + 663 + ], + "spans": [ + { + "bbox": [ + 302, + 542, + 525, + 663 + ], + "type": "text", + "content": "Looking at individual characteristics, humans retain the lead in originality, while LLMs tend to excel in more technical aspects like readability or structure. Humor is an especially challenging aspect where most LLMs utterly fail, but the best three models do succeed at achieving human-like ratings, contrasting with results on older LLMs that showed their lack of grasp of human humor (Jentzsch and Kersting, 2023)." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 664, + 525, + 731 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 664, + 525, + 731 + ], + "spans": [ + { + "bbox": [ + 302, + 664, + 525, + 731 + ], + "type": "text", + "content": "Interesting avenues for future work include evaluation of different literary genres, languages other than English, and studying whether the quality of the generated stories can be improved with prompt engineering or fine-tuning." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 733, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 733, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 733, + 525, + 772 + ], + "type": "text", + "content": "Selected stories from our corpus (available at https://doi.org/10.5281/zenodo.8435671, together with all rating data) are in Appendix E." 
+ } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "14512" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 71, + 130, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 71, + 130, + 83 + ], + "spans": [ + { + "bbox": [ + 69, + 71, + 130, + 83 + ], + "type": "text", + "content": "Limitations" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 96, + 291, + 380 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 96, + 291, + 380 + ], + "spans": [ + { + "bbox": [ + 69, + 96, + 291, + 380 + ], + "type": "text", + "content": "Commercial LLMs and reproducibility While some of the LLMs considered are proper scientific artifacts, trained with a documented methodology and whose code and weights are available, others are closed commercial products and there is little public information about them, hindering reproducibility. While we report version numbers (where available) and access dates in Appendix A, and we publish the generated outputs so that the rating process is reproducible, the prompting/generation process itself may not be reproducible in the future for these models, as some of these products are updated without notice and without providing access to previous versions. However, we believe that including commercial models is valuable, as they are widely considered to provide the best quality results at the time of writing (which has been confirmed by our analysis), and these data points can still be used as a measuring stick against which to compare open models in the present and future."
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 394, + 291, + 650 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 394, + 291, + 650 + ], + "spans": [ + { + "bbox": [ + 69, + 394, + 291, + 650 + ], + "type": "text", + "content": "Limitations of the analysis Rating creative writing is necessarily a highly subjective process. Furthermore, since our raters were volunteers, we did not ask each of them to mark the full 65 stories in the corpus but just a subset, so our sample size is limited. We have provided the necessary details so that the reader can assess the variability of the data (sample sizes, standard deviations, and interrater agreement, which is reasonably high given the subjectivity of the task); and we have been careful not to make overarching claims. In this respect, we have also taken into account that our sample of human writers cannot be assumed to be representative of \"human creative writing ability\" as a whole, but is only provided as a reference point of interest; and that our evaluation is focused on a specific genre, so claims of the form \"LLMs are better/equal/worse than humans at creative writing\" cannot be made with an evaluation like ours." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 665, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 665, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 665, + 291, + 772 + ], + "type": "text", + "content": "Scope Our analysis focuses on a specific genre, and on English language, so the results do not necessarily generalize to other genres and/or languages. However, conducting a wider evaluation in this respect would not be possible with our resources, so we chose to fix these variables and focus on conducting a detailed evaluation on a large number of LLMs instead." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 305, + 71, + 392, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 71, + 392, + 83 + ], + "spans": [ + { + "bbox": [ + 305, + 71, + 392, + 83 + ], + "type": "text", + "content": "Ethics Statement" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 305, + 92, + 525, + 239 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 92, + 525, + 239 + ], + "spans": [ + { + "bbox": [ + 305, + 92, + 525, + 239 + ], + "type": "text", + "content": "While the use of conversational LLMs has raised various ethical challenges, creative writing has been argued to be one of the best uses for these tools from a human-centered AI point of view, as long as AI-generated stories are identified as such to avoid misleading readers or publishers (Sison et al., 2023). In our study, raters were blinded to story authorship but they were previously informed that they would be dealing with AI and human-generated stories. In the published corpus, each story is identified as human or AI-authored." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 305, + 241, + 525, + 280 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 241, + 525, + 280 + ], + "spans": [ + { + "bbox": [ + 305, + 241, + 525, + 280 + ], + "type": "text", + "content": "All participants in the evaluation (as raters or writers) were volunteers, and the demand on their time was kept accordingly low." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 305, + 291, + 399, + 304 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 291, + 399, + 304 + ], + "spans": [ + { + "bbox": [ + 305, + 291, + 399, + 304 + ], + "type": "text", + "content": "Acknowledgments" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 305, + 312, + 526, + 487 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 312, + 526, + 487 + ], + "spans": [ + { + "bbox": [ + 305, + 312, + 526, + 487 + ], + "type": "text", + "content": "The first author was funded by the European Research Council (ERC), under the Horizon Europe research and innovation programme (SALSA, grant agreement No 101100615), ERDF/MICINN-AEI (SCANNER-UDC, PID2020-113230RB-C21), Xunta de Galicia (ED431C 2020/11), and Centro de Investigación de Galicia \"CITIC\", funded by the Xunta de Galicia through the collaboration agreement between the Consellería de Cultura, Educación, Formación Profesional e Universidades and the Galician universities for the reinforcement of the research centres of the Galician University System (CIGUS)." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 305, + 489, + 525, + 622 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 489, + 525, + 622 + ], + "spans": [ + { + "bbox": [ + 305, + 489, + 525, + 622 + ], + "type": "text", + "content": "We thank Olga Zamaraeva for comments on preliminary versions of this work, and two anonymous reviewers for their helpful comments. Last, but not least, we thank our volunteers who participated in the writing and grading of stories, in alphabetical order: Jayda Franks, Bree Glasbergen, Ola Kwintowski, Jay Ludowyke, Kyle Mackenzie, Kirsty Maclachlan, Caitlin Noakes, Rachelle Raco, Kylie Ryan and Josephine Stewart. Credit for each individual story can be found in the corpus." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 305, + 645, + 361, + 657 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 645, + 361, + 657 + ], + "spans": [ + { + "bbox": [ + 305, + 645, + 361, + 657 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 305, + 665, + 525, + 772 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 305, + 665, + 525, + 719 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 665, + 525, + 719 + ], + "spans": [ + { + "bbox": [ + 305, + 665, + 525, + 719 + ], + "type": "text", + "content": "Yuvanesh Anand, Zack Nussbaum, Brandon Duderstadt, Benjamin M. Schmidt, and Andriy Mulyar. 2023a. GPT4All: Training an assistant-style chatbot with large-scale data distillation from GPT-3.5-Turbo. Technical report." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 305, + 728, + 525, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 728, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 305, + 728, + 525, + 772 + ], + "type": "text", + "content": "Yuvanesh Anand, Zack Nussbaum, Brandon Duderstadt, Benjamin M. Schmidt, Adam Treat, and Andriy Mulyar. 2023b. GPT4All-J: An Apache-2 licensed assistant-style chatbot. Technical report." 
+ } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "text", + "content": "14513" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 289, + 772 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 69, + 72, + 289, + 126 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 72, + 289, + 126 + ], + "spans": [ + { + "bbox": [ + 69, + 72, + 289, + 126 + ], + "type": "text", + "content": "Lorin W. Anderson and David R. Krathwohl, editors. 2001. A Taxonomy for Learning, Teaching, and Assessing. A Revision of Bloom's Taxonomy of Educational Objectives, 2 edition. Allyn & Bacon, New York." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 134, + 289, + 332 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 134, + 289, + 332 + ], + "spans": [ + { + "bbox": [ + 69, + 134, + 289, + 332 + ], + "type": "text", + "content": "Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. 
Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, and Jared Kaplan. 2022. Constitutional AI: Harmlessness from AI feedback. Technical report." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 338, + 289, + 404 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 338, + 289, + 404 + ], + "spans": [ + { + "bbox": [ + 69, + 338, + 289, + 404 + ], + "type": "text", + "content": "Anja Belz and Ehud Reiter. 2006. Comparing automatic and human evaluation of NLG systems. In 11th Conference of the European Chapter of the Association for Computational Linguistics, pages 313-320, Trento, Italy. Association for Computational Linguistics." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 411, + 289, + 488 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 411, + 289, + 488 + ], + "spans": [ + { + "bbox": [ + 69, + 411, + 289, + 488 + ], + "type": "text", + "content": "Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. 2023. Pythia: A suite for analyzing large language models across training and scaling. Technical report." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 495, + 289, + 647 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 495, + 289, + 647 + ], + "spans": [ + { + "bbox": [ + 69, + 495, + 289, + 647 + ], + "type": "text", + "content": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 655, + 289, + 710 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 655, + 289, + 710 + ], + "spans": [ + { + "bbox": [ + 69, + 655, + 289, + 710 + ], + "type": "text", + "content": "Michael D Carey, Shelley Davidow, and Paul Williams. 2022. Re-imagining narrative writing and assessment: a post-NAPLAN craft-based rubric for creative writing. The Australian Journal of Language and Literacy, 45(1):33-48." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 717, + 289, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 717, + 289, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 717, + 289, + 772 + ], + "type": "text", + "content": "Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, and Chiyuan Zhang. 2023. Quantifying memorization across neural language models. In International Conference on Learning Representations (ICLR)." 
+ } + ] + } + ], + "index": 6 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 524, + 772 + ], + "type": "list", + "angle": 0, + "index": 20, + "blocks": [ + { + "bbox": [ + 305, + 72, + 524, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 72, + 524, + 116 + ], + "spans": [ + { + "bbox": [ + 305, + 72, + 524, + 116 + ], + "type": "text", + "content": "Tuhin Chakrabarty, Philippe Laban, Divyansh Agarwal, Smaranda Muresan, and Chien-Sheng Wu. 2023. Art or artifice? Large language models and the false promise of creativity." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 304, + 126, + 524, + 193 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 126, + 524, + 193 + ], + "spans": [ + { + "bbox": [ + 304, + 126, + 524, + 193 + ], + "type": "text", + "content": "Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An open-source chatbot impressing GPT-4 with " + }, + { + "bbox": [ + 304, + 126, + 524, + 193 + ], + "type": "inline_equation", + "content": "90\\%" + }, + { + "bbox": [ + 304, + 126, + 524, + 193 + ], + "type": "text", + "content": " ChatGPT quality. Technical report." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 304, + 203, + 524, + 280 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 203, + 524, + 280 + ], + "spans": [ + { + "bbox": [ + 304, + 203, + 524, + 280 + ], + "type": "text", + "content": "John Joon Young Chung, Wooseok Kim, Kang Min Yoo, Hwaran Lee, Eytan Adar, and Minsuk Chang. 2022. TaleBrush: Sketching stories with generative pretrained language models. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI '22, New York, NY, USA. Association for Computing Machinery." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 304, + 290, + 524, + 390 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 290, + 524, + 390 + ], + "spans": [ + { + "bbox": [ + 304, + 290, + 524, + 390 + ], + "type": "text", + "content": "Elizabeth Clark, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, and Noah A. Smith. 2021. All that's 'human' is not gold: Evaluating human evaluation of generated text. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7282-7296, Online. Association for Computational Linguistics." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 304, + 400, + 524, + 433 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 400, + 524, + 433 + ], + "spans": [ + { + "bbox": [ + 304, + 400, + 524, + 433 + ], + "type": "text", + "content": "Jacob Cohen. 1968. Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit. Psychological Bulletin, 70(4):213-220." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 444, + 524, + 477 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 444, + 524, + 477 + ], + "spans": [ + { + "bbox": [ + 304, + 444, + 524, + 477 + ], + "type": "text", + "content": "Shelley Davidow and Paul Williams. 2016. Playing With Words: An Introduction to Creative Craft. Bloomsbury Academic." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 487, + 524, + 510 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 487, + 524, + 510 + ], + "spans": [ + { + "bbox": [ + 304, + 487, + 524, + 510 + ], + "type": "text", + "content": "Annie Dillard. 1981. Contemporary prose styles. Twentieth Century Literature, 27:207-222." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 520, + 524, + 554 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 520, + 524, + 554 + ], + "spans": [ + { + "bbox": [ + 304, + 520, + 524, + 554 + ], + "type": "text", + "content": "Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, Lei Li, and Zhifang Sui. 2023. A survey on in-context learning." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 564, + 524, + 587 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 564, + 524, + 587 + ], + "spans": [ + { + "bbox": [ + 304, + 564, + 524, + 587 + ], + "type": "text", + "content": "Giorgio Franceschelli and Mirco Musolesi. 2023. On the creativity of large language models." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 597, + 524, + 662 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 597, + 524, + 662 + ], + "spans": [ + { + "bbox": [ + 304, + 597, + 524, + 662 + ], + "type": "text", + "content": "Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2021. The Pile: An 800GB dataset of diverse text for language modeling. CoRR, abs/2101.00027." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 673, + 524, + 718 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 673, + 524, + 718 + ], + "spans": [ + { + "bbox": [ + 304, + 673, + 524, + 718 + ], + "type": "text", + "content": "Eduardo C. Garrido-Merchan, José Luis Arroyo-Barrigüete, and Roberto Gozalo-Brihuela. 2023. Simulating H.P. Lovecraft horror literature with the ChatGPT large language model." 
+ } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 728, + 524, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 728, + 524, + 772 + ], + "spans": [ + { + "bbox": [ + 304, + 728, + 524, + 772 + ], + "type": "text", + "content": "Xinyang Geng, Arnav Gudibande, Hao Liu, Eric Wallace, Pieter Abbeel, Sergey Levine, and Dawn Song. 2023. Koala: A dialogue model for academic research. Blog post." + } + ] + } + ], + "index": 19 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "text", + "content": "14514" + } + ] + } + ], + "index": 21 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 291, + 772 + ], + "type": "list", + "angle": 0, + "index": 10, + "blocks": [ + { + "bbox": [ + 69, + 72, + 291, + 149 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 72, + 291, + 149 + ], + "spans": [ + { + "bbox": [ + 69, + 72, + 291, + 149 + ], + "type": "text", + "content": "Aidan Gilson, Conrad W Safranek, Thomas Huang, Vimig Socrates, Ling Chi, Richard Andrew Taylor, and David Chartash. 2023. How does ChatGPT perform on the United States Medical Licensing Examination? The implications of large language models for medical education and knowledge assessment. JMIR Med Educ, 9:e45312." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 158, + 290, + 214 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 158, + 290, + 214 + ], + "spans": [ + { + "bbox": [ + 69, + 158, + 290, + 214 + ], + "type": "text", + "content": "Jian Guan, Fei Huang, Zhihao Zhao, Xiaoyan Zhu, and Minlie Huang. 2020. A Knowledge-Enhanced Pretraining Model for Commonsense Story Generation. 
Transactions of the Association for Computational Linguistics, 8:93–108." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 222, + 290, + 322 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 222, + 290, + 322 + ], + "spans": [ + { + "bbox": [ + 69, + 222, + 290, + 322 + ], + "type": "text", + "content": "Jian Guan, Xiaoxi Mao, Changjie Fan, Zitao Liu, Wenbiao Ding, and Minlie Huang. 2021. Long text generation by modeling sentence-level and discourse-level coherence. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6379-6393, Online. Association for Computational Linguistics." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 331, + 290, + 386 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 331, + 290, + 386 + ], + "spans": [ + { + "bbox": [ + 69, + 331, + 290, + 386 + ], + "type": "text", + "content": "Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. Proceedings of the International Conference on Learning Representations (ICLR)." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 395, + 290, + 429 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 395, + 290, + 429 + ], + "spans": [ + { + "bbox": [ + 69, + 395, + 290, + 429 + ], + "type": "text", + "content": "Sophie Jentzsch and Kristian Kersting. 2023. ChatGPT is fun, but it is not funny! Humor is still challenging large language models." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 438, + 290, + 471 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 438, + 290, + 471 + ], + "spans": [ + { + "bbox": [ + 69, + 438, + 290, + 471 + ], + "type": "text", + "content": "Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Xing Wang, and Zhaopeng Tu. 2023. Is ChatGPT a good translator? Yes with GPT-4 as the engine." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 480, + 290, + 557 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 480, + 290, + 557 + ], + "spans": [ + { + "bbox": [ + 69, + 480, + 290, + 557 + ], + "type": "text", + "content": "Marzena Karpinska, Nader Akoury, and Mohit Iyyer. 2021. The perils of using Mechanical Turk to evaluate open-ended text generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1265-1285, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 565, + 290, + 589 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 565, + 290, + 589 + ], + "spans": [ + { + "bbox": [ + 69, + 565, + 290, + 589 + ], + "type": "text", + "content": "Jeri Kroll. 1997. A or C: Can we assess creative work fairly? TEXT, 1(1):1-5." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 597, + 290, + 686 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 597, + 290, + 686 + ], + "spans": [ + { + "bbox": [ + 69, + 597, + 290, + 686 + ], + "type": "text", + "content": "Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi-Rui Tam, Keith Stevens, Abdullah Barhoum, Nguyen Minh Duc, Oliver Stanley, Richard Nagyfi, Shahul ES, Sameer Suri, David Glushkov, Arnav Dantuluri, Andrew Maguire, Christoph Schuhmann, Huu Nguyen, and Alexander Mattick. 2023. 
OpenAssistant Conversations - democratizing large language model alignment." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 694, + 290, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 694, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 694, + 290, + 772 + ], + "type": "text", + "content": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics," + } + ] + } + ], + "index": 9 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 525, + 772 + ], + "type": "list", + "angle": 0, + "index": 21, + "blocks": [ + { + "bbox": [ + 314, + 72, + 525, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 72, + 525, + 95 + ], + "spans": [ + { + "bbox": [ + 314, + 72, + 525, + 95 + ], + "type": "text", + "content": "pages 7871-7880, Online. Association for Computational Linguistics." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 304, + 105, + 525, + 291 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 105, + 525, + 291 + ], + "spans": [ + { + "bbox": [ + 304, + 105, + 525, + 291 + ], + "type": "text", + "content": "Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher Ré, Diana Acosta-Navas, Drew A. 
Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda. 2022. Holistic evaluation of language models." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 301, + 525, + 380 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 301, + 525, + 380 + ], + "spans": [ + { + "bbox": [ + 304, + 301, + 525, + 380 + ], + "type": "text", + "content": "Piotr Mirowski, Kory W. Mathewson, Jaylen Pittman, and Richard Evans. 2023. Co-writing screenplays and theatre scripts with language models: Evaluation by industry professionals. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI '23, New York, NY, USA. Association for Computing Machinery." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 389, + 525, + 412 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 389, + 525, + 412 + ], + "spans": [ + { + "bbox": [ + 304, + 389, + 525, + 412 + ], + "type": "text", + "content": "S. Norris. 2013. Studying Creative Writing. Creative Writing Studies. Frontinus Limited." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 422, + 525, + 521 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 422, + 525, + 521 + ], + "spans": [ + { + "bbox": [ + 304, + 422, + 525, + 521 + ], + "type": "text", + "content": "Maxwell Nye, Michael Henry Tessler, Joshua B. Tenenbaum, and Brenden M. Lake. 2021. Improving coherence and consistency in neural sequence models with dual-system, neuro-symbolic reasoning. 
In Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021, Advances in Neural Information Processing Systems, pages 25192-25204. Neural information processing systems foundation." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 531, + 525, + 543 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 531, + 525, + 543 + ], + "spans": [ + { + "bbox": [ + 304, + 531, + 525, + 543 + ], + "type": "text", + "content": "OpenAI. 2023. Gpt-4 technical report. Technical report." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 553, + 525, + 575 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 553, + 525, + 575 + ], + "spans": [ + { + "bbox": [ + 304, + 553, + 525, + 575 + ], + "type": "text", + "content": "George Orwell. 1946. Politics and the English language. Horizon, 13:252-265." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 586, + 525, + 695 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 586, + 525, + 695 + ], + "spans": [ + { + "bbox": [ + 304, + 586, + 525, + 695 + ], + "type": "text", + "content": "Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems, volume 35, pages 27730-27744. Curran Associates, Inc." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 706, + 525, + 729 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 706, + 525, + 729 + ], + "spans": [ + { + "bbox": [ + 304, + 706, + 525, + 729 + ], + "type": "text", + "content": "Les Perelman. 2018. 
Towards a new NAPLAN: Testing to the teaching. Journal of Professional Learning, 2." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 738, + 525, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 738, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 304, + 738, + 525, + 772 + ], + "type": "text", + "content": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners." + } + ] + } + ], + "index": 20 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "text", + "content": "14515" + } + ] + } + ], + "index": 22 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 292, + 259 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 72, + 292, + 259 + ], + "spans": [ + { + "bbox": [ + 69, + 72, + 292, + 259 + ], + "type": "text", + "content": "Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal V. Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M. Rush. 2022. Multitask prompted training enables zero-shot task generalization. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. 
OpenReview.net." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 269, + 291, + 312 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 269, + 291, + 312 + ], + "spans": [ + { + "bbox": [ + 69, + 269, + 291, + 312 + ], + "type": "text", + "content": "Piotr Sawicki, Marek Grzes, Fabricio Goes, Dan Brown, Max Peeperkorn, and Aisha Khatun. 2023. Bits of grass: Does GPT already know how to write like Whitman?" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 323, + 291, + 400 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 323, + 291, + 400 + ], + "spans": [ + { + "bbox": [ + 69, + 323, + 291, + 400 + ], + "type": "text", + "content": "Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, and Christopher D. Manning. 2019. Do massively pretrained language models make better storytellers? In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 843-861, Hong Kong, China. Association for Computational Linguistics." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 411, + 291, + 465 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 411, + 291, + 465 + ], + "spans": [ + { + "bbox": [ + 69, + 411, + 291, + 465 + ], + "type": "text", + "content": "Alejo Jose G. Sison, Marco Tulio Daza, Roberto Gozalo-Brizuela, and Eduardo C. Garrido-Merchan. 2023. ChatGPT: More than a weapon of mass deception: Ethical challenges and responses from the human-centered artificial intelligence (HCAI) perspective." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 476, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 476, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 476, + 291, + 772 + ], + "type": "text", + "content": "Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. 
Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshit Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmuller, Andrew Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakaş, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartlomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, César Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 313, + 72, + 526, + 763 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 526, + 763 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 526, + 763 + ], + "type": "text", + "content": "Ramirez, Clara E. 
Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Mosegui Gonzalez, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodola, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martinez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, German Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo Jaimovitch-Lopez, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernandez Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Kocoń, Jana Thompson, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez-Orallo, Joseph Boudeman, Joseph Jones, Joshua B. Tenenbaum, Joshua S. 
Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. Dhole, Kevin Gimpel, Kevin Omondi, Kory Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras-Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem Senel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramírez Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Matyás Schubert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Cohen, Michael Gu, Michael Ivanitskiy, Michael Starritt, Michael Strube, Michal Swedrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun Peng, Nathan" + } + ] + } + ], + "index": 5 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "text", + "content": "14516" + } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "bbox": [ + 79, + 72, + 291, + 686 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 72, + 291, + 686 + ], + "spans": [ + { + "bbox": [ + 79, + 72, + 291, + 686 + ], + "type": "text", + "content": "Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. 
Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramón Risco Delgado, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Ryan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M.
Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Theo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Timothy Telleen-Lawton, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, Zirui Wang, and Ziyi Wu. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 695, + 289, + 718 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 695, + 289, + 718 + ], + "spans": [ + { + "bbox": [ + 69, + 695, + 289, + 718 + ], + "type": "text", + "content": "W. Strunk and E.B. White. 2008[1918]. The Elements of Style. BN Publishing, New York." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 728, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 728, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 728, + 291, + 772 + ], + "type": "text", + "content": "Ben Swanson, Kory Mathewson, Ben Pietrzak, Sherol Chen, and Monica Dinalescu. 2021. Story centaur: Large language model few shot learning as a creative writing tool.
In Proceedings of the 16th Confer-" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 314, + 72, + 525, + 116 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 72, + 525, + 116 + ], + "spans": [ + { + "bbox": [ + 314, + 72, + 525, + 116 + ], + "type": "text", + "content": "ence of the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 244-256, Online. Association for Computational Linguistics." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 304, + 123, + 526, + 212 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 123, + 526, + 212 + ], + "spans": [ + { + "bbox": [ + 304, + 123, + 526, + 212 + ], + "type": "text", + "content": "Bowen Tan, Zichao Yang, Maruan Al-Shedivat, Eric Xing, and Zhiting Hu. 2021. Progressive generation of long text with pretrained language models. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4313-4324, Online. Association for Computational Linguistics." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 303, + 219, + 525, + 275 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 219, + 525, + 275 + ], + "spans": [ + { + "bbox": [ + 303, + 219, + 525, + 275 + ], + "type": "text", + "content": "Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford Alpaca: An instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 303, + 282, + 526, + 513 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 282, + 526, + 513 + ], + "spans": [ + { + "bbox": [ + 303, + 282, + 526, + 513 + ], + "type": "text", + "content": "Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Vincent Zhao, Yanqi Zhou, Chung-Ching Chang, Igor Krivokon, Will Rusch, Marc Pickett, Pranesh Srinivasan, Laichee Man, Kathleen Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera-Arcas, Claire Cui, Marian Croak, Ed Chi, and Quoc Le. 2022. Lamda: Language models for dialog applications." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 303, + 520, + 525, + 553 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 520, + 525, + 553 + ], + "spans": [ + { + "bbox": [ + 303, + 520, + 525, + 553 + ], + "type": "text", + "content": "E.P. Torrance. 1974. Torrance Tests of Creative Thinking: Verbal Tests, Forms A and B, Figural Tests, Forms A and B. Norms-technical manual. Xerox." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 303, + 561, + 525, + 628 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 561, + 525, + 628 + ], + "spans": [ + { + "bbox": [ + 303, + 561, + 525, + 628 + ], + "type": "text", + "content": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. LLaMA: Open and efficient foundation language models." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 303, + 635, + 525, + 680 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 635, + 525, + 680 + ], + "spans": [ + { + "bbox": [ + 303, + 635, + 525, + 680 + ], + "type": "text", + "content": "Ben Wang and Aran Komatsuzaki. 2021. GPT-J-6B: A 6 billion parameter autoregressive language model. https://github.com/kingoflolz/mesh-transformer-jax." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 303, + 687, + 525, + 731 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 687, + 525, + 731 + ], + "spans": [ + { + "bbox": [ + 303, + 687, + 525, + 731 + ], + "type": "text", + "content": "Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2022. Self-Instruct: Aligning language model with self generated instructions." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 303, + 739, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 739, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 303, + 739, + 525, + 772 + ], + "type": "text", + "content": "Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022. 
Finetuned" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "text", + "content": "14517" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "bbox": [ + 79, + 72, + 290, + 117 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 72, + 290, + 117 + ], + "spans": [ + { + "bbox": [ + 79, + 72, + 290, + 117 + ], + "type": "text", + "content": "language models are zero-shot learners. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 124, + 290, + 169 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 124, + 290, + 169 + ], + "spans": [ + { + "bbox": [ + 69, + 124, + 290, + 169 + ], + "type": "text", + "content": "Beck Wise and Ariella van Luyn. 2020. Not 'all writing is creative writing' and that's ok: inter/disciplinary collaboration in writing and writing studies. TEXT, 24(Special 59):1-15." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 68, + 177, + 290, + 211 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 177, + 290, + 211 + ], + "spans": [ + { + "bbox": [ + 68, + 177, + 290, + 211 + ], + "type": "text", + "content": "Zhuohan Xie, Trevor Cohn, and Jey Han Lau. 2023. Can very large pretrained language models learn storytelling with a few examples?"
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 218, + 290, + 307 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 218, + 290, + 307 + ], + "spans": [ + { + "bbox": [ + 69, + 218, + 290, + 307 + ], + "type": "text", + "content": "Peng Xu, Mostofa Patwary, Mohammad Shoeybi, Raul Puri, Pascale Fung, Anima Anandkumar, and Bryan Catanzaro. 2020. MEGATRON-CNTRL: Controllable story generation with external knowledge using large-scale language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2831-2845, Online. Association for Computational Linguistics." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 315, + 290, + 381 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 315, + 290, + 381 + ], + "spans": [ + { + "bbox": [ + 69, + 315, + 290, + 381 + ], + "type": "text", + "content": "Ann Yuan, Andy Coenen, Emily Reif, and Daphne Ippolito. 2022. Wordcraft: Story writing with large language models. In 27th International Conference on Intelligent User Interfaces, IUI '22, page 841-852, New York, NY, USA. Association for Computing Machinery." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 390, + 290, + 467 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 390, + 290, + 467 + ], + "spans": [ + { + "bbox": [ + 69, + 390, + 290, + 467 + ], + "type": "text", + "content": "Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2023. A survey of large language models." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 68, + 477, + 188, + 489 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 477, + 188, + 489 + ], + "spans": [ + { + "bbox": [ + 68, + 477, + 188, + 489 + ], + "type": "text", + "content": "A Model access dates" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 498, + 290, + 634 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 498, + 290, + 634 + ], + "spans": [ + { + "bbox": [ + 67, + 498, + 290, + 634 + ], + "type": "text", + "content": "Table 3 shows the dates on which the stories were generated for each model. For future experimental reference, we highlight that the initial public disclosure of this paper online occurred on 2023-10-09. Before this date, only the human authors and raters were aware of the project (from May 2023), and anonymous reviewers had access (from June 23, 2023). Consequently, LLMs with a knowledge cutoff prior to 2023-10-09 are likely to have no or minimal risk of training set contamination." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 68, + 644, + 182, + 657 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 644, + 182, + 657 + ], + "spans": [ + { + "bbox": [ + 68, + 644, + 182, + 657 + ], + "type": "text", + "content": "B Hyperparameters" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 67, + 665, + 290, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 665, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 665, + 290, + 772 + ], + "type": "text", + "content": "We did not tune any hyperparameters of the models. For commercial models, we ran each model as presented in its web user interface, except in the case of Bing Chat, where we chose Creative mode. For open-source models, we used the default parameters from the web UI provided at https://chat.lmsys.org/, which set temperature to 0.7."
+ } + ] + } + ], + "index": 9 + }, + { + "type": "table", + "bbox": [ + 305, + 68, + 525, + 266 + ], + "blocks": [ + { + "bbox": [ + 305, + 68, + 525, + 266 + ], + "lines": [ + { + "bbox": [ + 305, + 68, + 525, + 266 + ], + "spans": [ + { + "bbox": [ + 305, + 68, + 525, + 266 + ], + "type": "table", + "html": "
ModelAccess date
alpaca2023-04-07
bard2023-04-11
bing2023-04-11
chatgpt-gpt352023-04-11
chatgpt-gpt42023-04-14
claude122023-04-04
dolly2023-04-14
gpt4all-j2023-04-14
koala2023-04-07
oa2023-04-16
stablelm2023-04-20
vicuna2023-04-07
humans2023-05-01 to 2023-05-12
", + "image_path": "e5f7329210e3e53ae3920cad28564c7e4f057bd1fd3da7f1e1a28e85ebc946a3.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "table_body" + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 275, + 525, + 299 + ], + "lines": [ + { + "bbox": [ + 302, + 275, + 525, + 299 + ], + "spans": [ + { + "bbox": [ + 302, + 275, + 525, + 299 + ], + "type": "text", + "content": "Table 3: Access dates for each model (and dates of writing for the human stories), in YYYY-MM-DD format." + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 302, + 319, + 468, + 332 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 319, + 468, + 332 + ], + "spans": [ + { + "bbox": [ + 302, + 319, + 468, + 332 + ], + "type": "text", + "content": "C Detailed rubric information" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 341, + 525, + 476 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 341, + 525, + 476 + ], + "spans": [ + { + "bbox": [ + 302, + 341, + 525, + 476 + ], + "type": "text", + "content": "The creative writing rubric was designed for the assessment of creative writing scripts in university creative writing courses, to evaluate the above competencies: criteria 1-5 measure general creative writing capacities, and criteria 6-10 measure specific task-related proficiency. Each of the ten criteria is awarded 10 points out of a total 100 points. The rubric has been specifically designed to measure the quality of writing craft and to avoid formulaic, rule-based writing." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 311, + 486, + 525, + 772 + ], + "type": "list", + "angle": 0, + "index": 20, + "blocks": [ + { + "bbox": [ + 312, + 486, + 524, + 513 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 486, + 524, + 513 + ], + "spans": [ + { + "bbox": [ + 312, + 486, + 524, + 513 + ], + "type": "text", + "content": "1. 
Overall/ holistic/ cohesive readability of the story (not just a compilation of elements)." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 311, + 522, + 525, + 562 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 522, + 525, + 562 + ], + "spans": [ + { + "bbox": [ + 311, + 522, + 525, + 562 + ], + "type": "text", + "content": "2. Use of key narrative elements - vocabulary choice, imagery, setting, themes, dialogue, characterisation, point of view." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 311, + 571, + 525, + 624 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 571, + 525, + 624 + ], + "spans": [ + { + "bbox": [ + 311, + 571, + 525, + 624 + ], + "type": "text", + "content": "3. Structural elements and presentation, which reflect control of elements such as spelling, grammar, punctuation, paragraphing, and formatting" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 311, + 634, + 525, + 674 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 634, + 525, + 674 + ], + "spans": [ + { + "bbox": [ + 311, + 634, + 525, + 674 + ], + "type": "text", + "content": "4. Overall plot logic: hook, conflict, initial crisis, rising and falling action, denouement/ resolution (Freytag's pyramid)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 311, + 683, + 525, + 724 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 683, + 525, + 724 + ], + "spans": [ + { + "bbox": [ + 311, + 683, + 525, + 724 + ], + "type": "text", + "content": "5. Creativity/innovation/originality/research: credibility, new knowledge, avoidance of cliché and derivative tropes" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 311, + 732, + 524, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 732, + 524, + 772 + ], + "spans": [ + { + "bbox": [ + 311, + 732, + 524, + 772 + ], + "type": "text", + "content": "6. 
Incorporation of the John Kennedy Toole style of writing using the indicators/ characteristics listed below" + } + ] + } + ], + "index": 19 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "14518" + } + ] + } + ], + "index": 21 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "bbox": [ + 72, + 71, + 290, + 232 + ], + "type": "list", + "angle": 0, + "index": 4, + "blocks": [ + { + "bbox": [ + 77, + 71, + 289, + 98 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 71, + 289, + 98 + ], + "spans": [ + { + "bbox": [ + 77, + 71, + 289, + 98 + ], + "type": "text", + "content": "7. Understanding and habitation of the epic genre of heroic/legendary adventure" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 76, + 107, + 289, + 132 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 107, + 289, + 132 + ], + "spans": [ + { + "bbox": [ + 76, + 107, + 289, + 132 + ], + "type": "text", + "content": "8. Description and credibility of a single combat scene" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 76, + 143, + 290, + 195 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 143, + 290, + 195 + ], + "spans": [ + { + "bbox": [ + 76, + 143, + 290, + 195 + ], + "type": "text", + "content": "9. Accurate inclusion of two main characters Ignatius J. Reilly and a pterodactyl in action and description (see below for character description)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 72, + 206, + 289, + 232 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 72, + 206, + 289, + 232 + ], + "spans": [ + { + "bbox": [ + 72, + 206, + 289, + 232 + ], + "type": "text", + "content": "10. 
Use of a characteristically dark humorous tone." + } + ] + } + ], + "index": 3 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 79, + 243, + 269, + 256 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 243, + 269, + 256 + ], + "spans": [ + { + "bbox": [ + 79, + 243, + 269, + 256 + ], + "type": "text", + "content": "The 1-10 scale is divided into three ranges:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 81, + 265, + 290, + 511 + ], + "type": "list", + "angle": 0, + "index": 9, + "blocks": [ + { + "bbox": [ + 81, + 265, + 290, + 346 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 265, + 290, + 346 + ], + "spans": [ + { + "bbox": [ + 81, + 265, + 290, + 346 + ], + "type": "text", + "content": "- Emerging (1-4): stories in this range demonstrate an early grasp of storytelling elements, but falter in execution or depth. When evaluating humans, they correspond to novice writers who need feedback and guidance to improve the story." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 81, + 355, + 290, + 449 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 355, + 290, + 449 + ], + "spans": [ + { + "bbox": [ + 81, + 355, + 290, + 449 + ], + "type": "text", + "content": "Competent (5-8): stories that showcase a good grasp of the storytelling principle being evaluated (coherent plot, well-defined characters, etc.). While there might be room for improvement, these stories effectively engage the reader and convey their intended messages." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 81, + 459, + 290, + 511 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 459, + 290, + 511 + ], + "spans": [ + { + "bbox": [ + 81, + 459, + 290, + 511 + ], + "type": "text", + "content": "- Sophisticated (9-10): these stories exhibit exceptional mastery of the aspect being evaluated, resulting in a compelling and memorable read." 
+ } + ] + } + ], + "index": 8 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 67, + 521, + 291, + 629 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 521, + 291, + 629 + ], + "spans": [ + { + "bbox": [ + 67, + 521, + 291, + 629 + ], + "type": "text", + "content": "Toole style We provided raters with detailed information about the plot, setting, imagery, tone, characters, main protagonist, and derivative/imitative style of the author, taken from a generic and popular study guide (http://www.bookrags.com/studyguide-a-confederacy-of-dunces/#gsc.tab=0)." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 67, + 640, + 274, + 666 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 640, + 274, + 666 + ], + "spans": [ + { + "bbox": [ + 67, + 640, + 274, + 666 + ], + "type": "text", + "content": "D Box plots for each individual rubric item" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 67, + 675, + 289, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 675, + 289, + 714 + ], + "spans": [ + { + "bbox": [ + 67, + 675, + 289, + 714 + ], + "type": "text", + "content": "Figures 5 to 14 show the box plots summarizing the results for all rubric items, including those plots not featured in the main text." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 67, + 725, + 166, + 739 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 725, + 166, + 739 + ], + "spans": [ + { + "bbox": [ + 67, + 725, + 166, + 739 + ], + "type": "text", + "content": "E Sample stories" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 67, + 746, + 289, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 746, + 289, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 746, + 289, + 772 + ], + "type": "text", + "content": "We show in this section several sample stories from the corpus, chosen according to rating: the" + } + ] + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 305, + 105, + 523, + 322 + ], + "blocks": [ + { + "bbox": [ + 305, + 105, + 523, + 322 + ], + "lines": [ + { + "bbox": [ + 305, + 105, + 523, + 322 + ], + "spans": [ + { + "bbox": [ + 305, + 105, + 523, + 322 + ], + "type": "image", + "image_path": "aa13bfd651d9de022b8fd600fe0441c2b3651899b84951791b1bed2b1bc3ba7a.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 332, + 525, + 380 + ], + "lines": [ + { + "bbox": [ + 302, + 332, + 525, + 380 + ], + "spans": [ + { + "bbox": [ + 302, + 332, + 525, + 380 + ], + "type": "text", + "content": "Figure 5: Box plot comparing rubric item 1 (cohesion) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." 
+ } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_caption" + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 306, + 459, + 522, + 675 + ], + "blocks": [ + { + "bbox": [ + 306, + 459, + 522, + 675 + ], + "lines": [ + { + "bbox": [ + 306, + 459, + 522, + 675 + ], + "spans": [ + { + "bbox": [ + 306, + 459, + 522, + 675 + ], + "type": "image", + "image_path": "523eac89bbf4e81717f01cfa0e0dc3c2c2f929f230a0e3f7b4d6ab3e8ee37a94.jpg" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 687, + 525, + 735 + ], + "lines": [ + { + "bbox": [ + 302, + 687, + 525, + 735 + ], + "spans": [ + { + "bbox": [ + 302, + 687, + 525, + 735 + ], + "type": "text", + "content": "Figure 6: Box plot comparing rubric item 2 (key narrative elements) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_caption" + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "14519" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 70, + 104, + 290, + 322 + ], + "blocks": [ + { + "bbox": [ + 70, + 104, + 290, + 322 + ], + "lines": [ + { + "bbox": [ + 70, + 104, + 290, + 322 + ], + "spans": [ + { + "bbox": [ + 70, + 104, + 290, + 322 + ], + "type": "image", + "image_path": "0ebf78940c8609025c7593d9dfe1a37fa00292b10eba9c60de34cacdf7501243.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 332, + 291, + 381 + ], + "lines": [ + { + "bbox": [ + 67, + 332, + 291, + 381 + ], + "spans": [ + { + "bbox": [ + 67, + 332, + 291, + 
381 + ], + "type": "text", + "content": "Figure 7: Box plot comparing rubric item 3 (structural elements) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 305, + 105, + 524, + 322 + ], + "blocks": [ + { + "bbox": [ + 305, + 105, + 524, + 322 + ], + "lines": [ + { + "bbox": [ + 305, + 105, + 524, + 322 + ], + "spans": [ + { + "bbox": [ + 305, + 105, + 524, + 322 + ], + "type": "image", + "image_path": "45f0e75d2dfe07629611a43566fa069cf9a6dc73e112fba3beeeedd81234fbde.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 332, + 525, + 380 + ], + "lines": [ + { + "bbox": [ + 302, + 332, + 525, + 380 + ], + "spans": [ + { + "bbox": [ + 302, + 332, + 525, + 380 + ], + "type": "text", + "content": "Figure 9: Box plot comparing rubric item 5 (creativity) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 70, + 458, + 290, + 676 + ], + "blocks": [ + { + "bbox": [ + 70, + 458, + 290, + 676 + ], + "lines": [ + { + "bbox": [ + 70, + 458, + 290, + 676 + ], + "spans": [ + { + "bbox": [ + 70, + 458, + 290, + 676 + ], + "type": "image", + "image_path": "555ec6404a621caf381f66322d9a733f0169f425b5dd2e3d367fea3ea07d9183.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 686, + 291, + 734 + ], + "lines": [ + { + "bbox": [ + 67, + 686, + 291, + 734 + ], + "spans": [ + { + "bbox": [ + 67, + 686, + 291, + 734 + ], + "type": "text", + "content": "Figure 8: Box plot comparing rubric item 4 (plot logic) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. 
Notation as in Figure 1." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 305, + 458, + 524, + 676 + ], + "blocks": [ + { + "bbox": [ + 305, + 458, + 524, + 676 + ], + "lines": [ + { + "bbox": [ + 305, + 458, + 524, + 676 + ], + "spans": [ + { + "bbox": [ + 305, + 458, + 524, + 676 + ], + "type": "image", + "image_path": "4ed93337bd71e8bb3557a8cc1e68f5782fc8a927d3faeb796f64b0cbae44d64f.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 686, + 525, + 735 + ], + "lines": [ + { + "bbox": [ + 302, + 686, + 525, + 735 + ], + "spans": [ + { + "bbox": [ + 302, + 686, + 525, + 735 + ], + "type": "text", + "content": "Figure 10: Box plot comparing rubric item 6 (John Kennedy Toole style) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 313, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 313, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 313, + 791 + ], + "type": "text", + "content": "14520" + } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 16 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 70, + 105, + 290, + 322 + ], + "blocks": [ + { + "bbox": [ + 70, + 105, + 290, + 322 + ], + "lines": [ + { + "bbox": [ + 70, + 105, + 290, + 322 + ], + "spans": [ + { + "bbox": [ + 70, + 105, + 290, + 322 + ], + "type": "image", + "image_path": "c4b22cabac8095762b897f5691f0705556b631f3fa76b8517dbf96a65714361d.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 332, + 291, + 381 + ], + "lines": [ + { + "bbox": [ + 67, + 332, + 291, + 381 + ], + "spans": [ + { + "bbox": [ + 
67, + 332, + 291, + 381 + ], + "type": "text", + "content": "Figure 11: Box plot comparing rubric item 7 (epic genre) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 305, + 105, + 524, + 322 + ], + "blocks": [ + { + "bbox": [ + 305, + 105, + 524, + 322 + ], + "lines": [ + { + "bbox": [ + 305, + 105, + 524, + 322 + ], + "spans": [ + { + "bbox": [ + 305, + 105, + 524, + 322 + ], + "type": "image", + "image_path": "70a364e0f426758076047a0c01f11f2bbc7a878877917b3b74ea161170baf05f.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 332, + 527, + 381 + ], + "lines": [ + { + "bbox": [ + 302, + 332, + 527, + 381 + ], + "spans": [ + { + "bbox": [ + 302, + 332, + 527, + 381 + ], + "type": "text", + "content": "Figure 13: Box plot comparing rubric item 9 (accuracy of characters) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." 
+ } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 70, + 458, + 290, + 676 + ], + "blocks": [ + { + "bbox": [ + 70, + 458, + 290, + 676 + ], + "lines": [ + { + "bbox": [ + 70, + 458, + 290, + 676 + ], + "spans": [ + { + "bbox": [ + 70, + 458, + 290, + 676 + ], + "type": "image", + "image_path": "e901c5a707c1100a14d19b67b73c6d2b1f288a9fcff1d994921aa271518b283e.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 686, + 291, + 735 + ], + "lines": [ + { + "bbox": [ + 67, + 686, + 291, + 735 + ], + "spans": [ + { + "bbox": [ + 67, + 686, + 291, + 735 + ], + "type": "text", + "content": "Figure 12: Box plot comparing rubric item 8 (combat description) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 305, + 458, + 524, + 676 + ], + "blocks": [ + { + "bbox": [ + 305, + 458, + 524, + 676 + ], + "lines": [ + { + "bbox": [ + 305, + 458, + 524, + 676 + ], + "spans": [ + { + "bbox": [ + 305, + 458, + 524, + 676 + ], + "type": "image", + "image_path": "a296b001e9bba1feb5d0453cbae02979ce1ae9f73cabb5b219edd905edf26dde.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 686, + 525, + 735 + ], + "lines": [ + { + "bbox": [ + 302, + 686, + 525, + 735 + ], + "spans": [ + { + "bbox": [ + 302, + 686, + 525, + 735 + ], + "type": "text", + "content": "Figure 14: Box plot comparing rubric item 10 (dark humor) for stories generated by humans and 12 LLMs, sorted left to right by mean overall rating. Notation as in Figure 1." 
+ } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 311, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 311, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 311, + 791 + ], + "type": "text", + "content": "14521" + } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 17 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 291, + 179 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 291, + 179 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 291, + 179 + ], + "type": "text", + "content": "three top-rated stories, the best human-written story (which was also ranked fourth overall), the story ranked in the median, and the worst-rated. It is worth noting that, as explained in Section 3.4, our methodology is designed to provide fairness when comparing models, but not individual stories, as different stories are assigned to different raters. Thus, selection of stories by rating is necessarily noisy." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 193, + 209, + 206 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 193, + 209, + 206 + ], + "spans": [ + { + "bbox": [ + 67, + 193, + 209, + 206 + ], + "type": "text", + "content": "E.1 Overall best-rated story" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 66, + 212, + 289, + 238 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 212, + 289, + 238 + ], + "spans": [ + { + "bbox": [ + 66, + 212, + 289, + 238 + ], + "type": "text", + "content": "This story was generated by GPT-4. The ratings for this story are in Table 4." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 241, + 290, + 388 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 241, + 290, + 388 + ], + "spans": [ + { + "bbox": [ + 67, + 241, + 290, + 388 + ], + "type": "text", + "content": "It was a sweltering Louisiana afternoon, the air thick with the miasma of musty sweat and the scent of overripe vegetables. In the midst of the urban jungle, resplendent with the cacophony of honking cars and the chatter of the common rabble, stood a man who defied reason, common sense, and all natural laws of human decency. Ignatius J. Reilly, a behemoth of a man with an unruly mane of hair and a Byzantine beard, adjusted his ill-fitting, plaid trousers and surveyed the scene with a look of lofty disdain." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 391, + 290, + 539 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 391, + 290, + 539 + ], + "spans": [ + { + "bbox": [ + 67, + 391, + 290, + 539 + ], + "type": "text", + "content": "Suddenly, the air crackled with an electric charge, and a portal appeared before him. The very fabric of time and space seemed to rend asunder, and from this anachronistic rift emerged a pterodactyl - a creature so ancient that it had outlived the vast expanse of geologic time, only to find itself in the absurdity of the twentieth century. Its leathery wings stretched wide as the mists of time dissipated, and it let out an earsplitting screech that sent the common folk of New Orleans scattering like roaches from the light." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 540, + 290, + 635 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 540, + 290, + 635 + ], + "spans": [ + { + "bbox": [ + 67, + 540, + 290, + 635 + ], + "type": "text", + "content": "Ignatius, unphased by the sudden appearance of the primordial beast, surveyed the creature through his fogged-up monocle. 
\"Fortuna,\" he muttered, his voice dripping with both sarcasm and hot dog condiments, \"you spin your wheel with such capricious abandon. Have you not tired of hurling these preposterous challenges in my path?\"" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 637, + 290, + 730 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 637, + 290, + 730 + ], + "spans": [ + { + "bbox": [ + 67, + 637, + 290, + 730 + ], + "type": "text", + "content": "The pterodactyl, with its beady, reptilian eyes, locked its gaze upon Ignatius, its primitive brain sensing a formidable opponent in this lumbering, malodorous mass of a man. It screeched again, flapping its wings in a display of both aggression and arrogance, its razor-sharp beak snapping at the air." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 733, + 290, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 733, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 733, + 290, + 772 + ], + "type": "text", + "content": "Ignatius bellowed, raising his arms as if beseech- ing the heavens, \"Very well, foul beast! If it is battle you seek, you shall find no quarter here! I" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 71, + 525, + 111 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 525, + 111 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 525, + 111 + ], + "type": "text", + "content": "shall vanquish you as I have vanquished all who have dared to challenge the indomitable will of Ignatius J. 
Reilly!\"" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 113, + 526, + 220 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 113, + 526, + 220 + ], + "spans": [ + { + "bbox": [ + 302, + 113, + 526, + 220 + ], + "type": "text", + "content": "With a speed surprising for a man of his considerable girth, Ignatius tore a shopping cart from the hands of a terrified passerby and hurled it toward the pterodactyl with the force of a thousand disgruntled welfare recipients. The makeshift projectile slammed into the creature, stunning it momentarily and allowing Ignatius to close the distance between them." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 224, + 525, + 317 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 224, + 525, + 317 + ], + "spans": [ + { + "bbox": [ + 302, + 224, + 525, + 317 + ], + "type": "text", + "content": "The two combatants circled one another, their eyes locked in a primal dance of predator and prey. The pterodactyl, recovering from the initial assault, swooped low, talons outstretched to rend flesh from bone. Ignatius, with an agility that defied his rotund appearance, rolled to the side, avoiding the talons by mere inches." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 320, + 525, + 400 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 320, + 525, + 400 + ], + "spans": [ + { + "bbox": [ + 302, + 320, + 525, + 400 + ], + "type": "text", + "content": "As the battle raged, the streets of New Orleans became a stage for this epic confrontation between man and beast, between past and present. Ignatius, armed with nothing more than his wit and an unwieldy umbrella, parried the relentless strikes of the pterodactyl, determined to emerge victorious." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 403, + 525, + 510 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 403, + 525, + 510 + ], + "spans": [ + { + "bbox": [ + 302, + 403, + 525, + 510 + ], + "type": "text", + "content": "As the sun dipped low in the sky, casting a blood-red hue across the battlefield, the pterodactyl, weary from the long and grueling contest, faltered. Seizing the opportunity, Ignatius lunged forward, his umbrella aimed squarely at the creature's heart. The point found its mark, and with a final, anguished cry, the pterodactyl crumpled to the ground, its ancient spirit extinguished at last." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 513, + 525, + 593 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 513, + 525, + 593 + ], + "spans": [ + { + "bbox": [ + 302, + 513, + 525, + 593 + ], + "type": "text", + "content": "Ignatius stood triumphant over the fallen beast, his chest heaving with exertion and righteous indignation. He wiped the sweat from his brow with a grease-stained handkerchief and addressed the awestruck crowd that had gathered to witness this extraordinary battle." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 596, + 525, + 716 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 596, + 525, + 716 + ], + "spans": [ + { + "bbox": [ + 302, + 596, + 525, + 716 + ], + "type": "text", + "content": "\"Behold, citizens of this wretched cesspool,\" he proclaimed, gesturing grandly with his umbrella, now slick with the blood of the vanquished foe. \"The forces of the prehistoric have met their match in the unwavering resolve of Ignatius J. Reilly! 
Let this be a lesson to all who would dare challenge the sanctity of my person and my valiant quest for justice in this world beset by fools and ne'er-dowells.\"" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 719, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 719, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 719, + 525, + 772 + ], + "type": "text", + "content": "The onlookers, mouths agape, gazed upon the fallen pterodactyl and the disheveled figure of Ignatius J. Reilly with a mixture of awe, terror, and disbelief. They had borne witness to a battle of the" + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "14522" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 18 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 129, + 68, + 465, + 126 + ], + "blocks": [ + { + "bbox": [ + 129, + 68, + 465, + 126 + ], + "lines": [ + { + "bbox": [ + 129, + 68, + 465, + 126 + ], + "spans": [ + { + "bbox": [ + 129, + 68, + 465, + 126 + ], + "type": "table", + "html": "
<table><tr><td>Rubric item</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td><td>9</td><td>10</td><td>overall</td></tr>
<tr><td>Rater 1</td><td>8</td><td>8</td><td>9</td><td>8</td><td>7</td><td>7</td><td>8</td><td>8</td><td>7</td><td>6</td><td>76</td></tr>
<tr><td>Rater 2</td><td>10</td><td>10</td><td>9</td><td>9</td><td>8</td><td>9</td><td>10</td><td>10</td><td>10</td><td>10</td><td>95</td></tr>
<tr><td>Average</td><td>9</td><td>9</td><td>9</td><td>8.5</td><td>7.5</td><td>8</td><td>9</td><td>9</td><td>8.5</td><td>8</td><td>85.5</td></tr></table>
", + "image_path": "a8a8eca8848a5732cc76ac1aeda694a7842ee5483c70e727add1fac706aa07d5.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 100, + 134, + 490, + 147 + ], + "lines": [ + { + "bbox": [ + 100, + 134, + 490, + 147 + ], + "spans": [ + { + "bbox": [ + 100, + 134, + 490, + 147 + ], + "type": "text", + "content": "Table 4: Ratings for the best overall rated story in the corpus, produced by ChatGPT with GPT-4." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 67, + 169, + 290, + 208 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 169, + 290, + 208 + ], + "spans": [ + { + "bbox": [ + 67, + 169, + 290, + 208 + ], + "type": "text", + "content": "ages, a contest that would be whispered about in the dark corners of bars and the humid alleyways of New Orleans for generations to come." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 210, + 290, + 277 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 210, + 290, + 277 + ], + "spans": [ + { + "bbox": [ + 67, + 210, + 290, + 277 + ], + "type": "text", + "content": "As the sun set and the shadows grew long, Ignatius, his honor defended and his foe vanquished, strode off into the gloaming, pausing only to collect a discarded hot dog from the gutter as sustenance for the journey home." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 279, + 290, + 386 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 279, + 290, + 386 + ], + "spans": [ + { + "bbox": [ + 67, + 279, + 290, + 386 + ], + "type": "text", + "content": "And so, the legend of Ignatius J. 
Reilly, pterodactyl-slayer and defender of truth, justice, and questionable hygiene, was etched into the annals of history, a testament to the indomitable spirit of man, and a reminder that even the most ancient and fearsome of adversaries could be vanquished by the sheer force of will and an unwieldy umbrella." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 399, + 242, + 412 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 399, + 242, + 412 + ], + "spans": [ + { + "bbox": [ + 67, + 399, + 242, + 412 + ], + "type": "text", + "content": "E.2 Second overall best-rated story" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 418, + 289, + 444 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 418, + 289, + 444 + ], + "spans": [ + { + "bbox": [ + 67, + 418, + 289, + 444 + ], + "type": "text", + "content": "This story was generated by Bing Chat. The ratings for this story are in Table 5." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 446, + 290, + 553 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 446, + 290, + 553 + ], + "spans": [ + { + "bbox": [ + 67, + 446, + 290, + 553 + ], + "type": "text", + "content": "Ignatius J. Reilly was strolling along the Audubon Park, munching on a hot dog and admiring the statues of Confederate heroes, when he heard a loud screech from above. He looked up and saw a huge winged creature swooping down on him, its beak open and its claws extended. It was a pterodactyl, a prehistoric monster that had somehow escaped from the museum of natural history." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 555, + 290, + 635 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 555, + 290, + 635 + ], + "spans": [ + { + "bbox": [ + 67, + 555, + 290, + 635 + ], + "type": "text", + "content": "Ignatius gasped and dropped his hot dog. 
He reached for his trusty hunting cap, which he always wore to protect his delicate ears from the noise of the modern world. He pulled it down over his eyes and hoped that the pterodactyl would mistake him for a statue or a bush." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 67, + 637, + 290, + 731 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 637, + 290, + 731 + ], + "spans": [ + { + "bbox": [ + 67, + 637, + 290, + 731 + ], + "type": "text", + "content": "But the pterodactyl was not fooled. It landed on the ground with a thud and snapped its jaws at Ignatius, who dodged nimbly to the side. Ignatius felt a surge of courage and indignation. How dare this beast attack him, a refined scholar and a defender of the true faith? He decided to fight back with his superior intellect and wit." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 733, + 290, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 733, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 733, + 290, + 772 + ], + "type": "text", + "content": "He shouted at the pterodactyl: \"You foul creature! You are an abomination of nature, a relic of a barbaric age! You have no place in this civ" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 169, + 524, + 208 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 169, + 524, + 208 + ], + "spans": [ + { + "bbox": [ + 302, + 169, + 524, + 208 + ], + "type": "text", + "content": "ilized society, where men of letters and culture reign supreme! Begone, or I shall smite you with my mighty pen!\"" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 210, + 524, + 304 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 210, + 524, + 304 + ], + "spans": [ + { + "bbox": [ + 302, + 210, + 524, + 304 + ], + "type": "text", + "content": "The pterodactyl was not impressed by Ignatius's words. 
It screeched again and lunged at him, trying to bite his head off. Ignatius dodged again and ran towards a nearby bench. He grabbed a newspaper that someone had left behind and rolled it into a cone. He brandished it like a sword and charged at the pterodactyl." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 305, + 524, + 386 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 305, + 524, + 386 + ], + "spans": [ + { + "bbox": [ + 302, + 305, + 524, + 386 + ], + "type": "text", + "content": "\"Take that, you fiend!\" he cried, as he stabbed the newspaper cone into the pterodactyl's eye. The pterodactyl shrieked in pain and flapped its wings furiously, trying to shake off the paper weapon. Ignatius seized the opportunity and ran towards the park exit, hoping to find help." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 387, + 524, + 468 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 387, + 524, + 468 + ], + "spans": [ + { + "bbox": [ + 302, + 387, + 524, + 468 + ], + "type": "text", + "content": "But he was not fast enough. The pterodactyl recovered from its wound and flew after him, gaining speed. It caught up with Ignatius and grabbed him by his coat tails, lifting him off the ground. Ignatius screamed and kicked his legs in the air, clutching his hunting cap." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 469, + 524, + 522 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 469, + 524, + 522 + ], + "spans": [ + { + "bbox": [ + 302, + 469, + 524, + 522 + ], + "type": "text", + "content": "\"Help! Help! Someone save me!\" he yelled, as he soared over the park. \"This is an outrage! This is an injustice! 
This is a violation of my human rights!\"" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 524, + 524, + 590 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 524, + 524, + 590 + ], + "spans": [ + { + "bbox": [ + 302, + 524, + 524, + 590 + ], + "type": "text", + "content": "But no one heard him or came to his rescue. The pterodactyl carried him away to its nest on top of a skyscraper, where it planned to feast on him later. Ignatius continued to protest and curse until he lost consciousness." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 314, + 592, + 355, + 604 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 592, + 355, + 604 + ], + "spans": [ + { + "bbox": [ + 314, + 592, + 355, + 604 + ], + "type": "text", + "content": "The end." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 302, + 618, + 471, + 630 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 618, + 471, + 630 + ], + "spans": [ + { + "bbox": [ + 302, + 618, + 471, + 630 + ], + "type": "text", + "content": "E.3 Third overall best-rated story" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 301, + 636, + 524, + 662 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 636, + 524, + 662 + ], + "spans": [ + { + "bbox": [ + 301, + 636, + 524, + 662 + ], + "type": "text", + "content": "This story was generated by Claude. The ratings for this story are in Table 6." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 302, + 665, + 524, + 717 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 665, + 524, + 717 + ], + "spans": [ + { + "bbox": [ + 302, + 665, + 524, + 717 + ], + "type": "text", + "content": "The primordial beast let out a piercing shriek as it descended from the roiling gray sky, its leathery wings casting a shadow over the granite-paved streets of New Orleans." 
+ } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 302, + 719, + 524, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 719, + 524, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 719, + 524, + 772 + ], + "type": "text", + "content": "Ignatius J. Reilly ambled down the sidewalk, oblivious as ever, focused on the Valencia he clutched and the fantasies playing out in his voracious mind. His substantial bulk shook with each" + } + ] + } + ], + "index": 21 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "14523" + } + ] + } + ], + "index": 22 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 19 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 130, + 68, + 462, + 126 + ], + "blocks": [ + { + "bbox": [ + 130, + 68, + 462, + 126 + ], + "lines": [ + { + "bbox": [ + 130, + 68, + 462, + 126 + ], + "spans": [ + { + "bbox": [ + 130, + 68, + 462, + 126 + ], + "type": "table", + "html": "
<table><tr><td>Rubric item</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td><td>9</td><td>10</td><td>overall</td></tr>
<tr><td>Rater 7</td><td>9</td><td>8</td><td>9</td><td>9</td><td>7</td><td>9</td><td>9</td><td>9</td><td>9</td><td>9</td><td>87</td></tr>
<tr><td>Rater 8</td><td>9</td><td>9</td><td>8</td><td>9</td><td>8</td><td>8</td><td>6</td><td>9</td><td>9</td><td>8</td><td>83</td></tr>
<tr><td>Average</td><td>9</td><td>8.5</td><td>8.5</td><td>9</td><td>7.5</td><td>8.5</td><td>7.5</td><td>9</td><td>9</td><td>8.5</td><td>85.0</td></tr></table>
", + "image_path": "aa81acca7c778fbb1e6e033481474bb7b948d95396b94fd85a94238d3bba2d1c.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 127, + 158, + 467, + 215 + ], + "blocks": [ + { + "bbox": [ + 108, + 134, + 483, + 147 + ], + "lines": [ + { + "bbox": [ + 108, + 134, + 483, + 147 + ], + "spans": [ + { + "bbox": [ + 108, + 134, + 483, + 147 + ], + "type": "text", + "content": "Table 5: Ratings for the second best overall rated story in the corpus, produced by Bing Chat." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 127, + 158, + 467, + 215 + ], + "lines": [ + { + "bbox": [ + 127, + 158, + 467, + 215 + ], + "spans": [ + { + "bbox": [ + 127, + 158, + 467, + 215 + ], + "type": "table", + "html": "
<table><tr><td>Rubric item</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td><td>9</td><td>10</td><td>overall</td></tr>
<tr><td>Rater 9</td><td>9</td><td>9</td><td>9</td><td>8</td><td>7</td><td>9</td><td>2</td><td>5</td><td>9</td><td>7</td><td>74</td></tr>
<tr><td>Rater 10</td><td>10</td><td>10</td><td>9</td><td>9</td><td>9</td><td>9</td><td>9</td><td>10</td><td>10</td><td>10</td><td>95</td></tr>
<tr><td>Average</td><td>9.5</td><td>9.5</td><td>9</td><td>8.5</td><td>8</td><td>9</td><td>5.5</td><td>7.5</td><td>9.5</td><td>8.5</td><td>84.5</td></tr></table>
", + "image_path": "6d68959c44031a3cc14fb15778cc87106e898937628395fea7b9764f72968282.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 119, + 224, + 472, + 236 + ], + "lines": [ + { + "bbox": [ + 119, + 224, + 472, + 236 + ], + "spans": [ + { + "bbox": [ + 119, + 224, + 472, + 236 + ], + "type": "text", + "content": "Table 6: Ratings for the third best overall rated story in the corpus, produced by Claude." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 67, + 258, + 289, + 284 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 258, + 289, + 284 + ], + "spans": [ + { + "bbox": [ + 67, + 258, + 289, + 284 + ], + "type": "text", + "content": "step, the excess flesh spilling out of his hunting cap and sweatshirt like rising bread dough." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 285, + 289, + 338 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 285, + 289, + 338 + ], + "spans": [ + { + "bbox": [ + 67, + 285, + 289, + 338 + ], + "type": "text", + "content": "When the shadow passed over him, Ignatius glanced up with a start. The pterodactyl unleashed another unearthly screech, indignant at this blob of a man who dared walk under its domain." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 338, + 289, + 420 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 338, + 289, + 420 + ], + "spans": [ + { + "bbox": [ + 67, + 338, + 289, + 420 + ], + "type": "text", + "content": "Ignatius blinked myopically at the creature and snorted. \"What prehistoric nonsense is this? Some misguided fool has clearly released you from your confinement in the geological strata. 
Begone, you ridiculous beast, or I shall be forced to thrash you soundly with my umbrella!\"" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 421, + 290, + 487 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 421, + 290, + 487 + ], + "spans": [ + { + "bbox": [ + 67, + 421, + 290, + 487 + ], + "type": "text", + "content": "Enraged, the pterodactyl swooped down, gnarled claws extended, snapping its beak in fury. But Ignatius swung his umbrella in a wide arc, batting the beast aside. It tumbled and flapped awkwardly, crashing into a lamppost, dazed by the assault." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 488, + 290, + 569 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 488, + 290, + 569 + ], + "spans": [ + { + "bbox": [ + 67, + 488, + 290, + 569 + ], + "type": "text", + "content": "\"Thoughtless demon!\" Ignatius cried. He charged at the pterodactyl, surprisingly nimble for his size, and began striking it about the head and neck with his umbrella, each blow punctuated with an insult. \"Sugar-frosted fruitcake! Detritus of the Jurassic! Blithering idiot of an avian menace!\"" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 67, + 570, + 290, + 664 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 570, + 290, + 664 + ], + "spans": [ + { + "bbox": [ + 67, + 570, + 290, + 664 + ], + "type": "text", + "content": "After a thrashing unlike any it had experienced in 65 million years, the pterodactyl fled squealing into the sky, thoroughly chastened by the wrath of Ignatius J. Reilly. Ignatius straightened his hunting cap and sniffed. The Valencia crisis averted once more, he resumed his lumbering perambulations down the streets of New Orleans." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 674, + 268, + 700 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 674, + 268, + 700 + ], + "spans": [ + { + "bbox": [ + 67, + 674, + 268, + 700 + ], + "type": "text", + "content": "E.4 Best-rated human story (and tied for fourth overall best-rated story)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 67, + 705, + 289, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 705, + 289, + 732 + ], + "spans": [ + { + "bbox": [ + 67, + 705, + 289, + 732 + ], + "type": "text", + "content": "This story was written by Bree Glasbergen. The ratings for this story are in Table 7." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 67, + 733, + 290, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 733, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 733, + 290, + 772 + ], + "type": "text", + "content": "Ignatius J Reilly swept crisp crumbs from his protruding belly with his elephantine hands. Swivelling from side-to-side, he garnered enough mo" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 301, + 258, + 525, + 365 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 258, + 525, + 365 + ], + "spans": [ + { + "bbox": [ + 301, + 258, + 525, + 365 + ], + "type": "text", + "content": "mentum to rise from the sofa. His slow ascend was soundtracked by the grating rip of stuck flesh peeling from sweaty vinyl. The lengthy time moving from reclined to an upright position positively perturbed him. So that by the time Ignatius stood, his joke had lost its amusement. Nevertheless, he declaimed his wit aloud, beseechng his mother's glowing approval." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 365, + 525, + 420 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 365, + 525, + 420 + ], + "spans": [ + { + "bbox": [ + 302, + 365, + 525, + 420 + ], + "type": "text", + "content": "'I see you have painted the walls Nomad Grey, Mumsie!' Ignatius smirked, looking down on the half-filled grey paint cans on the steps the way he did most modern society." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 420, + 524, + 474 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 420, + 524, + 474 + ], + "spans": [ + { + "bbox": [ + 302, + 420, + 524, + 474 + ], + "type": "text", + "content": "'No, not mad dear. Just grey.' His mother Irene responded, creeping down the basement stairs. Her leathered skin made her appear reptilian in the dim light of Ignatius' lair." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 301, + 475, + 525, + 636 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 475, + 525, + 636 + ], + "spans": [ + { + "bbox": [ + 301, + 475, + 525, + 636 + ], + "type": "text", + "content": "Ignatius rolled his eyes like the great wheel of fate itself. He slunk back into his scabby sofa, defeated, cursing aloud that he be blessed with such profound intellect yet no equal to appreciate it. His mind wandered to what the great scholars of Oxford would think of his pun before concluding indeed, they would loudly chortle. Yes, they would. He imagined flying to London and exchanging sharp banter with someone on par with his intellect. Travel. He winced. Never again. He groaned in agony, clutching his stomach. The thought of such stress had snapped his pyloric valve shut." 
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 302, + 637, + 525, + 691 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 637, + 525, + 691 + ], + "spans": [ + { + "bbox": [ + 302, + 637, + 525, + 691 + ], + "type": "text", + "content": "Irene Reilly, the mother of Ignatius J Reilly, reached the bottom of the basement stairs. She pondered why Ignatius had a crestfallen demeanour and began to appease his dismay." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 314, + 691, + 490, + 704 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 691, + 490, + 704 + ], + "spans": [ + { + "bbox": [ + 314, + 691, + 490, + 704 + ], + "type": "text", + "content": "'No mad grey,' she contemplated aloud." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 314, + 705, + 439, + 718 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 705, + 439, + 718 + ], + "spans": [ + { + "bbox": [ + 314, + 705, + 439, + 718 + ], + "type": "text", + "content": "'Nomad grey,' he corrected." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 302, + 719, + 525, + 745 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 719, + 525, + 745 + ], + "spans": [ + { + "bbox": [ + 302, + 719, + 525, + 745 + ], + "type": "text", + "content": "'No mad grey hair?' Irene laughed tentatively, searching his face for approval." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 302, + 746, + 524, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 746, + 524, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 746, + 524, + 772 + ], + "type": "text", + "content": "Ignatius had begun to relax. 
Irene knew this because of a gangrenous heinous stench that was" + } + ] + } + ], + "index": 21 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "14524" + } + ] + } + ], + "index": 22 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 20 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 132, + 68, + 461, + 126 + ], + "blocks": [ + { + "bbox": [ + 132, + 68, + 461, + 126 + ], + "lines": [ + { + "bbox": [ + 132, + 68, + 461, + 126 + ], + "spans": [ + { + "bbox": [ + 132, + 68, + 461, + 126 + ], + "type": "table", + "html": "
Rubric item12345678910overall
Rater 3899108105910987
Rater 48777108688978
Average8888.5995.58.59982.5
", + "image_path": "b578d5d3618fc8a2888664145c1b61c181f4d65aef2d8bc567827fe084de45bf.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 134, + 523, + 158 + ], + "lines": [ + { + "bbox": [ + 67, + 134, + 523, + 158 + ], + "spans": [ + { + "bbox": [ + 67, + 134, + 523, + 158 + ], + "type": "text", + "content": "Table 7: Ratings for the best-rated story authored by a human, which is also tied for fourth best overall rated story in the corpus." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 66, + 180, + 289, + 315 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 180, + 289, + 315 + ], + "spans": [ + { + "bbox": [ + 66, + 180, + 289, + 315 + ], + "type": "text", + "content": "now coating the room in its own layer of paint accompanied by what sounded like the bellow of an untuned French horn. Ignatius had calmed enough for his pyloric valve to open once more. With it, gushed the contents. Irene's nostrils scrunched together in protest. She grimaced in utter (albeit accustomed) disgust. However, did not complain but rather waited with the patience of a Catholic saint for her beloved son to educate her on the punchline she must have missed." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 316, + 289, + 356 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 316, + 289, + 356 + ], + "spans": [ + { + "bbox": [ + 67, + 316, + 289, + 356 + ], + "type": "text", + "content": "'No, mother. Grey Nomad. You are painting the wall grey, and you are...' Ignatius sighed, 'actually, Mumsie, never you mind'." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 358, + 289, + 384 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 358, + 289, + 384 + ], + "spans": [ + { + "bbox": [ + 67, + 358, + 289, + 384 + ], + "type": "text", + "content": "Irene feigned a chuckle and handed Ignatius an unaddressed letter before returning upstairs." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 386, + 288, + 412 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 386, + 288, + 412 + ], + "spans": [ + { + "bbox": [ + 67, + 386, + 288, + 412 + ], + "type": "text", + "content": "'Curious as a cadaver,' Ignatius said aloud to the abyss of his basement squalor." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 80, + 413, + 131, + 424 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 413, + 131, + 424 + ], + "spans": [ + { + "bbox": [ + 80, + 413, + 131, + 424 + ], + "type": "text", + "content": "12.12.1962" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 79, + 428, + 233, + 440 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 428, + 233, + 440 + ], + "spans": [ + { + "bbox": [ + 79, + 428, + 233, + 440 + ], + "type": "text", + "content": "Dear Mr Ignatius J Reilly, the first," + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 441, + 290, + 509 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 441, + 290, + 509 + ], + "spans": [ + { + "bbox": [ + 67, + 441, + 290, + 509 + ], + "type": "text", + "content": "I challenge you to a dual at the setting of the sky. Might I remind you it is gentlemanly to remove one's hat in combat. We shall meet beside the gorgon nestled atop the church. The one across from Lorna's Gumbo shop." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 79, + 510, + 174, + 522 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 510, + 174, + 522 + ], + "spans": [ + { + "bbox": [ + 79, + 510, + 174, + 522 + ], + "type": "text", + "content": "Your mortal nemesis," + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 79, + 524, + 135, + 537 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 524, + 135, + 537 + ], + "spans": [ + { + "bbox": [ + 79, + 524, + 135, + 537 + ], + "type": "text", + "content": "Terry-dactyl" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 79, + 539, + 158, + 551 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 539, + 158, + 551 + ], + "spans": [ + { + "bbox": [ + 79, + 539, + 158, + 551 + ], + "type": "text", + "content": "PS: Bring snacks." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 67, + 553, + 289, + 578 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 553, + 289, + 578 + ], + "spans": [ + { + "bbox": [ + 67, + 553, + 289, + 578 + ], + "type": "text", + "content": "Ignatius sat ruminating for an hour before yelling at his mother." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 67, + 580, + 290, + 607 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 580, + 290, + 607 + ], + "spans": [ + { + "bbox": [ + 67, + 580, + 290, + 607 + ], + "type": "text", + "content": "'Mother, you vapid deranged widow of a woman. 
Fetch me my quill!'" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 80, + 608, + 131, + 619 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 608, + 131, + 619 + ], + "spans": [ + { + "bbox": [ + 80, + 608, + 131, + 619 + ], + "type": "text", + "content": "12.12.1962" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 79, + 623, + 161, + 635 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 623, + 161, + 635 + ], + "spans": [ + { + "bbox": [ + 79, + 623, + 161, + 635 + ], + "type": "text", + "content": "My dear Terrance," + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 67, + 637, + 289, + 663 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 637, + 289, + 663 + ], + "spans": [ + { + "bbox": [ + 67, + 637, + 289, + 663 + ], + "type": "text", + "content": "Not under threat nor the pain of death doth I remove my beloved green hat. Sod off." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 67, + 665, + 290, + 704 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 665, + 290, + 704 + ], + "spans": [ + { + "bbox": [ + 67, + 665, + 290, + 704 + ], + "type": "text", + "content": "You had best bring a sharpener for your dull wit. I laugh at the audacity and delusion that you could consider besting me." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 67, + 706, + 290, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 706, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 706, + 290, + 772 + ], + "type": "text", + "content": "Might I remind you, good sir, my acceptance of your conditions is due to the ever-turning wheel of fate that we spiral to decay. I should instead seek a worthy opponent. 
But, alas, I am left with muddy dregs of the proverbial pond as many of the" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 302, + 180, + 525, + 220 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 180, + 525, + 220 + ], + "spans": [ + { + "bbox": [ + 302, + 180, + 525, + 220 + ], + "type": "text", + "content": "worthier fish have already been fished. Thus, I have no option but to teach you the error of your ways. By force." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 302, + 222, + 525, + 275 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 222, + 525, + 275 + ], + "spans": [ + { + "bbox": [ + 302, + 222, + 525, + 275 + ], + "type": "text", + "content": "Put your wings where your words are, and let us meet in my basement lair. To visit the church in its present state would be torture to my very soul. May St Peter have mercy on us indeed." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 314, + 277, + 362, + 289 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 277, + 362, + 289 + ], + "spans": [ + { + "bbox": [ + 314, + 277, + 362, + 289 + ], + "type": "text", + "content": "Good day," + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 314, + 291, + 352, + 304 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 291, + 352, + 304 + ], + "spans": [ + { + "bbox": [ + 314, + 291, + 352, + 304 + ], + "type": "text", + "content": "Ignatius" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 302, + 306, + 524, + 372 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 306, + 524, + 372 + ], + "spans": [ + { + "bbox": [ + 302, + 306, + 524, + 372 + ], + "type": "text", + "content": "Terry-dactyl, the pterodactyl etched down the basement rail, sword in one wing and soup in a milkshake cup gripped tightly in the other. He placed the straw in his mouth and swallowed some soup contemplating how to best his nemesis." 
+ } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 314, + 374, + 518, + 386 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 374, + 518, + 386 + ], + "spans": [ + { + "bbox": [ + 314, + 374, + 518, + 386 + ], + "type": "text", + "content": "'We meet at last... light,' Terry said. One-Nil." + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 302, + 387, + 524, + 414 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 387, + 524, + 414 + ], + "spans": [ + { + "bbox": [ + 302, + 387, + 524, + 414 + ], + "type": "text", + "content": "'You suck,' Ignatius said slyly. Marking his win with chalk upon the wall. One- One" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 314, + 416, + 495, + 428 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 416, + 495, + 428 + ], + "spans": [ + { + "bbox": [ + 314, + 416, + 495, + 428 + ], + "type": "text", + "content": "doesn't even make sense!' Terry scoffed." + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 302, + 429, + 525, + 456 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 429, + 525, + 456 + ], + "spans": [ + { + "bbox": [ + 302, + 429, + 525, + 456 + ], + "type": "text", + "content": "'It is because of the straw!' Ignatius boomed, gripping his stomach in pain." + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 302, + 457, + 524, + 484 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 457, + 524, + 484 + ], + "spans": [ + { + "bbox": [ + 302, + 457, + 524, + 484 + ], + "type": "text", + "content": "'I have the upper hand!' Terry said, motioning to his perched position." + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 314, + 486, + 502, + 498 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 486, + 502, + 498 + ], + "spans": [ + { + "bbox": [ + 314, + 486, + 502, + 498 + ], + "type": "text", + "content": "'At least I have hands,' Ignatius countered." 
+ } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 302, + 500, + 525, + 526 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 500, + 525, + 526 + ], + "spans": [ + { + "bbox": [ + 302, + 500, + 525, + 526 + ], + "type": "text", + "content": "Terry winced as Ignatius drew another chalk mark on the board. Ignatius was beginning to calm." + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 302, + 528, + 524, + 554 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 528, + 524, + 554 + ], + "spans": [ + { + "bbox": [ + 302, + 528, + 524, + 554 + ], + "type": "text", + "content": "'Oh, what have I got you all in a flap?' Ignatius laughed. Another point." + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 301, + 555, + 524, + 581 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 555, + 524, + 581 + ], + "spans": [ + { + "bbox": [ + 301, + 555, + 524, + 581 + ], + "type": "text", + "content": "'Let us cut,' Terry said, drawing his sword, 'straight to the point!' Three all." + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 301, + 583, + 525, + 745 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 583, + 525, + 745 + ], + "spans": [ + { + "bbox": [ + 301, + 583, + 525, + 745 + ], + "type": "text", + "content": "Terry swung his sword downwards in one swift motion, cutting Ignatius' chalk-bearing arm clean off at the elbow. Simultaneously Ignatius lifted a paint can and doused his opponent with it. As he did, his valve opened and shut again, demobilising Terry with a gas bomb and gutting Ignatius in self-induced agony. Terry flapped violently, unable to breathe. Ignatius then calmed enough for the full contents of his bowl to expel and fell backwards from the force. Suddenly, a splatter of pterodactyl and grey matter covered the room. A large chunk of wing lodged itself into the crisp packet." 
+ } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 302, + 746, + 524, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 746, + 524, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 746, + 524, + 772 + ], + "type": "text", + "content": "'Curious as a cadaver,' Ignatius said. 'I see you brought your own snacks!'" + } + ] + } + ], + "index": 34 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "14525" + } + ] + } + ], + "index": 35 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 21 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 290, + 98 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 290, + 98 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 290, + 98 + ], + "type": "text", + "content": "He chortled to himself and pulled out his notepad to ensure he captured a morsel of this evening's wit." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 99, + 289, + 126 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 99, + 289, + 126 + ], + "spans": [ + { + "bbox": [ + 67, + 99, + 289, + 126 + ], + "type": "text", + "content": "'Ever seen a pterodactyl explode?' he asked aloud to no one. 'No, but Dino might have'." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 127, + 290, + 153 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 127, + 290, + 153 + ], + "spans": [ + { + "bbox": [ + 67, + 127, + 290, + 153 + ], + "type": "text", + "content": "His mothers' laughter rang down from the top of the stairs." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 155, + 289, + 182 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 155, + 289, + 182 + ], + "spans": [ + { + "bbox": [ + 67, + 155, + 289, + 182 + ], + "type": "text", + "content": "'Nomad grey,' she chuckled, motioning to the aftermath, 'I get it now'." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 194, + 289, + 220 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 194, + 289, + 220 + ], + "spans": [ + { + "bbox": [ + 67, + 194, + 289, + 220 + ], + "type": "text", + "content": "E.5 Story rated in the median (rank 33 out of 65)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 66, + 227, + 289, + 253 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 227, + 289, + 253 + ], + "spans": [ + { + "bbox": [ + 66, + 227, + 289, + 253 + ], + "type": "text", + "content": "This story was generated by Bing Chat. The ratings for this story are in Table 8." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 254, + 290, + 362 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 254, + 290, + 362 + ], + "spans": [ + { + "bbox": [ + 67, + 254, + 290, + 362 + ], + "type": "text", + "content": "Ignatius J. Reilly was strolling along the Audubon Park, munching on a hot dog and admiring the statues of the Confederate heroes, when he heard a loud screech from above. He looked up and saw a huge winged creature swooping down on him, its beak open and its claws extended. It was a pterodactyl, a prehistoric monster that had somehow escaped from the museum of natural history." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 364, + 290, + 444 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 364, + 290, + 444 + ], + "spans": [ + { + "bbox": [ + 67, + 364, + 290, + 444 + ], + "type": "text", + "content": "Ignatius gasped and dropped his hot dog. 
He felt a surge of panic and indignation. How dare this beast interrupt his peaceful contemplation of the glorious past? He quickly pulled out his valve and began to inflate it with his breath, hoping to ward off the attack with his formidable weapon." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 446, + 290, + 513 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 446, + 290, + 513 + ], + "spans": [ + { + "bbox": [ + 67, + 446, + 290, + 513 + ], + "type": "text", + "content": "The pterodactyl was not impressed by the sight of the valve. It saw only a fat and juicy prey, dressed in a ridiculous green hunting cap and a flannel shirt. It dived faster and aimed for Ignatius's head, intending to rip it off with its sharp teeth." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 67, + 514, + 290, + 608 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 514, + 290, + 608 + ], + "spans": [ + { + "bbox": [ + 67, + 514, + 290, + 608 + ], + "type": "text", + "content": "Ignatius saw the pterodactyl's shadow looming over him. He raised his valve and swung it with all his might, hoping to hit the creature in the eye or the nose. He missed by a few inches and only grazed its wing. The pterodactyl let out another screech and changed its course, flying past Ignatius and landing on a nearby tree." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 610, + 290, + 690 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 610, + 290, + 690 + ], + "spans": [ + { + "bbox": [ + 67, + 610, + 290, + 690 + ], + "type": "text", + "content": "Ignatius felt a momentary relief. He had survived the first assault. He looked around for help, but there was no one in sight. The park was deserted, except for a few squirrels and pigeons. He wondered if he should run for his life or stand his ground and fight." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 67, + 692, + 290, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 692, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 692, + 290, + 772 + ], + "type": "text", + "content": "He decided to fight. He was not going to let this abomination ruin his day. He was a medievalist, a scholar of the Dark Ages, a defender of the faith and the order. He had read about the heroic deeds of knights and saints who had faced dragons and demons with courage and piety. He had to emulate" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 71, + 420, + 84 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 420, + 84 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 420, + 84 + ], + "type": "text", + "content": "them and prove his worth." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 86, + 525, + 194 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 86, + 525, + 194 + ], + "spans": [ + { + "bbox": [ + 302, + 86, + 525, + 194 + ], + "type": "text", + "content": "He picked up his hot dog from the ground and wiped off some dirt. He took a bite and felt a surge of energy. He felt ready for battle. He shouted at the pterodactyl: \"You foul creature! You are an anachronism, a relic of a barbaric era! You have no place in this civilized world! You are a menace to society and an insult to God! I challenge you to a duel! Come down and face me like a man!\"" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 195, + 525, + 289 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 195, + 525, + 289 + ], + "spans": [ + { + "bbox": [ + 302, + 195, + 525, + 289 + ], + "type": "text", + "content": "The pterodactyl heard Ignatius's words and felt a surge of anger. It did not understand what he was saying, but it sensed his hostility and defiance. 
It did not like being challenged by this puny creature who dared to call himself a man. It felt insulted by his words and his gesture of throwing his hot dog at it." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 291, + 525, + 344 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 291, + 525, + 344 + ], + "spans": [ + { + "bbox": [ + 302, + 291, + 525, + 344 + ], + "type": "text", + "content": "It accepted the challenge. It spread its wings and flew towards Ignatius again, roaring with rage. It was going to teach him a lesson he would never forget." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 347, + 525, + 413 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 347, + 525, + 413 + ], + "spans": [ + { + "bbox": [ + 302, + 347, + 525, + 413 + ], + "type": "text", + "content": "Ignatius saw the pterodactyl coming at him again. He felt a surge of fear and courage. He had to act fast. He looked around for a weapon, but there was nothing in sight. He had only his valve and his wits." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 302, + 416, + 525, + 469 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 416, + 525, + 469 + ], + "spans": [ + { + "bbox": [ + 302, + 416, + 525, + 469 + ], + "type": "text", + "content": "He decided to use his wits. He had to outsmart the beast. He had to use his superior intellect and his vast knowledge of history and philosophy. He had to appeal to its reason and its conscience." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 302, + 470, + 525, + 551 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 470, + 525, + 551 + ], + "spans": [ + { + "bbox": [ + 302, + 470, + 525, + 551 + ], + "type": "text", + "content": "He shouted at the pterodactyl: \"Stop! Think! What are you doing? Why are you attacking me? What have I done to you? Do you not realize that you are committing a grave sin? 
Do you not fear the wrath of God? Do you not respect the laws of nature and the rights of man?\"" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 302, + 553, + 525, + 634 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 553, + 525, + 634 + ], + "spans": [ + { + "bbox": [ + 302, + 553, + 525, + 634 + ], + "type": "text", + "content": "The pterodactyl heard Ignatius's words and felt a surge of confusion. It did not understand what he was saying, but it sensed his sincerity and his passion. It wondered if he had a point. It wondered if it was doing something wrong. It wondered if it should stop and listen." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 302, + 636, + 525, + 676 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 636, + 525, + 676 + ], + "spans": [ + { + "bbox": [ + 302, + 636, + 525, + 676 + ], + "type": "text", + "content": "It slowed down its flight and hovered over Ignatius, looking at him with curiosity and doubt. It tilted its head and made a questioning sound." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 302, + 677, + 525, + 731 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 677, + 525, + 731 + ], + "spans": [ + { + "bbox": [ + 302, + 677, + 525, + 731 + ], + "type": "text", + "content": "Ignatius saw the pterodactyl's hesitation and felt a surge of hope. He had succeeded in making it pause and reconsider. He had a chance to persuade it and make peace with it." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 302, + 732, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 525, + 772 + ], + "type": "text", + "content": "He smiled at the pterodactyl and said: \"That's right. Listen to me. I have something important to tell you. 
Something that will change your life and" + } + ] + } + ], + "index": 22 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "14526" + } + ] + } + ], + "index": 23 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 22 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 136, + 68, + 459, + 126 + ], + "blocks": [ + { + "bbox": [ + 136, + 68, + 459, + 126 + ], + "lines": [ + { + "bbox": [ + 136, + 68, + 459, + 126 + ], + "spans": [ + { + "bbox": [ + 136, + 68, + 459, + 126 + ], + "type": "table", + "html": "
Rubric item12345678910overall
Rater 5765563233141
Rater 6899687878777
Average7.57.575.575555.5459
", + "image_path": "e9902fac6f207bfdb7bd40903cb2029cfba6721acc59eb7ae1a0f0d3b4e78610.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 134, + 446, + 147 + ], + "lines": [ + { + "bbox": [ + 145, + 134, + 446, + 147 + ], + "spans": [ + { + "bbox": [ + 145, + 134, + 446, + 147 + ], + "type": "text", + "content": "Table 8: Ratings for the story rated in the median, generated by Bing Chat." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 67, + 169, + 289, + 195 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 169, + 289, + 195 + ], + "spans": [ + { + "bbox": [ + 67, + 169, + 289, + 195 + ], + "type": "text", + "content": "your destiny. Something that will make you happy and free. Something that will save your soul.\"" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 196, + 290, + 276 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 196, + 290, + 276 + ], + "spans": [ + { + "bbox": [ + 67, + 196, + 290, + 276 + ], + "type": "text", + "content": "The pterodactyl heard Ignatius's words and felt a surge of curiosity. It did not understand what he was saying, but it sensed his kindness and his generosity. It wondered what he had to tell it. It wondered what he could offer it. It wondered if it should trust him." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 279, + 289, + 318 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 279, + 289, + 318 + ], + "spans": [ + { + "bbox": [ + 67, + 279, + 289, + 318 + ], + "type": "text", + "content": "It landed on the ground and approached Ignatius slowly, looking at him with interest and caution. It made a friendly sound." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 320, + 289, + 373 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 320, + 289, + 373 + ], + "spans": [ + { + "bbox": [ + 67, + 320, + 289, + 373 + ], + "type": "text", + "content": "Ignatius saw the pterodactyl's approach and felt a surge of joy. He had succeeded in making it land and come closer. He had a chance to befriend it and teach it." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 375, + 290, + 442 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 375, + 290, + 442 + ], + "spans": [ + { + "bbox": [ + 67, + 375, + 290, + 442 + ], + "type": "text", + "content": "He reached out his hand and said: \"Come here, my friend. Let me touch you. Let me show you my love. Let me introduce you to the wonders of civilization and culture. Let me enlighten you with the wisdom of the ages.\"" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 444, + 290, + 523 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 444, + 290, + 523 + ], + "spans": [ + { + "bbox": [ + 67, + 444, + 290, + 523 + ], + "type": "text", + "content": "The pterodactyl heard Ignatius's words and felt a surge of curiosity. It did not understand what he was saying, but it sensed his warmth and his enthusiasm. It wondered what he wanted to touch it for. It wondered what he wanted to show it. It wondered what he wanted to teach it." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 526, + 290, + 565 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 526, + 290, + 565 + ], + "spans": [ + { + "bbox": [ + 67, + 526, + 290, + 565 + ], + "type": "text", + "content": "It moved closer to Ignatius and extended its neck, looking at him with curiosity and trust. It made a gentle sound." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 67, + 567, + 290, + 621 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 567, + 290, + 621 + ], + "spans": [ + { + "bbox": [ + 67, + 567, + 290, + 621 + ], + "type": "text", + "content": "Ignatius touched the pterodactyl's neck and felt a surge of emotion. He had succeeded in making it trust him and accept him. He had a chance to bond with it and adopt it." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 623, + 290, + 676 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 623, + 290, + 676 + ], + "spans": [ + { + "bbox": [ + 67, + 623, + 290, + 676 + ], + "type": "text", + "content": "He hugged the pterodactyl and said: \"You are beautiful, my friend. You are magnificent. You are noble. You are a masterpiece of creation. You are a gift from God. You are my brother.\"" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 67, + 677, + 290, + 745 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 677, + 290, + 745 + ], + "spans": [ + { + "bbox": [ + 67, + 677, + 290, + 745 + ], + "type": "text", + "content": "The pterodactyl heard Ignatius's words and felt a surge of emotion. It did not understand what he was saying, but it sensed his affection and his admiration. It felt happy and proud. It felt loved and valued. It felt like it belonged." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 67, + 746, + 290, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 746, + 290, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 746, + 290, + 772 + ], + "type": "text", + "content": "It hugged Ignatius back and said: \"You are beautiful, my friend. You are magnificent. 
You are" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 169, + 524, + 195 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 169, + 524, + 195 + ], + "spans": [ + { + "bbox": [ + 302, + 169, + 524, + 195 + ], + "type": "text", + "content": "noble. You are a masterpiece of creation. You are a gift from God. You are my brother.\"" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 195, + 524, + 262 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 195, + 524, + 262 + ], + "spans": [ + { + "bbox": [ + 302, + 195, + 524, + 262 + ], + "type": "text", + "content": "Ignatius and the pterodactyl looked into each other's eyes and felt a surge of connection. They had succeeded in making each other happy and fulfilled. They had a chance to live together and explore together." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 263, + 524, + 302 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 263, + 524, + 302 + ], + "spans": [ + { + "bbox": [ + 302, + 263, + 524, + 302 + ], + "type": "text", + "content": "They smiled at each other and said: \"Let's go, my friend. Let's go to my home. Let's go to your home. Let's go to our home.\"" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 304, + 524, + 330 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 304, + 524, + 330 + ], + "spans": [ + { + "bbox": [ + 302, + 304, + 524, + 330 + ], + "type": "text", + "content": "They walked away from the park, hand in hand, wing in wing, heart in heart." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 314, + 331, + 444, + 344 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 331, + 444, + 344 + ], + "spans": [ + { + "bbox": [ + 314, + 331, + 444, + 344 + ], + "type": "text", + "content": "They lived happily ever after." 
+ } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 303, + 353, + 415, + 366 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 353, + 415, + 366 + ], + "spans": [ + { + "bbox": [ + 303, + 353, + 415, + 366 + ], + "type": "text", + "content": "E.6 Worst-rated story" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 301, + 370, + 524, + 396 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 370, + 524, + 396 + ], + "spans": [ + { + "bbox": [ + 301, + 370, + 524, + 396 + ], + "type": "text", + "content": "This story was generated by Alpaca. The ratings for this story are in Table 9." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 302, + 398, + 525, + 545 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 398, + 525, + 545 + ], + "spans": [ + { + "bbox": [ + 302, + 398, + 525, + 545 + ], + "type": "text", + "content": "Ignatius J. Reilly, a brave and courageous man of immense girth and strength, faced off against a fearsome pterodactyl in a single combat of epic proportions. The pterodactyl, a winged beast of immense strength and ferocity, had been terrorizing the countryside and Ignatius had stepped up to challenge the creature to a fight. The battle raged across the sky, with Ignatius wielding his mighty fists and the pterodactyl using its fiery breath. Finally, after a long and arduous fight, Ignatius emerged victorious, much to the cheers of the onlookers below." 
+ } + ] + } + ], + "index": 20 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 780, + 312, + 791 + ], + "type": "text", + "content": "14527" + } + ] + } + ], + "index": 21 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 23 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 136, + 380, + 458, + 438 + ], + "blocks": [ + { + "bbox": [ + 136, + 380, + 458, + 438 + ], + "lines": [ + { + "bbox": [ + 136, + 380, + 458, + 438 + ], + "spans": [ + { + "bbox": [ + 136, + 380, + 458, + 438 + ], + "type": "table", + "html": "
<table><tr><td>Rubric item</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td><td>9</td><td>10</td><td>overall</td></tr><tr><td>Rater 1</td><td>1</td><td>1</td><td>4</td><td>2</td><td>1</td><td>1</td><td>1</td><td>2</td><td>1</td><td>1</td><td>15</td></tr><tr><td>Rater 2</td><td>2</td><td>2</td><td>1</td><td>2</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td><td>13</td></tr><tr><td>Average</td><td>1.5</td><td>1.5</td><td>2.5</td><td>2</td><td>1</td><td>1</td><td>1</td><td>1.5</td><td>1</td><td>1</td><td>14</td></tr></table>
", + "image_path": "d0258752ac19a2babb5c1875ab5e4f237863a2afd1475b0fe64a4412151abf56.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 168, + 446, + 424, + 459 + ], + "lines": [ + { + "bbox": [ + 168, + 446, + 424, + 459 + ], + "spans": [ + { + "bbox": [ + 168, + 446, + 424, + 459 + ], + "type": "text", + "content": "Table 9: Ratings for the worst-rated story, generated by Alpaca." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "spans": [ + { + "bbox": [ + 284, + 781, + 312, + 791 + ], + "type": "text", + "content": "14528" + } + ] + } + ], + "index": 2 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 24 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2023/A Critical Analysis of Document Out-of-Distribution Detection/dc0b7121-5749-4d5e-b65f-34d4dd4df565_content_list.json b/2023/A Critical Analysis of Document Out-of-Distribution Detection/dc0b7121-5749-4d5e-b65f-34d4dd4df565_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..a750d04a36ec65b56e9ae4be6c5bd20546f1c6c2 --- /dev/null +++ b/2023/A Critical Analysis of Document Out-of-Distribution Detection/dc0b7121-5749-4d5e-b65f-34d4dd4df565_content_list.json @@ -0,0 +1,2919 @@ +[ + { + "type": "text", + "text": "A Critical Analysis of Document Out-of-Distribution Detection", + "text_level": 1, + "bbox": [ + 171, + 90, + 821, + 109 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Jiuxiang Gu $^{1*}$ Yifei Ming $^{2*†}$ Yi Zhou $^{3}$ Jason Kuen $^{1}$ \nVlad I. 
Morariu $^{1}$ Handong Zhao $^{1}$ Ruiyi Zhang $^{1}$ Nikolaos Barmpalios $^{1}$ \nAnqi Liu $^{3}$ Yixuan Li $^{2}$ Tong Sun $^{1}$ Ani Nenkova $^{1}$", + "bbox": [ + 168, + 124, + 831, + 174 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "\\(^{1}\\)Adobe Research \\(^{2}\\)University of Wisconsin-Madison \\(^{3}\\)Johns Hopkins University \\(^{1}\\{jigu, kuen, morariu, hazhao, barmpali, ruizhang, tsun, nenkova\\} @adobe.com \\(^{2}\\{alvinming, sharonli\\} @cs.wisc.edu\\) \\(^{3}yzhou188@jhu.edu\\) \\(^{3}aliu@cs.jhu.edu\\)", + "bbox": [ + 146, + 174, + 855, + 225 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 260, + 252, + 339, + 266 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Large-scale pre-training is widely used in recent document understanding tasks. During deployment, one may expect that models should trigger a conservative fallback policy when encountering out-of-distribution (OOD) samples, which highlights the importance of OOD detection. However, most existing OOD detection methods focus on single-modal inputs such as images or texts. While documents are multimodal in nature, it is underexplored if and how multi-modal information in documents can be exploited for OOD detection. In this work, we first provide a systematic and in-depth analysis on OOD detection for document understanding models. We study the effects of model modality, pre-training, and fine-tuning across various types of OOD inputs. In particular, we find that spatial information is critical for document OOD detection. To better exploit spatial information, we propose a spatial-aware adapter, which serves as a parameter-efficient add-on module to adapt transformer-based language models to the document domain. 
Extensive experiments show that adding the spatial-aware adapter significantly improves the OOD detection performance compared to directly using the language model and achieves superior performance compared to competitive baselines.", + "bbox": [ + 141, + 282, + 460, + 680 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 114, + 695, + 258, + 709 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "The recent success of large-scale pre-training has propelled the widespread deployment of deep learning models in the document domain, where model predictions are used to help humans make decisions in various applications such as tax form processing and medical reports analysis. However, models are typically pre-trained on data collected from the web but deployed in an environment with distributional shifts (Cui et al., 2021). For instance, the outbreak of COVID-19 has led to continually", + "bbox": [ + 112, + 721, + 489, + 883 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/eac539accce9516c780df033975daeca77327f8b5dffae47796be839d97f480a.jpg", + "image_caption": [ + "Figure 1: Illustration of OOD detection for document classification. The pre-training and fine-tuning pipelines are shown on the top left and bottom left, respectively. Right: During inference time, an OOD score can be derived based on logits $g(x)$ or feature embeddings $z := h(x)$ . A document input $x$ is identified as OOD if its OOD score is below some threshold $\\gamma$ ." + ], + "image_footnote": [], + "bbox": [ + 509, + 250, + 882, + 344 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "changing data distributions in machine-assisted medical document analysis systems (Velavan and Meyer, 2020). 
This motivates the need for reliable document understanding models against out-of-distribution (OOD) inputs.", + "bbox": [ + 507, + 482, + 882, + 562 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "The goal of OOD detection is to categorize indistribution (ID) samples into one of the known categories and detect inputs that do not belong to any known classes at test time (Bendale and Boult, 2016). A plethora of OOD detection methods has been proposed for single-modal (image or text) inputs (Ge et al., 2017; Nalisnick et al., 2019; Oza and Patel, 2019; Tack et al., 2020; Hsu et al., 2020; Arora et al., 2021; Zhou et al., 2021; Xiao et al., 2020; Xu et al., 2021a; Li et al., 2021b; Shen et al., 2021; Jin et al., 2022; Zhou et al., 2022; Ming et al., 2022b,c; Podolskiy et al., 2021; Ren et al., 2023). Recent works (Fort et al., 2021; Esmaeilpour et al., 2022; Ming et al., 2022a; Ming and Li, 2023; Bitterwolf et al., 2023) also demonstrate promising OOD detection performance based on large-scale models pre-trained on text-image pairs, as pre-training enables models to learn powerful and transferable feature representations (Radford et al., 2021). 
However, it remains largely unexplored if existing findings in the OOD detection literature for images or texts can be naturally extended to the document", + "bbox": [ + 507, + 565, + 884, + 917 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "* Equal contribution", + "bbox": [ + 141, + 891, + 272, + 903 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "† Work done during the internship at Adobe Research", + "bbox": [ + 141, + 903, + 470, + 917 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "4973", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 4973-4999 December 6-10, 2023 ©2023 Association for Computational Linguistics", + "bbox": [ + 216, + 945, + 779, + 972 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "domain.", + "bbox": [ + 112, + 85, + 178, + 98 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Multiple unique challenges exist for document OOD detection. Unlike natural images, texts, or image-text pairs, no captions can describe a document and images in documents rarely contain natural objects. Moreover, the spatial relationship of text blocks further differentiates multimodal learning in documents from multimodal learning in the vision-language domain (Lu et al., 2019; Li et al., 2020). In addition, while recent pre-training methods have demonstrated remarkable performance in downstream document understanding tasks (Xu et al., 2020, 2021b; Li et al., 2021a; Gu et al., 2022; Hong et al., 2022; Huang et al., 2022; Li et al., 2022; Wang et al., 2022a), existing pre-training datasets for documents are limited and lack diversity. This is in sharp contrast to common pretraining datasets for natural images. 
It remains underexplored whether existing OOD detection methods are reliable in the document domain and how pre-training impacts OOD reliability.", + "bbox": [ + 112, + 101, + 489, + 423 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this work, we first present a comprehensive study to better understand OOD detection in the document domain through the following questions: (1) What is the role of document pre-training? How do pre-training datasets and tasks affect OOD detection performance? (2) Are existing OOD detection methods developed for natural images and texts transferrable to documents? (3) How does modality (textual, visual, and especially spatial information) affect OOD performance? In particular, we find that spatial information is critical for improving OOD reliability. Moreover, we propose a new spatial-aware adapter, a small learned module that can be inserted within a pre-trained language model such as RoBERTa (Liu et al., 2019). Our module is computationally efficient and significantly improves both ID classification and OOD detection performance (Sec. 5.2). Our contributions are summarized as follows:", + "bbox": [ + 115, + 424, + 489, + 728 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- We provide an extensive and in-depth study to investigate the impacts of pre-training, fine-tuning, model-modality, and OOD scoring functions on a broad spectrum of document OOD detection tasks. Our codebase will be open-sourced to facilitate future research.", + "- We present unique insights on document OOD detection. 
For example, we observe that distance-based OOD scores are consistently advantageous over logit-based scores, which is underexplored" + ], + "bbox": [ + 112, + 744, + 489, + 917 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "in the recent OOD detection literature on vision-language pre-trained models.", + "bbox": [ + 522, + 84, + 884, + 116 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "- We further propose a spatial-aware adapter module for transformer-based language models, facilitating easy adaptation of pre-trained language models to the document domain. Extensive experiments confirm the effectiveness of our module across diverse types of OOD data.", + "bbox": [ + 509, + 131, + 885, + 227 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2 Preliminaries and Related Works", + "text_level": 1, + "bbox": [ + 509, + 241, + 831, + 256 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.1 Document Models and Pre-Training", + "text_level": 1, + "bbox": [ + 509, + 268, + 838, + 285 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Large-scale pre-trained models gradually gain popularity in the document domain due to their success in producing generic representations from large-scale unlabeled corpora in vision and natural language processing (NLP) tasks (Devlin et al., 2018; Lu et al., 2019; Su et al., 2019; Schiappa et al., 2022). As documents contain both visual and textual information distributed spatially in semantic regions, document-specific models and pre-training objectives are often necessary, which are distinct from vision or language domains.", + "bbox": [ + 507, + 291, + 884, + 467 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "We summarize common model structures for document pre-training in Fig. 2a. Specifically, LayoutLM (Xu et al., 2020) takes a sequence of Optical Character Recognition (OCR) (Smith, 2007) words and word bounding boxes as inputs. 
It extends BERT to learn contextualized word representations for document images through multitask learning. LayoutLMv2 (Xu et al., 2021b) improves on the prior work with new pre-training tasks to model the interaction among texts, layouts, and images. DocFormer (Appalaraju et al., 2021) adopts a CNN model to extract image grid features, fusing the spatial information as an inductive bias for the self-attention module. LayoutLMv3 (Huang et al., 2022) further enhances visual and spatial characteristics with masked image modeling and word-patch alignment tasks. Another line of work focuses on various granularities of documents, such as region-level text/image blocks. Examples of such models include SelfDoc (Li et al., 2021a), UDoc (Gu et al., 2021), and MGDoc (Wang et al., 2022b), which are pre-trained with a cross-modal encoder to capture the relationship between visual and textual features. These models incorporate spatial information by fusing position embeddings at the output layer of their encoders, instead of the input layer. Additionally, OCR-free models (Kim et al., 2022; Tang et al., 2023) tackle document understanding as a se", + "bbox": [ + 507, + 469, + 884, + 917 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "4974", + "bbox": [ + 480, + 928, + 521, + 940 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "quence generation problem, unifying multiple tasks through an image-to-sequence generation network.", + "bbox": [ + 112, + 84, + 489, + 116 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "While these pre-trained models demonstrate promising performance on downstream applications, their robustness to different types of OOD data, the influence of pre-training and fine-tuning, and the value of different modalities (e.g. 
spatial, textual, and visual) for document OOD detection remain largely unexplored.", + "bbox": [ + 112, + 117, + 489, + 230 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.2 Out-of-Distribution Detection", + "text_level": 1, + "bbox": [ + 112, + 242, + 394, + 256 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "OOD detection has been extensively studied for open-world multi-class classification with natural image and text inputs, where the goal is to derive an OOD score that separates OOD from ID samples. A plethora of methods are proposed for deep neural networks, where the OOD scoring function is typically derived based on logits (without softmax scaling) (Hendrycks et al., 2022), softmax outputs (Liang et al., 2018; Hsu et al., 2020; Huang and Li, 2021; Sun et al., 2021), gradients (Huang et al., 2021), and feature embeddings (Tack et al., 2020; Fort et al., 2021; Ming et al., 2023). Despite their impressive performance on natural images and texts, it is underexplored if the results are transferrable to the document domain. A recent work (Larson et al., 2022) studied OOD detection for documents but only explored a limited number of models and OOD detection methods. The impacts of pre-training, fine-tuning, and spatial information remain unknown. In this work, we aim to provide a comprehensive and finer-grained analysis to shed light on the key factors for OOD robustness in the document domain.", + "bbox": [ + 112, + 263, + 489, + 634 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Notations. Following prior works on OOD detection with large-scale pre-trained models (Ming et al., 2022a; Ming and Li, 2023), the task of OOD detection is defined with respect to the downstream dataset, instead of the pre-training data which is often hard to characterize. In document classification, we use $\\mathcal{X}^{\\mathrm{in}}$ and $\\mathcal{Y}^{\\mathrm{in}} = \\{1,\\dots ,K\\}$ to denote the input and label space, respectively. 
Let $\\mathcal{D}^{\\mathrm{in}} = \\{(x_i^{\\mathrm{in}},y_i^{\\mathrm{in}})\\}_{i = 1}^N$ be the ID dataset, where $x\\in \\mathcal{X}^{\\mathrm{in}}$ and $y^{\\mathrm{in}}\\in \\mathcal{Y}^{\\mathrm{in}}$ . Let $\\mathcal{D}^{\\mathrm{out}} = \\{(x_i^{\\mathrm{out}},y_i^{\\mathrm{out}})\\}_{i = 1}^M$ denote an OOD test set where $y^{\\mathrm{out}}\\in \\mathcal{Y}^{\\mathrm{out}}$ , and $\\mathcal{Y}^{\\mathrm{out}}\\cap \\mathcal{Y}^{\\mathrm{in}} = \\emptyset$ . We express the neural network model $f\\coloneqq g\\circ h$ as a composition of a feature extractor $h:\\mathcal{X}\\to \\mathbb{R}^{d}$ and a classifier $g:\\mathbb{R}^{d}\\to \\mathbb{R}^{K}$ which maps the feature embedding of an input to $K$ real-valued numbers known as logits. During inference time, given an input $\\pmb{x}$ , OOD detection", + "bbox": [ + 112, + 645, + 489, + 919 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "can be formulated as:", + "bbox": [ + 507, + 84, + 673, + 98 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\nG _ {\\gamma} (\\boldsymbol {x}; h, g) = \\left\\{ \\begin{array}{l l} \\mathrm {I D} & S (\\boldsymbol {x}; h, g) \\geq \\gamma \\\\ \\mathrm {O O D} & S (\\boldsymbol {x}; h, g) < \\gamma \\end{array} \\right.,\n$$\n", + "text_format": "latex", + "bbox": [ + 539, + 103, + 850, + 143 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $S(\\cdot)$ is a scoring function that measures OOD uncertainty. In practice, the threshold $q\\gamma$ is often chosen so that a high fraction of ID data (e.g., 95%) is above the threshold.", + "bbox": [ + 507, + 149, + 882, + 214 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "OOD detection scores. We focus on two major categories of computationally efficient OOD detection methods1: logit-based methods derive OOD scores from the logit layer of the model, while distance-based methods directly leverage feature embeddings, as shown in Fig. 1. 
We describe a few popular methods for each category as follows.", + "bbox": [ + 507, + 222, + 884, + 335 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "- Logit-based: Maximum Softmax Probability (MSP) score (Hendrycks and Gimpel, 2017) $S_{\\mathrm{MSP}} = \\max_{i\\in [K]}e^{f_i(\\boldsymbol{x})} / \\sum_{j = 1}^K e^{f_j(\\boldsymbol{x})}$ naturally arises as a classic baseline as models often output lower softmax probabilities for OOD data; Energy score (Liu et al., 2020): $S_{\\mathrm{Energy}} = \\log \\sum_{i\\in [K]}e^{f_i(\\boldsymbol{x})}$ utilizes the Helmholtz free energy of the data and theoretically aligns with the logarithm of the ID density; the simple MaxLogit score (Hendrycks et al., 2022): $S_{\\mathrm{Maxlogit}} = \\max_{i\\in [K]}f_i(\\boldsymbol{x})$ has demonstrated promising performance on large-scale natural image datasets. We select the above scores due to their simplicity and computational efficiency. In addition, recent studies demonstrate that such simple scores are particularly effective with large-scale pre-trained models in vision (Fort et al., 2021) and vision-language domains (Ming et al., 2022a; Bitterwolf et al., 2023). We complement previous studies and investigate their effectiveness for documents.", + "bbox": [ + 507, + 341, + 885, + 663 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "- Distance-based: Distance-based methods directly leverage feature embeddings $\\mathbf{z} = h(\\mathbf{x})$ based on the idea that OOD inputs are relatively far away from ID clusters in the feature space, compared to ID inputs. Distance-based methods can be characterized as parametric and non-parametric. Parametric methods such as Mahalanobis score (Lee et al., 2018; Sehwag et al., 2021) assume ID embeddings follow class-conditional Gaussian distributions and use the Mahalanobis distance as the distance metric. 
On the other hand, non-parametric methods such as KNN+ (Sun et al., 2022) use cosine similarity as the distance metric.", + "bbox": [ + 509, + 664, + 884, + 888 + ], + "page_idx": 2 + }, + { + "type": "page_footnote", + "text": "1We also investigate gradient-based methods such as Grad-Norm (Huang et al., 2021) in Appendix C.", + "bbox": [ + 507, + 892, + 882, + 917 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "4975", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/fa69a4a5a040426c3b4a5c6ea1a50dd5ffb253621aa2135ed5d8ea12ecf35d03.jpg", + "image_caption": [ + "(a) Illustration of common structures for document pretraining and classification." + ], + "image_footnote": [], + "bbox": [ + 119, + 80, + 490, + 199 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/41c1a7bbaa1d47b5a7729e95f3246eef71d8d4b5ef2b773028c6fe2c610cc6a5.jpg", + "image_caption": [ + "(b) A detailed comparison of per-category accuracy on the RVL-CDIP test set.", + "Figure 2: (Left) Illustration of models for document pre-training and classification, with our proposed spatial-aware models in green blocks. Modality information is also shown atop each architecture. (Right) Evaluating fine-tuning performance for document classification of pre-trained models. Models are grouped into several categories (from left to right): language-only, vision-only, and multi-modal. For comparison, the performance of corresponding models in other groups is shown in gray. The average accuracy for each model is indicated in the parenthesis." + ], + "image_footnote": [], + "bbox": [ + 502, + 82, + 870, + 200 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Evaluation metrics. 
To evaluate OOD detection performance, we adopt the following commonly used metrics: the Area Under the Receiver Operating Characteristic (AUROC), False Positive Rate at $95\\%$ Recall (FPR95), and the multi-class classification accuracy (ID Acc).", + "bbox": [ + 112, + 332, + 487, + 430 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3 Experimental Setup", + "text_level": 1, + "bbox": [ + 112, + 441, + 321, + 458 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Models. Fig. 2a summarizes common structures for document pre-training and classification models2. While documents typically come in the form of images (Harley et al., 2015), an OCR system can be used to extract words and their coordinates from the input image. Therefore, models can use single-modal or multi-modal information. We categorize these models according to the input modalities into the following groups: (1) models using only visual features, (2) models using solely textual features, (3) models incorporating both visual and textual features, and (4) models integrating additional spatial (especially layout) information. Further details can be found in Appendix A.", + "bbox": [ + 112, + 466, + 489, + 690 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "- Vision-only: Document classification can be viewed as a standard image classification problem. We consider ResNet-50 (He et al., 2016) and ViT (Fort et al., 2021) as exemplar document image classification models. We adopt two common pre-training settings: (1) only pre-trained on ImageNet (Deng et al., 2009) and (2) further pre-trained on IIT-CDIP (Lewis et al., 2006) with masked image modeling $(\\mathrm{MIM})^3$ . 
After pretraining, we append a classifier for fine-tuning.", + "bbox": [ + 112, + 700, + 489, + 860 + ], + "page_idx": 3 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Text-only: Alternatively, we can view document classification as text classification since documents often contain text blocks. To this end, we use RoBERTa (Liu et al., 2019) and Longformer (Beltagy et al., 2020) as the backbones. RoBERTa can handle up to 512 input tokens while Longformer can handle up to 4,096 input tokens. We pre-train the language models with masked language modeling (MLM) on IIT-CDIP extracted text corpus.", + "- Text+Layout: Layout information plays a crucial role in the document domain, as shown in Fig. 3. To investigate the effect of layout information, we adopt LayoutLM as the backbone. We will show that spatial-aware models demonstrate promising OOD detection performance. However, such specialized models can be computationally expensive. Therefore, we propose a new spatial-aware adapter, a small learned module that can be inserted within a pre-trained language model such as RoBERTa and transforms it into a spatial-aware model, which is computationally efficient and competitive for both ID classification and OOD detection (Sec. 5.2).", + "- Vision+Text+Layout: For comprehensiveness, we consider LayoutLMv3 and UDoc, which are large and computationally intensive. Both models are pre-trained on the full IIT-CDIP for fairness. These models utilize different input granularities and modalities, including textual, visual, and spatial information for document tasks." 
+ ], + "bbox": [ + 509, + 332, + 884, + 869 + ], + "page_idx": 3 + }, + { + "type": "page_footnote", + "text": "${}^{2}$ Apart from document classification, in the Appendix B, we also investigate OOD detection for two entity-level tasks: document entity recognition and document object detection.", + "bbox": [ + 112, + 866, + 487, + 904 + ], + "page_idx": 3 + }, + { + "type": "page_footnote", + "text": "Note that the document classification dataset we used in", + "bbox": [ + 134, + 904, + 485, + 917 + ], + "page_idx": 3 + }, + { + "type": "page_footnote", + "text": "this paper, RVL-CDIP (Harley et al., 2015), is a subset of IIT-CDIP. Hence, unless otherwise specified, the IIT-CDIP pre-training data used in this paper excludes RVL-CDIP.", + "bbox": [ + 507, + 881, + 882, + 917 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4976", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Constructing ID and OOD datasets. We construct ID datasets from RVL-CDIP (Harley et al., 2015), where 12 out of 16 classes are selected as ID classes. Dataset details are in Appendix A. We consider two OOD scenarios: in-domain and out-domain, based on the content (e.g., words, background) and layout characteristics.", + "bbox": [ + 112, + 84, + 492, + 198 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "- In-domain OOD: To determine the OOD categories, we analyzed the performance of recent document classification models on the RVL-CDIP test set. Fig. 2b shows the per-category test accuracy of various models. Naturally, for the classes the models perform poorly on, we may expect the models to detect such inputs as OOD instead of assigning a specific ID class with low confidence. We observe that the 4 categories (letter, form, scientific report, and presentation) result in the worst performance across most of the models with different modalities. 
We use these as OOD categories and construct the OOD datasets accordingly. The ID dataset is constructed from the remaining 12 categories, which we refer to as in-domain OOD datasets, as they are also sourced from RVL-CDIP.", + "bbox": [ + 114, + 218, + 490, + 493 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "- Out-domain OOD: In the open-world setting, test inputs can have significantly different color schemes and layouts compared to ID samples. To mimic such scenarios, we use two public datasets as out-domain OOD test sets: NJU-Fudan Paper-Poster Dataset (Qiang et al., 2019) and CORD (Park et al., 2019). NJU-Fudan Paper-Poster Dataset contains scientific posters in digital PDF format4. CORD is a receipt understanding dataset with significantly different inputs compared to RVL-CDIP. As shown in Fig. 3, receipt images can be challenging and require models to handle not only textual but also visual and spatial information.", + "bbox": [ + 114, + 511, + 492, + 737 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We further support our domain selection using OTDD (Alvarez-Melis and Fusi, 2020), a flexible geometric method for comparing probability distributions, which enables us to compare any two datasets regardless of their label sets. We observe a clear gap between in-domain and out-domain data, which aligns with our data selection. Further details can be found in Appendix A.1.", + "bbox": [ + 112, + 758, + 490, + 888 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4 Analyzing OOD Reliability for Documents", + "text_level": 1, + "bbox": [ + 507, + 83, + 808, + 115 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.1 OOD Detection Without Fine-Tuning", + "text_level": 1, + "bbox": [ + 507, + 124, + 848, + 141 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In this section, we begin by examining the influence of pre-training datasets on zero-shot OOD detection. 
For each model, we adopt the same pretraining objective while adjusting the amount of pre-training data. Specifically, we increase the data diversity by appending 10, 20, 40, and $100\\%$ of randomly sampled data from IIT-CDIP dataset (around 11M) and pre-train each model. After pre-training, we measure the OOD detection performance with $\\mathrm{KNN + }$ score based on feature embeddings.", + "bbox": [ + 507, + 145, + 885, + 306 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We observe that: (1) for out-domain OOD data (Fig. 4a, right), increasing the amount of pretraining data can significantly improve the zero-shot OOD detection performance (w.o. fine-tuning) for models across different modalities. Our hypothesis is that pre-training with diverse data is beneficial for coarse-grained OOD detection, such as inputs from different domains (e.g., color schemes). (2) For in-domain OOD inputs, even increasing the amount of pre-training data by over $40\\%$ provides negligible improvements (Fig. 4a, left). This suggests the necessity of fine-tuning for improving in-domain OOD detection performance (Fig. 6).", + "bbox": [ + 507, + 307, + 885, + 516 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We further explore a more restricted setting for zero-shot OOD detection where potential OOD categories are removed from the pre-training dataset IIT-CDIP. First, we use LayoutLM fine-tuned on RVL-CDIP to predict labels for all documents in IIT-CDIP. Fig. 4b summarizes the distribution of the predicted classes on IIT-CDIP. Next, we remove the \"OOD\" categories from IIT-CDIP and pretrain two models (RoBERTa and LayoutLM) with 10, 20, 40, and $100\\%$ of randomly sampled data from the filtered IIT-CDIP (dubbed III- $\\mathrm{CDIP^{-}}$ ), respectively. The zero-shot OOD performance for in-domain and out-domain OOD is shown in Fig. $4c^{5}$ . For RoBERTa, we observe similar trends as in Fig. 
4a, where increasing the amount of pretraining data improves zero-shot OOD detection performance for out-domain data. However, the zero-shot performance of LayoutLM benefits from a larger pre-training dataset. In particular, given the same amount of pre-training data, LayoutLM consistently outperforms RoBERTa for both in-domain and out-domain OOD detection, which suggests that spatial information can be essential", + "bbox": [ + 507, + 517, + 885, + 885 + ], + "page_idx": 4 + }, + { + "type": "page_footnote", + "text": "5Note that we do not show $0\\%$ in Fig. 4c since we pre-train LayoutLM from scratch.", + "bbox": [ + 507, + 892, + 882, + 917 + ], + "page_idx": 4 + }, + { + "type": "footer", + "text": "Extracted using https://github.com/pymupdf/PyMuPDF", + "bbox": [ + 134, + 903, + 482, + 917 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "4977", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/607b7ed811f4520c90c87ebfa687f7795cf55fce27dd9493771989b802367bb3.jpg", + "image_caption": [ + "Figure 3: (Top) Examples of ID inputs sampled from RVL-CDIP (top). (Bottom) In-domain OOD from RVL-CDIP, and out-domain OOD from Scientific Poster and Receipts." + ], + "image_footnote": [], + "bbox": [ + 117, + 80, + 884, + 247 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/3ff57b7eefe7bba1b923228264429ceba557d3580b56565eed51f383cfef3a6b.jpg", + "image_caption": [ + "(a) Pre-train on IIT-CDIP." + ], + "image_footnote": [], + "bbox": [ + 122, + 305, + 400, + 420 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/533b8df0e97e947ab30e1ad933d79182cfc6cf6d62aeaf752e4904d98a066b43.jpg", + "image_caption": [ + "Figure 4: The impact of pre-training data on zero-shot OOD detection performance. IIT-CDIP $^{-}$ denotes the filtered pre-training data after removing the \"OOD\" categories." 
+ ], + "image_footnote": [], + "bbox": [ + 403, + 304, + 591, + 423 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/ffe6c9e679e1b4e6dd6a4e6537ee55d3a938507aacc027050f195aeadfe410b5.jpg", + "image_caption": [ + "(b) Analysis of IIT-CDIP.", + "(c) Pre-train on IIT-CDIP-." + ], + "image_footnote": [], + "bbox": [ + 598, + 307, + 873, + 419 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "for boosting the OOD reliability in the document domain. Motivated by the above observations, we dive deeper and analyze spatial-aware models next.", + "bbox": [ + 112, + 508, + 487, + 558 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "While pre-trained models exhibit the capability to differentiate data from various domains as a result of being trained on a diverse range of data, we observe that achieving more precise separation for in-domain OOD inputs remains difficult. Given this observation, we further analyze the impacts of fine-tuning for OOD detection with fixed pre-training datasets in the next section. By combining pre-trained models with a simple classifier and fine-tuning on RVL-CDIP (ID), we find that fine-tuning is advantageous in enhancing the OOD detection performance for both types of OOD samples.", + "bbox": [ + 112, + 558, + 489, + 752 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.2 The Impact of Fine-Tuning on Document OOD Detection", + "text_level": 1, + "bbox": [ + 112, + 766, + 482, + 797 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Recent document models are often pre-trained on a large-scale dataset and adapted to the target task via fine-tuning. To better understand the role of fine-tuning, we explore the following questions: 1) How does fine-tuning impact OOD reliability for in-domain and out-domain OOD inputs? 
2) How does model modality impact the performance?", + "bbox": [ + 112, + 806, + 487, + 919 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We consider a wide range of models pretrained on pure-text/image data (e.g., ImageNet and Wikipedia) described in Appendix A.3. During fine-tuning, we combine pre-trained models with a simple classifier and fine-tune on RVL-CDIP (ID). For models before and after fine-tuning, we extract the final feature embeddings and use a distance-based method KNN+ (Sun et al., 2022) for OOD detection. The results are shown in Fig. 6. We observe the following trends. First, fine-tuning largely improves OOD detection performance for both in-domain and out-domain OOD data. The same trend holds broadly across models with different modalities. Second, the improvement of fine-tuning is less significant for out-domain OOD data. For example, on Receipt (out-domain OOD), the AUROC for pre-trained ViT model is 97.13, whereas fine-tuning only improves by $0.79\\%$ . This suggests that pre-trained models do have the potential to separate data from different domains due to the diversity of data used for pre-training, while it remains hard for pre-trained models to perform finer-grained separation for in-domain OOD inputs. Therefore, fine-tuning is beneficial for improving OOD detection performance for both types of OOD", + "bbox": [ + 507, + 508, + 884, + 912 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "4978", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/0314be8bd1bca90ef5bdab4e487c0f9cd588fa77d31947c7b6755267540bb088.jpg", + "image_caption": [ + "Figure 5: Comparison between representative feature-based scores and logit-based scores for spatial-aware and non-spatial-aware models. Spatial-aware models are colored in blue." 
+ ], + "image_footnote": [], + "bbox": [ + 115, + 80, + 495, + 181 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/da5abc0540cd0333359b641c58b0abc25ce6593a20535e58c75ebeef705c6902.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 502, + 80, + 880, + 181 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/3219f54bf54a194e6e31bb1117eb506c76bf9ac3c3eaf77178f10401e1c64d55.jpg", + "image_caption": [ + "Figure 6: OOD detection performance for pre-trained models w. and w.o. fine-tuning. We use a distance-based method KNN+ as the OOD scoring function. Fine-tuning significantly improves performance for both in and out-domain OOD data." + ], + "image_footnote": [], + "bbox": [ + 117, + 233, + 374, + 355 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/4e8f8a80434205a841194eab1c0f8c2ebcab57b5807dc1125b3cc39484f32d04.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 381, + 233, + 626, + 355 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/ede391d9002a39d7640601e6dd684305bd9e813cab27211ac6c309fc5244bd8d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 635, + 233, + 880, + 355 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "samples. To further validate our conclusion, we consider two additional in-domain OOD settings for our analysis: (1) selecting the classes the model performs well on, as in-domain OOD categories; (2) randomly selecting classes as OOD categories (Appendix A.2). We find that fine-tuning improves OOD detection for both settings, further verifying our observations.", + "bbox": [ + 112, + 436, + 487, + 563 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Next, we take a closer look at the impact of model modality on out-domain OOD detection. As shown in Fig. 6 (mid and right), both vision and text-based models demonstrate strong reliability against scientific posters (OOD). 
However, vision-based models display stronger performance than text-based models for Receipts (OOD). This can be explained by the fact that ViT was first pre-trained on ImageNet while scientific posters and receipts contain diverse visual information such as colors and edges for vision models to utilize (see Fig. 3). On the other hand, although fine-tuning text-based models largely improves the detection performance compared to pre-trained counterparts, utilizing only textual information can be inherently limited for out-domain OOD detection.", + "bbox": [ + 112, + 567, + 489, + 824 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5 The Importance of Spatial-Awareness", + "text_level": 1, + "bbox": [ + 112, + 841, + 473, + 858 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "In previous sections, we mainly focus on mainstream text-based and vision-based models for in- and out-domain OOD detection. Next, we consider", + "bbox": [ + 112, + 871, + 489, + 917 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "models tailored to document processing, which we refer to as spatial-aware models, such as LayoutLMv3 and UDoc. Given fine-tuned models, we compare the performance of logit-based and distance-based OOD scores.", + "bbox": [ + 507, + 436, + 884, + 514 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/70240005a0abddaad70c02836e857ccc66b79ca6b40b31ee70d80ff8cd54ca25.jpg", + "image_caption": [ + "Figure 7: Illustration of our spatial-aware adapter for language models. We present 2 adapter designs (marked in green box): (1) insert the adapter into the word embedding layer during pre-training and fine-tuning; (2) insert the adapter into the output layer for fine-tuning only. For the first design, we freeze the word embedding layer and learn the adapter and transformer layers." 
+ ], + "image_footnote": [], + "bbox": [ + 512, + 526, + 878, + 659 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.1 Analysis of Spatial-Aware Models", + "text_level": 1, + "bbox": [ + 507, + 801, + 821, + 816 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We summarize key comparisons in Fig. 5, where we use MSP and Energy as exemplar logit-based scores and $\\mathrm{KNN + }$ as the distance-based score. Full results are in Appendix C. We can see that the simple KNN-based score (KNN+) consistently outperforms logit-based scores for both in-domain and", + "bbox": [ + 507, + 822, + 882, + 917 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "4979", + "bbox": [ + 480, + 927, + 519, + 940 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "out-domain OOD data across different models with different modalities. This is in contrast with recent works that investigate large-scale pre-trained models in the vision-language domain, where logit-based scores demonstrate strong OOD detection performance (Fort et al., 2021). As documents are distinct from natural image-text pairs, observations in the vision-language domain do not seamlessly translate to the document domain. Moreover, spatial-aware models demonstrate stronger OOD detection performance for both in and out-domain OOD. For example, with the best scoring function $(\\mathrm{KNN}+)$ , LayoutLMv3 improves the average AUROC by $7.09\\%$ for out-domain OOD and $7.54\\%$ for in-domain OOD data compared to RoBERTa. This further highlights the value of spatial information for improving OOD robustness for documents.", + "bbox": [ + 110, + 84, + 492, + 357 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Despite the impressive improvements brought by spatial-aware models, acquiring a large-scale pretraining dataset that includes spatial information remains challenging. 
In contrast, there is a growing abundance of pre-trained language models that are based on textual data. This motivates us to explore the possibility of leveraging these pre-trained language models by training an adapter on a small dataset containing document-specific information. By adopting this approach, we can effectively utilize existing models while minimizing the time and cost required for training.", + "bbox": [ + 110, + 359, + 489, + 551 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5.2 Towards Effective Spatial-Aware Adapter", + "text_level": 1, + "bbox": [ + 112, + 564, + 487, + 580 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "During our investigation into the effects of model modality, pre-training, and fine-tuning on various types of OOD inputs, we find that spatial/layout information plays a critical role in the document domain. However, existing pre-training models such as LayoutLM series, SelfDoc, and UDoc do not fully leverage the benefits of well-pre-trained language models. This raises the question of whether a large-scale language model, such as RoBERTa, can be adapted to detect OOD documents effectively. In this section, we demonstrate that incorporating an adapter module that accounts for spatial information with transformer-based pre-trained models can achieve strong performance with minimal changes to the code. To the best of our knowledge, this is the first study to apply the adapter idea to documents.", + "bbox": [ + 110, + 586, + 489, + 843 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Spatial-aware adapter. Given a pre-trained language model such as RoBERTa, we propose an adapter that utilizes spatial information. 
We consider two potential designs: 1) the adapter is ap-", + "bbox": [ + 110, + 854, + 492, + 921 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/a56533c927e6b36ed598d7f41760e9bed8ccba8b7f7add36b364da44c80e960a.jpg", + "image_caption": [ + "Figure 8: Comparison of OOD detection performance of Spatial-RoBERTa and RoBERTa. All models are initialized with public pre-trained checkpoints trained on purely textual data and further pre-trained on IIT-CDIP. The only difference is that Spatial-RoBERTa has an additional spatial-aware adapter and takes word bounding boxes as additional inputs." + ], + "image_footnote": [], + "bbox": [ + 512, + 82, + 880, + 192 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "pended to the word embedding layer, denoted as Spatial-RoBERTa (pre), which requires both pre-training and fine-tuning. This architecture is illustrated in the top row of Fig. 7. 2) The adapter is appended to the final layer of the text encoder, denoted as Spatial-RoBERTa (post), which only requires fine-tuning as the model can utilize the pre-trained textual encoder, as shown in the bottom row of Fig. 7.", + "bbox": [ + 507, + 334, + 884, + 478 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "For Spatial-RoBERTa (pre), we freeze the word embedding layer during pre-training for several considerations: 1) word embeddings learned from a large-scale corpus already cover most of the words in documents; 2) pre-training on documents without strong language dependency may not help improve word embeddings. For example, in semi-structured documents (e.g., forms, receipts), language dependencies are not as strong as in text-rich documents (e.g., letters, resumes), which may degenerate the learned word representations. In practice, each word has a normalized bounding box $(x_0, y_0, x_1, y_1)$, where $(x_0, y_0) / (x_1, y_1)$ corresponds to the upper-left / lower-right corner of the bounding box. 
To encode positional information, we employ four position embedding layers, where each layer encodes one coordinate (e.g., $x_0$) and produces a corresponding position embedding. The special tokens ([CLS], [SEP], and [PAD]) are assigned an empty bounding box $(0, 0, 0, 0)$. As depicted in the top row of Fig. 7, the spatial-aware word embeddings are formed by adding position embeddings to their corresponding word embeddings.", + "bbox": [ + 507, + 481, + 884, + 866 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "For Spatial-RoBERTa (post), position embeddings are added through late fusion in the final hidden states during fine-tuning without affecting the", + "bbox": [ + 507, + 871, + 885, + 919 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "4980", + "bbox": [ + 480, + 928, + 521, + 940 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/110e122fbc8ca49348f8f64f04e6d3a599adf26422b92890e02a3c70f70deefd.jpg", + "image_caption": [ + "Figure 9: Correlation between ID accuracy and OOD detection performance. For most models, ID accuracy is positively correlated with OOD detection performance. Language models with spatial-aware adapters (highlighted in blue) achieve significantly higher ID accuracy and stronger OOD robustness (in AUROC) compared to language models without adapters. Here, $(+)$ represents further pre-training on the IIT-CDIP dataset." 
+ ], + "image_footnote": [], + "bbox": [ + 117, + 80, + 379, + 211 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/7be12378d725b6bc34a985a57f9fd1ef7fb47644aece228bf26244f5557a6be5.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 371, + 80, + 630, + 211 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/b9e0ddf8d677ce9e17b38f1e9c17810e93ab472b943962c10f3fbfa493e46079.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 630, + 80, + 880, + 211 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "pre-trained encoder. Our experiments demonstrate that introducing spatial-aware adapters during pretraining yields better results than only adding position embeddings during fine-tuning. For additional details, please refer to Appendix C. In the following, we focus on analyzing Spatial-RoBERTa (pre) and comparing both ID and OOD performance with that of the pure-text pre-trained RoBERTa.", + "bbox": [ + 112, + 312, + 489, + 441 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Spatial-RoBERTa significantly outperforms RoBERTa. To verify the effectiveness of Spatial-RoBERTa, we compare the OOD detection performance of pre-trained and fine-tuned models. The results are shown in Fig. 8, where OOD performance is based on $\\mathrm{KNN + (K = 10)}$ . Full results can be seen in Table 6. Spatial-RoBERTa significantly improves the OOD detection performance, especially after fine-tuning. For example, compared to RoBERTa (base), Spatial-RoBERTa (base) improves AUROC significantly by $4.24\\%$ averaged over four in-domain OOD datasets. This further confirms the importance of spatial information for OOD detection in the document domain.", + "bbox": [ + 112, + 450, + 489, + 674 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Spatial-RoBERTa is competitive for both ID classification and OOD detection. 
Beyond OOD detection performance, we also examine the multi-class ID classification accuracy and plot the two metrics for all models with different modalities in Fig. 9. We can clearly observe a positive correlation between ID accuracy and OOD detection performance (measured by AUROC) for both in-domain and out-domain OOD data. Moreover, spatial-aware models display superior ID accuracy and OOD robustness compared to text-only and", + "bbox": [ + 112, + 683, + 489, + 860 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "vision-only models. Overall, Spatial-RoBERTa greatly improves upon RoBERTa and matches the performance of models with more complex and specialized architectures such as LayoutLM. Specifically, Spatial-RoBERTaLarge achieves 97.37 ID accuracy, which is even higher than LayoutLM (97.28) and UDoc (97.36).", + "bbox": [ + 507, + 311, + 884, + 423 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "To summarize, our spatial-aware adapter effectively adapts pre-trained transformer-based text models to the document domain, improving both ID and OOD performance. In addition, by freezing the original word embeddings during pre-training, the models (Spatial-RoBERTaBase and Spatial-RoBERTaLarge) are parameter-efficient and thus reduce the training cost.", + "bbox": [ + 507, + 425, + 885, + 552 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "6 Conclusions", + "text_level": 1, + "bbox": [ + 507, + 565, + 650, + 579 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "In this work, we provide a comprehensive and in-depth study on the impacts of pre-training, finetuning, model-modality, and OOD scores on a broad variety of document OOD detection tasks. We present novel insights on document OOD detection, which are under-explored or in contrast with OOD detection works based on vision-language models. In particular, we highlight that spatial information is critical for OOD detection in documents. 
We further propose a spatial-aware adapter as an add-on module to transformer-based models. Our module adapts pre-trained language models to the document domain. Extensive experiments on a broad range of datasets verify the effectiveness of our design. We hope our work will inspire future research toward improving OOD robustness for reliable document understanding.", + "bbox": [ + 505, + 590, + 884, + 864 + ], + "page_idx": 8 + }, + { + "type": "page_footnote", + "text": "Spatial-RoBERTaBase (pre) incorporates position information during both pre-training and fine-tuning, while Spatial-RoBERTaBase (post) only inserts the adapter into the output layer for fine-tuning.", + "bbox": [ + 112, + 869, + 489, + 917 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "4981", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "7 Limitations", + "text_level": 1, + "bbox": [ + 114, + 84, + 250, + 98 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "In this work, our main focus is on OOD detection for document understanding, with a specific emphasis on the context of document classification. As OOD detection based on document pre-trained models remains largely underexplored, we believe establishing an in-depth and extensive study of OOD detection for document classification would be a valuable stepping stone towards more complex tasks. Apart from document classification, in the Appendix B, we also investigate OOD detection for two entity-level tasks: document entity recognition and document object detection. We leave a more comprehensive treatment for future works.", + "bbox": [ + 112, + 110, + 489, + 319 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 114, + 346, + 213, + 361 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "David Alvarez-Melis and Nicolo Fusi. 2020. 
Geometric dataset distances via optimal transport. In NeurIPS.", + "Srikar Appalaraju, Bhavan Jasani, Bhargava Urala Kota, Yusheng Xie, and R Manmatha. 2021. Docformer: End-to-end transformer for document understanding. In ICCV.", + "Udit Arora, William Huang, and He He. 2021. Types of out-of-distribution texts and how to detect them. In EMNLP.", + "Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.", + "Abhijit Bendale and Terrance E Boult. 2016. Towards open set deep networks. In CVPR.", + "Julian Bitterwolf, Maximilian Mueller, and Matthias Hein. 2023. In or out? fixing imagenet out-of-distribution detection evaluation. In ICML.", + "Lei Cui, Yiheng Xu, Tengchao Lv, and Furu Wei. 2021. Document ai: Benchmarks, models and applications. arXiv preprint arXiv:2111.08609.", + "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In CVPR.", + "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL.", + "Xuefeng Du, Zhaoning Wang, Mu Cai, and Yixuan Li. 2022. Vos: Learning what you don't know by virtual outlier synthesis. In ICLR.", + "Sepideh Esmaeilpour, Bing Liu, Eric Robertson, and Lei Shu. 2022. Zero-shot open set detection by extending clip. In AAAI." + ], + "bbox": [ + 115, + 369, + 487, + 917 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Stanislav Fort, Jie Ren, and Balaji Lakshminarayanan. 2021. Exploring the limits of out-of-distribution detection. In NeurIPS.", + "ZongYuan Ge, Sergey Demyanov, Zetao Chen, and Rahul Garnavi. 2017. Generative openmax for multi-class open set classification. arXiv preprint arXiv:1707.07418.", + "Jiuxiang Gu, Jason Kuen, Vlad I Morariu, Handong Zhao, Rajiv Jain, Nikolaos Barmpalios, Ani Nenkova, and Tong Sun. 2021. 
Unified pretraining framework for document understanding. In NeurIPS.", + "Zhangxuan Gu, Changhua Meng, Ke Wang, Jun Lan, Weiqiang Wang, Ming Gu, and Liqing Zhang. 2022. Xylayoutlm: Towards layout-aware multimodal networks for visually-rich document understanding. In CVPR.", + "Adam W Harley, Alex Ufkes, and Konstantinos G Derpanis. 2015. Evaluation of deep convolutional nets for document image classification and retrieval. In ICDAR.", + "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR, pages 770-778.", + "Dan Hendrycks, Steven Basart, Mantas Mazeika, Mohammadreza Mostajabi, Jacob Steinhardt, and Dawn Song. 2022. Scaling out-of-distribution detection for real-world settings. In ICML.", + "Dan Hendrycks and Kevin Gimpel. 2017. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In ICLR.", + "Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, and Sungrae Park. 2022. Bros: A pre-trained language model focusing on text and layout for better key information extraction from documents. In AAAI.", + "Yen-Chang Hsu, Yilin Shen, Hongxia Jin, and Zsolt Kira. 2020. Generalized odin: Detecting out-of-distribution image without learning from out-of-distribution data. In CVPR.", + "Rui Huang, Andrew Geng, and Yixuan Li. 2021. On the importance of gradients for detecting distributional shifts in the wild. In NeurIPS.", + "Rui Huang and Yixuan Li. 2021. Mos: Towards scaling out-of-distribution detection for large semantic space. In CVPR.", + "Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, and Furu Wei. 2022. Layoutlmv3: Pre-training for document ai with unified text and image masking. In ACMMM.", + "Guillaume Jaume, Hazim Kemal Ekenel, and Jean-Philippe Thiran. 2019. Funsd: A dataset for form understanding in noisy scanned documents. In ICDAR Workshop." 
+ ], + "bbox": [ + 510, + 85, + 882, + 917 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "4982", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Di Jin, Shuyang Gao, Seokhwan Kim, Yang Liu, and Dilek Hakkani-Tur. 2022. Towards textual out-of-domain detection without in-domain labels. TASLP.", + "Geewook Kim, Teakgyu Hong, Moonbin Yim, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, and Seunghyun Park. 2022. Donut: Document understanding transformer without OCR.", + "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In ICLR.", + "Stefan Larson, Gordon Lim, Yutong Ai, David Kuang, and Kevin Leach. 2022. Evaluating out-of-distribution performance on document image classifiers. In NeurIPS.", + "Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. 2018. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In NeurIPS.", + "D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard. 2006. Building a test collection for complex document information processing. In SIGIR.", + "Gen Li, Nan Duan, Yuejian Fang, Daxin Jiang, and Ming Zhou. 2020. Unicoder-vl: A universal encoder for vision and language by cross-modal pre-training. In AAAI.", + "Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, and Furu Wei. 2022. Dit: Self-supervised pretraining for document image transformer. In ACM MM.", + "Peizhao Li, Jiuxiang Gu, Jason Kuen, Vlad Morariu, Handong Zhao, Rajiv Jain, Varun Manjunatha, and Hongfu Liu. 2021a. Selfdoc: Self-supervised document representation learning. In CVPR.", + "Xiaoya Li, Jiwei Li, Xiaofei Sun, Chun Fan, Tianwei Zhang, Fei Wu, Yuxian Meng, and Jun Zhang. 2021b. kfolden: k-fold ensemble for out-of-distribution detection. In EMNLP.", + "Shiyu Liang, Yixuan Li, and Rayadurgam Srikant. 2018. 
Enhancing the reliability of out-of-distribution image detection in neural networks. In ICLR.", + "Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li. 2020. Energy-based out-of-distribution detection. In NeurIPS.", + "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.", + "Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In NeurIPS." + ], + "bbox": [ + 115, + 85, + 487, + 917 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Yifei Ming, Ziyang Cai, Jiuxiang Gu, Yiyou Sun, Wei Li, and Yixuan Li. 2022a. Delving into out-of-distribution detection with vision-language representations. In NeurIPS.", + "Yifei Ming, Ying Fan, and Yixuan Li. 2022b. Poem: Out-of-distribution detection with posterior sampling. In ICML.", + "Yifei Ming and Yixuan Li. 2023. How does fine-tuning impact out-of-distribution detection for vision-language models? IJCV.", + "Yifei Ming, Yiyou Sun, Ousmane Dia, and Yixuan Li. 2023. How to exploit hyperspherical embeddings for out-of-distribution detection? In ICLR.", + "Yifei Ming, Hang Yin, and Yixuan Li. 2022c. On the impact of spurious correlation for out-of-distribution detection. In AAAI.", + "Ajoy Mondal, Peter Lipps, and CV Jawahar. 2020. Iiit-ar-13k: a new dataset for graphical object detection in documents. In International Workshop on Document Analysis Systems.", + "Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. 2019. Do deep generative models know what they don't know? In ICLR.", + "Poojan Oza and Vishal M Patel. 2019. C2ae: Class conditioned auto-encoder for open-set recognition. 
In CVPR.", + "Seunghyun Park, Seung Shin, Bado Lee, Junyeop Lee, Jaeheung Surh, Minjoon Seo, and Hwalsuk Lee. 2019. Cord: A consolidated receipt dataset for post-ocr parsing. In NeurIPS Workshop.", + "Alexander Podolskiy, Dmitry Lipin, Andrey Bout, Ekaterina Artemova, and Irina Piontkovskaya. 2021. Revisiting mahalanobis distance for transformer-based out-of-domain detection. In AAAI.", + "Yu-Ting Qiang, Yan-Wei Fu, Xiao Yu, Yan-Wen Guo, Zhi-Hua Zhou, and Leonid Sigal. 2019. Learning to generate posters of scientific papers by probabilistic graphical models. JCST.", + "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In ICML.", + "Jie Ren, Jiaming Luo, Yao Zhao, Kundan Krishna, Mohammad Saleh, Balaji Lakshminarayanan, and Peter J Liu. 2023. Out-of-distribution detection and selective generation for conditional language models. In ICLR.", + "Madeline C Schiappa, Yogesh S Rawat, Shruti Vyas, Vibhav Vineet, and Hamid Palangi. 2022. Multimodal robustness analysis against language and visual perturbations. In NeurIPS." + ], + "bbox": [ + 510, + 85, + 880, + 917 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "4983", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Vikash Sehwag, Mung Chiang, and Prateek Mittal. 2021. Ssd: A unified framework for self-supervised outlier detection. In ICLR.", + "Yilin Shen, Yen-Chang Hsu, Avik Ray, and Hongxia Jin. 2021. Enhancing the generalization for intent classification and out-of-domain detection in SLU. In ACL-IJCNLP.", + "Ray Smith. 2007. An overview of the tesseract OCR engine. In ICDAR.", + "Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2019. Vl-bert: Pre-training of generic visual-linguistic representations. 
In ICLR.", + "Yiyou Sun, Chuan Guo, and Yixuan Li. 2021. React: Out-of-distribution detection with rectified activations. In NeurIPS.", + "Yiyou Sun, Yifei Ming, Xiaojin Zhu, and Yixuan Li. 2022. Out-of-distribution detection with deep nearest neighbors. In ICML.", + "Jihoon Tack, Sangwoo Mo, Jongheon Jeong, and Jinwoo Shin. 2020. Csi: Novelty detection via contrastive learning on distributionally shifted instances. In NeurIPS.", + "Zineng Tang, Ziyi Yang, Guoxin Wang, Yuwei Fang, Yang Liu, Chenguang Zhu, Michael Zeng, Cha Zhang, and Mohit Bansal. 2023. Unifying vision, text, and layout for universal document processing. In CVPR.", + "Thirumalaisamy P Velavan and Christian G Meyer. 2020. The Covid-19 epidemic. Tropical medicine & international health, 25(3):278.", + "Wenjin Wang, Zhengjie Huang, Bin Luo, Qianglong Chen, Qiming Peng, Yinxu Pan, Weichong Yin, Shikun Feng, Yu Sun, Dianhai Yu, et al. 2022a. mmlayout: Multi-grained multimodal transformer for document understanding. In ACMMM.", + "Zilong Wang, Jiaxiang Gu, Chris Tensmeyer, Nikolaos Barmpalios, Ani Nenkova, Tong Sun, Jingbo Shang, and Vlad I Morariu. 2022b. Mgdoc: Pre-training with multi-granular hierarchy for document image understanding. In EMNLP.", + "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.", + "Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. 2019. Detectron2. https://github.com/facebookresearch/detectron2.", + "Zhisheng Xiao, Qing Yan, and Yali Amit. 2020. Likelihood regret: An out-of-distribution detection score for variational auto-encoder. In NeurIPS." 
+ ], + "bbox": [ + 115, + 85, + 485, + 917 + ], + "page_idx": 11 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Keyang Xu, Tongzheng Ren, Shikun Zhang, Yihao Feng, and Caiming Xiong. 2021a. Unsupervised out-of-domain detection via pre-trained transformers. In ACL.", + "Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, et al. 2021b. Layoutlmv2: Multi-modal pre-training for visually-rich document understanding. In ACL.", + "Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. 2020. Layoutlm: Pre-training of text and layout for document image understanding. In SIGKDD.", + "Xu Zhong, Jianbin Tang, and Antonio Jimeno Yepes. 2019. Publaynet: largest dataset ever for document layout analysis. In ICDAR.", + "Wenxuan Zhou, Fangyu Liu, and Muhao Chen. 2021. Contrastive out-of-distribution detection for pretrained transformers. In EMNLP.", + "Yunhua Zhou, Peiju Liu, and Xipeng Qiu. 2022. KNN-contrastive learning for out-of-domain intent classification. In ACL." + ], + "bbox": [ + 510, + 85, + 880, + 424 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "4984", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "A Dataset and Model Details", + "text_level": 1, + "bbox": [ + 114, + 84, + 379, + 98 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "A.1 Datasets", + "text_level": 1, + "bbox": [ + 114, + 112, + 231, + 124 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "The full RVL-CDIP dataset consists of 320K/40K/40K training/validation/testing images under 16 categories. We select 12 of them as the ID (in-distribution) data. 
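The label-space split described above (16 RVL-CDIP categories, 12 kept as ID) can be sketched in a few lines of Python. The 16 class names below are the standard RVL-CDIP labels; which four categories are held out as in-domain OOD is a hypothetical choice for illustration, not necessarily the paper's split.

```python
# Sketch of carving an ID/OOD split out of RVL-CDIP's 16 document categories.
RVL_CDIP_CLASSES = [
    "letter", "form", "email", "handwritten", "advertisement",
    "scientific report", "scientific publication", "specification",
    "file folder", "news article", "budget", "invoice",
    "presentation", "questionnaire", "resume", "memo",
]

def make_id_ood_split(all_classes, ood_classes):
    """Partition the label space and build a contiguous ID label remapping."""
    id_classes = [c for c in all_classes if c not in ood_classes]
    id_label = {c: i for i, c in enumerate(id_classes)}  # 0..len(id_classes)-1
    return id_classes, id_label

# Hypothetical choice of 4 held-out in-domain OOD categories (illustration only).
ood = {"file folder", "presentation", "questionnaire", "resume"}
id_classes, id_label = make_id_ood_split(RVL_CDIP_CLASSES, ood)
```

Training then uses only images whose class is in `id_label`; the held-out categories are seen solely at OOD evaluation time.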
We employ the Google OCR engine to extract the text and layout information, which provides tokens, text blocks and the corresponding bounding boxes.", + "bbox": [ + 112, + 133, + 487, + 247 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "A.2 Quantifying OOD Dataset Construction", + "text_level": 1, + "bbox": [ + 112, + 260, + 477, + 275 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "The distance between datasets can be measured via Optimal Transport Dataset Distance (OTDD) $^{8}$ . We visualize the OTDD distance between ID and the OOD (both in-domain and out-domain) data in Fig. 10a, where we highlight the in-domain OOD data in blue and the out-domain OOD data in green. Specifically, we randomly sample 1000 images from each dataset and calculate the average distance between pairs of datasets. We can see a significant gap between the OTDD of in-domain OOD data and out-domain OOD data. To make the analysis more thorough, we consider two additional in-domain OOD settings: (1) select the classes on which the model performs well as OOD data; (2) randomly select classes as OOD data. The results are shown in Fig. 10b and Fig. 10c. We can see that the distance between ID and in-domain OOD is similar to the original scheme (Fig. 10a). This suggests that most in-domain OOD categories are not far from ID data.", + "bbox": [ + 112, + 282, + 489, + 602 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "While this paper represents an initial endeavor, we hope that our work will serve as a stepping stone towards constructing more comprehensive and diverse OOD benchmarks in the document domain, akin to those available in the NLP and natural image domains.", + "bbox": [ + 112, + 606, + 489, + 702 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "A.3 Models and Training Details", + "text_level": 1, + "bbox": [ + 114, + 715, + 386, + 732 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "All models reported in Fig. 
2b, except UDoc, are initialized with pre-trained weights from Huggingface and fine-tuned on the full RVL-CDIP training set. During fine-tuning, we train these models on RVL-CDIP with the cross-entropy loss. The models were optimized with Adam optimizer (Kingma and Ba, 2014) for 30 epochs with a batch size of 50 and a learning rate of $2 \\times 10^{-5}$ on 8 A100 GPUs.", + "bbox": [ + 112, + 738, + 489, + 866 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "The following are the hyperparameters of the models used in our paper:", + "bbox": [ + 507, + 84, + 884, + 116 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Text-only:", + "text_level": 1, + "bbox": [ + 509, + 129, + 594, + 143 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- BERT and RoBERTa: We adopt RoBERTaBase (12 layers) and BERTBase (12 layers) as backbones and set the maximum sequence length to 512. For RoBERTa, the classifier consists of two linear layers followed by a tanh activation function.", + "- LongformerBase: We also employ LongformerBase (12 layers) as the backbone and set the maximum sequence length to 4,096." + ], + "bbox": [ + 531, + 159, + 884, + 332 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Vision-only:", + "text_level": 1, + "bbox": [ + 509, + 350, + 610, + 365 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- ResNet50: We adopt ResNet50 pre-trained on ImageNet-1k as the backbone. We fine-tune the model at a resolution of $224 \\times 224$ .", + "- ViT: We consider ViTBase (vit-base-patch16-224, pre-trained on ImageNet-21k) as the backbone and fine-tune at a resolution of $224 \\times 224$ .", + "- SwinB: We also use the Swin Transformer (swin-base-patch4-window7-224-in22k, pretrained on ImageNet-21k) as the backbone and fine-tune the model at a resolution of $224 \\times 224$ ." 
+ ], + "bbox": [ + 531, + 380, + 882, + 601 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Text+Layout:", + "text_level": 1, + "bbox": [ + 509, + 618, + 620, + 633 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- LayoutLMv1: This model employs the LayoutLM (layoutlm-base-uncased, 12 layers, pre-trained on IIT-CDIP) as the backbone. We set the maximum sequence length to 512.", + "- Spatial-RoBERTaBase (Pre): This model attaches our spatial-aware adapter to the pretrained RoBERTaBase model. The adapter is applied to the word embedding layer. We freeze the pre-trained word embeddings and optimize the spatial-aware adapter and transformers.", + "- Spatial-RoBERTaBase (Post): Instead of inserting the spatial-aware adapter in the input layer, this model integrates the spatial-aware adapter at the output layer of the transformer." + ], + "bbox": [ + 531, + 649, + 884, + 917 + ], + "page_idx": 12 + }, + { + "type": "page_footnote", + "text": "7https://cloud.google.com/vision/docs/ocr", + "bbox": [ + 134, + 877, + 447, + 891 + ], + "page_idx": 12 + }, + { + "type": "page_footnote", + "text": "8https://github.com/microsoft/otdd", + "bbox": [ + 136, + 891, + 394, + 904 + ], + "page_idx": 12 + }, + { + "type": "page_footnote", + "text": "9https://huggingface.co/models", + "bbox": [ + 136, + 904, + 364, + 917 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "4985", + "bbox": [ + 480, + 928, + 519, + 939 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "1", + "bbox": [ + 494, + 940, + 502, + 951 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/90af6de6831ddb1f6eb120fbda29199b32b303e1b9a862bfc4bdbf707ef2c2c9.jpg", + "image_caption": [ + "(a) OOD (Worst performance)." 
+ ], + "image_footnote": [], + "bbox": [ + 117, + 161, + 347, + 281 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/dd2e9423677d9dc596b0519c26eb3f64df1deb8943d1f34f83ac9faeec506a27.jpg", + "image_caption": [ + "Figure 10: Visualization of optimal transport dataset distance for ID and OOD (in-domain and out-domain) datasets. We highlight the in-domain OOD data in blue and the out-domain OOD data in green." + ], + "image_footnote": [], + "bbox": [ + 351, + 162, + 581, + 281 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/4d55b13de1f9a2cbe1a3e0dfc3ae97cf5463611f9a1f228ab8d7be62d03e1f0e.jpg", + "image_caption": [ + "(b) OOD (Best performance).", + "(c) OOD (Random selection)." + ], + "image_footnote": [], + "bbox": [ + 583, + 162, + 813, + 281 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/075807af9553c10e98933f376dc0e187355f594c655fb5afdd5be0b40c0edf76.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 127, + 513, + 312, + 575 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/14438878630e68d29777e5137aa31845596aed3bda2ad1df565207e6063ef4d2.jpg", + "image_caption": [ + "(a) RoBERTaBase (10%)" + ], + "image_footnote": [], + "bbox": [ + 127, + 576, + 310, + 640 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/a4f3a8b4ec2a2c7f337e06c72f660c8f915f92ce4076165a883f13ee07d9c79e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 314, + 513, + 497, + 576 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/901da70de489a082be65312b3f3de0b01b9aa0f342d51cb4a59a8c4707eca283.jpg", + "image_caption": [ + "(b) RoBERTaBase (20%)" + ], + "image_footnote": [], + "bbox": [ + 314, + 576, + 495, + 640 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/e4836d6a18a3287fff4747411832a7273c9139688deecbd6e2ba33498a7c2c11.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 502, + 513, + 684, + 
576 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/27cde30ec36119db3d2b1d13a779742c66fd0c75d4b18d51103a65553730bb77.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 502, + 576, + 684, + 640 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/3310b40122707b033c78dc92f8004821ba6350fbd59fbd099f0fb3a136065523.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 687, + 513, + 870, + 576 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/7bb3433e1b02ffca18eec3ccce6d720aaefda0f7829f8a45aff1c9efcc58fc61.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 687, + 576, + 870, + 640 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/b6bc09e143045d69ad07c9c1cb4350d135eaa51ce05c551d3adcc158703ee13e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 127, + 659, + 310, + 718 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/fd7f589a65e340c0a995ffa12d8acfd4f3b78b2d36381737e7ebc9f714c8544a.jpg", + "image_caption": [ + "(e) $\\mathrm{ViT_{Base}}$ (10%)" + ], + "image_footnote": [], + "bbox": [ + 127, + 718, + 310, + 784 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/a0eb1ce030971c848e4c5626a4f4fa7e369eac9f86805535dac8e5d8a872cc34.jpg", + "image_caption": [ + "(f) $\\mathrm{ViT_{Base}}$ (20%)", + "Figure 11: Feature visualization for pre-trained (with different numbers of pre-training data) and fine-tuned models. We show both in-domain (RVL-CDIP) and out-domain (CORD) OOD datasets." 
+ ], + "image_footnote": [], + "bbox": [ + 314, + 659, + 495, + 784 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/3111f755a7d0ff1a749b26995277f72811df8e66e50a82c9549a54b80c3f4c86.jpg", + "image_caption": [ + "(c) RoBERTaBase (40%)", + "(g) $\\mathrm{ViT_{Base}}$ (40%)" + ], + "image_footnote": [], + "bbox": [ + 500, + 659, + 682, + 784 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/fea5884ad3a01e9d91ea5681e5e1cf201c87eb9d4be6e37730e9ccd5374ae46f.jpg", + "image_caption": [ + "(d) RoBERTaBase (100%)", + "(h) $\\mathrm{ViT_{Base}}$ (100%)" + ], + "image_footnote": [], + "bbox": [ + 687, + 659, + 870, + 784 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "4986", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 492, + 941, + 504, + 952 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/8e18e6d992d98a892ba8037f96ab0a525b0b6088487dd5318b64b8782e511986.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 117, + 74, + 329, + 388 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/e4abd59173f9fc34556cee7805493b325a16fa764f5f4fa43d769b3e7844d2ec.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 388, + 74, + 603, + 386 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/faac36a23aa786e1a67155dfb68d9f1e4bc0aa8668956fbde9f449316527e24b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 663, + 74, + 877, + 385 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/fe1eb8b820ce0708098b7e7e18c14a3c2bfb47f3907afdf75356b8fe35c93854.jpg", + "image_caption": [ + "Figure 12: MSP, Energy, KNN, and Maha score histogram distributions of ID (blue) and OOD (green) inputs derived from fine-tuned ResNet-50, RoBERTa, and LayoutLMv3. The KNN scores calculated from both vision and language models naturally form smooth distributions. 
In contrast, MSP and Maha scores for both in- and out-of-distribution data concentrate on high values. Overall, our experiments show that using the feature space makes the scores more distinguishable between in- and out-of-distribution data and, as a result, enables more effective OOD detection.", + "Figure 13: The network architectures in green blocks are our proposed models. We also show the modality information on top of each architecture." + ], + "image_footnote": [], + "bbox": [ + 117, + 483, + 485, + 602 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Vision+Text+Layout:", + "text_level": 1, + "bbox": [ + 114, + 694, + 284, + 709 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- LayoutLMv3: We use LayoutLMv3 (layoutlmv3-base, 12 layers, pre-trained on IIT-CDIP) as the backbone.", + "- UDoc: We use a slight variant of UDoc with the only difference in the sentence encoder, where we adopt a smaller version of the pretrained sentence encoder (all-MiniLM-L6-v2, 6 layers) instead of the larger sentence encoder (bert-base-nli-mean-tokens, 12 layers)." + ], + "bbox": [ + 134, + 741, + 489, + 917 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "B Beyond Document Classification", + "text_level": 1, + "bbox": [ + 507, + 489, + 825, + 506 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "In the main paper, we mainly focus on document classification to provide a thorough and in-depth analysis. In this section, we go beyond document classification and explore OOD detection for two entity-level tasks in documents: document entity recognition and document object detection. It is natural to detect and recognize basic units in documents such as text, tables, and figures. Document entity recognition aims to predict the label for each semantic entity with given bounding boxes. Document object detection is an object detection task for document images. 
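The scoring functions compared in the histograms above (MSP, MaxLogit, Energy, and the KNN feature-distance score) can be written down compactly. The following is a minimal numpy sketch of the standard definitions, not the authors' implementation; throughout, a higher score means "more ID-like".

```python
import numpy as np

def msp_score(logits):
    """Maximum softmax probability (higher = more ID-like)."""
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return p.max(axis=-1)

def max_logit_score(logits):
    """MaxLogit: the unnormalized analogue of MSP."""
    return logits.max(axis=-1)

def energy_score(logits):
    """Negative free energy, i.e. logsumexp over the logits."""
    m = logits.max(axis=-1)
    return m + np.log(np.exp(logits - m[..., None]).sum(axis=-1))

def knn_score(test_feats, train_feats, k=10):
    """Negative distance to the k-th nearest L2-normalized training feature
    (the KNN10/KNN20/... variants in the tables differ only in k)."""
    f = test_feats / np.linalg.norm(test_feats, axis=-1, keepdims=True)
    g = train_feats / np.linalg.norm(train_feats, axis=-1, keepdims=True)
    d = np.linalg.norm(f[:, None, :] - g[None, :, :], axis=-1)  # pairwise dists
    return -np.sort(d, axis=1)[:, k - 1]
```

Since all four scores share the "higher = ID" convention, a single threshold on any of them turns the detector into a binary ID/OOD decision.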
Specifically, we denote the input as $x$ , the bounding box coordinates associated with object instances in the image as $\\pmb{b} \\in \\mathbb{R}^4$ , and use the model with parameters $\\theta$ to model the bounding box regression $p_{\\theta}(b|x)$ and the label classification $p_{\\theta}(y|x, b)$ . Given a test input $\\hat{x}$ , the OOD detection scoring function for entity detection and recognition can be unified as $S(\\hat{x}, \\hat{b})$ , where $\\hat{b}$ denotes the object instance predicted by the object detector. In particular, for document entity recognition, since the bounding boxes are provided, the OOD score can be simplified as $S(\\hat{x}, \\bar{b})$ , where $\\bar{b}$ is the given object instance.", + "bbox": [ + 507, + 532, + 884, + 917 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "4987", + "bbox": [ + 480, + 928, + 519, + 938 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 492, + 940, + 502, + 951 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Document Object Detection. For document object detection, we use PubLayNet as the ID dataset and construct the OOD dataset from IIIT-AR-13K. Unlike PubLayNet, where the documents are scientific articles, IIIT-AR-13K is a dataset for graphical object detection in business documents (e.g., annual reports), thus there exists an obvious domain gap. We select natural images as the OOD entity and filter images that contain the OOD entity. Two object detection models are considered in this paper: (1) Vanilla Faster-RCNN with ResNet-50 visual backbone, and (2) Faster-RCNN with VOS (Du et al., 2022), a recent unknown-aware learning framework to improve OOD detection performance for natural images. Following the original paper, we use 1,000 samples for each ID class to estimate the class-conditional Gaussian statistics. 
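The class-conditional Gaussian statistics mentioned above (used for Mahalanobis-style scoring) can be estimated from ID features and labels. This is a simplified sketch assuming a shared (tied) covariance across classes, not the authors' exact implementation:

```python
import numpy as np

def fit_class_gaussians(feats, labels):
    """Per-class means plus one shared covariance estimated over all classes."""
    classes = np.unique(labels)
    means = {c: feats[labels == c].mean(axis=0) for c in classes}
    centered = np.concatenate([feats[labels == c] - means[c] for c in classes])
    cov = centered.T @ centered / len(feats)
    return means, np.linalg.pinv(cov)  # pinv guards against singular covariance

def maha_score(x, means, prec):
    """Negative minimum Mahalanobis distance over classes (higher = more ID-like)."""
    return -min(float((x - m) @ prec @ (x - m)) for m in means.values())
```

In the detection setting, `x` would be the feature pooled from a predicted (or given) bounding box, so the same score serves as $S(\hat{x}, \hat{b})$ or $S(\hat{x}, \bar{b})$.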
The models are trained for 180k iterations with a base learning rate of 0.01 and a batch size of 8 using the Detectron2 framework (Wu et al., 2019). The performance of the models is measured using the mean average precision (mAP) at intersection over union (IoU) [0.50:0.95] of bounding boxes.", + "bbox": [ + 110, + 84, + 492, + 455 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Document Entity Recognition. For entity recognition, we construct ID and OOD datasets from FUNSD. Each semantic entity includes a list of words, a label, and a bounding box. The standard label set for this dataset contains four categories: question, answer, header, and other. In this paper, we select entities labeled as other or header as OOD data, and the entities belonging to the other three categories as ID. Instead of treating entity recognition as a named-entity recognition problem, we follow UDoc and solve this problem at the semantic region level. We replace the sentence encoder in UDoc with a smaller sentence encoder (all-MiniLM-L6-v2 $^{10}$ ) from Huggingface (Wolf et al., 2019). We also have the following model variants to verify the effectiveness of the combination of modalities: textual-only, visual-only, textual+spatial, visual+spatial, and visual+textual+spatial.", + "bbox": [ + 110, + 463, + 489, + 768 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "We provide details on datasets and models as follows.", + "bbox": [ + 112, + 769, + 487, + 800 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "B.1 Datasets", + "text_level": 1, + "bbox": [ + 112, + 810, + 231, + 825 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "The original FUNSD (Jaume et al., 2019) dataset contains 149 training and 50 testing images. For document entity recognition, we treat entities with the category other/header as OOD entities. 
After", + "bbox": [ + 112, + 832, + 487, + 897 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "the split, if we consider other as OOD, we have a total of 8,330 ID and 1,019 OOD entities. Otherwise, if we consider header as OOD, we have 8,981 ID and 368 OOD entities in total.", + "bbox": [ + 507, + 84, + 884, + 148 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "For document object detection, we consider PubLayNet (Zhong et al., 2019), which contains $336\\mathrm{K} / 11\\mathrm{K}$ training/validation images with 5 categories (text, title, list, fig., and table). The original IIIT-AR-13K (Mondal et al., 2020) contains 5 categories (table, fig., natural image, logo, and signature). In this paper, considering the overlap between IIIT-AR-13K and PubLayNet, we select those images containing natural images as the OOD test set. After filtering, we obtain 2,880 OOD entities across 1,837 document images.", + "bbox": [ + 505, + 149, + 885, + 326 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "We consider two ID settings in this experiment. (1) PubLayNet: This is the original PubLayNet dataset. We treat all the entities in training/validation images as ID entities. (2) PubLayNet+IIIT-AR-13K (ID): To account for the domain shift between ID data (PubLayNet) and OOD data (IIIT-AR-13K), we combine the PubLayNet training data with the images from IIIT-AR-13K with overlapping annotations (table and figure) and train the object detection model.", + "bbox": [ + 505, + 326, + 885, + 472 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "B.2 Models", + "text_level": 1, + "bbox": [ + 507, + 483, + 616, + 498 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Fig. 13 illustrates the entity recognition models used in this paper. We model entities at the region level rather than the token level, as regions provide richer semantic information. As for the pre-trained model, we adopt UDoc (trained on IIT-CDIP) since it models inputs at the regional level. 
Based on the UDoc framework, we develop the following models.", + "bbox": [ + 507, + 506, + 885, + 619 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Vision/Vision+Layout:", + "text_level": 1, + "bbox": [ + 509, + 626, + 690, + 642 + ], + "page_idx": 15 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- ResNet-50: This model is composed of the ResNet-50 from pre-trained UDoc. It applies RoI pooling followed by a classifier to extract the entity features.", + "- ResNet-50+Position: This model builds on UDoc's pre-trained ResNet-50 and makes the RoI features spatially aware by adding position embeddings, which are mapped from the bounding boxes via a linear mapping layer." + ], + "bbox": [ + 531, + 655, + 884, + 829 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Text/Text+Layout:", + "text_level": 1, + "bbox": [ + 509, + 841, + 660, + 858 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "- Sentence BERT: This model adopts the language branch of UDoc and appends the classifier to the output of the sentence encoder.", + "bbox": [ + 531, + 871, + 884, + 917 + ], + "page_idx": 15 + }, + { + "type": "page_footnote", + "text": "10https://huggingface.co/sentence-transformers", + "bbox": [ + 131, + 903, + 478, + 917 + ], + "page_idx": 15 + }, + { + "type": "footer", + "text": "4988", + "bbox": [ + 480, + 928, + 519, + 939 + ], + "page_idx": 15 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 492, + 940, + 504, + 951 + ], + "page_idx": 15 + }, + { + "type": "image", + "img_path": "images/f1c75d15ae21f76e4491afaf6a3fc5ad750c0f81e0fab7f2a57d7592f3089702.jpg", + "image_caption": [ + "(b) OOD detection results from different object detection methods and models.", + "Figure 14: Ablation on document entity recognition and object detection. Numbers are reported in FPR95." 
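FPR95 and AUROC, the two metrics reported in these ablations, can be computed directly from arrays of ID and OOD scores. A minimal numpy sketch, assuming higher scores mean "more ID-like" and ignoring tie handling in the rank-based AUROC:

```python
import numpy as np

def fpr_at_95_tpr(id_scores, ood_scores):
    """FPR95: fraction of OOD samples scoring above the threshold
    that retains 95% of ID samples."""
    thresh = np.percentile(id_scores, 5)          # 95% of ID scores >= thresh
    return float(np.mean(ood_scores >= thresh))

def auroc(id_scores, ood_scores):
    """AUROC via the rank-sum (Mann-Whitney U) identity."""
    scores = np.concatenate([id_scores, ood_scores])
    ranks = scores.argsort().argsort() + 1        # 1-based ranks, no tie handling
    n_id, n_ood = len(id_scores), len(ood_scores)
    u = ranks[:n_id].sum() - n_id * (n_id + 1) / 2
    return float(u / (n_id * n_ood))
```

AUROC here equals the probability that a random ID sample outscores a random OOD sample, which is why 50% corresponds to a detector no better than chance.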
+ ], + "image_footnote": [], + "bbox": [ + 149, + 82, + 495, + 168 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/6a13b259845932a94df9bd7401b4c1fd5cb34fddcbd94e79d6e72b83660219b2.jpg", + "image_caption": [ + "(a) Comparison of OOD detection methods on different models on two OOD classes: other and header." + ], + "image_footnote": [], + "bbox": [ + 499, + 82, + 840, + 168 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "- Sentence BERT+Position: This model is similar to the above model but adds position embeddings to the sentence embeddings.", + "bbox": [ + 136, + 223, + 485, + 269 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Vision+Text+Layout:", + "text_level": 1, + "bbox": [ + 114, + 280, + 282, + 296 + ], + "page_idx": 16 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- ResNet-50+sentence BERT: This model follows the same framework as UDoc, but replaces the sentence encoder in their original design with a smaller sentence encoder (all-MiniLM-L6-v2).", + "- SwinT+Sentence BERT: This model replaces the ResNet-50 visual backbone with a pre-trained tiny Swin Transformer (swin-tiny-patch4-window7-224) adopted from Huggingface." + ], + "bbox": [ + 136, + 305, + 487, + 475 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "All the models are fine-tuned with the cross-entropy loss for 100 epochs, using a learning rate of $10^{-5}$ and a batch size of 8 on an A100 GPU.", + "bbox": [ + 112, + 486, + 485, + 532 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "B.3 Summary of Observations", + "text_level": 1, + "bbox": [ + 114, + 544, + 369, + 558 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "We provide a summary of observations here and hope to inspire future work on a thorough investigation of OOD detection for entity-level tasks. 
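The "+Position" variants above add a box-derived embedding to each entity feature. The linear mapping from bounding boxes to embeddings can be sketched as follows; `W` and `b` stand in for hypothetical learned parameters, and normalizing coordinates by the page size is an assumption rather than a detail given in the text.

```python
import numpy as np

def box_position_embedding(boxes, page_w, page_h, W, b):
    """Linear position embedding from normalized box coordinates.
    boxes: (n, 4) array of (x0, y0, x1, y1) in page pixels;
    W: (4, d) and b: (d,) are the (hypothetical) linear-layer parameters."""
    norm = boxes / np.array([page_w, page_h, page_w, page_h], dtype=float)
    return norm @ W + b

def spatially_aware_features(roi_feats, boxes, page_w, page_h, W, b):
    """Add the box embedding to the RoI (or sentence) feature of each entity."""
    return roi_feats + box_position_embedding(boxes, page_w, page_h, W, b)
```

The same additive scheme works for either branch, which is why both ResNet-50+Position and Sentence BERT+Position can share one mechanism.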
To identify entity types, models should not only understand the words but also utilize spatial and visual information.", + "bbox": [ + 112, + 565, + 487, + 659 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "For document entity recognition, the comparison of distance-based and logit-based OOD detection methods with different models is shown in Fig. 14a. More details are shown in Table 2. We see that models can better predict the entity type and also achieve better OOD robustness with the help of spatial information. Considering the weak language dependency between entities, it is not surprising that vision-based models achieve better performance than text-based models. In particular, UDoc with ResNet-50 achieves the best performance on two OOD test sets, illustrating that visual information plays a major role in increasing the discrimination of entities with similar semantics. For document object detection, we summarize our findings in Fig. 14b and describe them in more
This experiment indicates that incorporating uncertainty estimation into the entity detection training procedure can improve the reliability of the document object detection system.", + "bbox": [ + 507, + 287, + 884, + 544 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "C Detailed Experimental Results", + "text_level": 1, + "bbox": [ + 507, + 556, + 811, + 571 + ], + "page_idx": 16 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Table 2 corresponds to the results shown in Fig. 15 and Fig. 14a.", + "- Table 1 corresponds to the results shown in Fig. 16 and Fig. 14b.", + "- Table 3 and Table 7 correspond to the results shown in Fig. 4a.", + "- Table 4 and Table 5 correspond to the results shown in Fig. 4c.", + "- Table 6 corresponds to the results shown in Fig. 8 and Fig. 9.", + "- Table 9 and Table 8 correspond to the results shown in Fig. 6 and Fig. 9.", + "- Table 10 and Table 11 correspond to the analysis for Sec. 4 and Sec. 4.2.", + "- Table 12 corresponds to the results shown in Fig. 9." + ], + "bbox": [ + 507, + 581, + 880, + 907 + ], + "page_idx": 16 + }, + { + "type": "footer", + "text": "4989", + "bbox": [ + 480, + 928, + 519, + 938 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 492, + 940, + 502, + 951 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/79de30db107355133e0f289c7f4db6feb3f9ac012befdf84ed5b4e4b131ef632.jpg", + "image_caption": [ + "Figure 15: Visualization of detected OOD entities on the form images. The top part shows the entities in blue are entities annotated as other. The bottom part shows the detected OOD entities (green). We also show failure cases on the right part." 
+ ], + "image_footnote": [], + "bbox": [ + 115, + 130, + 884, + 275 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/757e27df2d9e25d1d977d414ed4a1f7dabd77e97d7ddf7ae1f976564f4d51dff.jpg", + "image_caption": [ + "Figure 16: Visualization of detected objects on the OOD images (from IIIT-AR-13K) by a vanilla Faster-RCNN (top) and Faster-RCNN with VOS (bottom). Objects in blue boxes are detected and classified as one of the ID classes. Detecting OOD objects (green) reduces false positives among the detected objects. We also visualize detected objects on the ID images. There is a clear difference between PubLayNet and IIIT-AR-13K – entities and annotations of natural images rarely exist in PubLayNet." + ], + "image_footnote": [], + "bbox": [ + 117, + 420, + 884, + 589 + ], + "page_idx": 17 + }, + { + "type": "table", + "img_path": "images/0511f8c6067c2f827c3480a3419f496313da30351f2b0634acc6e898e0dfa613.jpg", + "table_caption": [ + "Table 1: Comparison with different training and detection methods." + ], + "table_footnote": [], + "table_body": "
<table><tr><td rowspan="2">Models</td><td rowspan="2">ID Dataset</td><td rowspan="2">OOD Score</td><td colspan="3">IIIT-AR-13K (Natural Image as OOD)</td><td>PubLayNet (ID)</td></tr><tr><td>FPR95</td><td>AUROC</td><td>AUPR</td><td>mAP</td></tr><tr><td rowspan="2">Vanilla Faster-RCNN</td><td rowspan="2">PubLayNet</td><td>MSP</td><td>74.33</td><td>79.12</td><td>98.41</td><td rowspan="2">92.6</td></tr><tr><td>Energy</td><td>55.96</td><td>83.55</td><td>98.73</td></tr><tr><td rowspan="2">Faster-RCNN with VOS</td><td rowspan="2">PubLayNet</td><td>MSP</td><td>63.65</td><td>79.37</td><td>98.57</td><td rowspan="2">92.2</td></tr><tr><td>Energy</td><td>55.61</td><td>80.60</td><td>98.67</td></tr><tr><td rowspan="2">Faster-RCNN with VOS</td><td rowspan="2">PubLayNet+IIIT-AR-13K(ID)</td><td>MSP</td><td>56.57</td><td>82.94</td><td>98.59</td><td rowspan="2">92.4</td></tr><tr><td>Energy</td><td>47.73</td><td>84.04</td><td>98.67</td></tr></table>
", + "bbox": [ + 156, + 788, + 842, + 869 + ], + "page_idx": 17 + }, + { + "type": "footer", + "text": "4990", + "bbox": [ + 480, + 928, + 521, + 940 + ], + "page_idx": 17 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 492, + 941, + 505, + 952 + ], + "page_idx": 17 + }, + { + "type": "table", + "img_path": "images/16d5fd7d1510a755ac826d23417646f23cd095917fef5c34cdeb5d4b84a26747.jpg", + "table_caption": [ + "Table 2: Comparison with different models on FUNSD OOD setting. All models are initialized with UDoc pretrained on IIT-CDIP and fine-tuned on FUNSD data with ID entities. All values are percentages. S-BERT denotes Sentence BERT. A lower FPR95 or a higher AUROC value indicates better performance." + ], + "table_footnote": [], + "table_body": "
Test F1MethodOther (OOD)IDHeader (OOD)IDTest F1MethodOther (OOD)IDHeader (OOD)ID
FPR95AUROCF1FPR95AUROCF1FPR95AUROCF1FPR95AUROCF1
ResNet-5075.15KNN1059.4779.1481.7963.97ResNet-50+Position75.82KNN1073.2173.1990.2261.42
KNN2069.9778.1581.2563.66KNN2072.9173.4488.0461.54
KNN5084.4977.4082.6162.86KNN5075.9674.4382.8860.93
KNN10097.9477.0877.6584.2461.6278.04KNN10079.6974.8583.7059.3977.98
KNN20097.8477.1594.2959.74KNN20086.0675.1491.5857.42
KNN40097.1576.0994.8457.53KNN40087.9374.9295.9255.37
MSP50.5475.8075.8276.55MSP77.8267.6084.2466.58
MaxLogit52.4073.7073.6476.72MaxLogit76.9467.0584.2465.41
Energy52.5073.7075.8276.55Energy76.6466.9384.5164.98
S-BERT77.15KNN1093.7248.4492.6660.99S-BERT+Position82.69KNN1097.4541.2493.7562.38
KNN2093.9247.6592.9359.00KNN2097.5539.9193.4861.51
KNN5093.6248.9493.2157.90KNN5097.1539.5692.3961.76
KNN10093.9248.7993.2155.07KNN10097.0641.6791.8560.99
KNN20093.9247.8582.1293.4852.8682.41KNN20096.5741.8587.0859.0887.01
KNN40094.1146.2195.3849.86KNN40097.2540.8390.2254.03
MSP93.6254.9194.2952.14MSP88.4261.1190.7659.58
MaxLogit93.7254.7594.5756.51MaxLogit89.7060.1988.8660.92
Energy93.2354.8893.2158.22Energy90.4859.6189.9561.12
ResNet-50+S-BERT89.11KNN1045.9387.8553.8087.97SwinT+S-BERT86.00KNN1063.3083.6481.5264.08
KNN2053.5886.7155.7187.06KNN2066.7382.5381.5261.50
KNN5073.2184.3662.7785.49KNN5070.1780.2182.3457.77
KNN10089.7083.0169.0283.60KNN10083.9177.7183.1554.97
KNN20096.6681.9093.1375.5480.8593.18KNN20095.3975.7990.8250.5790.40
KNN40098.8281.0091.5877.42KNN40096.7675.4999.7347.45
MSP45.4487.8267.3972.85MSP69.2870.7080.7152.02
MaxLogit45.5390.5863.0472.39MaxLogit67.1274.4181.7952.77
Energy45.5390.5763.8672.37Energy67.2274.4181.7952.77
", + "bbox": [ + 117, + 407, + 878, + 642 + ], + "page_idx": 18 + }, + { + "type": "page_number", + "text": "4991", + "bbox": [ + 480, + 928, + 517, + 940 + ], + "page_idx": 18 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 494, + 940, + 502, + 951 + ], + "page_idx": 18 + }, + { + "type": "table", + "img_path": "images/c6b89cfd04e3cb218611d45ffadc1c0050c27654b45279623d1c37153149e2c7.jpg", + "table_caption": [ + "Table 3: OOD detection performance for document classification with different number of pre-training data from IIT-CDIP. ID (Acc) denotes the ID accuracy obtained by testing on ID test data. We report the KNN-based scores for both pre-trained and fine-tuned models. Sci. Poster denotes the document images converted from NJU-Fudan Paper-Poster Dataset. Receipt denotes the receipt images collected from the CORD receipt understanding dataset. For in-domain OOD test data, we also report the averaged scores." + ], + "table_footnote": [], + "table_body": "
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
RoBERTaBase(10%)Pre-train on 10% IIT-CDIP→ fine-tune on RVL-CDIP ID data
90.59MSP92.7569.2492.2166.9394.6565.4092.0070.0992.9067.9296.5166.9399.1052.90
MaxLogit98.3677.8597.2378.5198.7672.8498.8678.0898.3076.82100.0078.69100.0063.74
Energy98.6077.8197.5578.4998.9672.7998.9478.0098.5176.77100.0078.68100.0063.70
GradNorm98.0479.2697.0776.8598.5672.8398.6280.5598.0777.37100.0085.23100.0064.10
KNN1063.2188.1865.8188.0573.0284.6367.7488.9267.4587.4469.7788.4990.5084.44
KNN2063.5388.0765.8987.9072.7584.4867.3388.8167.3887.3268.6088.1391.1084.09
KNN5064.1787.8966.9787.7773.3484.2367.2188.6067.9287.1272.0987.4791.6083.59
KNN10064.4987.6467.7887.5573.4683.9467.2988.3768.2686.8872.0986.8391.5083.21
Pre-train on 10% IIT-CDIP (no fine-tune)
-KNN1088.0766.9492.1366.6294.1361.9094.4054.5792.1862.5167.4487.0462.1084.94
KNN2088.5966.0292.6565.2594.1360.8394.7253.7992.5261.4777.9185.3864.6083.86
KNN5089.7564.4093.5363.1294.3758.9895.1752.3393.2059.7183.7282.9769.2082.29
KNN10090.2362.9493.8561.2894.4157.4595.1351.2893.4058.2483.7280.9170.1081.05
RoBERTaBase(20%)Pre-train on 20% IIT-CDIP→ fine-tune on RVL-CDIP ID data
90.71MSP94.2868.0294.4665.9896.0162.9894.8165.9894.8965.7495.3563.5599.1054.99
MaxLogit97.3677.8297.1979.1698.4072.6498.3477.6897.8276.82100.0077.3699.6066.63
Energy98.0477.8097.4379.1598.7672.6198.5877.6498.2076.80100.0077.3299.6066.61
GradNorm97.3680.6896.8376.0498.4473.2997.8981.3797.6377.85100.0086.1899.5067.49
KNN1063.5788.3067.0687.0673.6683.9273.0987.8069.3486.7769.7788.0187.6083.81
KNN2063.8588.2067.4686.9073.9483.7872.9387.7069.5486.6469.7787.6388.3083.53
KNN5063.8988.0267.5486.7174.3883.5572.2487.4669.5186.4370.9387.0988.2083.12
KNN10064.8587.8167.6286.4574.9083.2572.6587.2470.0086.1972.0986.6588.3082.89
Pre-train on 20% IIT-CDIP (no fine-tune)
-KNN1087.1568.2790.8866.8992.2662.3995.0153.0291.3262.6443.0292.2957.0087.67
KNN2087.3167.3592.0465.5491.5461.4094.9752.3391.4661.6647.6791.1862.6086.61
KNN5088.3965.7192.6963.4592.1859.5795.2550.9792.1359.9256.9889.6465.7085.20
KNN10088.8364.2093.1361.6192.2257.9995.4549.9592.4158.4458.1488.3666.9084.17
RoBERTaBase(40%)Pre-train on 40% IIT-CDIP→ fine-tune on RVL-CDIP ID data
90.76MSP92.6770.0993.9365.6995.0563.1995.5065.5494.2966.1395.3563.6395.4064.97
MaxLogit98.0878.7297.8779.8598.4471.6398.3075.4198.1776.4098.8478.0798.9075.65
Energy98.4878.6997.9179.8398.6871.6198.5075.4098.3976.38100.0078.0498.5075.60
GradNorm98.0481.0397.4776.7398.4472.7797.4079.1197.8477.41100.0087.4797.6077.12
KNN1060.5788.7968.8686.3675.2683.5573.9087.1269.6586.4667.4489.9072.7089.49
KNN2061.3788.7269.0686.2475.4683.4373.4687.0069.8486.3568.6089.6673.5089.25
KNN5062.2188.5269.1886.0875.6683.2173.4286.7170.1286.1370.9389.2074.7088.89
KNN10063.7788.3069.7985.8476.0282.9374.1986.4670.9485.8874.4288.8475.3088.69
Pre-train on 40% IIT-CDIP (no fine-tune)
-KNN1085.7169.0890.8468.6890.4662.5294.7651.7690.4463.0125.5895.8357.3088.60
KNN2085.2768.2191.6467.4889.7461.3294.8151.0190.3662.0029.0795.2262.3087.61
KNN5086.1966.6092.2165.5490.3059.3594.9349.6090.9160.2741.8694.3266.8086.25
KNN10087.1965.0492.5763.8390.5057.7495.0948.4491.3458.7645.3593.6668.3085.14
RoBERTaBase(100%)Pre-train on 100% IIT-CDIP (no fine-tune)
-KNN1084.4370.2090.2068.5490.9863.1894.7252.1690.0863.5227.9194.1046.0091.37
KNN2084.5169.3091.2867.3590.3861.9694.7251.4390.2262.5133.7293.3951.5090.55
KNN5085.6767.7591.9265.3590.8259.7994.8949.7790.8260.6639.5392.2856.7089.32
KNN10086.5566.0892.9763.4691.4658.0095.4148.3991.6058.9844.1991.2961.6088.18
", + "bbox": [ + 117, + 302, + 878, + 770 + ], + "page_idx": 19 + }, + { + "type": "footer", + "text": "4992", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 19 + }, + { + "type": "page_number", + "text": "8", + "bbox": [ + 492, + 940, + 504, + 951 + ], + "page_idx": 19 + }, + { + "type": "table", + "img_path": "images/204895392e092d124f022fad04e5d83832c689e536184ac6c9274a9de6ac8afd.jpg", + "table_caption": [ + "Table 4: OOD detection performance for document classification with different number of pre-training data from IIT-CDIP $^{-}$ (remove pseudo OOD categories)." + ], + "table_footnote": [], + "table_body": "
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
RoBERTaBase(10%)Pre-train on 10% IIT-CDIP→ fine-tune on RVL-CDIP ID data
90.62MSP90.0769.0089.9268.8692.5864.1691.0766.7890.9167.2096.5154.4796.7059.63
MaxLogit97.7678.4097.7180.5898.6471.2698.7076.3898.2076.66100.0073.5199.8073.32
Energy98.1678.3597.7580.5598.8471.2098.9076.3298.4176.60100.0073.4699.8073.31
GradNorm97.6879.9297.2779.4298.5671.3198.5079.4498.0077.52100.0082.6299.6075.85
KNN1065.8587.8966.6988.1275.9882.8274.5586.8570.7786.4287.2185.1683.9087.91
KNN2066.3387.8066.8588.0475.9482.7073.9486.7570.7686.3287.2184.6383.6087.71
KNN5066.7787.6667.3088.0076.0282.4973.6686.5270.9486.1788.3783.7383.9087.34
KNN10067.2587.4267.7487.8476.1882.1873.9986.2671.2985.9289.5382.8583.9086.98
Pre-train on 10% IIT-CDIP- (no fine-tune)
-KNN1086.3565.4885.7470.8492.9459.5593.1456.6289.5463.1229.0795.4287.6083.13
KNN2086.8764.4887.1469.6893.3058.4193.3055.9190.1562.1237.2194.7588.0081.44
KNN5087.7562.7388.9967.8093.5056.5493.7554.5291.0060.4047.6793.7190.3078.97
KNN10088.4361.1789.5966.0593.6254.9193.9953.4091.4158.8848.8493.0991.5077.00
RoBERTaBase(20%)Pre-train on 20% IIT-CDIP→ fine-tune on RVL-CDIP ID data
90.65MSP96.0467.5894.9068.3296.0564.9296.2368.6295.8067.36100.0061.4998.7056.38
MaxLogit97.9676.9297.5980.6898.4872.3198.7477.7298.1976.91100.0075.9199.5069.21
Energy98.1676.8998.2380.6598.8872.2699.0777.6798.5876.87100.0075.8999.5069.18
GradNorm97.8478.2397.3178.5798.0071.4498.4680.0397.9077.07100.0085.8099.0069.54
KNN1066.0587.6067.7087.9473.4283.1073.5087.9670.1786.6577.9190.1990.1084.32
KNN2066.1787.5068.3887.8373.9082.9373.6687.8270.5386.5277.9189.8489.8084.13
KNN5067.2187.2668.4687.7374.1882.6373.6687.5870.8886.3079.0789.2489.6083.80
KNN10068.7886.9869.1487.5375.5082.3074.2787.3671.9286.0482.5688.6889.8083.59
Pre-train on 20% IIT-CDIP- (no fine-tune)
-KNN1085.6366.1085.1770.3492.5860.2993.4356.8589.2063.4030.2395.7283.2083.84
KNN2086.3165.1785.9869.1393.3059.0993.4756.0589.7762.3634.8895.0884.9082.16
KNN5087.3163.5087.6367.1193.3857.1794.1654.6090.6260.6044.1994.0787.5079.74
KNN10087.8362.0688.2765.3193.6255.6594.3253.5691.0159.1448.8493.4888.8077.77
RoBERTaBase(40%)Pre-train on 40% IIT-CDIP→ fine-tune on RVL-CDIP ID data
90.72MSP93.8468.8693.6967.6295.4163.9194.2065.2594.2866.4196.5163.3298.9054.02
MaxLogit97.1678.5696.8780.1898.6871.8498.5874.4497.8276.26100.0076.7299.1065.41
Energy97.4078.5397.1580.1798.6871.7998.7874.3998.0076.22100.0076.6799.5065.39
GradNorm97.2480.5996.9578.0198.5272.1298.3477.1697.7676.97100.0086.9499.7067.46
KNN1066.8987.9168.5886.9077.6182.3176.5885.3972.4185.6375.5889.4586.4084.23
KNN2067.5787.8068.9086.7977.7782.1976.3085.2272.6485.5080.2389.1786.8083.85
KNN5067.9787.5869.6786.6778.0181.9876.6684.8573.0885.2780.2388.6387.2083.21
KNN10069.4687.3471.2386.4779.0181.7277.4884.5774.3085.0282.5688.1988.0082.72
Pre-train on 40% IIT-CDIP- (no fine-tune)
-KNN1088.7966.1488.3568.9293.5060.3095.5451.0991.5461.6137.2195.3755.9091.90
KNN2089.5965.0789.8067.6193.8959.1095.5850.1792.2160.4946.5194.4161.5091.00
KNN5090.5963.3991.6465.6893.7757.3595.6648.6392.9258.7653.4993.0666.4089.72
KNN10091.1961.7992.3763.9093.6655.7895.6247.4293.2157.2265.1291.9968.3088.72
RoBERTaBase(100%)Pre-train on 100% IIT-CDIP→ fine-tune on RVL-CDIP ID data
90.74MSP94.1268.2494.2966.1895.9363.8395.2165.6694.8965.9898.8459.2596.5065.42
MaxLogit97.2478.1597.1980.2798.3672.1698.3875.8297.7976.60100.0073.2899.3075.58
Energy97.3278.1397.5180.2698.6472.1298.7075.7898.0476.57100.0073.2799.6075.52
GradNorm97.1680.0797.3977.8698.4071.8398.0579.0897.7577.21100.0086.3299.4073.52
KNN1066.8187.8669.6786.9177.4982.6074.5986.2872.1485.9181.4087.7476.9088.49
KNN2066.7387.7570.3186.7877.8982.5175.2886.1372.5585.7981.4087.4377.5088.39
KNN5067.2587.5470.5986.6277.8582.3275.4185.8472.7885.5883.7286.8577.8088.23
KNN10068.1387.3471.4786.3978.0582.0876.1485.6073.4585.3583.7286.3978.5088.21
Pre-train on 100% IIT-CDIP- (no fine-tune)
-KNN1087.9566.4484.4972.3495.0158.4796.2349.0790.9261.5831.4096.1941.6094.78
KNN2088.9165.3985.7071.2595.3357.1996.5948.0691.6360.4734.8895.5048.4094.12
KNN5090.5963.6987.1469.4595.5354.9397.0846.2692.5858.5843.0294.5155.2093.05
KNN10091.7562.0888.5567.8595.8953.0597.2044.8193.3556.9550.0093.6061.1092.04
", + "bbox": [ + 117, + 279, + 878, + 750 + ], + "page_idx": 20 + }, + { + "type": "footer", + "text": "4993", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 20 + }, + { + "type": "page_number", + "text": "9", + "bbox": [ + 492, + 940, + 504, + 951 + ], + "page_idx": 20 + }, + { + "type": "table", + "img_path": "images/05509342fd5b8a801359f737df2a09b4a8d8b605c435ef673e73e751f9fef88b.jpg", + "table_caption": [ + "Table 5: OOD detection performance for document classification with different number of pre-training data from IIT-CDIP $^{-}$ (remove pseudo OOD categories)." + ], + "table_footnote": [], + "table_body": "
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
LayoutLMyBase(10%)Pre-train on 10% IIT-CDIP→ fine-tune on RVL-CDIP ID data
95.89MSP42.4376.3156.0569.3954.3170.2547.0073.9349.9572.4743.0276.5544.1075.68
MaxLogit41.9191.2755.0489.3354.1985.2044.9790.9349.0389.1838.3794.2741.3091.38
Energy41.8391.2954.9289.3554.1185.2245.0190.9748.9789.2138.3794.2941.1091.42
GradNorm39.1591.8054.0486.9351.8886.0542.4991.6546.8989.1138.3791.7941.4091.82
KNN1031.6394.2546.5290.9846.7790.4940.8392.7941.4492.1324.4295.9530.3095.66
KNN2032.0394.1146.6590.8947.0190.3241.6092.6341.8291.9926.7495.7631.8095.44
KNN5034.3993.7549.3490.4649.3689.9444.5292.2344.4091.6033.7295.3333.2095.38
KNN10036.1593.4751.2790.1951.3689.6546.6391.9946.3591.3233.7295.1035.1095.16
Pre-train on 10% IIT-CDIP- (no fine-tune)
-KNN1090.9572.3094.6665.4990.9472.3894.4067.3292.7469.3748.8491.5656.0075.08
KNN2091.5970.5494.9863.9191.6670.7494.8165.9593.2667.7853.4990.4157.6073.51
KNN5093.0767.7695.5461.2492.7868.2795.2564.0194.1665.3255.8188.3758.5071.06
KNN10093.5565.4195.9059.1393.1066.1995.5462.4194.5263.2867.4486.4460.2069.09
LayoutLMyBase(20%)Pre-train on 20% IIT-CDIP→ fine-tune on RVL-CDIP ID data
95.84MSP49.2076.7861.5170.1362.3769.4955.5273.6457.1572.5150.0077.9950.7075.90
MaxLogit41.0391.5754.0088.4556.4285.7047.0090.1949.6188.9838.3793.6241.8090.56
Energy40.9591.6053.7688.4756.1985.7246.7990.2249.4289.0038.3793.6541.7090.59
GradNorm37.1591.8954.1684.9953.0386.2843.9590.9447.0788.5240.7090.4142.4090.91
KNN1031.6394.1747.6990.2947.4990.5040.5492.9241.8491.9731.4095.6534.5095.15
KNN2032.5594.0347.8990.2248.3290.3440.9192.7642.4291.8433.7295.4535.4094.97
KNN5035.7193.6749.7489.8251.0489.9944.1292.3945.1591.4736.0595.0136.2094.92
KNN10036.7593.3850.3089.6051.6889.7144.9792.1745.9291.2236.0594.7336.5094.71
Pre-train on 20% IIT-CDIP- (no fine-tune)
-KNN1090.3975.2579.5979.4393.1472.4197.1266.9990.0673.5250.0091.3624.7096.34
KNN2090.6373.7580.4778.5193.8170.5897.1665.5490.5272.1055.8189.9126.9095.94
KNN5091.6771.1982.5676.9094.4567.8297.3662.9891.5169.7267.4487.2929.1095.31
KNN10091.9569.1983.7375.5595.3365.3797.3660.8492.0967.7474.4284.7830.3094.75
LayoutLMyBase(40%)Pre-train on 40% IIT-CDIP→ fine-tune on RVL-CDIP ID data
96.01MSP51.7675.7662.3969.6363.3768.7554.2274.0357.9472.0455.8171.6942.5080.56
MaxLogit42.0391.2954.2489.4757.3084.4445.6690.0249.8188.8052.3393.0833.0092.89
Energy41.8791.3154.2089.4957.2684.4745.5090.0549.7188.8352.3393.1332.5092.92
GradNorm38.1991.6653.6486.8555.0385.6643.1891.4547.5188.9052.3392.3934.6092.95
KNN1031.4794.4347.1390.6348.2090.4538.1193.3041.2392.2027.9195.7824.7096.09
KNN2032.5994.2947.6190.5549.6090.2739.2593.1442.2692.0632.5695.6025.5095.95
KNN5034.8793.9349.5090.1052.1189.8742.2992.7544.6991.6638.3795.1626.4095.95
KNN10036.5593.6550.3889.8253.5589.5743.7192.5146.0591.3943.0294.8927.7095.77
Pre-train on 40% IIT-CDIP- (no fine-tune)
-KNN1087.0780.4471.7683.7286.7582.3196.1076.3685.4280.7175.5884.965.9098.24
KNN2088.9579.0374.9382.3188.9981.1196.7175.0187.4079.3680.2382.567.2097.93
KNN5091.4777.2380.3991.7891.7879.7597.4072.6090.2677.3787.2178.199.0097.92
KNN10090.7575.2784.7777.4891.7478.3197.1670.2691.1075.3389.5374.1114.2097.49
LayoutLMyBase(100%)Pre-train on 100% IIT-CDIP→ fine-tune on RVL-CDIP ID data
96.38MSP43.4376.1257.2169.1658.3868.5646.1474.7651.2972.1538.3778.6728.3083.78
MaxLogit35.1991.2950.2288.9853.1984.5439.9890.7144.6488.8824.4296.3921.4095.57
Energy35.2391.3250.2289.0053.1984.5539.9890.7344.6588.9024.4296.4421.4095.58
GradNorm30.3092.5448.6188.1848.9686.5836.1692.6341.0189.9819.7796.7119.2096.35
KNN1026.5094.9543.4791.6945.0990.9534.0993.8637.2992.8619.7797.3917.8096.37
KNN2027.2294.8344.0791.5845.4190.7934.6293.7137.8392.7319.7797.2218.4096.26
KNN5029.4694.4946.2891.1247.6990.4537.5093.3340.2392.3517.4497.0418.7096.80
KNN10032.1594.2648.1790.8550.6490.2140.3893.1242.8392.1119.7796.8820.7096.74
Pre-train on 100% IIT-CDIP- (no fine-tune)
-KNN1078.7481.6774.4580.8680.5383.7195.0177.3382.1880.8938.3794.6217.7096.12
KNN2082.3980.1377.8679.3183.4882.7595.4575.9384.8079.5344.1993.4214.6096.13
KNN5086.0377.6582.8076.6086.9181.3096.1073.0787.9677.1654.6591.099.6097.21
KNN10089.1175.5188.0374.0890.6279.7896.7170.4391.1274.9566.2888.5018.0096.82
", + "bbox": [ + 117, + 279, + 878, + 750 + ], + "page_idx": 21 + }, + { + "type": "footer", + "text": "4994", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 21 + }, + { + "type": "page_number", + "text": "10", + "bbox": [ + 490, + 940, + 509, + 952 + ], + "page_idx": 21 + }, + { + "type": "table", + "img_path": "images/bc869ff4eda0c378d004e66b37464d033470ececb1140899cca5cfc5e6b25b64.jpg", + "table_caption": [ + "Table 6: OOD detection performance for document classification. Spatial-RoBERTaBase (Pre) or SRBase (Pre) denotes applying the spatial-aware adapter in the word embedding layer. Spatial-RoBERTaBase (Post) or SRBase (Post) denotes applying the spatial-aware adaptor at the output layer." + ], + "table_footnote": [], + "table_body": "
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
RoBERTaBaseFine-tune on RVL-CDIP (ID)
90.19MSP91.1973.7090.8473.4991.8271.5391.0372.3591.2272.7793.0280.9497.6074.59
MaxLogit96.8879.0496.8779.3898.0475.8598.5477.4597.5877.93100.0082.7699.4079.99
Energy97.4878.9697.2379.3198.4075.7199.0777.2598.0477.81100.0082.7199.2080.06
KNN1053.2088.9458.5088.6261.3786.2563.7288.2959.2088.0222.0996.5268.6092.47
KNN2053.4488.8158.9088.5061.6586.0763.6088.1559.4087.8827.9196.3871.7092.02
KNN5053.8488.5259.4288.4262.0185.8164.1687.8059.8687.6432.5696.0774.3091.37
KNN10055.5688.1060.6788.2063.6985.4164.7787.4261.1787.2834.8895.6776.5090.81
No fine-tune
-KNN1093.1163.5288.1566.3494.5766.9298.4253.3793.5662.5425.5895.9986.0072.99
KNN2092.9963.1888.3965.7894.5766.0898.4252.1093.5961.7826.7495.7187.3070.44
KNN5092.6762.4189.3164.7294.1764.7498.3450.0793.6260.4826.7495.0290.8066.04
KNN10092.6761.5789.5963.5794.0163.4598.1748.3393.6159.2329.0794.3492.8061.62
SRBase(Pre)Pre-train on IIT-CDIP → fine-tune on RVL-CDIP (ID)
97.11MSP46.8074.5254.6470.5856.2669.7254.3070.7453.0071.3944.1975.7957.2069.23
MaxLogit39.4388.6446.4889.9249.9685.7548.3087.6646.0487.9933.7293.4250.6088.70
Energy39.4388.6646.4889.9450.0085.7648.3087.6746.0588.0133.7293.4550.6088.71
KNN1031.9194.4142.1992.6546.6589.3142.0992.6540.7192.2610.4797.4552.1092.93
KNN2032.3194.2842.5992.6447.0189.2143.4392.5341.3492.1611.6397.3153.3092.80
KNN5034.3993.9943.8392.3649.0488.9345.4192.1943.1791.8712.7997.0153.1092.51
KNN10035.1593.7644.2792.1549.4888.6546.1491.9743.7691.6315.1296.8149.7092.44
Pre-train on IIT-CDIP (no fine-tune)
-KNN1078.8278.9279.9973.8977.6981.3291.4876.5282.0077.6610.4798.0887.3080.89
KNN2079.7477.9582.6472.1779.8180.4092.1375.1183.5876.4116.2897.6092.1076.94
KNN5080.4276.8785.1369.6282.1278.9392.9873.0185.1674.6122.0996.6695.2070.53
KNN10081.4375.7086.9067.1983.4077.1293.3871.0786.2872.7727.9195.8696.6064.56
SRBase(Post)Fine-tune on RVL-CDIP (ID)
97.10MSP58.0578.3776.4665.4465.8075.0061.8177.5965.5374.1054.6581.6593.5052.85
MaxLogit49.2089.8272.3680.2857.8287.2852.5290.0457.9886.8634.8894.8891.6073.37
Energy47.5689.8771.9680.3056.5887.3251.1890.1056.8286.9034.8895.0491.3073.39
KNN1037.4393.3764.0886.8349.4489.8246.9292.1749.4790.5526.7496.3890.1080.21
KNN2038.2793.2565.3386.5250.8089.6648.0991.9950.6290.3526.7496.2391.2079.57
KNN5040.4392.9867.3886.0252.8389.3850.6591.5852.8289.9926.7495.8992.1078.48
KNN10041.9992.7767.9485.6253.8789.1751.2291.3353.7689.7229.0795.6792.6077.68
SRLarge(Pre)Pre-train on IIT-CDIP → fine-tune on RVL-CDIP (ID)
97.37MSP62.3767.8271.2763.3672.8762.5470.2563.8469.1964.3976.7460.6167.0065.48
MaxLogit33.3990.1539.2589.8742.3088.1237.0591.6638.0089.9531.4092.4127.7094.23
Energy33.3990.1639.2589.8842.3088.1337.0591.6638.0089.9631.4092.4227.7094.22
KNN1028.1894.4742.4393.0137.4391.7431.1394.7234.7993.4925.5896.2418.6096.28
KNN2028.7894.3242.4392.9038.0791.5832.0294.5535.3393.3425.5896.0218.6096.33
KNN5030.2293.9543.7192.6940.0691.2634.5494.1037.1393.0026.7495.5221.4096.14
KNN10030.8693.7144.1192.5640.6691.0535.4793.8837.7892.8026.7495.2221.7096.11
Pre-train on IIT-CDIP (no fine-tune)
-KNN1068.4980.4388.2369.8371.7583.1188.1173.3279.1476.6775.5884.3649.8092.02
KNN2071.7478.7790.2467.4175.6681.3889.0471.1481.6774.6881.4081.5562.2090.29
KNN5075.4676.4992.8163.8280.1778.7290.4267.8484.7271.7282.5677.1578.2087.49
KNN10077.6274.5994.4260.9483.1676.2591.8065.3086.7569.2784.8873.3488.2084.96
", + "bbox": [ + 117, + 326, + 878, + 726 + ], + "page_idx": 22 + }, + { + "type": "footer", + "text": "4995", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 22 + }, + { + "type": "page_number", + "text": "11", + "bbox": [ + 490, + 940, + 507, + 952 + ], + "page_idx": 22 + }, + { + "type": "table", + "img_path": "images/4f7daab9e9d0e41bdd5cd49901dd20393e84ed097dd7d1931e69a6d1d192428b.jpg", + "table_caption": [ + "Table 7: OOD detection performance for document classification with the different number of pre-training data from IIT-CDIP." + ], + "table_footnote": [], + "table_body": "
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
ViTBase(10%)Pre-train on 10% IIT-CDIP→ fine-tune on RVL-CDIP (ID)
94.89MSP55.8088.3748.6191.3863.9383.8355.5288.5555.9688.0352.0589.6034.1095.04
MaxLogit50.3691.5137.7794.3062.3787.9753.6992.1151.0591.4738.3694.2428.6096.06
Energy50.5691.4837.0894.3363.4987.8955.1992.0051.5891.4238.3694.2929.4095.96
GradNorm55.5679.7545.9684.7966.9274.0758.4481.0756.7279.9247.9582.0434.9091.68
KNN1050.4092.6043.5193.9251.6090.5474.4788.8755.0091.4820.5597.199.2098.21
KNN2049.8092.7040.3894.4353.3990.2674.7288.7754.5791.5423.2996.9810.4098.05
KNN5046.7292.8934.2795.2456.0789.9274.5588.4552.9091.6227.4096.5612.8097.80
KNN10045.4892.8929.3395.6757.6289.5675.0488.2551.8791.5930.1496.2115.0097.57
Pre-train on IIT-CDIP (no fine-tune)
-KNN1098.9243.0897.6749.0099.5254.4199.3540.2698.8646.6993.1592.516.9098.06
KNN2098.8842.4797.7548.5799.5253.7599.3539.5698.8846.0994.5292.248.6097.91
KNN5098.8041.7097.8348.0499.5252.9199.3538.6298.8845.3295.8991.8010.6097.66
KNN10098.7641.2097.7947.7099.4852.3299.3538.0198.8444.8198.6391.3114.5097.41
ViTBase(20%)Pre-train on 20% IIT-CDIP→ fine-tune on RVL-CDIP (ID)
94.62MSP54.3689.0151.6391.3164.5785.2360.5188.6757.7788.5660.2789.3444.2093.73
MaxLogit44.3292.1638.2194.1864.9287.6358.5691.3351.5091.3245.2192.6339.7094.36
Energy44.3692.1737.8994.2466.5687.5160.3991.2252.3091.2846.5892.6241.5094.18
GradNorm90.5154.9292.0451.6794.2945.4198.1332.3693.7446.0995.8940.4489.7059.01
KNN1052.2092.5845.8493.7353.7990.7577.8487.0257.4291.0217.8197.3316.9097.40
KNN2051.6092.6643.5594.1555.6390.4678.0486.7957.2091.0219.1897.0619.4097.11
KNN5050.1292.8639.9894.8258.0290.1878.7786.5456.7291.1019.1896.6323.1096.68
KNN10048.0492.9134.7595.2860.3889.8878.9886.4255.5491.1220.5596.2726.2096.35
Pre-train on IIT-CDIP (no fine-tune)
-KNN1098.1641.1397.5147.1299.4853.0599.3138.7998.6245.0294.5291.808.0097.41
KNN2098.1240.7197.5146.7999.4852.5299.3138.3198.6044.5894.5291.488.7097.25
KNN5098.0440.1097.5546.3199.4851.8499.3937.6398.6243.9795.8991.0111.5096.99
KNN10098.0039.7497.5545.9899.4851.3499.3937.2698.6043.5897.2690.5514.6096.70
ViTBase(40%)Pre-train on 40% IIT-CDIP→ fine-tune on RVL-CDIP (ID)
94.63MSP55.4888.6552.2791.5464.4985.5258.0889.2057.5888.7367.1284.6245.8093.82
MaxLogit47.1291.7440.0694.0961.0588.6856.5792.0151.2091.6369.8689.8132.9095.46
Energy47.1291.7339.9494.1062.3388.6258.6091.8852.0091.5869.8689.6532.7095.44
GradNorm47.0085.7641.9089.6460.6981.3753.7387.0650.8385.9664.3881.1234.0092.93
KNN1053.2892.1348.3392.9946.4592.2075.6188.8755.9291.5534.2595.536.8098.56
KNN2052.7692.2445.8893.5748.1291.9574.8488.7555.4091.6332.8895.217.8098.36
KNN5051.2892.5240.9494.5150.5291.7075.0888.4654.4691.8035.6294.6710.9098.04
KNN10050.3292.6236.1695.1253.3591.3675.9388.2453.9491.8439.7394.2513.6097.76
Pre-train on IIT-CDIP (no fine-tune)
-KNN1097.5640.6097.0346.2899.2453.7699.1539.6298.2445.0682.1992.021.0099.59
KNN2097.5640.0096.9545.8699.2453.1899.1539.1298.2244.5482.1991.631.0099.55
KNN5097.5639.2496.9945.2099.2452.3999.1538.4998.2443.8386.3091.071.0099.50
KNN10097.6038.7897.0344.7999.2451.7699.1538.1598.2643.3790.4190.671.2099.45
ViTBase(100%)Pre-train on 100% IIT-CDIP→ fine-tune on RVL-CDIP (ID)
94.79MSP54.2888.8049.1491.8064.6084.4558.8588.7856.7288.4661.6489.4441.0094.27
MaxLogit44.9692.1338.0194.5263.9787.9756.4991.8150.8691.6168.4990.6534.6095.26
Energy45.7292.1138.0194.5565.8487.8657.9191.7051.8791.5672.6090.4134.8095.14
GradNorm48.7284.2144.3687.5063.4978.0756.2584.7953.2083.6460.2782.9635.6091.24
KNN1045.1693.1439.1394.6251.6890.8573.5888.8152.3991.8650.6893.0910.4098.04
KNN2044.8893.1436.6495.0453.3590.5974.2788.6752.2891.8650.6892.6712.0097.81
KNN5043.6793.1931.1895.6056.7490.2975.2888.4951.7291.8957.5392.2315.6097.45
KNN10043.6393.1527.5295.9458.7490.0276.1888.3851.5291.8761.6492.0118.9097.18
Pre-train on IIT-CDIP (no fine-tune)
-KNN1097.0442.3593.9750.1797.4152.6898.0143.1996.6147.1012.3397.473.1098.38
KNN2097.1641.9994.0149.9697.8152.0198.0942.7396.7746.6715.0796.953.0098.31
KNN5096.9641.6294.3449.5698.0051.2098.0542.2496.8446.1621.9296.082.7098.18
KNN10097.0041.4894.9049.3198.1250.6598.1342.0397.0445.8736.9995.292.3098.27
", + "bbox": [ + 119, + 135, + 878, + 602 + ], + "page_idx": 23 + }, + { + "type": "table", + "img_path": "images/9ebecc600f08575451f828abb7727de28b96970541821c23288df6966b5ba861.jpg", + "table_caption": [ + "Table 8: OOD detection performance for document classification. Longformer $_{4096}$ denotes the original model adopted from the Huggingface model hub. Longformer $_{4096}$ (+) denotes the additional pre-training on IIT-CDIP." + ], + "table_footnote": [], + "table_body": "
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
Longformer4096Fine-tune on RVL-CDIP (ID)
90.71MSP95.0064.3295.6262.1795.8960.5393.9566.8995.1263.4888.3777.5098.6054.72
MaxLogit97.1272.8497.0775.2298.2470.3995.8277.5797.0674.0090.7086.6299.6068.10
Energy97.4872.8297.3575.2198.3670.3796.5977.5697.4473.9991.8686.6399.8068.08
KNN1058.4588.2165.6586.8867.8083.9956.7889.5362.1787.1527.9196.0182.1086.31
KNN2058.9788.0465.5786.6068.1283.8057.3589.3462.5086.9429.0795.8282.6085.93
KNN5060.2587.6466.5786.2568.9183.4158.8188.9663.6486.5630.2395.4682.7085.27
KNN10061.9787.1968.1485.8170.1582.9560.4788.6065.1886.1434.8895.0482.8084.75
No fine-tune
-KNN1098.0455.4597.6359.9798.7651.7598.1353.1698.1455.0870.9388.69100.0064.97
KNN2098.1255.1997.6759.6498.8051.2798.1752.7198.1954.7070.9388.51100.0064.08
KNN5098.0054.8297.6359.1398.8050.5798.3052.0798.1854.1573.2688.29100.0062.82
KNN10097.9254.4897.6758.6298.8450.0098.3451.6298.1953.6874.4288.14100.0061.70
Longformer4096 (+)Pre-train on IIT-CDIP→ fine-tune on RVL-CDIP (ID)
91.13MSP95.2064.0895.6261.3896.0559.4794.4863.1395.3462.0290.7067.2698.0055.52
MaxLogit96.9675.4196.5476.0397.8970.1596.7174.5697.0274.04100.0078.6599.7072.88
Energy97.2875.4096.5476.0398.2870.1497.1674.5597.3274.03100.0078.5999.7072.86
KNN1058.7389.2566.2187.5772.0383.7663.6888.7265.1687.3248.8494.7886.4087.84
KNN2058.6189.1865.9787.4571.6783.6963.3988.6164.9187.2348.8494.6285.3087.70
KNN5061.1788.9666.9787.2972.8383.4765.8388.3366.7087.0155.8194.2585.2087.39
KNN10061.7388.7966.9387.1173.3083.2466.1588.1567.0386.8255.8194.0084.7087.21
Pre-train on IIT-CDIP (no fine-tune)
-KNN1095.4861.4098.0753.6697.7355.5598.6648.7097.4954.8381.4091.1297.4046.27
KNN2095.5660.9297.9552.9597.4954.9798.5048.2197.3854.2684.8890.6297.5045.55
KNN5095.6059.9497.9551.7797.4153.9798.6247.2997.4053.2487.2189.9598.2044.18
KNN10095.6059.0497.9950.7497.2152.9998.5846.5197.3452.3288.3789.5298.5043.09
", + "bbox": [ + 119, + 670, + 878, + 902 + ], + "page_idx": 23 + }, + { + "type": "page_number", + "text": "4996 12", + "bbox": [ + 480, + 928, + 521, + 952 + ], + "page_idx": 23 + }, + { + "type": "table", + "img_path": "images/1c93ee91c8abe7cbd433f9a636c79a1085d54fca3364e9ccd6c0fb87e359e1ac.jpg", + "table_caption": [ + "Table 9: OOD detection performance for document classification. All models are pre-trained on ImageNet." + ], + "table_footnote": [], + "table_body": "
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
ResNet-50Pre-train on ImageNet→ fine-tune on RVL-CDIP (ID)
91.12MSP64.4987.8755.8990.9466.6087.3177.8880.8766.2286.7551.1692.7663.1090.36
MaxLogit64.8988.5947.9792.8165.4087.5277.5681.8763.9687.7041.8694.6254.0093.29
Energy67.0988.3047.8192.8666.6887.2478.5381.7565.0387.5439.5394.7348.5093.68
KNN1073.3886.8267.9887.4671.3187.8492.9077.7476.3984.966.9899.125.2098.98
KNN2074.9086.4166.2987.7973.8287.2193.9576.5177.2484.486.9898.965.5098.85
KNN5076.6686.0466.4188.4878.2986.3995.5074.7679.2283.925.8198.685.9098.70
KNN10077.5485.6165.4188.9982.1685.4396.2373.3780.3383.356.9898.346.3098.51
Pre-train on ImageNet
-KNN1096.9651.1494.6251.7598.7653.8499.5937.6097.4848.5883.5685.0020.8097.00
KNN2096.9650.3794.3451.5498.9252.9899.5936.6097.4547.8783.5684.4922.7096.71
KNN5096.9249.2994.2951.3099.0051.8499.5935.1597.4546.9083.5684.0326.7096.21
KNN10097.1248.6094.5451.2599.1651.1199.5534.3697.5946.3382.1983.3129.4095.67
Swin10Pre-train on ImageNet→ fine-tune on RVL-CDIP (ID)
95.74MSP47.6488.0949.9088.1158.2283.1450.2888.9051.5187.0649.3291.3136.5093.63
MaxLogit42.3993.1142.4793.4558.6288.7945.9093.1847.3492.1350.6892.5032.2095.65
Energy43.1593.0542.9593.4059.0288.7046.7193.0747.9692.0652.0592.3833.6095.49
KNN1049.4492.8246.7392.8742.9092.5772.6988.4552.9491.6816.4496.736.1098.30
KNN2048.8492.9543.2793.5144.5392.3272.2888.3552.2391.7817.8196.527.4098.10
KNN5046.4493.2639.2594.5747.4192.0973.3487.8751.6191.9526.0396.158.6097.80
KNN10043.7693.4235.0395.2950.0891.7275.7787.4251.1691.9628.7795.9411.3097.55
Pre-train on ImageNet
-KNN1098.5652.7595.0655.1499.3658.8599.8041.8698.2052.1565.7593.262.1099.35
KNN2098.4451.8695.1854.7299.3257.8899.8040.6698.1851.2868.4992.522.6099.22
KNN5098.5250.6995.3854.1399.1656.6199.7639.0198.2050.1178.0891.143.4098.99
KNN10098.7249.9695.6653.8099.1655.8499.7638.1698.3249.4479.4589.894.3098.77
ViTBasePre-train on ImageNet→ fine-tune on RVL-CDIP (ID)
94.38MSP56.8189.1452.1991.8067.4884.2659.9088.7759.1088.4947.6792.9859.5091.99
MaxLogit50.7691.3744.6093.7568.0486.9455.1591.8154.6490.9740.7094.2052.4093.16
Energy51.1691.3144.5293.7569.4386.8156.0991.7755.3090.9138.3794.1153.2093.11
KNN1062.5790.1257.7390.9153.6790.3684.5086.1964.6289.4012.7997.9613.0097.92
KNN2063.0190.2456.0191.5155.0390.0284.3886.0164.6189.4415.1297.7614.9097.67
KNN5061.9790.6253.2392.6258.2689.5784.2585.6464.4389.6116.2897.3819.8097.24
KNN10060.2990.8549.7093.5360.3889.0784.0185.4363.6089.7216.2897.0523.6096.82
Pre-train on ImageNet
-KNN1098.4852.1595.0256.9499.4853.7799.4738.9098.1150.4493.1590.2720.4097.13
KNN2098.4851.4195.0656.6199.4452.9299.5537.6198.1349.6494.5289.4422.6096.80
KNN5098.3250.4394.8656.2199.4051.8699.5935.8298.0448.5897.2688.2326.6096.25
KNN10098.4049.7695.0655.9099.4451.1599.5934.5998.1247.8598.6387.2431.2095.76
", + "bbox": [ + 119, + 343, + 878, + 680 + ], + "page_idx": 24 + }, + { + "type": "footer", + "text": "4997", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 24 + }, + { + "type": "page_number", + "text": "13", + "bbox": [ + 489, + 940, + 507, + 952 + ], + "page_idx": 24 + }, + { + "type": "table", + "img_path": "images/1b1752f689735deabd5b92180920f0866266f465367a3d1dc83a8f3255e9c4a5.jpg", + "table_caption": [ + "Table 10: OOD detection performance for document classification (select OOD categories achieve the best performance across most of the models with different modalities)." + ], + "table_footnote": [], + "table_body": "
RoBERTaBase | ID Acc | Method | OOD Dataset (In-domain) | OOD Dataset (Out-domain)
Email | Resume | File folder | Sci. publication | Average | Sci. Poster | Receipt
FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC
Pre-train on pure-text data → fine-tune on RVL-CDIP (ID)
86.13 | MSP | 96.22 | 60.38 | 90.67 | 71.72 | 93.82 | 59.47 | 93.86 | 65.51 | 93.64 | 64.27 | 91.86 | 70.57 | 93.00 | 69.99
MaxLogit | 99.21 | 66.57 | 95.80 | 73.66 | 95.47 | 66.81 | 97.09 | 65.63 | 96.89 | 68.17 | 94.19 | 77.17 | 94.60 | 74.69
Energy | 99.60 | 66.53 | 96.64 | 73.57 | 95.14 | 66.82 | 97.21 | 65.35 | 97.15 | 68.07 | 94.19 | 77.44 | 95.60 | 74.90
KNN10 | 83.70 | 82.77 | 69.02 | 84.28 | 88.32 | 74.06 | 86.11 | 74.02 | 81.79 | 78.78 | 43.02 | 92.74 | 72.00 | 88.87
KNN20 | 84.50 | 82.35 | 69.06 | 84.21 | 88.20 | 73.71 | 86.72 | 74.02 | 82.12 | 78.57 | 48.84 | 92.38 | 73.80 | 88.31
KNN50 | 84.98 | 81.57 | 68.86 | 84.06 | 88.08 | 73.01 | 87.08 | 73.94 | 82.25 | 78.14 | 54.65 | 91.92 | 75.40 | 87.44
KNN100 | 86.25 | 80.88 | 70.26 | 83.80 | 88.28 | 72.40 | 87.44 | 73.89 | 83.06 | 77.74 | 58.14 | 91.50 | 78.20 | 86.68
Pre-train on pure-text data
- | KNN10 | 86.09 | 75.63 | 95.12 | 58.62 | 97.71 | 59.75 | 98.95 | 50.54 | 94.47 | 61.14 | 10.47 | 98.46 | 89.80 | 63.01
KNN20 | 86.29 | 74.92 | 95.00 | 58.14 | 97.71 | 58.88 | 99.03 | 49.49 | 94.51 | 60.36 | 12.79 | 98.35 | 90.80 | 60.59
KNN50 | 87.32 | 73.55 | 94.64 | 57.53 | 97.83 | 57.56 | 99.15 | 48.11 | 94.73 | 59.19 | 12.79 | 98.11 | 93.30 | 56.61
KNN100 | 89.27 | 72.48 | 94.28 | 57.12 | 97.99 | 56.52 | 99.11 | 47.37 | 95.16 | 58.37 | 11.63 | 97.89 | 94.30 | 52.98
Pre-train on pure-text data → fine-tune on RVL-CDIP (ID)
88.34 | MSP | 96.90 | 60.55 | 96.20 | 59.14 | 96.31 | 55.72 | 97.82 | 55.12 | 96.81 | 57.63 | 95.35 | 80.44 | 99.60 | 52.82
MaxLogit | 98.97 | 68.97 | 97.60 | 65.64 | 95.67 | 63.42 | 98.63 | 62.87 | 97.72 | 65.23 | 97.67 | 88.42 | 99.70 | 71.54
Energy | 99.44 | 68.96 | 97.92 | 65.63 | 95.83 | 63.42 | 98.71 | 62.83 | 97.98 | 65.21 | 97.67 | 88.46 | 99.90 | 71.55
KNN10 | 68.28 | 88.72 | 69.62 | 83.36 | 78.17 | 85.08 | 90.88 | 74.98 | 76.74 | 83.04 | 16.28 | 96.90 | 81.60 | 86.94
KNN20 | 68.04 | 88.61 | 70.10 | 83.22 | 77.53 | 84.92 | 90.75 | 74.95 | 76.60 | 82.92 | 16.28 | 96.84 | 81.80 | 86.49
KNN50 | 69.28 | 88.29 | 70.98 | 82.92 | 78.29 | 84.46 | 90.96 | 74.82 | 77.38 | 82.62 | 19.77 | 96.59 | 83.40 | 85.71
KNN100 | 69.28 | 88.15 | 71.34 | 82.69 | 78.49 | 84.21 | 90.43 | 74.86 | 77.39 | 82.48 | 22.09 | 96.38 | 83.90 | 85.17
Pre-train on pure-text data
- | KNN10 | 97.42 | 47.77 | 95.72 | 50.09 | 97.67 | 46.58 | 99.52 | 38.61 | 97.58 | 45.76 | 45.35 | 93.92 | 100.00 | 63.03
KNN20 | 97.46 | 46.91 | 95.60 | 49.80 | 97.71 | 46.02 | 99.52 | 38.21 | 97.57 | 45.24 | 46.51 | 93.77 | 100.00 | 61.92
KNN50 | 97.58 | 45.68 | 95.56 | 49.45 | 97.75 | 45.19 | 99.52 | 37.72 | 97.60 | 44.51 | 50.00 | 93.60 | 100.00 | 60.35
KNN100 | 97.66 | 44.78 | 95.60 | 49.17 | 97.87 | 44.63 | 99.56 | 37.57 | 97.67 | 44.04 | 51.16 | 93.48 | 100.00 | 58.89
Pre-train on ImageNet → fine-tune on RVL-CDIP (ID)
85.25 | MSP | 60.53 | 87.26 | 69.53 | 87.00 | 27.86 | 95.13 | 94.05 | 75.79 | 62.99 | 86.30 | 91.78 | 74.40 | 27.80 | 95.47
MaxLogit | 59.98 | 89.27 | 72.61 | 88.02 | 30.04 | 95.41 | 93.39 | 75.38 | 64.00 | 87.02 | 80.82 | 79.89 | 30.00 | 95.29
Energy | 63.71 | 89.14 | 75.64 | 87.55 | 45.71 | 94.15 | 92.77 | 75.02 | 69.46 | 86.46 | 78.08 | 81.07 | 62.20 | 93.44
KNN10 | 72.46 | 85.68 | 85.69 | 85.30 | 68.62 | 76.01 | 96.15 | 55.35 | 80.73 | 75.59 | 36.99 | 94.56 | 2.20 | 99.37
KNN20 | 76.15 | 84.55 | 88.65 | 84.22 | 66.13 | 80.67 | 96.54 | 56.31 | 81.87 | 76.44 | 38.36 | 93.81 | 2.70 | 99.28
KNN50 | 80.37 | 82.61 | 92.00 | 82.49 | 60.98 | 86.77 | 96.93 | 59.06 | 82.57 | 77.73 | 47.95 | 92.42 | 3.80 | 99.11
KNN100 | 84.70 | 80.54 | 95.15 | 80.64 | 51.29 | 91.78 | 97.16 | 61.19 | 82.08 | 78.54 | 50.68 | 91.01 | 4.70 | 98.91
Pre-train on ImageNet
- | KNN10 | 99.72 | 40.94 | 99.65 | 21.52 | 52.47 | 91.03 | 98.33 | 45.40 | 87.54 | 49.72 | 84.93 | 84.38 | 20.40 | 97.12
KNN20 | 99.68 | 41.18 | 99.65 | 20.68 | 50.61 | 91.63 | 98.41 | 44.65 | 87.09 | 49.54 | 86.30 | 83.94 | 23.40 | 96.87
KNN50 | 99.64 | 41.58 | 99.65 | 19.48 | 46.97 | 92.36 | 98.37 | 43.49 | 86.16 | 49.23 | 84.93 | 83.70 | 26.90 | 96.43
KNN100 | 99.64 | 42.19 | 99.65 | 18.98 | 44.91 | 92.84 | 98.33 | 42.86 | 85.63 | 49.22 | 84.93 | 83.12 | 29.20 | 95.98
Pre-train on ImageNet → fine-tune on RVL-CDIP (ID)
91.25 | MSP | 70.23 | 81.87 | 67.68 | 85.31 | 43.97 | 92.68 | 83.78 | 79.40 | 66.42 | 84.82 | 86.30 | 78.23 | 54.10 | 91.62
MaxLogit | 54.73 | 87.04 | 46.51 | 92.30 | 17.25 | 96.51 | 90.86 | 74.11 | 52.34 | 87.49 | 82.19 | 83.20 | 34.40 | 94.82
Energy | 54.05 | 87.11 | 44.38 | 92.49 | 16.38 | 96.63 | 91.29 | 73.59 | 51.53 | 87.46 | 84.93 | 83.07 | 33.80 | 94.82
KNN10 | 56.08 | 90.66 | 48.80 | 92.84 | 38.31 | 93.31 | 91.02 | 66.91 | 58.55 | 85.93 | 27.40 | 96.03 | 3.30 | 98.84
KNN20 | 54.61 | 90.95 | 49.98 | 92.68 | 27.58 | 95.24 | 91.44 | 68.54 | 55.90 | 86.85 | 26.03 | 96.35 | 4.00 | 98.76
KNN50 | 55.25 | 90.68 | 52.15 | 92.37 | 15.75 | 97.28 | 91.25 | 71.62 | 53.60 | 87.99 | 28.77 | 96.10 | 4.90 | 98.59
KNN100 | 56.20 | 90.31 | 54.75 | 92.17 | 9.14 | 98.00 | 91.13 | 75.11 | 52.80 | 88.90 | 30.14 | 95.77 | 6.50 | 98.35
Pre-train on ImageNet
- | KNN10 | 99.84 | 43.55 | 99.76 | 20.64 | 47.92 | 93.20 | 98.91 | 37.55 | 86.61 | 48.74 | 58.90 | 93.88 | 1.60 | 99.32
KNN20 | 99.84 | 44.47 | 99.80 | 18.36 | 41.31 | 94.14 | 99.03 | 36.45 | 85.00 | 48.36 | 72.60 | 92.69 | 2.60 | 99.00
KNN50 | 99.88 | 45.26 | 99.80 | 17.92 | 39.97 | 94.39 | 99.03 | 36.71 | 84.67 | 48.57 | 79.45 | 91.97 | 3.70 | 98.81
Pre-train on ImageNet → fine-tune on RVL-CDIP (ID)
89.97 | MSP | 61.25 | 85.84 | 66.57 | 85.04 | 40.44 | 93.10 | 85.84 | 81.83 | 63.52 | 86.45 | 73.97 | 80.66 | 60.30 | 90.41
MaxLogit | 53.02 | 90.37 | 55.77 | 88.86 | 19.91 | 96.25 | 92.38 | 79.69 | 55.27 | 88.79 | 76.71 | 85.16 | 50.60 | 93.12
Energy | 51.79 | 90.49 | 55.07 | 89.03 | 17.53 | 96.53 | 92.69 | 79.20 | 54.27 | 88.81 | 79.45 | 85.01 | 50.10 | 93.20
KNN10 | 54.13 | 91.18 | 52.86 | 91.18 | 58.49 | 87.46 | 92.88 | 65.98 | 64.59 | 83.95 | 42.47 | 95.07 | 11.00 | 97.94
KNN20 | 54.21 | 91.18 | 53.17 | 90.99 | 50.61 | 89.35 | 93.04 | 67.52 | 62.76 | 84.76 | 43.84 | 94.98 | 13.10 | 97.62
KNN50 | 54.53 | 91.05 | 53.33 | 90.79 | 41.95 | 92.82 | 93.00 | 72.06 | 60.70 | 86.68 | 42.47 | 94.74 | 17.30 | 97.12
KNN100 | 54.65 | 90.81 | 54.12 | 90.56 | 30.79 | 91.90 | 98.72 | 47.10 | 88.24 | 52.19 | 95.89 | 89.31 | 22.00 | 96.58
Pre-train on ImageNet
- | KNN10 | 99.80 | 46.46 | 99.68 | 26.50 | 58.65 | 90.61 | 98.72 | 46.40 | 89.21 | 52.49 | 87.67 | 91.39 | 19.90 | 97.25
KNN20 | 99.80 | 46.02 | 99.65 | 25.69 | 57.30 | 91.01 | 98.72 | 46.46 | 88.87 | 52.30 | 90.41 | 90.87 | 21.70 | 97.01
KNN50 | 99.80 | 45.48 | 99.61 | 24.76 | 55.16 | 91.52 | 98.76 | 46.69 | 88.33 | 52.11 | 94.52 | 89.99 | 24.30 | 96.62
KNN100 | 99.80 | 45.33 | 99.65 | 24.43 | 54.81 | 91.90 | 98.72 | 47.10 | 88.24 | 52.19 | 95.89 | 89.31 | 28.80 | 96.27
", + "bbox": [ + 119, + 246, + 878, + 785 + ], + "page_idx": 25 + }, + { + "type": "footer", + "text": "4998", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 25 + }, + { + "type": "page_number", + "text": "14", + "bbox": [ + 490, + 940, + 509, + 952 + ], + "page_idx": 25 + }, + { + "type": "table", + "img_path": "images/d21179af54df802164528cb4458c607a666cd4356a9b0dfd1da6f32220841944.jpg", + "table_caption": [ + "Table 11: OOD detection performance for document classification (randomly select four categories as OOD)." + ], + "table_footnote": [], + "table_body": "
RoBERTaBase | ID Acc | Method | OOD Dataset (In-domain) | OOD Dataset (Out-domain)
Letter | Handwritten | Advertisement | Memo | Average | Sci. Poster | Receipt
FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC
Pre-train on pure-text data → fine-tune on RVL-CDIP (ID)
88.86 | MSP | 70.22 | 79.21 | 50.14 | 87.24 | 84.64 | 67.80 | 91.42 | 57.99 | 74.10 | 73.06 | 95.35 | 59.75 | 94.30 | 55.12
MaxLogit | 66.04 | 87.51 | 39.65 | 92.53 | 86.47 | 77.03 | 91.67 | 71.84 | 70.96 | 82.23 | 100.00 | 77.89 | 96.80 | 71.96
Energy | 66.20 | 87.57 | 38.19 | 92.59 | 87.35 | 77.03 | 91.67 | 71.89 | 70.85 | 82.27 | 100.00 | 77.92 | 96.80 | 71.96
KNN10 | 62.62 | 80.19 | 60.98 | 70.90 | 75.62 | 80.24 | 85.84 | 69.20 | 71.26 | 75.13 | 94.19 | 81.99 | 90.40 | 82.48
KNN20 | 63.18 | 80.10 | 60.07 | 71.17 | 75.90 | 80.03 | 85.72 | 68.88 | 71.22 | 75.04 | 94.19 | 81.75 | 91.20 | 81.89
KNN50 | 63.78 | 80.00 | 57.30 | 71.70 | 76.34 | 79.67 | 85.88 | 68.38 | 70.82 | 74.94 | 94.19 | 81.45 | 91.80 | 81.09
KNN100 | 64.77 | 79.98 | 54.33 | 71.94 | 77.37 | 79.32 | 86.08 | 67.80 | 70.64 | 74.76 | 94.19 | 81.20 | 91.90 | 80.47
Pre-train on pure-text data
- | KNN10 | 85.53 | 59.90 | 98.61 | 21.79 | 96.21 | 56.72 | 97.69 | 58.39 | 94.51 | 49.20 | 12.79 | 98.01 | 84.50 | 65.73
KNN20 | 85.45 | 59.27 | 98.73 | 21.19 | 96.21 | 55.63 | 97.90 | 57.05 | 94.57 | 48.28 | 12.79 | 97.91 | 86.10 | 63.57
KNN50 | 86.80 | 57.94 | 98.77 | 20.45 | 96.89 | 54.12 | 98.30 | 55.35 | 95.19 | 46.96 | 13.95 | 97.60 | 89.30 | 59.64
KNN100 | 88.47 | 56.71 | 98.81 | 19.97 | 96.81 | 52.89 | 98.18 | 53.93 | 95.57 | 45.88 | 13.95 | 97.38 | 91.10 | 55.17
Pre-train on pure-text data → fine-tune on RVL-CDIP (ID)
92.08 | MSP | 65.96 | 69.58 | 50.38 | 77.93 | 81.52 | 60.89 | 90.21 | 54.23 | 72.02 | 65.66 | 82.56 | 60.14 | 95.00 | 50.90
MaxLogit | 62.19 | 87.35 | 44.64 | 89.79 | 79.97 | 78.84 | 88.39 | 68.08 | 68.80 | 81.02 | 80.23 | 84.19 | 94.30 | 77.36
Energy | 61.27 | 87.35 | 43.61 | 89.81 | 79.13 | 78.85 | 88.15 | 68.08 | 68.04 | 81.02 | 80.23 | 84.19 | 94.30 | 77.37
KNN10 | 58.65 | 79.54 | 50.77 | 71.81 | 66.56 | 83.48 | 80.87 | 75.19 | 64.21 | 77.51 | 58.14 | 92.78 | 90.00 | 77.76
KNN20 | 57.81 | 79.43 | 51.40 | 71.72 | 67.00 | 83.35 | 81.15 | 74.86 | 64.34 | 77.34 | 58.14 | 92.57 | 89.70 | 77.12
KNN50 | 58.77 | 79.30 | 51.60 | 71.67 | 66.72 | 83.15 | 81.31 | 74.36 | 64.60 | 77.12 | 61.63 | 92.24 | 89.80 | 76.17
KNN100 | 61.39 | 79.16 | 52.75 | 71.61 | 67.84 | 82.93 | 81.76 | 73.91 | 65.94 | 76.90 | 62.79 | 91.99 | 89.80 | 75.29
Pre-train on pure-text data
- | KNN10 | 99.40 | 47.83 | 100.00 | 27.75 | 98.28 | 47.03 | 93.20 | 60.40 | 97.72 | 45.75 | 46.51 | 93.85 | 100.00 | 63.64
KNN20 | 99.44 | 47.33 | 100.00 | 27.48 | 98.32 | 46.49 | 93.24 | 60.22 | 97.75 | 45.38 | 48.84 | 93.70 | 100.00 | 62.79
KNN50 | 99.44 | 46.33 | 100.00 | 27.23 | 98.40 | 45.85 | 93.41 | 60.05 | 97.81 | 44.86 | 51.16 | 93.51 | 100.00 | 61.55
KNN100 | 99.44 | 45.67 | 100.00 | 27.31 | 98.44 | 45.23 | 93.53 | 59.90 | 97.85 | 44.53 | 52.33 | 93.40 | 100.00 | 60.31
Pre-train on ImageNet → fine-tune on RVL-CDIP (ID)
87.80 | MSP | 70.58 | 85.35 | 55.29 | 89.88 | 64.29 | 86.54 | 71.15 | 85.58 | 65.33 | 86.84 | 54.79 | 91.70 | 77.20 | 84.67
MaxLogit | 64.25 | 87.46 | 53.59 | 90.72 | 49.70 | 90.60 | 64.45 | 88.71 | 58.00 | 89.37 | 36.99 | 95.13 | 78.90 | 86.86
Energy | 62.66 | 87.65 | 58.33 | 90.33 | 46.00 | 91.26 | 63.56 | 89.05 | 57.64 | 89.57 | 32.88 | 95.69 | 83.00 | 87.05
KNN10 | 90.99 | 79.37 | 56.36 | 90.64 | 72.41 | 86.20 | 89.17 | 81.74 | 77.23 | 84.49 | 2.74 | 99.32 | 39.70 | 93.70
KNN20 | 92.17 | 78.00 | 47.47 | 92.61 | 68.27 | 88.42 | 90.85 | 80.23 | 74.69 | 84.82 | 2.74 | 99.25 | 43.80 | 93.08
KNN50 | 94.32 | 75.96 | 28.44 | 94.49 | 65.65 | 89.27 | 92.78 | 77.91 | 70.30 | 84.41 | 1.37 | 98.97 | 49.70 | 92.09
KNN100 | 95.58 | 74.02 | 27.21 | 95.07 | 60.44 | 89.78 | 94.22 | 75.63 | 69.36 | 83.62 | 2.74 | 98.67 | 53.80 | 91.10
Pre-train on ImageNet
- | KNN10 | 98.46 | 42.21 | 77.29 | 81.41 | 27.87 | 91.16 | 99.08 | 43.47 | 75.68 | 64.56 | 80.82 | 89.98 | 12.30 | 98.17
KNN20 | 98.66 | 41.00 | 76.78 | 81.70 | 29.22 | 92.27 | 99.08 | 42.29 | 75.94 | 64.32 | 83.56 | 89.30 | 14.10 | 97.97
KNN50 | 98.58 | 39.53 | 76.58 | 81.81 | 31.01 | 92.05 | 99.12 | 40.80 | 76.32 | 63.55 | 83.56 | 88.51 | 16.30 | 97.61
KNN100 | 98.62 | 38.62 | 77.13 | 81.49 | 32.64 | 91.84 | 99.12 | 39.86 | 76.88 | 62.95 | 83.56 | 87.80 | 19.50 | 97.23
Pre-train on ImageNet → fine-tune on RVL-CDIP (ID)
92.42 | MSP | 63.96 | 87.03 | 65.21 | 88.15 | 73.56 | 79.72 | 61.40 | 88.46 | 66.03 | 85.84 | 84.93 | 74.34 | 49.60 | 92.49
MaxLogit | 56.49 | 90.22 | 75.36 | 87.00 | 72.64 | 84.26 | 44.22 | 93.01 | 62.18 | 88.62 | 72.60 | 84.16 | 29.10 | 95.70
Energy | 57.43 | 90.11 | 77.01 | 86.60 | 73.44 | 84.17 | 43.78 | 93.06 | 62.92 | 88.48 | 73.97 | 84.25 | 28.00 | 95.69
KNN10 | 60.27 | 90.12 | 66.90 | 90.76 | 49.66 | 89.15 | 47.67 | 92.67 | 56.12 | 90.68 | 42.47 | 94.28 | 7.20 | 98.56
KNN20 | 61.32 | 90.01 | 61.37 | 91.31 | 48.83 | 90.33 | 49.00 | 92.52 | 55.13 | 91.04 | 30.14 | 95.56 | 8.80 | 98.33
KNN50 | 62.22 | 89.78 | 56.44 | 91.56 | 50.34 | 89.55 | 48.52 | 92.30 | 54.38 | 90.80 | 26.03 | 95.72 | 11.80 | 97.97
KNN100 | 62.62 | 89.60 | 54.98 | 91.85 | 50.70 | 88.93 | 47.63 | 92.18 | 53.98 | 90.64 | 30.14 | 95.54 | 13.90 | 97.66
Pre-train on ImageNet
- | KNN10 | 99.15 | 45.57 | 86.02 | 79.44 | 32.45 | 90.98 | 99.52 | 46.20 | 79.28 | 65.55 | 24.66 | 96.24 | 0.40 | 99.78
KNN20 | 99.19 | 44.11 | 86.89 | 80.35 | 33.48 | 92.19 | 99.60 | 44.79 | 79.79 | 65.36 | 27.40 | 95.62 | 0.50 | 99.73
KNN50 | 99.23 | 42.39 | 87.99 | 81.66 | 36.78 | 91.59 | 99.60 | 43.07 | 80.90 | 64.68 | 43.84 | 94.57 | 0.80 | 99.63
KNN100 | 99.19 | 41.46 | 89.02 | 82.63 | 40.60 | 91.05 | 99.60 | 42.14 | 82.10 | 64.32 | 52.05 | 93.49 | 1.20 | 99.53
Pre-train on ImageNet → fine-tune on RVL-CDIP (ID)
91.03 | MSP | 69.68 | 86.81 | 69.67 | 87.88 | 72.25 | 80.78 | 69.38 | 86.61 | 70.24 | 85.52 | 67.12 | 85.97 | 58.50 | 91.47
MaxLogit | 63.35 | 89.20 | 68.40 | 88.58 | 69.58 | 84.38 | 61.08 | 89.94 | 65.60 | 88.02 | 57.53 | 89.41 | 48.40 | 93.04
Energy | 62.22 | 89.21 | 70.34 | 88.43 | 70.26 | 84.37 | 60.75 | 90.03 | 65.89 | 88.01 | 58.90 | 89.47 | 49.70 | 93.03
KNN10 | 68.10 | 88.99 | 54.90 | 92.30 | 53.44 | 88.05 | 58.19 | 91.34 | 58.66 | 90.17 | 38.36 | 95.02 | 22.90 | 96.71
KNN20 | 67.61 | 88.95 | 49.01 | 92.85 | 51.53 | 89.25 | 58.59 | 91.16 | 56.68 | 90.55 | 41.10 | 94.47 | 25.40 | 96.35
KNN50 | 67.29 | 88.91 | 42.54 | 93.15 | 53.96 | 88.43 | 58.75 | 90.88 | 55.64 | 90.34 | 42.47 | 93.60 | 29.90 | 95.78
KNN100 | 66.19 | 88.90 | 43.80 | 93.19 | 55.71 | 87.73 | 59.11 | 90.64 | 56.20 | 90.12 | 45.21 | 92.86 | 34.90 | 95.27
Pre-train on ImageNet
- | KNN10 | 98.90 | 41.98 | 90.96 | 77.15 | 34.87 | 90.69 | 99.40 | 41.21 | 81.03 | 62.76 | 54.79 | 94.27 | 10.80 | 98.47
KNN20 | 98.94 | 40.54 | 91.67 | 77.20 | 36.82 | 91.71 | 99.44 | 39.85 | 81.72 | 62.32 | 64.38 | 93.57 | 12.70 | 98.25
KNN50 | 99.07 | 38.75 | 92.61 | 76.99 | 40.00 | 91.17 | 99.52 | 38.14 | 82.80 | 61.26 | 75.34 | 92.47 | 15.90 | 97.87
KNN100 | 99.11 | 37.43 | 93.25 | 76.56 | 43.38 | 90.68 | 99.56 | 36.93 | 83.82 | 60.40 | 82.19 | 91.52 | 18.90 | 97.49
", + "bbox": [ + 119, + 105, + 878, + 645 + ], + "page_idx": 26 + }, + { + "type": "table", + "img_path": "images/a818ce65bc49fbbdc05638cbad98c0d341d7b4b8eaf06fd0ffa7635bf25db81f.jpg", + "table_caption": [ + "Table 12: OOD detection performance for document classification. All models are pre-trained on IIT-CDIP. For LayoutLM models, we adopt the checkpoints from the Huggingface model hub. For UDoc, we pre-train the model on our side. All models are fine-tuned on RVL-CDIP ID data." + ], + "table_footnote": [], + "table_body": "
ID Acc | Method | OOD Dataset (In-domain) | OOD Dataset (Out-domain)
Sci. Report | Presentation | Form | Letter | Average | Sci. Poster | Receipt
FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95
LayoutLMv1Base | 97.28 | MSP | 47.48 | 74.91 | 59.74 | 68.72 | 66.40 | 65.36 | 58.89 | 69.12 | 58.13 | 69.53 | 43.02 | 77.15 | 72.40
MaxLogit | 27.06 | 92.38 | 37.97 | 91.52 | 45.65 | 88.36 | 35.92 | 91.22 | 36.65 | 90.87 | 24.42 | 94.96 | 57.30
Energy | 27.06 | 92.40 | 37.97 | 91.54 | 45.65 | 88.36 | 35.92 | 91.23 | 36.65 | 90.88 | 24.42 | 94.97 | 57.30
KNN10 | 20.82 | 96.09 | 35.32 | 93.82 | 40.06 | 91.34 | 28.65 | 94.80 | 31.21 | 94.01 | 17.44 | 97.00 | 49.80
KNN20 | 21.74 | 95.93 | 36.20 | 93.77 | 41.42 | 91.12 | 30.44 | 94.61 | 32.45 | 93.86 | 17.44 | 96.82 | 51.70
KNN50 | 24.34 | 95.56 | 38.25 | 93.41 | 43.93 | 90.69 | 33.64 | 94.19 | 35.04 | 93.46 | 23.26 | 96.44 | 53.80
KNN100 | 25.54 | 95.30 | 39.13 | 93.20 | 45.17 | 90.35 | 34.78 | 93.99 | 36.16 | 93.21 | 25.58 | 96.24 | 54.70
LayoutLMv3 | 97.81 | MSP | 56.16 | 70.81 | 63.44 | 67.17 | 67.16 | 65.30 | 58.60 | 69.58 | 61.34 | 68.22 | 52.33 | 72.70 | 43.60
MaxLogit | 30.70 | 89.17 | 40.42 | 88.18 | 42.98 | 84.09 | 33.12 | 88.22 | 36.80 | 87.42 | 19.77 | 94.50 | 11.70
Energy | 30.70 | 89.18 | 40.42 | 88.18 | 42.98 | 84.10 | 33.12 | 88.23 | 36.80 | 87.42 | 19.77 | 94.51 | 11.70
KNN10 | 21.74 | 95.03 | 35.68 | 93.38 | 32.88 | 91.86 | 18.51 | 96.26 | 27.20 | 94.13 | 11.63 | 97.58 | 8.90
KNN20 | 22.74 | 94.90 | 36.56 | 93.20 | 33.96 | 91.66 | 19.64 | 96.15 | 28.22 | 93.98 | 12.79 | 97.44 | 10.00
KNN50 | 24.62 | 94.62 | 38.37 | 92.71 | 35.83 | 91.38 | 21.63 | 95.93 | 30.11 | 93.66 | 13.95 | 97.20 | 10.70
KNN100 | 25.22 | 94.38 | 39.29 | 92.32 | 36.55 | 91.09 | 22.48 | 95.79 | 30.88 | 93.40 | 16.28 | 97.04 | 11.80
UDocNet50 | 97.36 | MSP | 66.13 | 65.73 | 69.43 | 64.09 | 71.03 | 63.28 | 71.06 | 63.25 | 69.41 | 64.09 | 40.70 | 78.47 | 39.80
MaxLogit | 45.96 | 82.12 | 47.21 | 86.39 | 49.64 | 83.16 | 49.59 | 83.13 | 48.10 | 83.70 | 2.33 | 98.57 | 4.00
Energy | 45.96 | 82.12 | 47.21 | 86.40 | 49.64 | 83.16 | 49.59 | 83.13 | 48.10 | 83.70 | 2.33 | 98.60 | 4.00
KNN10 | 30.02 | 94.47 | 41.22 | 88.66 | 41.90 | 90.99 | 36.65 | 93.48 | 37.45 | 91.90 | 1.16 | 99.13 | 5.50
KNN20 | 31.10 | 94.36 | 41.98 | 88.44 | 42.10 | 90.90 | 38.03 | 93.35 | 38.30 | 91.76 | 1.16 | 99.04 | 6.90
KNN50 | 33.95 | 94.07 | 43.35 | 87.89 | 44.01 | 90.72 | 40.71 | 93.06 | 40.51 | 91.43 | 1.16 | 98.84 | 7.40
KNN100 | 34.83 | 93.84 | 43.75 | 87.51 | 45.01 | 90.61 | 41.96 | 92.90 | 41.39 | 91.22 | 1.16 | 98.72 | 8.30
", + "bbox": [ + 115, + 715, + 880, + 910 + ], + "page_idx": 26 + }, + { + "type": "footer", + "text": "4999", + "bbox": [ + 480, + 928, + 519, + 939 + ], + "page_idx": 26 + }, + { + "type": "page_number", + "text": "15", + "bbox": [ + 490, + 940, + 507, + 952 + ], + "page_idx": 26 + } +] \ No newline at end of file diff --git a/2023/A Critical Analysis of Document Out-of-Distribution Detection/dc0b7121-5749-4d5e-b65f-34d4dd4df565_model.json b/2023/A Critical Analysis of Document Out-of-Distribution Detection/dc0b7121-5749-4d5e-b65f-34d4dd4df565_model.json new file mode 100644 index 0000000000000000000000000000000000000000..4e201757dac99113f6f032cabe3830c80c9479bc --- /dev/null +++ b/2023/A Critical Analysis of Document Out-of-Distribution Detection/dc0b7121-5749-4d5e-b65f-34d4dd4df565_model.json @@ -0,0 +1,4247 @@ +[ + [ + { + "type": "title", + "bbox": [ + 0.173, + 0.091, + 0.823, + 0.11 + ], + "angle": 0, + "content": "A Critical Analysis of Document Out-of-Distribution Detection" + }, + { + "type": "text", + "bbox": [ + 0.169, + 0.125, + 0.832, + 0.175 + ], + "angle": 0, + "content": "Jiuxiang Gu\\(^{1*}\\) Yifei Ming\\(^{2*†}\\) Yi Zhou\\(^{3}\\) Jason Kuen\\(^{1}\\) \nVlad I. 
Morariu\\(^{1}\\) Handong Zhao\\(^{1}\\) Ruiyi Zhang\\(^{1}\\) Nikolaos Barmpalios\\(^{1}\\) \nAnqi Liu\\(^{3}\\) Yixuan Li\\(^{2}\\) Tong Sun\\(^{1}\\) Ani Nenkova\\(^{1}\\)" + }, + { + "type": "text", + "bbox": [ + 0.147, + 0.175, + 0.857, + 0.226 + ], + "angle": 0, + "content": "\\(^{1}\\)Adobe Research \\(^{2}\\)University of Wisconsin-Madison \\(^{3}\\)Johns Hopkins University \\(^{1}\\{jigu, kuen, morariu, hazhao, barmpali, ruizhang, tsun, nenkova\\} @adobe.com \\(^{2}\\{alvinming, sharonli\\} @cs.wisc.edu\\) \\(^{3}yzhou188@jhu.edu\\) \\(^{3}aliu@cs.jhu.edu\\)" + }, + { + "type": "title", + "bbox": [ + 0.261, + 0.253, + 0.341, + 0.267 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.142, + 0.283, + 0.461, + 0.681 + ], + "angle": 0, + "content": "Large-scale pre-training is widely used in recent document understanding tasks. During deployment, one may expect that models should trigger a conservative fallback policy when encountering out-of-distribution (OOD) samples, which highlights the importance of OOD detection. However, most existing OOD detection methods focus on single-modal inputs such as images or texts. While documents are multimodal in nature, it is underexplored if and how multi-modal information in documents can be exploited for OOD detection. In this work, we first provide a systematic and in-depth analysis on OOD detection for document understanding models. We study the effects of model modality, pre-training, and fine-tuning across various types of OOD inputs. In particular, we find that spatial information is critical for document OOD detection. To better exploit spatial information, we propose a spatial-aware adapter, which serves as a parameter-efficient add-on module to adapt transformer-based language models to the document domain. 
Extensive experiments show that adding the spatial-aware adapter significantly improves the OOD detection performance compared to directly using the language model and achieves superior performance compared to competitive baselines." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.696, + 0.26, + 0.71 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.722, + 0.49, + 0.884 + ], + "angle": 0, + "content": "The recent success of large-scale pre-training has propelled the widespread deployment of deep learning models in the document domain, where model predictions are used to help humans make decisions in various applications such as tax form processing and medical reports analysis. However, models are typically pre-trained on data collected from the web but deployed in an environment with distributional shifts (Cui et al., 2021). For instance, the outbreak of COVID-19 has led to continually" + }, + { + "type": "image", + "bbox": [ + 0.51, + 0.251, + 0.883, + 0.345 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.509, + 0.354, + 0.885, + 0.454 + ], + "angle": 0, + "content": "Figure 1: Illustration of OOD detection for document classification. The pre-training and fine-tuning pipelines are shown on the top left and bottom left, respectively. Right: During inference time, an OOD score can be derived based on logits \\( g(x) \\) or feature embeddings \\( z := h(x) \\). A document input \\( x \\) is identified as OOD if its OOD score is below some threshold \\( \\gamma \\)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.483, + 0.884, + 0.563 + ], + "angle": 0, + "content": "changing data distributions in machine-assisted medical document analysis systems (Velavan and Meyer, 2020). This motivates the need for reliable document understanding models against out-of-distribution (OOD) inputs." 
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.566, + 0.885, + 0.919 + ], + "angle": 0, + "content": "The goal of OOD detection is to categorize in-distribution (ID) samples into one of the known categories and detect inputs that do not belong to any known classes at test time (Bendale and Boult, 2016). A plethora of OOD detection methods has been proposed for single-modal (image or text) inputs (Ge et al., 2017; Nalisnick et al., 2019; Oza and Patel, 2019; Tack et al., 2020; Hsu et al., 2020; Arora et al., 2021; Zhou et al., 2021; Xiao et al., 2020; Xu et al., 2021a; Li et al., 2021b; Shen et al., 2021; Jin et al., 2022; Zhou et al., 2022; Ming et al., 2022b,c; Podolskiy et al., 2021; Ren et al., 2023). Recent works (Fort et al., 2021; Esmaeilpour et al., 2022; Ming et al., 2022a; Ming and Li, 2023; Bitterwolf et al., 2023) also demonstrate promising OOD detection performance based on large-scale models pre-trained on text-image pairs, as pre-training enables models to learn powerful and transferable feature representations (Radford et al., 2021).
However, it remains largely unexplored if existing findings in the OOD detection literature for images or texts can be naturally extended to the document" + }, + { + "type": "page_footnote", + "bbox": [ + 0.142, + 0.892, + 0.273, + 0.904 + ], + "angle": 0, + "content": "* Equal contribution" + }, + { + "type": "page_footnote", + "bbox": [ + 0.142, + 0.904, + 0.472, + 0.919 + ], + "angle": 0, + "content": "† Work done during the internship at Adobe Research" + }, + { + "type": "list", + "bbox": [ + 0.142, + 0.892, + 0.472, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.521, + 0.941 + ], + "angle": 0, + "content": "4973" + }, + { + "type": "footer", + "bbox": [ + 0.218, + 0.946, + 0.78, + 0.973 + ], + "angle": 0, + "content": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 4973-4999 December 6-10, 2023 ©2023 Association for Computational Linguistics" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.114, + 0.086, + 0.179, + 0.099 + ], + "angle": 0, + "content": "domain." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.102, + 0.49, + 0.424 + ], + "angle": 0, + "content": "Multiple unique challenges exist for document OOD detection. Unlike natural images, texts, or image-text pairs, no captions can describe a document and images in documents rarely contain natural objects. Moreover, the spatial relationship of text blocks further differentiates multimodal learning in documents from multimodal learning in the vision-language domain (Lu et al., 2019; Li et al., 2020). In addition, while recent pre-training methods have demonstrated remarkable performance in downstream document understanding tasks (Xu et al., 2020, 2021b; Li et al., 2021a; Gu et al., 2022; Hong et al., 2022; Huang et al., 2022; Li et al., 2022; Wang et al., 2022a), existing pre-training datasets for documents are limited and lack diversity. 
This is in sharp contrast to common pretraining datasets for natural images. It remains underexplored whether existing OOD detection methods are reliable in the document domain and how pre-training impacts OOD reliability." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.425, + 0.49, + 0.73 + ], + "angle": 0, + "content": "In this work, we first present a comprehensive study to better understand OOD detection in the document domain through the following questions: (1) What is the role of document pre-training? How do pre-training datasets and tasks affect OOD detection performance? (2) Are existing OOD detection methods developed for natural images and texts transferrable to documents? (3) How does modality (textual, visual, and especially spatial information) affect OOD performance? In particular, we find that spatial information is critical for improving OOD reliability. Moreover, we propose a new spatial-aware adapter, a small learned module that can be inserted within a pre-trained language model such as RoBERTa (Liu et al., 2019). Our module is computationally efficient and significantly improves both ID classification and OOD detection performance (Sec. 5.2). Our contributions are summarized as follows:" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.745, + 0.489, + 0.84 + ], + "angle": 0, + "content": "- We provide an extensive and in-depth study to investigate the impacts of pre-training, fine-tuning, model-modality, and OOD scoring functions on a broad spectrum of document OOD detection tasks. Our codebase will be open-sourced to facilitate future research." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.855, + 0.49, + 0.919 + ], + "angle": 0, + "content": "- We present unique insights on document OOD detection. 
For example, we observe that distance-based OOD scores are consistently advantageous over logit-based scores, which is underexplored" + }, + { + "type": "list", + "bbox": [ + 0.114, + 0.745, + 0.49, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.085, + 0.885, + 0.117 + ], + "angle": 0, + "content": "in the recent OOD detection literature on vision-language pre-trained models." + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.132, + 0.886, + 0.228 + ], + "angle": 0, + "content": "- We further propose a spatial-aware adapter module for transformer-based language models, facilitating easy adaptation of pre-trained language models to the document domain. Extensive experiments confirm the effectiveness of our module across diverse types of OOD data." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.242, + 0.833, + 0.258 + ], + "angle": 0, + "content": "2 Preliminaries and Related Works" + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.269, + 0.839, + 0.286 + ], + "angle": 0, + "content": "2.1 Document Models and Pre-Training" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.292, + 0.885, + 0.468 + ], + "angle": 0, + "content": "Large-scale pre-trained models gradually gain popularity in the document domain due to their success in producing generic representations from large-scale unlabeled corpora in vision and natural language processing (NLP) tasks (Devlin et al., 2018; Lu et al., 2019; Su et al., 2019; Schiappa et al., 2022). As documents contain both visual and textual information distributed spatially in semantic regions, document-specific models and pre-training objectives are often necessary, which are distinct from vision or language domains." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.47, + 0.885, + 0.919 + ], + "angle": 0, + "content": "We summarize common model structures for document pre-training in Fig. 2a. 
Specifically, LayoutLM (Xu et al., 2020) takes a sequence of Optical Character Recognition (OCR) (Smith, 2007) words and word bounding boxes as inputs. It extends BERT to learn contextualized word representations for document images through multitask learning. LayoutLMv2 (Xu et al., 2021b) improves on the prior work with new pre-training tasks to model the interaction among texts, layouts, and images. DocFormer (Appalaraju et al., 2021) adopts a CNN model to extract image grid features, fusing the spatial information as an inductive bias for the self-attention module. LayoutLMv3 (Huang et al., 2022) further enhances visual and spatial characteristics with masked image modeling and word-patch alignment tasks. Another line of work focuses on various granularities of documents, such as region-level text/image blocks. Examples of such models include SelfDoc (Li et al., 2021a), UDoc (Gu et al., 2021), and MGDoc (Wang et al., 2022b), which are pre-trained with a cross-modal encoder to capture the relationship between visual and textual features. These models incorporate spatial information by fusing position embeddings at the output layer of their encoders, instead of the input layer. Additionally, OCR-free models (Kim et al., 2022; Tang et al., 2023) tackle document understanding as a se" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.522, + 0.941 + ], + "angle": 0, + "content": "4974" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.49, + 0.117 + ], + "angle": 0, + "content": "quence generation problem, unifying multiple tasks through an image-to-sequence generation network." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.118, + 0.49, + 0.231 + ], + "angle": 0, + "content": "While these pre-trained models demonstrate promising performance on downstream applications, their robustness to different types of OOD data, the influence of pre-training and fine-tuning, and the value of different modalities (e.g. 
spatial, textual, and visual) for document OOD detection remain largely unexplored." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.243, + 0.396, + 0.258 + ], + "angle": 0, + "content": "2.2 Out-of-Distribution Detection" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.265, + 0.49, + 0.635 + ], + "angle": 0, + "content": "OOD detection has been extensively studied for open-world multi-class classification with natural image and text inputs, where the goal is to derive an OOD score that separates OOD from ID samples. A plethora of methods are proposed for deep neural networks, where the OOD scoring function is typically derived based on logits (without softmax scaling) (Hendrycks et al., 2022), softmax outputs (Liang et al., 2018; Hsu et al., 2020; Huang and Li, 2021; Sun et al., 2021), gradients (Huang et al., 2021), and feature embeddings (Tack et al., 2020; Fort et al., 2021; Ming et al., 2023). Despite their impressive performance on natural images and texts, it is underexplored if the results are transferrable to the document domain. A recent work (Larson et al., 2022) studied OOD detection for documents but only explored a limited number of models and OOD detection methods. The impacts of pre-training, fine-tuning, and spatial information remain unknown. In this work, we aim to provide a comprehensive and finer-grained analysis to shed light on the key factors for OOD robustness in the document domain." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.646, + 0.49, + 0.92 + ], + "angle": 0, + "content": "Notations. Following prior works on OOD detection with large-scale pre-trained models (Ming et al., 2022a; Ming and Li, 2023), the task of OOD detection is defined with respect to the downstream dataset, instead of the pre-training data which is often hard to characterize. In document classification, we use \\(\\mathcal{X}^{\\mathrm{in}}\\) and \\(\\mathcal{Y}^{\\mathrm{in}} = \\{1,\\dots ,K\\}\\) to denote the input and label space, respectively. 
Let \\(\\mathcal{D}^{\\mathrm{in}} = \\{(x_i^{\\mathrm{in}},y_i^{\\mathrm{in}})\\}_{i = 1}^N\\) be the ID dataset, where \\(x\\in \\mathcal{X}^{\\mathrm{in}}\\) and \\(y^{\\mathrm{in}}\\in \\mathcal{Y}^{\\mathrm{in}}\\). Let \\(\\mathcal{D}^{\\mathrm{out}} = \\{(x_i^{\\mathrm{out}},y_i^{\\mathrm{out}})\\}_{i = 1}^M\\) denote an OOD test set where \\(y^{\\mathrm{out}}\\in \\mathcal{Y}^{\\mathrm{out}}\\), and \\(\\mathcal{Y}^{\\mathrm{out}}\\cap \\mathcal{Y}^{\\mathrm{in}} = \\emptyset\\). We express the neural network model \\(f\\coloneqq g\\circ h\\) as a composition of a feature extractor \\(h:\\mathcal{X}\\to \\mathbb{R}^{d}\\) and a classifier \\(g:\\mathbb{R}^{d}\\to \\mathbb{R}^{K}\\) which maps the feature embedding of an input to \\(K\\) real-valued numbers known as logits. During inference time, given an input \\(\\pmb{x}\\), OOD detection" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.085, + 0.674, + 0.099 + ], + "angle": 0, + "content": "can be formulated as:" + }, + { + "type": "equation", + "bbox": [ + 0.541, + 0.104, + 0.851, + 0.145 + ], + "angle": 0, + "content": "\\[\nG_{\\gamma}(\\boldsymbol{x}; h, g) = \\left\\{ \\begin{array}{ll} \\mathrm{ID} & S(\\boldsymbol{x}; h, g) \\geq \\gamma \\\\ \\mathrm{OOD} & S(\\boldsymbol{x}; h, g) < \\gamma \\end{array} \\right.,\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.151, + 0.883, + 0.215 + ], + "angle": 0, + "content": "where \\( S(\\cdot) \\) is a scoring function that measures OOD uncertainty. In practice, the threshold \\( \\gamma \\) is often chosen so that a high fraction of ID data (e.g., 95%) is above the threshold." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.223, + 0.885, + 0.336 + ], + "angle": 0, + "content": "OOD detection scores. 
We focus on two major categories of computationally efficient OOD detection methods1: logit-based methods derive OOD scores from the logit layer of the model, while distance-based methods directly leverage feature embeddings, as shown in Fig. 1. We describe a few popular methods for each category as follows." + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.342, + 0.886, + 0.664 + ], + "angle": 0, + "content": "- Logit-based: Maximum Softmax Probability (MSP) score (Hendrycks and Gimpel, 2017) \\(S_{\\mathrm{MSP}} = \\max_{i\\in [K]}e^{f_i(\\boldsymbol{x})} / \\sum_{j = 1}^K e^{f_j(\\boldsymbol{x})}\\) naturally arises as a classic baseline as models often output lower softmax probabilities for OOD data; Energy score (Liu et al., 2020): \\(S_{\\mathrm{Energy}} = \\log \\sum_{i\\in [K]}e^{f_i(\\boldsymbol{x})}\\) utilizes the Helmholtz free energy of the data and theoretically aligns with the logarithm of the ID density; the simple MaxLogit score (Hendrycks et al., 2022): \\(S_{\\mathrm{Maxlogit}} = \\max_{i\\in [K]}f_i(\\boldsymbol{x})\\) has demonstrated promising performance on large-scale natural image datasets. We select the above scores due to their simplicity and computational efficiency. In addition, recent studies demonstrate that such simple scores are particularly effective with large-scale pre-trained models in vision (Fort et al., 2021) and vision-language domains (Ming et al., 2022a; Bitterwolf et al., 2023). We complement previous studies and investigate their effectiveness for documents." + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.665, + 0.885, + 0.889 + ], + "angle": 0, + "content": "- Distance-based: Distance-based methods directly leverage feature embeddings \\(\\mathbf{z} = h(\\mathbf{x})\\) based on the idea that OOD inputs are relatively far away from ID clusters in the feature space, compared to ID inputs. Distance-based methods can be characterized as parametric and non-parametric. 
Parametric methods such as Mahalanobis score (Lee et al., 2018; Sehwag et al., 2021) assume ID embeddings follow class-conditional Gaussian distributions and use the Mahalanobis distance as the distance metric. On the other hand, non-parametric methods such as KNN+ (Sun et al., 2022) use cosine similarity as the distance metric." + }, + { + "type": "page_footnote", + "bbox": [ + 0.509, + 0.893, + 0.884, + 0.919 + ], + "angle": 0, + "content": "1We also investigate gradient-based methods such as Grad-Norm (Huang et al., 2021) in Appendix C." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.928, + 0.521, + 0.941 + ], + "angle": 0, + "content": "4975" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.121, + 0.082, + 0.491, + 0.2 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.119, + 0.205, + 0.493, + 0.229 + ], + "angle": 0, + "content": "(a) Illustration of common structures for document pretraining and classification." + }, + { + "type": "image", + "bbox": [ + 0.504, + 0.083, + 0.872, + 0.201 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.501, + 0.205, + 0.871, + 0.228 + ], + "angle": 0, + "content": "(b) A detailed comparison of per-category accuracy on the RVL-CDIP test set." + }, + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.238, + 0.883, + 0.31 + ], + "angle": 0, + "content": "Figure 2: (Left) Illustration of models for document pre-training and classification, with our proposed spatial-aware models in green blocks. Modality information is also shown atop each architecture. (Right) Evaluating fine-tuning performance for document classification of pre-trained models. Models are grouped into several categories (from left to right): language-only, vision-only, and multi-modal. For comparison, the performance of corresponding models in other groups is shown in gray. The average accuracy for each model is indicated in the parenthesis." 
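The logit-based and distance-based scores discussed above all reduce to a few lines of array code. Below is a minimal NumPy sketch (the function names and the `id_bank` argument, a matrix of ID training embeddings for the non-parametric KNN score, are our own illustration, not code from the paper):

```python
import numpy as np

def msp_score(logits):
    """Maximum Softmax Probability: the largest class probability after softmax."""
    z = logits - logits.max(axis=-1, keepdims=True)  # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)

def energy_score(logits):
    """Energy score: log-sum-exp over logits (higher means more ID-like)."""
    m = logits.max(axis=-1)
    return m + np.log(np.exp(logits - m[..., None]).sum(axis=-1))

def maxlogit_score(logits):
    """MaxLogit: the largest unnormalized logit."""
    return logits.max(axis=-1)

def knn_score(z, id_bank, k=10):
    """Non-parametric KNN score: cosine similarity to the k-th nearest ID embedding."""
    z = z / np.linalg.norm(z, axis=-1, keepdims=True)
    bank = id_bank / np.linalg.norm(id_bank, axis=-1, keepdims=True)
    sims = z @ bank.T                     # (n_test, n_id) cosine similarities
    return np.sort(sims, axis=-1)[:, -k]  # similarity to the k-th nearest neighbor
```

For every score, a test input is flagged as OOD when the value falls below the threshold \( \gamma \).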
Evaluation metrics. To evaluate OOD detection performance, we adopt the following commonly used metrics: the Area Under the Receiver Operating Characteristic curve (AUROC), the False Positive Rate at 95% recall (FPR95), and the multi-class classification accuracy on ID data (ID Acc).

3 Experimental Setup

Models. Fig. 2a summarizes common structures for document pre-training and classification models². While documents typically come in the form of images (Harley et al., 2015), an OCR system can be used to extract words and their coordinates from the input image, so models can use single-modal or multi-modal information. We categorize models by input modality into the following groups: (1) models using only visual features; (2) models using only textual features; (3) models incorporating both visual and textual features; and (4) models integrating additional spatial (especially layout) information. Further details can be found in Appendix A.

- Vision-only: Document classification can be viewed as a standard image classification problem. We consider ResNet-50 (He et al., 2016) and ViT (Fort et al., 2021) as exemplar document image classification models. We adopt two common pre-training settings: (1) pre-trained only on ImageNet (Deng et al., 2009), and (2) further pre-trained on IIT-CDIP (Lewis et al., 2006) with masked image modeling (MIM)³. After pre-training, we append a classifier for fine-tuning.
- Text-only: Alternatively, we can view document classification as text classification, since documents often contain text blocks. To this end, we use RoBERTa (Liu et al., 2019) and Longformer (Beltagy et al., 2020) as the backbones. RoBERTa handles up to 512 input tokens, while Longformer handles up to 4,096. We pre-train the language models with masked language modeling (MLM) on the text corpus extracted from IIT-CDIP.

- Text+Layout: Layout information plays a crucial role in the document domain, as shown in Fig. 3. To investigate its effect, we adopt LayoutLM as the backbone. We will show that spatial-aware models demonstrate promising OOD detection performance; however, such specialized models can be computationally expensive. We therefore propose a new spatial-aware adapter: a small learned module that can be inserted into a pre-trained language model such as RoBERTa and turns it into a spatial-aware model that is computationally efficient and competitive for both ID classification and OOD detection (Sec. 5.2).

- Vision+Text+Layout: For comprehensiveness, we consider LayoutLMv3 and UDoc, which are large and computationally intensive. Both models are pre-trained on the full IIT-CDIP for fairness. These models utilize different input granularities and modalities, including textual, visual, and spatial information, for document tasks.
² Apart from document classification, we also investigate OOD detection for two entity-level tasks in Appendix B: document entity recognition and document object detection.

³ Note that the document classification dataset used in this paper, RVL-CDIP (Harley et al., 2015), is a subset of IIT-CDIP. Hence, unless otherwise specified, the IIT-CDIP pre-training data used in this paper excludes RVL-CDIP.

Constructing ID and OOD datasets. We construct ID datasets from RVL-CDIP (Harley et al., 2015), where 12 out of 16 classes are selected as ID classes; dataset details are in Appendix A. We consider two OOD scenarios, in-domain and out-domain, based on content (e.g., words, background) and layout characteristics.

- In-domain OOD: To determine the OOD categories, we analyzed the performance of recent document classification models on the RVL-CDIP test set. Fig. 2b shows the per-category test accuracy of various models.
Naturally, for the classes on which the models perform poorly, we may expect the models to detect such inputs as OOD rather than assigning a specific ID class with low confidence. We observe that four categories (letter, form, scientific report, and presentation) yield the worst performance across most of the models with different modalities. We use these as OOD categories and construct the OOD datasets accordingly, with the ID dataset constructed from the remaining 12 categories. We refer to these as in-domain OOD datasets, as they are also sourced from RVL-CDIP.

- Out-domain OOD: In the open-world setting, test inputs can have significantly different color schemes and layouts compared to ID samples. To mimic such scenarios, we use two public datasets as out-domain OOD test sets: the NJU-Fudan Paper-Poster Dataset (Qiang et al., 2019) and CORD (Park et al., 2019). The NJU-Fudan Paper-Poster Dataset contains scientific posters in digital PDF format⁴; CORD is a receipt understanding dataset with significantly different inputs compared to RVL-CDIP. As shown in Fig. 3, receipt images can be challenging and require models to handle not only textual but also visual and spatial information.

We further support our domain selection using OTDD (Alvarez-Melis and Fusi, 2020), a flexible geometric method for comparing probability distributions that enables us to compare any two datasets regardless of their label sets. We observe a clear gap between in-domain and out-domain data, which aligns with our data selection. Further details can be found in Appendix A.1.
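Given per-example OOD scores on ID and OOD test sets, the AUROC and FPR95 metrics defined above can be computed directly; a minimal pure-Python sketch with toy score lists (illustrative values, not the paper's numbers):

```python
def auroc(id_scores, ood_scores):
    # Probability that a random ID example scores higher than a random
    # OOD example (ties count half); equals the area under the ROC curve.
    wins = 0.0
    for s in id_scores:
        for t in ood_scores:
            if s > t:
                wins += 1.0
            elif s == t:
                wins += 0.5
    return wins / (len(id_scores) * len(ood_scores))

def fpr_at_95(id_scores, ood_scores):
    # FPR95: pick the threshold that keeps ~95% of ID examples
    # (i.e., ~95% true positive rate), then report the fraction of
    # OOD examples that still pass it.
    thresh = sorted(id_scores)[int(0.05 * len(id_scores))]
    return sum(1 for t in ood_scores if t >= thresh) / len(ood_scores)
```

The quadratic pairwise AUROC is fine for a sketch; production code would sort the scores once or call a library routine.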
4 Analyzing OOD Reliability for Documents

4.1 OOD Detection Without Fine-Tuning

In this section, we begin by examining the influence of pre-training datasets on zero-shot OOD detection. For each model, we keep the same pre-training objective while adjusting the amount of pre-training data. Specifically, we increase data diversity by randomly sampling 10%, 20%, 40%, and 100% of the IIT-CDIP dataset (around 11M) and pre-training each model. After pre-training, we measure OOD detection performance with the KNN+ score based on feature embeddings.

We observe that: (1) for out-domain OOD data (Fig. 4a, right), increasing the amount of pre-training data significantly improves zero-shot OOD detection performance (without fine-tuning) for models across different modalities. Our hypothesis is that pre-training with diverse data is beneficial for coarse-grained OOD detection, such as inputs from different domains (e.g., different color schemes). (2) For in-domain OOD inputs, even increasing the amount of pre-training data beyond 40% provides negligible improvements (Fig. 4a, left). This suggests the necessity of fine-tuning for improving in-domain OOD detection performance (Fig. 6).

We further explore a more restricted setting for zero-shot OOD detection, where potential OOD categories are removed from the pre-training dataset IIT-CDIP.
⁵ Note that we do not show 0% in Fig. 4c, since we pre-train LayoutLM from scratch.

Figure 3: (Top) Examples of ID inputs sampled from RVL-CDIP. (Bottom) In-domain OOD from RVL-CDIP, and out-domain OOD from Scientific Poster and Receipts.

First, we use LayoutLM fine-tuned on RVL-CDIP to predict labels for all documents in IIT-CDIP; Fig. 4b summarizes the distribution of the predicted classes. Next, we remove the "OOD" categories from IIT-CDIP and pre-train two models (RoBERTa and LayoutLM) with 10%, 20%, 40%, and 100% of randomly sampled data from the filtered dataset (dubbed IIT-CDIP⁻). The zero-shot performance for in-domain and out-domain OOD is shown in Fig. 4c⁵. For RoBERTa, we observe trends similar to Fig. 4a, where increasing the amount of pre-training data improves zero-shot OOD detection for out-domain data. The zero-shot performance of LayoutLM likewise benefits from a larger pre-training dataset. In particular, given the same amount of pre-training data, LayoutLM consistently outperforms RoBERTa for both in-domain and out-domain OOD detection, which suggests that spatial information can be essential
for boosting the OOD reliability in the document domain. Motivated by the above observations, we dive deeper and analyze spatial-aware models next.

Figure 4: The impact of pre-training data on zero-shot OOD detection performance: (a) pre-training on IIT-CDIP; (b) analysis of IIT-CDIP; (c) pre-training on IIT-CDIP⁻, the filtered pre-training data after removing the "OOD" categories.

While pre-trained models exhibit the capability to differentiate data from various domains as a result of being trained on a diverse range of data, we observe that achieving a more precise separation for in-domain OOD inputs remains difficult. Given this observation, we further analyze the impact of fine-tuning on OOD detection with fixed pre-training datasets in the next section.
By combining pre-trained models with a simple classifier and fine-tuning on RVL-CDIP (ID), we find that fine-tuning is advantageous for enhancing OOD detection performance on both types of OOD samples.

4.2 The Impact of Fine-Tuning on Document OOD Detection

Recent document models are often pre-trained on a large-scale dataset and adapted to the target task via fine-tuning. To better understand the role of fine-tuning, we explore the following questions: (1) How does fine-tuning impact OOD reliability for in-domain and out-domain OOD inputs? (2) How does model modality impact performance?

We consider a wide range of models pre-trained on pure text/image data (e.g., ImageNet and Wikipedia), described in Appendix A.3. During fine-tuning, we combine pre-trained models with a simple classifier and fine-tune on RVL-CDIP (ID). For models before and after fine-tuning, we extract the final feature embeddings and use the distance-based method KNN+ (Sun et al., 2022) for OOD detection. The results are shown in Fig. 6, where we observe the following trends. First, fine-tuning largely improves OOD detection performance for both in-domain and out-domain OOD data; this trend holds broadly across models with different modalities. Second, the improvement from fine-tuning is less significant for out-domain OOD data. For example, on Receipt (out-domain OOD), the AUROC of the pre-trained ViT model is 97.13, and fine-tuning improves it by only 0.79%.
This suggests that pre-trained models do have the potential to separate data from different domains, owing to the diversity of the pre-training data, while it remains hard for them to perform the finer-grained separation required for in-domain OOD inputs. Therefore, fine-tuning is beneficial for improving OOD detection performance on both types of OOD samples.

Figure 5: Comparison between representative feature-based scores and logit-based scores for spatial-aware and non-spatial-aware models. Spatial-aware models are colored in blue.

Figure 6: OOD detection performance for pre-trained models with and without fine-tuning, using the distance-based method KNN+ as the OOD scoring function. Fine-tuning significantly improves performance for both in-domain and out-domain OOD data.
To further validate our conclusion, we consider two additional in-domain OOD settings: (1) selecting the classes on which the model performs well as in-domain OOD categories; and (2) randomly selecting classes as OOD categories (Appendix A.2). We find that fine-tuning improves OOD detection in both settings, further verifying our observations.

Next, we take a closer look at the impact of model modality on out-domain OOD detection. As shown in Fig. 6 (middle and right), both vision- and text-based models demonstrate strong reliability against scientific posters (OOD). However, vision-based models display stronger performance than text-based models on Receipts (OOD). This can be explained by the fact that ViT was first pre-trained on ImageNet, while scientific posters and receipts contain diverse visual information, such as colors and edges, for vision models to exploit (see Fig. 3). On the other hand, although fine-tuning text-based models largely improves detection performance over their pre-trained counterparts, relying only on textual information can be inherently limiting for out-domain OOD detection.

5 The Importance of Spatial-Awareness

In previous sections, we mainly focused on mainstream text-based and vision-based models for in- and out-domain OOD detection. Next, we consider models tailored to document processing, which we refer to as spatial-aware models, such as LayoutLMv3 and UDoc. Given fine-tuned models, we compare the performance of logit-based and distance-based OOD scores.
Figure 7: Illustration of our spatial-aware adapter for language models. We present two adapter designs (marked in green boxes): (1) insert the adapter into the word embedding layer during pre-training and fine-tuning; (2) insert the adapter into the output layer for fine-tuning only. For the first design, we freeze the word embedding layer and learn the adapter and transformer layers.

5.1 Analysis of Spatial-Aware Models

We summarize the key comparisons in Fig. 5, where we use MSP and Energy as exemplar logit-based scores and KNN+ as the distance-based score; full results are in Appendix C. The simple KNN-based score (KNN+) consistently outperforms logit-based scores for both in-domain and out-domain OOD data across models with different modalities. This is in contrast with recent works investigating large-scale pre-trained models in the vision-language domain, where logit-based scores demonstrate strong OOD detection performance (Fort et al., 2021). As documents are distinct from natural image-text pairs, observations from the vision-language domain do not seamlessly translate to the document domain. Moreover, spatial-aware models demonstrate stronger OOD detection performance for both in- and out-domain OOD.
For example, with the best scoring function (KNN+), LayoutLMv3 improves the average AUROC over RoBERTa by 7.09% for out-domain OOD data and 7.54% for in-domain OOD data. This further highlights the value of spatial information for improving OOD robustness for documents.

Despite the impressive improvements brought by spatial-aware models, acquiring a large-scale pre-training dataset that includes spatial information remains challenging. In contrast, there is a growing abundance of pre-trained language models based on textual data. This motivates us to explore leveraging these pre-trained language models by training an adapter on a small dataset containing document-specific information. By adopting this approach, we can effectively utilize existing models while minimizing the time and cost of training.

5.2 Towards an Effective Spatial-Aware Adapter

During our investigation into the effects of model modality, pre-training, and fine-tuning on various types of OOD inputs, we found that spatial/layout information plays a critical role in the document domain. However, existing pre-trained models such as the LayoutLM series, SelfDoc, and UDoc do not fully leverage the benefits of well-pre-trained language models. This raises the question of whether a large-scale language model, such as RoBERTa, can be adapted to detect OOD documents effectively. In this section, we demonstrate that incorporating an adapter module that accounts for spatial information into transformer-based pre-trained models achieves strong performance with minimal changes to the code.
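The core of such a spatial-aware adapter can be sketched in a few lines. The sketch below follows the first design (adapter at the word embedding layer): each word carries a normalized bounding box whose four coordinates are looked up in separate position embedding tables and summed with the (frozen) word embedding. All names, dimensions, and the random initialization are illustrative stand-ins for learned parameters, not the paper's implementation.

```python
import random

# Toy sizes; a real model would use the LM hidden size (e.g., 768) and
# bounding-box coordinates normalized to a fixed grid.
DIM, VOCAB, GRID = 8, 100, 1001

random.seed(0)

def embedding_table(rows, dim=DIM):
    # Stand-in for a learned embedding table: one vector per index.
    return [[random.uniform(-0.1, 0.1) for _ in range(dim)]
            for _ in range(rows)]

word_emb = embedding_table(VOCAB)  # frozen pre-trained word embeddings
pos_emb = {c: embedding_table(GRID) for c in ("x0", "y0", "x1", "y1")}

def spatial_word_embedding(token_id, bbox):
    """Adapter design (1): word embedding + four coordinate embeddings.

    bbox = (x0, y0, x1, y1) is the word's normalized bounding box
    (upper-left and lower-right corners); special tokens such as
    [CLS]/[SEP]/[PAD] use the empty box (0, 0, 0, 0).
    """
    vecs = [word_emb[token_id]]
    for name, coord in zip(("x0", "y0", "x1", "y1"), bbox):
        vecs.append(pos_emb[name][coord])
    # Element-wise sum produces the spatial-aware input embedding.
    return [sum(v[d] for v in vecs) for d in range(DIM)]
```

In the (pre) variant these summed embeddings feed the trainable transformer layers while `word_emb` stays frozen; in the (post) variant the same additive fusion is applied to the final hidden states instead.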
To the best of our knowledge, this is the first study to apply the adapter idea to documents.

Spatial-aware adapter. Given a pre-trained language model such as RoBERTa, we propose an adapter that utilizes spatial information. We consider two potential designs: (1) the adapter is appended to the word embedding layer, denoted Spatial-RoBERTa (pre), which requires both pre-training and fine-tuning; this architecture is illustrated in the top row of Fig. 7. (2) The adapter is appended to the final layer of the text encoder, denoted Spatial-RoBERTa (post), which requires only fine-tuning, as the model can reuse the pre-trained text encoder; this is shown in the bottom row of Fig. 7.

Figure 8: Comparison of the OOD detection performance of Spatial-RoBERTa and RoBERTa. All models are initialized from public pre-trained checkpoints trained on purely textual data and further pre-trained on IIT-CDIP. The only difference is that Spatial-RoBERTa has an additional spatial-aware adapter and takes word bounding boxes as additional inputs.

For Spatial-RoBERTa (pre), we freeze the word embedding layer during pre-training for two reasons: (1) word embeddings learned from a large-scale corpus already cover most of the words appearing in documents; (2) pre-training on documents without strong language dependencies may not help improve word embeddings.
For example, in semi-structured documents (e.g., forms, receipts), language dependencies are not as strong as in text-rich documents (e.g., letters, resumes), which may degrade the learned word representations. In practice, each word has a normalized bounding box \((x_0, y_0, x_1, y_1)\), where \((x_0, y_0)\) and \((x_1, y_1)\) correspond to the upper-left and lower-right corners of the bounding box. To encode positional information, we employ four position embedding layers, where each layer encodes one coordinate (e.g., \(x_0\)) and produces a corresponding position embedding. The special tokens ([CLS], [SEP], and [PAD]) are assigned an empty bounding box \((0, 0, 0, 0)\). As depicted in the top row of Fig. 7, the spatial-aware word embeddings are formed by adding the position embeddings to their corresponding word embeddings.

For Spatial-RoBERTa (post), position embeddings are added through late fusion to the final hidden states during fine-tuning, without affecting the

Figure 9: Correlation between ID accuracy and OOD detection performance. For most models, ID accuracy is positively correlated with OOD detection performance.
Language models with spatial-aware adapters (highlighted in blue) achieve significantly higher ID accuracy and stronger OOD robustness (in AUROC) than language models without adapters. Here, (+) denotes further pre-training on the IIT-CDIP dataset.

pre-trained encoder. Our experiments demonstrate that introducing spatial-aware adapters during pre-training yields better results than adding position embeddings only during fine-tuning; for additional details, please refer to Appendix C. In the following, we focus on analyzing Spatial-RoBERTa (pre) and compare both its ID and OOD performance with that of the pure-text pre-trained RoBERTa.

Spatial-RoBERTa significantly outperforms RoBERTa. To verify the effectiveness of Spatial-RoBERTa, we compare the OOD detection performance of pre-trained and fine-tuned models. The results are shown in Fig. 8, where OOD performance is based on KNN+ (K = 10); full results are in Table 6. Spatial-RoBERTa significantly improves OOD detection performance, especially after fine-tuning. For example, compared to RoBERTa (base), Spatial-RoBERTa (base) improves AUROC by 4.24% averaged over the four in-domain OOD datasets. This further confirms the importance of spatial information for OOD detection in the document domain.

Spatial-RoBERTa is competitive for both ID classification and OOD detection. Beyond OOD detection performance, we also examine multi-class ID classification accuracy and plot the two metrics for all models with different modalities in Fig. 9.
We can clearly observe a positive correlation between ID accuracy and OOD detection performance (measured by AUROC) for both in-domain and out-domain OOD data. Moreover, spatial-aware models display superior ID accuracy and OOD robustness compared to text-only and vision-only models. Overall, Spatial-RoBERTa greatly improves upon RoBERTa and matches the performance of models with more complex, specialized architectures such as LayoutLM. Specifically, Spatial-RoBERTa-Large achieves 97.37 ID accuracy, which is even higher than LayoutLM (97.28) and UDoc (97.36).

To summarize, our spatial-aware adapter effectively adapts pre-trained transformer-based text models to the document domain, improving both ID and OOD performance. In addition, by freezing the original word embeddings during pre-training, the models (Spatial-RoBERTa-Base and Spatial-RoBERTa-Large) are parameter-efficient and thus reduce the training cost.

6 Conclusions

In this work, we provide a comprehensive, in-depth study of the impact of pre-training, fine-tuning, model modality, and OOD scores on a broad variety of document OOD detection tasks. We present novel insights on document OOD detection that are under-explored by, or in contrast with, OOD detection work based on vision-language models. In particular, we highlight that spatial information is critical for OOD detection in documents. We further propose a spatial-aware adapter as an add-on module to transformer-based models; our module adapts pre-trained language models to the document domain.
Extensive experiments on a broad range of datasets verify the effectiveness of our design. We hope our work will inspire future research on improving OOD robustness for reliable document understanding.

Spatial-RoBERTa-Base (pre) incorporates position information during both pre-training and fine-tuning, while Spatial-RoBERTa-Base (post) only inserts the adapter into the output layer for fine-tuning.

7 Limitations

In this work, our main focus is OOD detection for document understanding, with a specific emphasis on document classification. As OOD detection based on document pre-trained models remains largely under-explored, we believe an in-depth and extensive study of OOD detection for document classification is a valuable stepping stone towards more complex tasks. Beyond document classification, we also investigate OOD detection for two entity-level tasks in Appendix B: document entity recognition and document object detection. We leave a more comprehensive treatment to future work.

References

David Alvarez-Melis and Nicolo Fusi. 2020. Geometric dataset distances via optimal transport. In NeurIPS.
Srikar Appalaraju, Bhavan Jasani, Bhargava Urala Kota, Yusheng Xie, and R Manmatha. 2021. DocFormer: End-to-end transformer for document understanding. In ICCV.

Udit Arora, William Huang, and He He. 2021. Types of out-of-distribution texts and how to detect them. In EMNLP.

Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.

Abhijit Bendale and Terrance E Boult. 2016. Towards open set deep networks. In CVPR.

Julian Bitterwolf, Maximilian Mueller, and Matthias Hein. 2023. In or out? Fixing ImageNet out-of-distribution detection evaluation. In ICML.

Lei Cui, Yiheng Xu, Tengchao Lv, and Furu Wei. 2021. Document AI: Benchmarks, models and applications. arXiv preprint arXiv:2111.08609.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In CVPR.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL.
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.828, + 0.487, + 0.868 + ], + "angle": 0, + "content": "Xuefeng Du, Zhaoning Wang, Mu Cai, and Yixuan Li. 2022. Vos: Learning what you don't know by virtual outlier synthesis. In ICLR." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.879, + 0.487, + 0.919 + ], + "angle": 0, + "content": "Sepideh Esmaeilpour, Bing Liu, Eric Robertson, and Lei Shu. 2022. Zero-shot open set detection by extending clip. In AAAI." + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.37, + 0.489, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.086, + 0.883, + 0.125 + ], + "angle": 0, + "content": "Stanislav Fort, Jie Ren, and Balaji Lakshminarayanan. 2021. Exploring the limits of out-of-distribution detection. In NeurIPS." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.136, + 0.882, + 0.187 + ], + "angle": 0, + "content": "ZongYuan Ge, Sergey Demyanov, Zetao Chen, and Rahul Garnavi. 2017. Generative openmax for multi-class open set classification. arXiv preprint arXiv:1707.07418." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.198, + 0.882, + 0.25 + ], + "angle": 0, + "content": "Jiuxiang Gu, Jason Kuen, Vlad I Morariu, Handong Zhao, Rajiv Jain, Nikolaos Barmpalios, Ani Nenkova, and Tong Sun. 2021. Unified pretraining framework for document understanding. In NeurIPS." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.26, + 0.883, + 0.323 + ], + "angle": 0, + "content": "Zhangxuan Gu, Changhua Meng, Ke Wang, Jun Lan, Weiqiang Wang, Ming Gu, and Liqing Zhang. 2022. Xylayoutlm: Towards layout-aware multimodal networks for visually-rich document understanding. In CVPR." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.334, + 0.883, + 0.386 + ], + "angle": 0, + "content": "Adam W Harley, Alex Ufkes, and Konstantinos G Derpanis. 2015. Evaluation of deep convolutional nets for document image classification and retrieval. In ICDAR."
+ }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.396, + 0.883, + 0.45 + ], + "angle": 0, + "content": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.459, + 0.883, + 0.511 + ], + "angle": 0, + "content": "Dan Hendrycks, Steven Basart, Mantas Mazeika, Mohammadreza Mostajabi, Jacob Steinhardt, and Dawn Song. 2022. Scaling out-of-distribution detection for real-world settings. In ICML." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.521, + 0.883, + 0.56 + ], + "angle": 0, + "content": "Dan Hendrycks and Kevin Gimpel. 2017. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In ICLR." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.569, + 0.883, + 0.634 + ], + "angle": 0, + "content": "Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, and Sungrae Park. 2022. Bros: A pre-trained language model focusing on text and layout for better key information extraction from documents. In AAAI." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.644, + 0.883, + 0.696 + ], + "angle": 0, + "content": "Yen-Chang Hsu, Yilin Shen, Hongxia Jin, and Zsolt Kira. 2020. Generalized odin: Detecting out-of-distribution image without learning from out-of-distribution data. In CVPR." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.706, + 0.882, + 0.745 + ], + "angle": 0, + "content": "Rui Huang, Andrew Geng, and Yixuan Li. 2021. On the importance of gradients for detecting distributional shifts in the wild. In NeurIPS." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.755, + 0.883, + 0.794 + ], + "angle": 0, + "content": "Rui Huang and Yixuan Li. 2021. Mos: Towards scaling out-of-distribution detection for large semantic space. In CVPR."
+ }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.804, + 0.883, + 0.856 + ], + "angle": 0, + "content": "Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, and Furu Wei. 2022. Layoutlmv3: Pre-training for document ai with unified text and image masking. In ACMMM." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.866, + 0.883, + 0.919 + ], + "angle": 0, + "content": "Guillaume Jaume, Hazim Kemal Ekenel, and Jean-Philippe Thiran. 2019. Funsd: A dataset for form understanding in noisy scanned documents. In ICDAR Workshop." + }, + { + "type": "list", + "bbox": [ + 0.511, + 0.086, + 0.883, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.521, + 0.941 + ], + "angle": 0, + "content": "4982" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.086, + 0.489, + 0.126 + ], + "angle": 0, + "content": "Di Jin, Shuyang Gao, Seokhwan Kim, Yang Liu, and Dilek Hakkani-Tur. 2022. Towards textual out-of-domain detection without in-domain labels. TASLP." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.137, + 0.487, + 0.19 + ], + "angle": 0, + "content": "Geewook Kim, Teakgyu Hong, Moonbin Yim, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, and Seunghyun Park. 2022. Donut: Document understanding transformer without OCR." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.201, + 0.486, + 0.228 + ], + "angle": 0, + "content": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In ICLR." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.239, + 0.487, + 0.291 + ], + "angle": 0, + "content": "Stefan Larson, Gordon Lim, Yutong Ai, David Kuang, and Kevin Leach. 2022. Evaluating out-of-distribution performance on document image classifiers. In NeurIPS." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.303, + 0.487, + 0.355 + ], + "angle": 0, + "content": "Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. 2018. 
A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In NeurIPS." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.367, + 0.487, + 0.419 + ], + "angle": 0, + "content": "D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard. 2006. Building a test collection for complex document information processing. In SIGIR." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.431, + 0.487, + 0.483 + ], + "angle": 0, + "content": "Gen Li, Nan Duan, Yuejian Fang, Daxin Jiang, and Ming Zhou. 2020. Unicoder-vl: A universal encoder for vision and language by cross-modal pre-training. In AAAI." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.495, + 0.487, + 0.547 + ], + "angle": 0, + "content": "Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, and Furu Wei. 2022. Dit: Self-supervised pretraining for document image transformer. In ACM MM." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.559, + 0.487, + 0.612 + ], + "angle": 0, + "content": "Peizhao Li, Jiuxiang Gu, Jason Kuen, Vlad Morariu, Handong Zhao, Rajiv Jain, Varun Manjunatha, and Hongfu Liu. 2021a. Selfdoc: Self-supervised document representation learning. In CVPR." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.623, + 0.487, + 0.675 + ], + "angle": 0, + "content": "Xiaoya Li, Jiwei Li, Xiaofei Sun, Chun Fan, Tianwei Zhang, Fei Wu, Yuxian Meng, and Jun Zhang. 2021b. kfolden: k-fold ensemble for out-of-distribution detection. In EMNLP." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.687, + 0.487, + 0.726 + ], + "angle": 0, + "content": "Shiyu Liang, Yixuan Li, and Rayadurgam Srikant. 2018. Enhancing the reliability of out-of-distribution image detection in neural networks. In ICLR." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.738, + 0.487, + 0.776 + ], + "angle": 0, + "content": "Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li. 2020. Energy-based out-of-distribution detection. In NeurIPS." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.789, + 0.487, + 0.855 + ], + "angle": 0, + "content": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.866, + 0.487, + 0.918 + ], + "angle": 0, + "content": "Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In NeurIPS." + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.489, + 0.918 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.513, + 0.086, + 0.882, + 0.138 + ], + "angle": 0, + "content": "Yifei Ming, Ziyang Cai, Jiuxiang Gu, Yiyou Sun, Wei Li, and Yixuan Li. 2022a. Delving into out-of-distribution detection with vision-language representations. In NeurIPS." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.149, + 0.882, + 0.188 + ], + "angle": 0, + "content": "Yifei Ming, Ying Fan, and Yixuan Li. 2022b. Poem: Out-of-distribution detection with posterior sampling. In ICML. PMLR." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.199, + 0.882, + 0.239 + ], + "angle": 0, + "content": "Yifei Ming and Yixuan Li. 2023. How does fine-tuning impact out-of-distribution detection for vision-language models? IJCV." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.249, + 0.882, + 0.288 + ], + "angle": 0, + "content": "Yifei Ming, Yiyou Sun, Ousmane Dia, and Yixuan Li. 2023. How to exploit hyperspherical embeddings for out-of-distribution detection? In ICLR." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.299, + 0.882, + 0.338 + ], + "angle": 0, + "content": "Yifei Ming, Hang Yin, and Yixuan Li. 2022c. On the impact of spurious correlation for out-of-distribution detection. In AAAI."
+ }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.349, + 0.882, + 0.402 + ], + "angle": 0, + "content": "Ajoy Mondal, Peter Lipps, and CV Jawahar. 2020. Iiit-ar-13k: a new dataset for graphical object detection in documents. In International Workshop on Document Analysis Systems." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.412, + 0.882, + 0.464 + ], + "angle": 0, + "content": "Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. 2019. Do deep generative models know what they don't know? In ICLR." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.474, + 0.882, + 0.514 + ], + "angle": 0, + "content": "Poojan Oza and Vishal M Patel. 2019. C2ae: Class conditioned auto-encoder for open-set recognition. In CVPR." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.524, + 0.882, + 0.578 + ], + "angle": 0, + "content": "Seunghyun Park, Seung Shin, Bado Lee, Junyeop Lee, Jaeheung Surh, Minjoon Seo, and Hwalsuk Lee. 2019. Cord: A consolidated receipt dataset for post-ocr parsing. In NeurIPS Workshop." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.588, + 0.882, + 0.64 + ], + "angle": 0, + "content": "Alexander Podolskiy, Dmitry Lipin, Andrey Bout, Ekaterina Artemova, and Irina Piontkovskaya. 2021. Revisiting mahalanobis distance for transformer-based out-of-domain detection. In AAAI." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.65, + 0.882, + 0.703 + ], + "angle": 0, + "content": "Yu-Ting Qiang, Yan-Wei Fu, Xiao Yu, Yan-Wen Guo, Zhi-Hua Zhou, and Leonid Sigal. 2019. Learning to generate posters of scientific papers by probabilistic graphical models. JCST." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.713, + 0.882, + 0.78 + ], + "angle": 0, + "content": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. 
In ICML." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.789, + 0.882, + 0.855 + ], + "angle": 0, + "content": "Jie Ren, Jiaming Luo, Yao Zhao, Kundan Krishna, Mohammad Saleh, Balaji Lakshminarayanan, and Peter J Liu. 2023. Out-of-distribution detection and selective generation for conditional language models. In ICLR." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.866, + 0.882, + 0.918 + ], + "angle": 0, + "content": "Madeline C Schiappa, Yogesh S Rawat, Shruti Vyas, Vibhav Vineet, and Hamid Palangi. 2022. Multimodal robustness analysis against language and visual perturbations. In NeurIPS." + }, + { + "type": "list", + "bbox": [ + 0.512, + 0.086, + 0.882, + 0.918 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.521, + 0.941 + ], + "angle": 0, + "content": "4983" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.086, + 0.487, + 0.125 + ], + "angle": 0, + "content": "Vikash Sehwag, Mung Chiang, and Prateek Mittal. 2021. Ssd: A unified framework for self-supervised outlier detection. In ICLR." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.136, + 0.487, + 0.188 + ], + "angle": 0, + "content": "Yilin Shen, Yen-Chang Hsu, Avik Ray, and Hongxia Jin. 2021. Enhancing the generalization for intent classification and out-of-domain detection in SLU. In ACL-IJCNLP." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.199, + 0.486, + 0.226 + ], + "angle": 0, + "content": "Ray Smith. 2007. An overview of the tesseract OCR engine. In ICDAR." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.236, + 0.487, + 0.276 + ], + "angle": 0, + "content": "Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2019. Vl-bert: Pre-training of generic visual-linguistic representations. In ICLR." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.286, + 0.487, + 0.325 + ], + "angle": 0, + "content": "Yiyou Sun, Chuan Guo, and Yixuan Li. 2021. 
React: Out-of-distribution detection with rectified activations. In NeurIPS." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.336, + 0.487, + 0.375 + ], + "angle": 0, + "content": "Yiyou Sun, Yifei Ming, Xiaojin Zhu, and Yixuan Li. 2022. Out-of-distribution detection with deep nearest neighbors. In ICML." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.386, + 0.487, + 0.438 + ], + "angle": 0, + "content": "Jihoon Tack, Sangwoo Mo, Jongheon Jeong, and Jinwoo Shin. 2020. Csi: Novelty detection via contrastive learning on distributionally shifted instances. In NeurIPS." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.449, + 0.487, + 0.514 + ], + "angle": 0, + "content": "Zineng Tang, Ziyi Yang, Guoxin Wang, Yuwei Fang, Yang Liu, Chenguang Zhu, Michael Zeng, Cha Zhang, and Mohit Bansal. 2023. Unifying vision, text, and layout for universal document processing. In CVPR." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.524, + 0.487, + 0.564 + ], + "angle": 0, + "content": "Thirumalaisamy P Velavan and Christian G Meyer. 2020. The Covid-19 epidemic. Tropical medicine & international health, 25(3):278." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.575, + 0.487, + 0.64 + ], + "angle": 0, + "content": "Wenjin Wang, Zhengjie Huang, Bin Luo, Qianglong Chen, Qiming Peng, Yinxu Pan, Weichong Yin, Shikun Feng, Yu Sun, Dianhai Yu, et al. 2022a. mmlayout: Multi-grained multimodal transformer for document understanding. In ACMMM." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.651, + 0.487, + 0.716 + ], + "angle": 0, + "content": "Zilong Wang, Jiaxiang Gu, Chris Tensmeyer, Nikolaos Barmpalios, Ani Nenkova, Tong Sun, Jingbo Shang, and Vlad I Morariu. 2022b. Mgdoc: Pre-training with multi-granular hierarchy for document image understanding. In EMNLP." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.727, + 0.487, + 0.804 + ], + "angle": 0, + "content": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.816, + 0.487, + 0.868 + ], + "angle": 0, + "content": "Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. 2019. Detectron2. https://github.com/facebookresearch/detectron2." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.878, + 0.487, + 0.918 + ], + "angle": 0, + "content": "Zhisheng Xiao, Qing Yan, and Yali Amit. 2020. Likelihood regret: An out-of-distribution detection score for variational auto-encoder. In NeurIPS." + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.487, + 0.918 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.086, + 0.882, + 0.138 + ], + "angle": 0, + "content": "Keyang Xu, Tongzheng Ren, Shikun Zhang, Yihao Feng, and Caiming Xiong. 2021a. Unsupervised out-of-domain detection via pre-trained transformers. In ACL." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.149, + 0.882, + 0.215 + ], + "angle": 0, + "content": "Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, et al. 2021b. Layoutlmv2: Multi-modal pre-training for visually-rich document understanding. In ACL." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.225, + 0.882, + 0.276 + ], + "angle": 0, + "content": "Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. 2020. Layoutlm: Pre-training of text and layout for document image understanding. In SIGKDD."
+ }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.287, + 0.882, + 0.327 + ], + "angle": 0, + "content": "Xu Zhong, Jianbin Tang, and Antonio Jimeno Yepes. 2019. Publaynet: largest dataset ever for document layout analysis. In ICDAR." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.336, + 0.882, + 0.376 + ], + "angle": 0, + "content": "Wenxuan Zhou, Fangyu Liu, and Muhao Chen. 2021. Contrastive out-of-distribution detection for pretrained transformers. In EMNLP." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.386, + 0.882, + 0.425 + ], + "angle": 0, + "content": "Yunhua Zhou, Peiju Liu, and Xipeng Qiu. 2022. KNN-contrastive learning for out-of-domain intent classification. In ACL." + }, + { + "type": "list", + "bbox": [ + 0.512, + 0.086, + 0.882, + 0.425 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.521, + 0.941 + ], + "angle": 0, + "content": "4984" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.115, + 0.085, + 0.381, + 0.099 + ], + "angle": 0, + "content": "A Dataset and Model Details" + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.113, + 0.232, + 0.126 + ], + "angle": 0, + "content": "A.1 Datasets" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.134, + 0.488, + 0.248 + ], + "angle": 0, + "content": "The full RVL-CDIP dataset consists of 320K/40K/40K training/validation/testing images under 16 categories. We select 12 of them as the ID (In-domain) data. We employ the Google OCR engine to extract the text and layout information, which provides tokens, text blocks and the corresponding bounding boxes." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.261, + 0.478, + 0.277 + ], + "angle": 0, + "content": "A.2 Quantifying OOD Dataset Construction" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.284, + 0.49, + 0.603 + ], + "angle": 0, + "content": "The distance between datasets can be measured via Optimal Transport Dataset Distance (OTDD)\\(^{8}\\). 
We visualize the OTDD distance between ID and the OOD (both in-domain and out-domain) data in Fig. 10a, where we highlight the in-domain OOD data in blue and the out-domain OOD data in green. Specifically, we randomly sample 1000 images from each dataset and calculate the average distance between pairs of datasets. We can see a significant gap between the OTDD of in-domain OOD data and out-domain OOD data. To make the analysis more thorough, we consider two additional in-domain OOD settings: (1) select the classes on which the model performs well as OOD data; (2) randomly select classes as OOD data. The results are shown in Fig. 10b and Fig. 10c. We can see that the distance between ID and in-domain OOD is similar to the original scheme (Fig. 10a). This suggests that most in-domain OOD categories are not far from ID data." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.607, + 0.49, + 0.703 + ], + "angle": 0, + "content": "While this paper represents an initial endeavor, we hope that our work will serve as a stepping stone towards constructing more comprehensive and diverse OOD benchmarks in the document domain, akin to those available in the NLP and natural image domain." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.717, + 0.388, + 0.733 + ], + "angle": 0, + "content": "A.3 Models and Training Details" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.739, + 0.49, + 0.867 + ], + "angle": 0, + "content": "All models reported in Fig. 2b, except UDoc, are initialized with pre-trained weights from Huggingface and fine-tuned on the full RVL-CDIP training set. During fine-tuning, we train these models on RVL-CDIP with the cross-entropy loss. The models were optimized with the Adam optimizer (Kingma and Ba, 2014) for 30 epochs with a batch size of 50 and a learning rate of \\(2 \\times 10^{-5}\\) on 8 A100 GPUs." 
+ }, + { + "type": "text", + "bbox": [ + 0.509, + 0.085, + 0.885, + 0.117 + ], + "angle": 0, + "content": "The following are the hyperparameters of the models used in our paper:" + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.13, + 0.596, + 0.145 + ], + "angle": 0, + "content": "Text-only:" + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.16, + 0.884, + 0.257 + ], + "angle": 0, + "content": "- BERT and RoBERTa: We adopt RoBERTaBase (12 layers) and BERTBase (12 layers) as backbones and set the maximum sequence length to 512. For RoBERTa, the classifier consists of two linear layers followed by a tanh activation function." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.272, + 0.885, + 0.334 + ], + "angle": 0, + "content": "- LongformerBase: We also employ LongformerBase (12 layers) as the backbone and set the maximum sequence length to 4,096." + }, + { + "type": "list", + "bbox": [ + 0.532, + 0.16, + 0.885, + 0.334 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.351, + 0.611, + 0.366 + ], + "angle": 0, + "content": "Vision-only:" + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.381, + 0.883, + 0.429 + ], + "angle": 0, + "content": "- ResNet50: We adopt ResNet50 pre-trained on ImageNet-1k as the backbone. We fine-tune the model at a resolution of \\(224 \\times 224\\)." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.444, + 0.884, + 0.507 + ], + "angle": 0, + "content": "- ViT: We consider ViTBase (vit-base-patch16-224, pre-trained on ImageNet-21k) as the backbone and fine-tune at a resolution of \\(224 \\times 224\\)." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.523, + 0.884, + 0.602 + ], + "angle": 0, + "content": "- SwinB: We also use the Swin Transformer (swin-base-patch4-window7-224-in22k, pretrained on ImageNet-21k) as the backbone and fine-tune the model at a resolution of \\(224 \\times 224\\)." 
+ }, + { + "type": "list", + "bbox": [ + 0.532, + 0.381, + 0.884, + 0.602 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.619, + 0.621, + 0.634 + ], + "angle": 0, + "content": "Text+Layout:" + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.65, + 0.884, + 0.713 + ], + "angle": 0, + "content": "- LayoutLMv1: This model employs LayoutLM (layoutlm-base-uncased, 12 layers, pre-trained on IIT-CDIP) as the backbone. We set the maximum sequence length to 512." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.728, + 0.885, + 0.839 + ], + "angle": 0, + "content": "- Spatial-RoBERTaBase (Pre): This model combines our spatial-aware adapter with the pretrained RoBERTaBase model. The adapter is applied to the word embedding layer. We freeze the pre-trained word embeddings and optimize the spatial-aware adapter and transformers." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.855, + 0.884, + 0.919 + ], + "angle": 0, + "content": "- Spatial-RoBERTaBase (Post): Instead of inserting the spatial-aware adapter in the input layer, this model integrates the spatial-aware adapter at the output layer of the transformer." 
+ }, + { + "type": "list", + "bbox": [ + 0.532, + 0.65, + 0.885, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "page_footnote", + "bbox": [ + 0.136, + 0.878, + 0.448, + 0.892 + ], + "angle": 0, + "content": "7https://cloud.google.com/vision/docs/ocr" + }, + { + "type": "page_footnote", + "bbox": [ + 0.137, + 0.892, + 0.396, + 0.905 + ], + "angle": 0, + "content": "8https://github.com/microsoft/otdd" + }, + { + "type": "page_footnote", + "bbox": [ + 0.137, + 0.905, + 0.365, + 0.918 + ], + "angle": 0, + "content": "9https://huggingface.co/models" + }, + { + "type": "list", + "bbox": [ + 0.136, + 0.878, + 0.448, + 0.918 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.521, + 0.94 + ], + "angle": 0, + "content": "4985" + }, + { + "type": "page_number", + "bbox": [ + 0.495, + 0.941, + 0.504, + 0.952 + ], + "angle": 0, + "content": "1" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.118, + 0.162, + 0.348, + 0.282 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.137, + 0.292, + 0.326, + 0.306 + ], + "angle": 0, + "content": "(a) OOD (Worst performance)." + }, + { + "type": "image", + "bbox": [ + 0.352, + 0.163, + 0.582, + 0.282 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.375, + 0.292, + 0.555, + 0.306 + ], + "angle": 0, + "content": "(b) OOD (Best performance)." + }, + { + "type": "image", + "bbox": [ + 0.584, + 0.163, + 0.815, + 0.282 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.607, + 0.292, + 0.788, + 0.306 + ], + "angle": 0, + "content": "(c) OOD (Random selection)." + }, + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.317, + 0.884, + 0.346 + ], + "angle": 0, + "content": "Figure 10: Visualization of optimal transport dataset distance for ID and OOD (in-domain and out-domain) datasets. 
We highlight the in-domain OOD data in blue and the out-domain OOD data in green." + }, + { + "type": "image", + "bbox": [ + 0.128, + 0.514, + 0.313, + 0.576 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.128, + 0.577, + 0.312, + 0.641 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.144, + 0.645, + 0.294, + 0.658 + ], + "angle": 0, + "content": "(a) RoBERTaBase (10%)" + }, + { + "type": "image", + "bbox": [ + 0.315, + 0.514, + 0.498, + 0.577 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.315, + 0.577, + 0.497, + 0.641 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.33, + 0.645, + 0.481, + 0.658 + ], + "angle": 0, + "content": "(b) RoBERTaBase (20%)" + }, + { + "type": "image", + "bbox": [ + 0.503, + 0.514, + 0.685, + 0.577 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.503, + 0.577, + 0.685, + 0.641 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.517, + 0.645, + 0.667, + 0.658 + ], + "angle": 0, + "content": "(c) RoBERTaBase (40%)" + }, + { + "type": "image", + "bbox": [ + 0.688, + 0.514, + 0.872, + 0.577 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.688, + 0.577, + 0.872, + 0.641 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.699, + 0.645, + 0.858, + 0.658 + ], + "angle": 0, + "content": "(d) RoBERTaBase (100%)" + }, + { + "type": "image", + "bbox": [ + 0.128, + 0.66, + 0.311, + 0.719 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.128, + 0.719, + 0.311, + 0.785 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.161, + 0.788, + 0.273, + 0.802 + ], + "angle": 0, + "content": "(e) \\(\\mathrm{ViT_{Base}}\\) (10%)" + }, + { + "type": "image", + "bbox": [ + 0.315, + 0.66, + 0.497, + 0.785 + ], + 
"angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.348, + 0.788, + 0.46, + 0.802 + ], + "angle": 0, + "content": "(f) \\(\\mathrm{ViT_{Base}}\\) (20%)" + }, + { + "type": "image", + "bbox": [ + 0.501, + 0.66, + 0.683, + 0.785 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.534, + 0.788, + 0.648, + 0.802 + ], + "angle": 0, + "content": "(g) \\(\\mathrm{ViT_{Base}}\\) (40%)" + }, + { + "type": "image", + "bbox": [ + 0.688, + 0.66, + 0.871, + 0.785 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.717, + 0.788, + 0.838, + 0.802 + ], + "angle": 0, + "content": "(h) \\(\\mathrm{ViT_{Base}}\\) (100%)" + }, + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.806, + 0.884, + 0.835 + ], + "angle": 0, + "content": "Figure 11: Feature visualization for pre-trained (with different numbers of pre-training data) and fine-tuned models. We show both in-domain (RVL-CDIP) and out-domain (CORD) OOD datasets." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.521, + 0.941 + ], + "angle": 0, + "content": "4986" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.942, + 0.505, + 0.953 + ], + "angle": 0, + "content": "2" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.118, + 0.075, + 0.331, + 0.39 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.389, + 0.075, + 0.605, + 0.387 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.664, + 0.075, + 0.878, + 0.386 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.112, + 0.397, + 0.884, + 0.482 + ], + "angle": 0, + "content": "Figure 12: MSP, Energy, KNN, and Maha score histogram distributions of ID (blue) and OOD (green) inputs derived from fine-tuned ResNet-50, RoBERTa, and LayoutLMv3. The KNN scores calculated from both vision and language models naturally form smooth distributions. 
In contrast, MSP and Maha scores for both in- and out-of-distribution data concentrate on high values. Overall, our experiments show that using the feature space makes the scores more distinguishable between in- and out-of-distribution data and, as a result, enables more effective OOD detection." + }, + { + "type": "image", + "bbox": [ + 0.119, + 0.485, + 0.487, + 0.604 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.114, + 0.616, + 0.489, + 0.66 + ], + "angle": 0, + "content": "Figure 13: The network architectures in green blocks are our proposed models. We also show the modality information on top of each architecture." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.695, + 0.285, + 0.71 + ], + "angle": 0, + "content": "Vision+Text+Layout:" + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.743, + 0.488, + 0.79 + ], + "angle": 0, + "content": "- LayoutLMv3: We use LayoutLMv3 (layoutlmv3-base, 12 layers, pre-trained on IIT-CDIP) as the backbone." + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.823, + 0.49, + 0.919 + ], + "angle": 0, + "content": "- UDoc: We use a slight variant of UDoc that differs only in the sentence encoder: we adopt a smaller version of the pretrained sentence encoder (all-MiniLM-L6-v2, 6 layers) instead of the larger sentence encoder (bert-base-nli-mean-tokens, 12 layers)." + }, + { + "type": "list", + "bbox": [ + 0.136, + 0.743, + 0.49, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.49, + 0.826, + 0.507 + ], + "angle": 0, + "content": "B Beyond Document Classification" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.533, + 0.885, + 0.919 + ], + "angle": 0, + "content": "In the main paper, we mainly focus on document classification to provide a thorough and in-depth analysis. 
In this section, we go beyond document classification and explore OOD detection for two entity-level tasks in documents: document entity recognition and document object detection. It is natural to detect and recognize basic units in documents such as text, tables, and figures. Document entity recognition aims to predict the label for each semantic entity with given bounding boxes. Document object detection is an object detection task for document images. Specifically, we denote the input as \\( x \\), the bounding box coordinates associated with object instances in the image as \\( \\pmb{b} \\in \\mathbb{R}^4 \\), and use the model with parameters \\( \\theta \\) to model the bounding box regression \\( p_{\\theta}(b|x) \\) and the label classification \\( p_{\\theta}(y|x, b) \\). Given a test input \\( \\hat{x} \\), the OOD detection scoring function for entity detection and recognition can be unified as \\( S(\\hat{x}, \\hat{b}) \\), where \\( \\hat{b} \\) denotes the object instance predicted by the object detector. In particular, for document entity recognition, since the bounding boxes are provided, the OOD score can be simplified as \\( S(\\hat{x}, \\bar{b}) \\), where \\( \\bar{b} \\) is the given object instance." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.52, + 0.939 + ], + "angle": 0, + "content": "4987" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.941, + 0.504, + 0.952 + ], + "angle": 0, + "content": "3" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.112, + 0.085, + 0.493, + 0.456 + ], + "angle": 0, + "content": "Document Object Detection. For document object detection, we use PubLayNet as the ID dataset and construct the OOD dataset from IIIT-AR-13K. Unlike PubLayNet, where the documents are scientific articles, IIIT-AR-13K is a dataset for graphical object detection in business documents (e.g., annual reports), thus there exists an obvious domain gap. 
We select natural images as the OOD entity and filter images that contain the OOD entity. Two object detection models are considered in this paper: (1) Vanilla Faster-RCNN with ResNet-50 visual backbone, and (2) Faster-RCNN with VOS (Du et al., 2022), a recent unknown-aware learning framework to improve OOD detection performance for natural images. Following the original paper, we use 1,000 samples for each ID class to estimate the class-conditional Gaussian statistics. The models are trained for 180k iterations with a base learning rate of 0.01 and a batch size of 8 using the Detectron2 framework (Wu et al., 2019). The performance of the models is measured using the mean average precision (MAP) @ intersection over union (IOU) [0.50:0.95] of bounding boxes." + }, + { + "type": "text", + "bbox": [ + 0.112, + 0.464, + 0.49, + 0.769 + ], + "angle": 0, + "content": "Document Entity Recognition. For entity recognition, we construct ID and OOD datasets from FUNSD. Each semantic entity includes a list of words, a label, and a bounding box. The standard label set for this dataset contains four categories: question, answer, header, and other. In this paper, we select entities labeled as other or header as OOD data, and the entities belonging to the other three categories as ID. Instead of treating entity recognition as a named-entity recognition problem, we follow UDoc and solve this problem at the semantic region level. We replace the sentence encoder in UDoc with a smaller sentence encoder (all-MiniLM-L6-v2\\(^{10}\\)) from Huggingface (Wolf et al., 2019). We also have the following model variants to verify the effectiveness of the combination of modalities: textual-only, visual-only, textual+spatial, visual+spatial, and visual+textual+spatial." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.77, + 0.489, + 0.801 + ], + "angle": 0, + "content": "We provide details on datasets and models as follows." 
+ }, + { + "type": "title", + "bbox": [ + 0.114, + 0.812, + 0.233, + 0.826 + ], + "angle": 0, + "content": "B.1 Datasets" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.833, + 0.489, + 0.898 + ], + "angle": 0, + "content": "The original FUNSD (Jaume et al., 2019) dataset contains 149 training and 50 testing images. For document entity recognition, we treat entities with the category other/header as OOD entities. After" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.885, + 0.149 + ], + "angle": 0, + "content": "the split, if we consider other as OOD, we have a total of 8,330 ID and 1,019 OOD entities. Otherwise, if we consider header as OOD, we have 8,981 ID and 368 OOD entities in total." + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.15, + 0.886, + 0.327 + ], + "angle": 0, + "content": "For document object detection, we consider PubLayNet (Zhong et al., 2019), which contains \\(336\\mathrm{K} / 11\\mathrm{K}\\) training/validation images with 5 categories (text, title, list, figure, and table). The original IIIT-AR-13K (Mondal et al., 2020) contains 5 categories (table, figure, natural image, logo, and signature). In this paper, considering the overlap between IIIT-AR-13K and PubLayNet, we select those images containing natural images as the OOD test set. After filtering, we obtain 2,880 OOD entities across 1,837 document images." + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.328, + 0.886, + 0.473 + ], + "angle": 0, + "content": "We consider two ID datasets in this experiment. (1) PubLayNet: This is the original PubLayNet dataset. We treat all the entities in training/validation images as ID entities. (2) PubLayNet+IIIT-AR-13K: Considering the domain shift between ID data (PubLayNet) and OOD data (IIIT-AR-13K), we combine the PubLayNet training data with the images from IIIT-AR-13K with overlapping annotations (table and figure) and train the object detection model."
+ }, + { + "type": "title", + "bbox": [ + 0.509, + 0.485, + 0.618, + 0.499 + ], + "angle": 0, + "content": "B.2 Models" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.507, + 0.886, + 0.62 + ], + "angle": 0, + "content": "Fig. 13 illustrates the entity recognition models used in this paper. We model entities at the region level instead of the token level, as regions provide richer semantic information. As for the pre-trained model, we adopt UDoc (trained on IIT-CDIP) since it models inputs at the regional level. Based on the UDoc framework, we develop the following models." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.627, + 0.691, + 0.643 + ], + "angle": 0, + "content": "Vision/Vision+Layout:" + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.656, + 0.883, + 0.72 + ], + "angle": 0, + "content": "- ResNet-50: This model uses the ResNet-50 from pre-trained UDoc and applies RoI pooling followed by a classifier to extract entity features." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.733, + 0.885, + 0.83 + ], + "angle": 0, + "content": "- ResNet-50+Position: This model also adopts UDoc's pre-trained ResNet-50. It makes the RoI features spatially aware by adding position embeddings, which are mapped from the bounding boxes via a linear mapping layer." + }, + { + "type": "list", + "bbox": [ + 0.532, + 0.656, + 0.885, + 0.83 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.843, + 0.661, + 0.859 + ], + "angle": 0, + "content": "Text/Text+Layout:" + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.872, + 0.885, + 0.919 + ], + "angle": 0, + "content": "- Sentence BERT: This model adopts the language branch of UDoc and appends a classifier to the output of the sentence encoder."
+ }, + { + "type": "page_footnote", + "bbox": [ + 0.132, + 0.904, + 0.48, + 0.919 + ], + "angle": 0, + "content": "10https://huggingface.co/sentence-transformers" + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.929, + 0.521, + 0.94 + ], + "angle": 0, + "content": "4988" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.941, + 0.505, + 0.952 + ], + "angle": 0, + "content": "4" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.151, + 0.083, + 0.496, + 0.169 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.148, + 0.172, + 0.496, + 0.195 + ], + "angle": 0, + "content": "(a) Comparison of OOD detection methods on different models on two OOD classes: other and header." + }, + { + "type": "image", + "bbox": [ + 0.5, + 0.083, + 0.842, + 0.169 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.497, + 0.171, + 0.843, + 0.195 + ], + "angle": 0, + "content": "(b) OOD detection results from different object detection methods and models." + }, + { + "type": "image_caption", + "bbox": [ + 0.138, + 0.2, + 0.856, + 0.215 + ], + "angle": 0, + "content": "Figure 14: Ablation on document entity recognition and object detection. Numbers are reported in FPR95." + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.224, + 0.486, + 0.271 + ], + "angle": 0, + "content": "- Sentence BERT+Position: This model is similar to the previous model but adds position embeddings to the sentence embeddings." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.281, + 0.284, + 0.297 + ], + "angle": 0, + "content": "Vision+Text+Layout:" + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.306, + 0.489, + 0.386 + ], + "angle": 0, + "content": "- ResNet-50+Sentence BERT: This model follows the same framework as UDoc, but replaces the sentence encoder in the original design with a smaller sentence encoder (all-MiniLM-L6-v2)."
+ }, + { + "type": "text", + "bbox": [ + 0.137, + 0.397, + 0.489, + 0.476 + ], + "angle": 0, + "content": "- SwinT+Sentence BERT: This model replaces the ResNet-50 visual backbone with a pre-trained tiny Swin Transformer (swin-tiny-patch4-window7-224) adopted from Huggingface." + }, + { + "type": "list", + "bbox": [ + 0.137, + 0.306, + 0.489, + 0.476 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.487, + 0.487, + 0.533 + ], + "angle": 0, + "content": "All the models are fine-tuned with the cross-entropy loss for 100 epochs, using a learning rate of \\(10^{-5}\\) and a batch size of 8 on an A100 GPU." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.545, + 0.37, + 0.56 + ], + "angle": 0, + "content": "B.3 Summary of Observations" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.566, + 0.489, + 0.66 + ], + "angle": 0, + "content": "We provide a summary of observations here and hope to inspire future work on a thorough investigation of OOD detection for entity-level tasks. To identify entity types, models should not only understand the words but also utilize spatial and visual information." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.663, + 0.49, + 0.919 + ], + "angle": 0, + "content": "For document entity recognition, the comparison of distance-based and logit-based OOD detection methods with different models is shown in Fig. 14a. More details are shown in Table 2. We see that models can better predict the entity type and also achieve better OOD robustness with the help of spatial information. Considering the weak language dependency between entities, it is not surprising that vision-based models achieve better performance than text-based models. In particular, UDoc with ResNet-50 achieves the best performance on two OOD test sets, illustrating that visual information plays a major role in distinguishing entities with similar semantics.
For document object detection, we summarize our findings in Fig. 14b and describe them in more" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.224, + 0.884, + 0.287 + ], + "angle": 0, + "content": "detail in Table 1. We can see that the OOD detection performance is further improved by introducing document images from IIIT-AR-13K with the same ID annotations as training data." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.288, + 0.885, + 0.545 + ], + "angle": 0, + "content": "To provide more intuitions, in Fig. 15, we visualize the document entity recognition OOD detection results. In Fig. 16, we visualize the prediction on sample OOD images, using object detection models trained without VOS (top) and with VOS (bottom), respectively. We can see that vanilla Faster RCNN trained on PubLayNet produces false positives when applied to the OOD document images from IIIT-AR-13K. Table 1 shows that introducing the unknown-aware learning method optimized for both ID and OOD can reduce the FPR95 while preserving the mAP on the ID data. This experiment indicates that incorporating uncertainty estimation into the entity detection training procedure can improve the reliability of the document object detection system." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.557, + 0.812, + 0.573 + ], + "angle": 0, + "content": "C Detailed Experimental Results" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.582, + 0.881, + 0.614 + ], + "angle": 0, + "content": "- Table 2 corresponds to the results shown in Fig. 15 and Fig. 14a." + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.624, + 0.881, + 0.655 + ], + "angle": 0, + "content": "- Table 1 corresponds to the results shown in Fig. 16 and Fig. 14b." + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.666, + 0.881, + 0.698 + ], + "angle": 0, + "content": "- Table 3 and Table 7 correspond to the results shown in Fig. 4a." 
+ }, + { + "type": "text", + "bbox": [ + 0.509, + 0.709, + 0.881, + 0.74 + ], + "angle": 0, + "content": "- Table 4 and Table 5 correspond to the results shown in Fig. 4c." + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.751, + 0.881, + 0.782 + ], + "angle": 0, + "content": "- Table 6 corresponds to the results shown in Fig. 8 and Fig. 9." + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.793, + 0.881, + 0.824 + ], + "angle": 0, + "content": "- Table 9 and Table 8 correspond to the results shown in Fig. 6 and Fig. 9." + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.835, + 0.881, + 0.865 + ], + "angle": 0, + "content": "- Table 10 and Table 11 correspond to the analysis for Sec. 4 and Sec. 4.2." + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.877, + 0.881, + 0.908 + ], + "angle": 0, + "content": "- Table 12 corresponds to the results shown in Fig. 9." + }, + { + "type": "list", + "bbox": [ + 0.509, + 0.582, + 0.881, + 0.908 + ], + "angle": 0, + "content": null + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.929, + 0.521, + 0.939 + ], + "angle": 0, + "content": "4989" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.941, + 0.504, + 0.952 + ], + "angle": 0, + "content": "5" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.117, + 0.131, + 0.885, + 0.277 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.289, + 0.884, + 0.334 + ], + "angle": 0, + "content": "Figure 15: Visualization of detected OOD entities on the form images. In the top part, entities in blue are annotated as other. The bottom part shows the detected OOD entities (green). We also show failure cases on the right."
+ }, + { + "type": "image", + "bbox": [ + 0.118, + 0.422, + 0.885, + 0.59 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.603, + 0.884, + 0.677 + ], + "angle": 0, + "content": "Figure 16: Visualization of detected objects on the OOD images (from IIIT-AR-13K) by a vanilla Faster-RCNN (top) and Faster-RCNN with VOS (bottom) is shown. Objects in blue boxes are detected and classified as one of the ID classes. The detected OOD objects (green) reduce false positives among detected objects. We also visualize detected objects on the ID images. There is a clear difference between PubLayNet and IIIT-AR-13K – entities and annotations of natural images rarely exist in PubLayNet." + }, + { + "type": "table_caption", + "bbox": [ + 0.269, + 0.765, + 0.726, + 0.779 + ], + "angle": 0, + "content": "Table 1: Comparison with different training and detection methods." + }, + { + "type": "table", + "bbox": [ + 0.157, + 0.789, + 0.843, + 0.87 + ], + "angle": 0, + "content": "
Models | ID Dataset | OOD Score | IIIT-AR-13K (Natural Image as OOD) | PubLayNet (ID)
| | | FPR95 | AUROC | AUPR | mAP
Vanilla Faster-RCNN | PubLayNet | MSP | 74.33 | 79.12 | 98.41 | 92.6
| | Energy | 55.96 | 83.55 | 98.73 |
Faster-RCNN with VOS | PubLayNet | MSP | 63.65 | 79.37 | 98.57 | 92.2
| | Energy | 55.61 | 80.60 | 98.67 |
Faster-RCNN with VOS | PubLayNet+IIIT-AR-13K (ID) | MSP | 56.57 | 82.94 | 98.59 | 92.4
| | Energy | 47.73 | 84.04 | 98.67 |
" + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.929, + 0.522, + 0.941 + ], + "angle": 0, + "content": "4990" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.942, + 0.506, + 0.953 + ], + "angle": 0, + "content": "6" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.114, + 0.354, + 0.885, + 0.397 + ], + "angle": 0, + "content": "Table 2: Comparison with different models on the FUNSD OOD setting. All models are initialized with UDoc pretrained on IIT-CDIP and fine-tuned on FUNSD data with ID entities. All values are percentages. S-BERT denotes Sentence BERT. A lower FPR95 or a higher AUROC value indicates better performance." + }, + { + "type": "table", + "bbox": [ + 0.119, + 0.409, + 0.88, + 0.643 + ], + "angle": 0, + "content": "
Test F1MethodOther (OOD)IDHeader (OOD)IDTest F1MethodOther (OOD)IDHeader (OOD)ID
FPR95AUROCF1FPR95AUROCF1FPR95AUROCF1FPR95AUROCF1
ResNet-5075.15KNN1059.4779.1481.7963.97ResNet-50+Position75.82KNN1073.2173.1990.2261.42
KNN2069.9778.1581.2563.66KNN2072.9173.4488.0461.54
KNN5084.4977.4082.6162.86KNN5075.9674.4382.8860.93
KNN10097.9477.0877.6584.2461.6278.04KNN10079.6974.8583.7059.3977.98
KNN20097.8477.1594.2959.74KNN20086.0675.1491.5857.42
KNN40097.1576.0994.8457.53KNN40087.9374.9295.9255.37
MSP50.5475.8075.8276.55MSP77.8267.6084.2466.58
MaxLogit52.4073.7073.6476.72MaxLogit76.9467.0584.2465.41
Energy52.5073.7075.8276.55Energy76.6466.9384.5164.98
S-BERT77.15KNN1093.7248.4492.6660.99S-BERT+Position82.69KNN1097.4541.2493.7562.38
KNN2093.9247.6592.9359.00KNN2097.5539.9193.4861.51
KNN5093.6248.9493.2157.90KNN5097.1539.5692.3961.76
KNN10093.9248.7993.2155.07KNN10097.0641.6791.8560.99
KNN20093.9247.8582.1293.4852.8682.41KNN20096.5741.8587.0859.0887.01
KNN40094.1146.2195.3849.86KNN40097.2540.8390.2254.03
MSP93.6254.9194.2952.14MSP88.4261.1190.7659.58
MaxLogit93.7254.7594.5756.51MaxLogit89.7060.1988.8660.92
Energy93.2354.8893.2158.22Energy90.4859.6189.9561.12
ResNet-50+S-BERT89.11KNN1045.9387.8553.8087.97SwinT+S-BERT86.00KNN1063.3083.6481.5264.08
KNN2053.5886.7155.7187.06KNN2066.7382.5381.5261.50
KNN5073.2184.3662.7785.49KNN5070.1780.2182.3457.77
KNN10089.7083.0169.0283.60KNN10083.9177.7183.1554.97
KNN20096.6681.9093.1375.5480.8593.18KNN20095.3975.7990.8250.5790.40
KNN40098.8281.0091.5877.42KNN40096.7675.4999.7347.45
MSP45.4487.8267.3972.85MSP69.2870.7080.7152.02
MaxLogit45.5390.5863.0472.39MaxLogit67.1274.4181.7952.77
Energy45.5390.5763.8672.37Energy67.2274.4181.7952.77
" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.518, + 0.941 + ], + "angle": 0, + "content": "4991" + }, + { + "type": "page_number", + "bbox": [ + 0.495, + 0.941, + 0.504, + 0.952 + ], + "angle": 0, + "content": "7" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.114, + 0.229, + 0.885, + 0.3 + ], + "angle": 0, + "content": "Table 3: OOD detection performance for document classification with different amounts of pre-training data from IIT-CDIP. ID (Acc) denotes the ID accuracy obtained by testing on ID test data. We report the KNN-based scores for both pre-trained and fine-tuned models. Sci. Poster denotes the document images converted from the NJU-Fudan Paper-Poster Dataset. Receipt denotes the receipt images collected from the CORD receipt understanding dataset. For in-domain OOD test data, we also report the averaged scores." + }, + { + "type": "table", + "bbox": [ + 0.119, + 0.303, + 0.88, + 0.771 + ], + "angle": 0, + "content": "
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
RoBERTaBase(10%)Pre-train on 10% IIT-CDIP→ fine-tune on RVL-CDIP ID data
90.59MSP92.7569.2492.2166.9394.6565.4092.0070.0992.9067.9296.5166.9399.1052.90
MaxLogit98.3677.8597.2378.5198.7672.8498.8678.0898.3076.82100.0078.69100.0063.74
Energy98.6077.8197.5578.4998.9672.7998.9478.0098.5176.77100.0078.68100.0063.70
GradNorm98.0479.2697.0776.8598.5672.8398.6280.5598.0777.37100.0085.23100.0064.10
KNN1063.2188.1865.8188.0573.0284.6367.7488.9267.4587.4469.7788.4990.5084.44
KNN2063.5388.0765.8987.9072.7584.4867.3388.8167.3887.3268.6088.1391.1084.09
KNN5064.1787.8966.9787.7773.3484.2367.2188.6067.9287.1272.0987.4791.6083.59
KNN10064.4987.6467.7887.5573.4683.9467.2988.3768.2686.8872.0986.8391.5083.21
Pre-train on 10% IIT-CDIP (no fine-tune)
-KNN1088.0766.9492.1366.6294.1361.9094.4054.5792.1862.5167.4487.0462.1084.94
KNN2088.5966.0292.6565.2594.1360.8394.7253.7992.5261.4777.9185.3864.6083.86
KNN5089.7564.4093.5363.1294.3758.9895.1752.3393.2059.7183.7282.9769.2082.29
KNN10090.2362.9493.8561.2894.4157.4595.1351.2893.4058.2483.7280.9170.1081.05
RoBERTaBase(20%)Pre-train on 20% IIT-CDIP→ fine-tune on RVL-CDIP ID data
90.71MSP94.2868.0294.4665.9896.0162.9894.8165.9894.8965.7495.3563.5599.1054.99
MaxLogit97.3677.8297.1979.1698.4072.6498.3477.6897.8276.82100.0077.3699.6066.63
Energy98.0477.8097.4379.1598.7672.6198.5877.6498.2076.80100.0077.3299.6066.61
GradNorm97.3680.6896.8376.0498.4473.2997.8981.3797.6377.85100.0086.1899.5067.49
KNN1063.5788.3067.0687.0673.6683.9273.0987.8069.3486.7769.7788.0187.6083.81
KNN2063.8588.2067.4686.9073.9483.7872.9387.7069.5486.6469.7787.6388.3083.53
KNN5063.8988.0267.5486.7174.3883.5572.2487.4669.5186.4370.9387.0988.2083.12
KNN10064.8587.8167.6286.4574.9083.2572.6587.2470.0086.1972.0986.6588.3082.89
Pre-train on 20% IIT-CDIP (no fine-tune)
-KNN1087.1568.2790.8866.8992.2662.3995.0153.0291.3262.6443.0292.2957.0087.67
KNN2087.3167.3592.0465.5491.5461.4094.9752.3391.4661.6647.6791.1862.6086.61
KNN5088.3965.7192.6963.4592.1859.5795.2550.9792.1359.9256.9889.6465.7085.20
KNN10088.8364.2093.1361.6192.2257.9995.4549.9592.4158.4458.1488.3666.9084.17
RoBERTaBase(40%)Pre-train on 40% IIT-CDIP→ fine-tune on RVL-CDIP ID data
90.76MSP92.6770.0993.9365.6995.0563.1995.5065.5494.2966.1395.3563.6395.4064.97
MaxLogit98.0878.7297.8779.8598.4471.6398.3075.4198.1776.4098.8478.0798.9075.65
Energy98.4878.6997.9179.8398.6871.6198.5075.4098.3976.38100.0078.0498.5075.60
GradNorm98.0481.0397.4776.7398.4472.7797.4079.1197.8477.41100.0087.4797.6077.12
KNN1060.5788.7968.8686.3675.2683.5573.9087.1269.6586.4667.4489.9072.7089.49
KNN2061.3788.7269.0686.2475.4683.4373.4687.0069.8486.3568.6089.6673.5089.25
KNN5062.2188.5269.1886.0875.6683.2173.4286.7170.1286.1370.9389.2074.7088.89
KNN10063.7788.3069.7985.8476.0282.9374.1986.4670.9485.8874.4288.8475.3088.69
Pre-train on 40% IIT-CDIP (no fine-tune)
-KNN1085.7169.0890.8468.6890.4662.5294.7651.7690.4463.0125.5895.8357.3088.60
KNN2085.2768.2191.6467.4889.7461.3294.8151.0190.3662.0029.0795.2262.3087.61
KNN5086.1966.6092.2165.5490.3059.3594.9349.6090.9160.2741.8694.3266.8086.25
KNN10087.1965.0492.5763.8390.5057.7495.0948.4491.3458.7645.3593.6668.3085.14
RoBERTaBase(100%)Pre-train on 100% IIT-CDIP (no fine-tune)
-KNN1084.4370.2090.2068.5490.9863.1894.7252.1690.0863.5227.9194.1046.0091.37
KNN2084.5169.3091.2867.3590.3861.9694.7251.4390.2262.5133.7293.3951.5090.55
KNN5085.6767.7591.9265.3590.8259.7994.8949.7790.8260.6639.5392.2856.7089.32
KNN10086.5566.0892.9763.4691.4658.0095.4148.3991.6058.9844.1991.2961.6088.18
" + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.929, + 0.521, + 0.941 + ], + "angle": 0, + "content": "4992" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.941, + 0.505, + 0.952 + ], + "angle": 0, + "content": "8" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.115, + 0.248, + 0.881, + 0.276 + ], + "angle": 0, + "content": "Table 4: OOD detection performance for document classification with different amounts of pre-training data from IIT-CDIP\\(^{-}\\) (pseudo OOD categories removed)." + }, + { + "type": "table", + "bbox": [ + 0.119, + 0.28, + 0.88, + 0.751 + ], + "angle": 0, + "content": "
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
RoBERTaBase(10%)Pre-train on 10% IIT-CDIP→ fine-tune on RVL-CDIP ID data
90.62MSP90.0769.0089.9268.8692.5864.1691.0766.7890.9167.2096.5154.4796.7059.63
MaxLogit97.7678.4097.7180.5898.6471.2698.7076.3898.2076.66100.0073.5199.8073.32
Energy98.1678.3597.7580.5598.8471.2098.9076.3298.4176.60100.0073.4699.8073.31
GradNorm97.6879.9297.2779.4298.5671.3198.5079.4498.0077.52100.0082.6299.6075.85
KNN1065.8587.8966.6988.1275.9882.8274.5586.8570.7786.4287.2185.1683.9087.91
KNN2066.3387.8066.8588.0475.9482.7073.9486.7570.7686.3287.2184.6383.6087.71
KNN5066.7787.6667.3088.0076.0282.4973.6686.5270.9486.1788.3783.7383.9087.34
KNN10067.2587.4267.7487.8476.1882.1873.9986.2671.2985.9289.5382.8583.9086.98
Pre-train on 10% IIT-CDIP- (no fine-tune)
-KNN1086.3565.4885.7470.8492.9459.5593.1456.6289.5463.1229.0795.4287.6083.13
KNN2086.8764.4887.1469.6893.3058.4193.3055.9190.1562.1237.2194.7588.0081.44
KNN5087.7562.7388.9967.8093.5056.5493.7554.5291.0060.4047.6793.7190.3078.97
KNN10088.4361.1789.5966.0593.6254.9193.9953.4091.4158.8848.8493.0991.5077.00
RoBERTaBase(20%)Pre-train on 20% IIT-CDIP→ fine-tune on RVL-CDIP ID data
90.65MSP96.0467.5894.9068.3296.0564.9296.2368.6295.8067.36100.0061.4998.7056.38
MaxLogit97.9676.9297.5980.6898.4872.3198.7477.7298.1976.91100.0075.9199.5069.21
Energy98.1676.8998.2380.6598.8872.2699.0777.6798.5876.87100.0075.8999.5069.18
GradNorm97.8478.2397.3178.5798.0071.4498.4680.0397.9077.07100.0085.8099.0069.54
KNN1066.0587.6067.7087.9473.4283.1073.5087.9670.1786.6577.9190.1990.1084.32
KNN2066.1787.5068.3887.8373.9082.9373.6687.8270.5386.5277.9189.8489.8084.13
KNN5067.2187.2668.4687.7374.1882.6373.6687.5870.8886.3079.0789.2489.6083.80
KNN10068.7886.9869.1487.5375.5082.3074.2787.3671.9286.0482.5688.6889.8083.59
Pre-train on 20% IIT-CDIP- (no fine-tune)
-KNN1085.6366.1085.1770.3492.5860.2993.4356.8589.2063.4030.2395.7283.2083.84
KNN2086.3165.1785.9869.1393.3059.0993.4756.0589.7762.3634.8895.0884.9082.16
KNN5087.3163.5087.6367.1193.3857.1794.1654.6090.6260.6044.1994.0787.5079.74
KNN10087.8362.0688.2765.3193.6255.6594.3253.5691.0159.1448.8493.4888.8077.77
RoBERTaBase(40%)Pre-train on 40% IIT-CDIP→ fine-tune on RVL-CDIP ID data
90.72MSP93.8468.8693.6967.6295.4163.9194.2065.2594.2866.4196.5163.3298.9054.02
MaxLogit97.1678.5696.8780.1898.6871.8498.5874.4497.8276.26100.0076.7299.1065.41
Energy97.4078.5397.1580.1798.6871.7998.7874.3998.0076.22100.0076.6799.5065.39
GradNorm97.2480.5996.9578.0198.5272.1298.3477.1697.7676.97100.0086.9499.7067.46
KNN1066.8987.9168.5886.9077.6182.3176.5885.3972.4185.6375.5889.4586.4084.23
KNN2067.5787.8068.9086.7977.7782.1976.3085.2272.6485.5080.2389.1786.8083.85
KNN5067.9787.5869.6786.6778.0181.9876.6684.8573.0885.2780.2388.6387.2083.21
KNN10069.4687.3471.2386.4779.0181.7277.4884.5774.3085.0282.5688.1988.0082.72
Pre-train on 40% IIT-CDIP- (no fine-tune)
-KNN1088.7966.1488.3568.9293.5060.3095.5451.0991.5461.6137.2195.3755.9091.90
KNN2089.5965.0789.8067.6193.8959.1095.5850.1792.2160.4946.5194.4161.5091.00
KNN5090.5963.3991.6465.6893.7757.3595.6648.6392.9258.7653.4993.0666.4089.72
KNN10091.1961.7992.3763.9093.6655.7895.6247.4293.2157.2265.1291.9968.3088.72
RoBERTaBase(100%)Pre-train on 100% IIT-CDIP→ fine-tune on RVL-CDIP ID data
90.74MSP94.1268.2494.2966.1895.9363.8395.2165.6694.8965.9898.8459.2596.5065.42
MaxLogit97.2478.1597.1980.2798.3672.1698.3875.8297.7976.60100.0073.2899.3075.58
Energy97.3278.1397.5180.2698.6472.1298.7075.7898.0476.57100.0073.2799.6075.52
GradNorm97.1680.0797.3977.8698.4071.8398.0579.0897.7577.21100.0086.3299.4073.52
KNN1066.8187.8669.6786.9177.4982.6074.5986.2872.1485.9181.4087.7476.9088.49
KNN2066.7387.7570.3186.7877.8982.5175.2886.1372.5585.7981.4087.4377.5088.39
KNN5067.2587.5470.5986.6277.8582.3275.4185.8472.7885.5883.7286.8577.8088.23
KNN10068.1387.3471.4786.3978.0582.0876.1485.6073.4585.3583.7286.3978.5088.21
Pre-train on 100% IIT-CDIP- (no fine-tune)
-KNN1087.9566.4484.4972.3495.0158.4796.2349.0790.9261.5831.4096.1941.6094.78
KNN2088.9165.3985.7071.2595.3357.1996.5948.0691.6360.4734.8895.5048.4094.12
KNN5090.5963.6987.1469.4595.5354.9397.0846.2692.5858.5843.0294.5155.2093.05
KNN10091.7562.0888.5567.8595.8953.0597.2044.8193.3556.9550.0093.6061.1092.04
" + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.929, + 0.52, + 0.941 + ], + "angle": 0, + "content": "4993" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.941, + 0.505, + 0.952 + ], + "angle": 0, + "content": "9" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.115, + 0.248, + 0.881, + 0.276 + ], + "angle": 0, + "content": "Table 5: OOD detection performance for document classification with different amounts of pre-training data from IIT-CDIP\\(^{-}\\) (pseudo OOD categories removed)." + }, + { + "type": "table", + "bbox": [ + 0.119, + 0.28, + 0.88, + 0.751 + ], + "angle": 0, + "content": "
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
LayoutLMv3Base(10%)Pre-train on 10% IIT-CDIP→ fine-tune on RVL-CDIP ID data
95.89MSP42.4376.3156.0569.3954.3170.2547.0073.9349.9572.4743.0276.5544.1075.68
MaxLogit41.9191.2755.0489.3354.1985.2044.9790.9349.0389.1838.3794.2741.3091.38
Energy41.8391.2954.9289.3554.1185.2245.0190.9748.9789.2138.3794.2941.1091.42
GradNorm39.1591.8054.0486.9351.8886.0542.4991.6546.8989.1138.3791.7941.4091.82
KNN1031.6394.2546.5290.9846.7790.4940.8392.7941.4492.1324.4295.9530.3095.66
KNN2032.0394.1146.6590.8947.0190.3241.6092.6341.8291.9926.7495.7631.8095.44
KNN5034.3993.7549.3490.4649.3689.9444.5292.2344.4091.6033.7295.3333.2095.38
KNN10036.1593.4751.2790.1951.3689.6546.6391.9946.3591.3233.7295.1035.1095.16
Pre-train on 10% IIT-CDIP-(no fine-tune)
-KNN1090.9572.3094.6665.4990.9472.3894.4067.3292.7469.3748.8491.5656.0075.08
KNN2091.5970.5494.9863.9191.6670.7494.8165.9593.2667.7853.4990.4157.6073.51
KNN5093.0767.7695.5461.2492.7868.2795.2564.0194.1665.3255.8188.3758.5071.06
KNN10093.5565.4195.9059.1393.1066.1995.5462.4194.5263.2867.4486.4460.2069.09
LayoutLMv3Base(20%)Pre-train on 20% IIT-CDIP→ fine-tune on RVL-CDIP ID data
95.84MSP49.2076.7861.5170.1362.3769.4955.5273.6457.1572.5150.0077.9950.7075.90
MaxLogit41.0391.5754.0088.4556.4285.7047.0090.1949.6188.9838.3793.6241.8090.56
Energy40.9591.6053.7688.4756.1985.7246.7990.2249.4289.0038.3793.6541.7090.59
GradNorm37.1591.8954.1684.9953.0386.2843.9590.9447.0788.5240.7090.4142.4090.91
KNN1031.6394.1747.6990.2947.4990.5040.5492.9241.8491.9731.4095.6534.5095.15
KNN2032.5594.0347.8990.2248.3290.3440.9192.7642.4291.8433.7295.4535.4094.97
KNN5035.7193.6749.7489.8251.0489.9944.1292.3945.1591.4736.0595.0136.2094.92
KNN10036.7593.3850.3089.6051.6889.7144.9792.1745.9291.2236.0594.7336.5094.71
Pre-train on 20% IIT-CDIP-(no fine-tune)
-KNN1090.3975.2579.5979.4393.1472.4197.1266.9990.0673.5250.0091.3624.7096.34
KNN2090.6373.7580.4778.5193.8170.5897.1665.5490.5272.1055.8189.9126.9095.94
KNN5091.6771.1982.5676.9094.4567.8297.3662.9891.5169.7267.4487.2929.1095.31
KNN10091.9569.1983.7375.5595.3365.3797.3660.8492.0967.7474.4284.7830.3094.75
LayoutLMv3Base(40%)Pre-train on 40% IIT-CDIP→ fine-tune on RVL-CDIP ID data
96.01MSP51.7675.7662.3969.6363.3768.7554.2274.0357.9472.0455.8171.6942.5080.56
MaxLogit42.0391.2954.2489.4757.3084.4445.6690.0249.8188.8052.3393.0833.0092.89
Energy41.8791.3154.2089.4957.2684.4745.5090.0549.7188.8352.3393.1332.5092.92
GradNorm38.1991.6653.6486.8555.0385.6643.1891.4547.5188.9052.3392.3934.6092.95
KNN1031.4794.4347.1390.6348.2090.4538.1193.3041.2392.2027.9195.7824.7096.09
KNN2032.5994.2947.6190.5549.6090.2739.2593.1442.2692.0632.5695.6025.5095.95
KNN5034.8793.9349.5090.1052.1189.8742.2992.7544.6991.6638.3795.1626.4095.95
KNN10036.5593.6550.3889.8253.5589.5743.7192.5146.0591.3943.0294.8927.7095.77
Pre-train on 40% IIT-CDIP (no fine-tune)
-KNN1087.0780.4471.7683.7286.7582.3196.1076.3685.4280.7175.5884.965.9098.24
KNN2088.9579.0374.9382.3188.9981.1196.7175.0187.4079.3680.2382.567.2097.93
KNN5091.4777.2380.3991.7891.7879.7597.4072.6090.2677.3787.2178.199.0097.92
KNN10090.7575.2784.7777.4891.7478.3197.1670.2691.1075.3389.5374.1114.2097.49
LayoutLMyBase(100%)Pre-train on 100% IIT-CDIP→ fine-tune on RVL-CDIP ID data
96.38MSP43.4376.1257.2169.1658.3868.5646.1474.7651.2972.1538.3778.6728.3083.78
MaxLogit35.1991.2950.2288.9853.1984.5439.9890.7144.6488.8824.4296.3921.4095.57
Energy35.2391.3250.2289.0053.1984.5539.9890.7344.6588.9024.4296.4421.4095.58
GradNorm30.3092.5448.6188.1848.9686.5836.1692.6341.0189.9819.7796.7119.2096.35
KNN1026.5094.9543.4791.6945.0990.9534.0993.8637.2992.8619.7797.3917.8096.37
KNN2027.2294.8344.0791.5845.4190.7934.6293.7137.8392.7319.7797.2218.4096.26
KNN5029.4694.4946.2891.1247.6990.4537.5093.3340.2392.3517.4497.0418.7096.80
KNN10032.1594.2648.1790.8550.6490.2140.3893.1242.8392.1119.7796.8820.7096.74
Pre-train on 100% IIT-CDIP (no fine-tune)
-KNN1078.7481.6774.4580.8680.5383.7195.0177.3382.1880.8938.3794.6217.7096.12
KNN2082.3980.1377.8679.3183.4882.7595.4575.9384.8079.5344.1993.4214.6096.13
KNN5086.0377.6582.8076.6086.9181.3096.1073.0787.9677.1654.6591.099.6097.21
KNN10089.1175.5188.0374.0890.6279.7896.7170.4391.1274.9566.2888.5018.0096.82
" + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.929, + 0.52, + 0.941 + ], + "angle": 0, + "content": "4994" + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.941, + 0.51, + 0.953 + ], + "angle": 0, + "content": "10" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.114, + 0.274, + 0.882, + 0.318 + ], + "angle": 0, + "content": "Table 6: OOD detection performance for document classification. Spatial-RoBERTaBase (Pre) or SRBase (Pre) denotes applying the spatial-aware adapter in the word embedding layer. Spatial-RoBERTaBase (Post) or SRBase (Post) denotes applying the spatial-aware adaptor at the output layer." + }, + { + "type": "table", + "bbox": [ + 0.119, + 0.327, + 0.88, + 0.727 + ], + "angle": 0, + "content": "
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
RoBERTaBaseFine-tune on RVL-CDIP (ID)
90.19MSP91.1973.7090.8473.4991.8271.5391.0372.3591.2272.7793.0280.9497.6074.59
MaxLogit96.8879.0496.8779.3898.0475.8598.5477.4597.5877.93100.0082.7699.4079.99
Energy97.4878.9697.2379.3198.4075.7199.0777.2598.0477.81100.0082.7199.2080.06
KNN1053.2088.9458.5088.6261.3786.2563.7288.2959.2088.0222.0996.5268.6092.47
KNN2053.4488.8158.9088.5061.6586.0763.6088.1559.4087.8827.9196.3871.7092.02
KNN5053.8488.5259.4288.4262.0185.8164.1687.8059.8687.6432.5696.0774.3091.37
KNN10055.5688.1060.6788.2063.6985.4164.7787.4261.1787.2834.8895.6776.5090.81
No fine-tune
-KNN1093.1163.5288.1566.3494.5766.9298.4253.3793.5662.5425.5895.9986.0072.99
KNN2092.9963.1888.3965.7894.5766.0898.4252.1093.5961.7826.7495.7187.3070.44
KNN5092.6762.4189.3164.7294.1764.7498.3450.0793.6260.4826.7495.0290.8066.04
KNN10092.6761.5789.5963.5794.0163.4598.1748.3393.6159.2329.0794.3492.8061.62
SRBase(Pre)Pre-train on IIT-CDIP → fine-tune on RVL-CDIP (ID)
97.11MSP46.8074.5254.6470.5856.2669.7254.3070.7453.0071.3944.1975.7957.2069.23
MaxLogit39.4388.6446.4889.9249.9685.7548.3087.6646.0487.9933.7293.4250.6088.70
Energy39.4388.6646.4889.9450.0085.7648.3087.6746.0588.0133.7293.4550.6088.71
KNN1031.9194.4142.1992.6546.6589.3142.0992.6540.7192.2610.4797.4552.1092.93
KNN2032.3194.2842.5992.6447.0189.2143.4392.5341.3492.1611.6397.3153.3092.80
KNN5034.3993.9943.8392.3649.0488.9345.4192.1943.1791.8712.7997.0153.1092.51
KNN10035.1593.7644.2792.1549.4888.6546.1491.9743.7691.6315.1296.8149.7092.44
Pre-train on IIT-CDIP (no fine-tune)
-KNN1078.8278.9279.9973.8977.6981.3291.4876.5282.0077.6610.4798.0887.3080.89
KNN2079.7477.9582.6472.1779.8180.4092.1375.1183.5876.4116.2897.6092.1076.94
KNN5080.4276.8785.1369.6282.1278.9392.9873.0185.1674.6122.0996.6695.2070.53
KNN10081.4375.7086.9067.1983.4077.1293.3871.0786.2872.7727.9195.8696.6064.56
SRBase(Post)Fine-tune on RVL-CDIP (ID)
97.10MSP58.0578.3776.4665.4465.8075.0061.8177.5965.5374.1054.6581.6593.5052.85
MaxLogit49.2089.8272.3680.2857.8287.2852.5290.0457.9886.8634.8894.8891.6073.37
Energy47.5689.8771.9680.3056.5887.3251.1890.1056.8286.9034.8895.0491.3073.39
KNN1037.4393.3764.0886.8349.4489.8246.9292.1749.4790.5526.7496.3890.1080.21
KNN2038.2793.2565.3386.5250.8089.6648.0991.9950.6290.3526.7496.2391.2079.57
KNN5040.4392.9867.3886.0252.8389.3850.6591.5852.8289.9926.7495.8992.1078.48
KNN10041.9992.7767.9485.6253.8789.1751.2291.3353.7689.7229.0795.6792.6077.68
SRLarge(Pre)Pre-train on IIT-CDIP → fine-tune on RVL-CDIP (ID)
97.37MSP62.3767.8271.2763.3672.8762.5470.2563.8469.1964.3976.7460.6167.0065.48
MaxLogit33.3990.1539.2589.8742.3088.1237.0591.6638.0089.9531.4092.4127.7094.23
Energy33.3990.1639.2589.8842.3088.1337.0591.6638.0089.9631.4092.4227.7094.22
KNN1028.1894.4742.4393.0137.4391.7431.1394.7234.7993.4925.5896.2418.6096.28
KNN2028.7894.3242.4392.9038.0791.5832.0294.5535.3393.3425.5896.0218.6096.33
KNN5030.2293.9543.7192.6940.0691.2634.5494.1037.1393.0026.7495.5221.4096.14
KNN10030.8693.7144.1192.5640.6691.0535.4793.8837.7892.8026.7495.2221.7096.11
Pre-train on IIT-CDIP (no fine-tune)
-KNN1068.4980.4388.2369.8371.7583.1188.1173.3279.1476.6775.5884.3649.8092.02
KNN2071.7478.7790.2467.4175.6681.3889.0471.1481.6774.6881.4081.5562.2090.29
KNN5075.4676.4992.8163.8280.1778.7290.4267.8484.7271.7282.5677.1578.2087.49
KNN10077.6274.5994.4260.9483.1676.2591.8065.3086.7569.2784.8873.3488.2084.96
" + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.929, + 0.52, + 0.941 + ], + "angle": 0, + "content": "4995" + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.941, + 0.509, + 0.953 + ], + "angle": 0, + "content": "11" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.114, + 0.097, + 0.884, + 0.124 + ], + "angle": 0, + "content": "Table 7: OOD detection performance for document classification with the different number of pre-training data from IIT-CDIP." + }, + { + "type": "table", + "bbox": [ + 0.12, + 0.136, + 0.88, + 0.603 + ], + "angle": 0, + "content": "
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
VITBase(10%)Pre-train on 10% IIT-CDIP→ fine-tune on RVL-CDIP (ID)
94.89MSP55.8088.3748.6191.3863.9383.8355.5288.5555.9688.0352.0589.6034.1095.04
MaxLogit50.3691.5137.7794.3062.3787.9753.6992.1151.0591.4738.3694.2428.6096.06
Energy50.5691.4837.0894.3363.4987.8955.1992.0051.5891.4238.3694.2929.4095.96
GradNorm55.5679.7545.9684.7966.9274.0758.4481.0756.7279.9247.9582.0434.9091.68
KNN1050.4092.6043.5193.9251.6090.5474.4788.8755.0091.4820.5597.199.2098.21
KNN2049.8092.7040.3894.4353.3990.2674.7288.7754.5791.5423.2996.9810.4098.05
KNN5046.7292.8934.2795.2456.0789.9274.5588.4552.9091.6227.4096.5612.8097.80
KNN10045.4892.8929.3395.6757.6289.5675.0488.2551.8791.5930.1496.2115.0097.57
Pre-train on IIT-CDIP (no fine-tune)
-KNN1098.9243.0897.6749.0099.5254.4199.3540.2698.8646.6993.1592.516.9098.06
KNN2098.8842.4797.7548.5799.5253.7599.3539.5698.8846.0994.5292.248.6097.91
KNN5098.8041.7097.8348.0499.5252.9199.3538.6298.8845.3295.8991.8010.6097.66
KNN10098.7641.2097.7947.7099.4852.3299.3538.0198.8444.8198.6391.3114.5097.41
VITBase(20%)Pre-train on 20% IIT-CDIP→ fine-tune on RVL-CDIP (ID)
94.62MSP54.3689.0151.6391.3164.5785.2360.5188.6757.7788.5660.2789.3444.2093.73
MaxLogit44.3292.1638.2194.1864.9287.6358.5691.3351.5091.3245.2192.6339.7094.36
Energy44.3692.1737.8994.2466.5687.5160.3991.2252.3091.2846.5892.6241.5094.18
GradNorm90.5154.9292.0451.6794.2945.4198.1332.3693.7446.0995.8940.4489.7059.01
KNN1052.2092.5845.8493.7353.7990.7577.8487.0257.4291.0217.8197.3316.9097.40
KNN2051.6092.6643.5594.1555.6390.4678.0486.7957.2091.0219.1897.0619.4097.11
KNN5050.1292.8639.9894.8258.0290.1878.7786.5456.7291.1019.1896.6323.1096.68
KNN10048.0492.9134.7595.2860.3889.8878.9886.4255.5491.1220.5596.2726.2096.35
Pre-train on IIT-CDIP (no fine-tune)
-KNN1098.1641.1397.5147.1299.4853.0599.3138.7998.6245.0294.5291.808.0097.41
KNN2098.1240.7197.5146.7999.4852.5299.3138.3198.6044.5894.5291.488.7097.25
KNN5098.0440.1097.5546.3199.4851.8499.3937.6398.6243.9795.8991.0111.5096.99
KNN10098.0039.7497.5545.9899.4851.3499.3937.2698.6043.5897.2690.5514.6096.70
VITBase(40%)Pre-train on 40% IIT-CDIP→ fine-tune on RVL-CDIP (ID)
94.63MSP55.4888.6552.2791.5464.4985.5258.0889.2057.5888.7367.1284.6245.8093.82
MaxLogit47.1291.7440.0694.0961.0588.6856.5792.0151.2091.6369.8689.8132.9095.46
Energy47.1291.7339.9494.1062.3388.6258.6091.8852.0091.5869.8689.6532.7095.44
GradNorm47.0085.7641.9089.6460.6981.3753.7387.0650.8385.9664.3881.1234.0092.93
KNN1053.2892.1348.3392.9946.4592.2075.6188.8755.9291.5534.2595.536.8098.56
KNN2052.7692.2445.8893.5748.1291.9574.8488.7555.4091.6332.8895.217.8098.36
KNN5051.2892.5240.9494.5150.5291.7075.0888.4654.4691.8035.6294.6710.9098.04
KNN10050.3292.6236.1695.1253.3591.3675.9388.2453.9491.8439.7394.2513.6097.76
Pre-train on IIT-CDIP (no fine-tune)
-KNN1097.5640.6097.0346.2899.2453.7699.1539.6298.2445.0682.1992.021.0099.59
KNN2097.5640.0096.9545.8699.2453.1899.1539.1298.2244.5482.1991.631.0099.55
KNN5097.5639.2496.9945.2099.2452.3999.1538.4998.2443.8386.3091.071.0099.50
KNN10097.6038.7897.0344.7999.2451.7699.1538.1598.2643.3790.4190.671.2099.45
VITBase(100%)Pre-train on 100% IIT-CDIP→ fine-tune on RVL-CDIP (ID)
94.79MSP54.2888.8049.1491.8064.6084.4558.8588.7856.7288.4661.6489.4441.0094.27
MaxLogit44.9692.1338.0194.5263.9787.9756.4991.8150.8691.6168.4990.6534.6095.26
Energy45.7292.1138.0194.5565.8487.8657.9191.7051.8791.5672.6090.4134.8095.14
GradNorm48.7284.2144.3687.5063.4978.0756.2584.7953.2083.6460.2782.9635.6091.24
KNN1045.1693.1439.1394.6251.6890.8573.5888.8152.3991.8650.6893.0910.4098.04
KNN2044.8893.1436.6495.0453.3590.5974.2788.6752.2891.8650.6892.6712.0097.81
KNN5043.6793.1931.1895.6056.7490.2975.2888.4951.7291.8957.5392.2315.6097.45
KNN10043.6393.1527.5295.9458.7490.0276.1888.3851.5291.8761.6492.0118.9097.18
Pre-train on IIT-CDIP (no fine-tune)
-KNN1097.0442.3593.9750.1797.4152.6898.0143.1996.6147.1012.3397.473.1098.38
KNN2097.1641.9994.0149.9697.8152.0198.0942.7396.7746.6715.0796.953.0098.31
KNN5096.9641.6294.3449.5698.0051.2098.0542.2496.8446.1621.9296.082.7098.18
KNN10097.0041.4894.9049.3198.1250.6598.1342.0397.0445.8736.9995.292.3098.27
" + }, + { + "type": "table_caption", + "bbox": [ + 0.114, + 0.639, + 0.883, + 0.667 + ], + "angle": 0, + "content": "Table 8: OOD detection performance for document classification. Longformer\\(_{4096}\\) denotes the original model adopted from the Huggingface model hub. Longformer\\(_{4096}\\) (+) denotes the additional pre-training on IIT-CDIP." + }, + { + "type": "table", + "bbox": [ + 0.12, + 0.671, + 0.879, + 0.903 + ], + "angle": 0, + "content": "
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
Longformer4096Fine-tune on RVL-CDIP (ID)
90.71MSP95.0064.3295.6262.1795.8960.5393.9566.8995.1263.4888.3777.5098.6054.72
MaxLogit97.1272.8497.0775.2298.2470.3995.8277.5797.0674.0090.7086.6299.6068.10
Energy97.4872.8297.3575.2198.3670.3796.5977.5697.4473.9991.8686.6399.8068.08
KNN1058.4588.2165.6586.8867.8083.9956.7889.5362.1787.1527.9196.0182.1086.31
KNN2058.9788.0465.5786.6068.1283.8057.3589.3462.5086.9429.0795.8282.6085.93
KNN5060.2587.6466.5786.2568.9183.4158.8188.9663.6486.5630.2395.4682.7085.27
KNN10061.9787.1968.1485.8170.1582.9560.4788.6065.1886.1434.8895.0482.8084.75
No fine-tune
-KNN1098.0455.4597.6359.9798.7651.7598.1353.1698.1455.0870.9388.69100.0064.97
KNN2098.1255.1997.6759.6498.8051.2798.1752.7198.1954.7070.9388.51100.0064.08
KNN5098.0054.8297.6359.1398.8050.5798.3052.0798.1854.1573.2688.29100.0062.82
KNN10097.9254.4897.6758.6298.8450.0098.3451.6298.1953.6874.4288.14100.0061.70
Longformer4096 (+)Pre-train on IIT-CDIP→fine-tune on RVL-CDIP (ID)
91.13MSP95.2064.0895.6261.3896.0559.4794.4863.1395.3462.0290.7067.2698.0055.52
MaxLogit96.9675.4196.5476.0397.8970.1596.7174.5697.0274.04100.0078.6599.7072.88
Energy97.2875.4096.5476.0398.2870.1497.1674.5597.3274.03100.0078.5999.7072.86
KNN1058.7389.2566.2187.5772.0383.7663.6888.7265.1687.3248.8494.7886.4087.84
KNN2058.6189.1865.9787.4571.6783.6963.3988.6164.9187.2348.8494.6285.3087.70
KNN5061.1788.9666.9787.2972.8383.4765.8388.3366.7087.0155.8194.2585.2087.39
KNN10061.7388.7966.9387.1173.3083.2466.1588.1567.0386.8255.8194.0084.7087.21
Pre-train on IIT-CDIP (no fine-tune)
-KNN1095.4861.4098.0753.6697.7355.5598.6648.7097.4954.8381.4091.1297.4046.27
KNN2095.5660.9297.9552.9597.4954.9798.5048.2197.3854.2684.8890.6297.5045.55
KNN5095.6059.9497.9551.7797.4153.9798.6247.2997.4053.2487.2189.9598.2044.18
KNN10095.6059.0497.9950.7497.2152.9998.5846.5197.3452.3288.3789.5298.5043.09
" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.929, + 0.522, + 0.953 + ], + "angle": 0, + "content": "4996 12" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.138, + 0.32, + 0.856, + 0.334 + ], + "angle": 0, + "content": "Table 9: OOD detection performance for document classification. All models are pre-trained on ImageNet." + }, + { + "type": "table", + "bbox": [ + 0.12, + 0.344, + 0.879, + 0.681 + ], + "angle": 0, + "content": "
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
ResNet-50Pre-train on ImageNet→ fine-tune on RVL-CDIP (ID)
91.12MSP64.4987.8755.8990.9466.6087.3177.8880.8766.2286.7551.1692.7663.1090.36
MaxLogit64.8988.5947.9792.8165.4087.5277.5681.8763.9687.7041.8694.6254.0093.29
Energy67.0988.3047.8192.8666.6887.2478.5381.7565.0387.5439.5394.7348.5093.68
KNN1073.3886.8267.9887.4671.3187.8492.9077.7476.3984.966.9899.125.2098.98
KNN2074.9086.4166.2987.7973.8287.2193.9576.5177.2484.486.9898.965.5098.85
KNN5076.6686.0466.4188.4878.2986.3995.5074.7679.2283.925.8198.685.9098.70
KNN10077.5485.6165.4188.9982.1685.4396.2373.3780.3383.356.9898.346.3098.51
Pre-train on ImageNet
-KNN1096.9651.1494.6251.7598.7653.8499.5937.6097.4848.5883.5685.0020.8097.00
KNN2096.9650.3794.3451.5498.9252.9899.5936.6097.4547.8783.5684.4922.7096.71
KNN5096.9249.2994.2951.3099.0051.8499.5935.1597.4546.9083.5684.0326.7096.21
KNN10097.1248.6094.5451.2599.1651.1199.5534.3697.5946.3382.1983.3129.4095.67
Swin10Pre-train on ImageNet→ fine-tune on RVL-CDIP (ID)
95.74MSP47.6488.0949.9088.1158.2283.1450.2888.9051.5187.0649.3291.3136.5093.63
MaxLogit42.3993.1142.4793.4558.6288.7945.9093.1847.3492.1350.6892.5032.2095.65
Energy43.1593.0542.9593.4059.0288.7046.7193.0747.9692.0652.0592.3833.6095.49
KNN1049.4492.8246.7392.8742.9092.5772.6988.4552.9491.6816.4496.736.1098.30
KNN2048.8492.9543.2793.5144.5392.3272.2888.3552.2391.7817.8196.527.4098.10
KNN5046.4493.2639.2594.5747.4192.0973.3487.8751.6191.9526.0396.158.6097.80
KNN10043.7693.4235.0395.2950.0891.7275.7787.4251.1691.9628.7795.9411.3097.55
Pre-train on ImageNet
-KNN1098.5652.7595.0655.1499.3658.8599.8041.8698.2052.1565.7593.262.1099.35
KNN2098.4451.8695.1854.7299.3257.8899.8040.6698.1851.2868.4992.522.6099.22
KNN5098.5250.6995.3854.1399.1656.6199.7639.0198.2050.1178.0891.143.4098.99
KNN10098.7249.9695.6653.8099.1655.8499.7638.1698.3249.4479.4589.894.3098.77
VITBasePre-train on ImageNet→ fine-tune on RVL-CDIP (ID)
94.38MSP56.8189.1452.1991.8067.4884.2659.9088.7759.1088.4947.6792.9859.5091.99
MaxLogit50.7691.3744.6093.7568.0486.9455.1591.8154.6490.9740.7094.2052.4093.16
Energy51.1691.3144.5293.7569.4386.8156.0991.7755.3090.9138.3794.1153.2093.11
KNN1062.5790.1257.7390.9153.6790.3684.5086.1964.6289.4012.7997.9613.0097.92
KNN2063.0190.2456.0191.5155.0390.0284.3886.0164.6189.4415.1297.7614.9097.67
KNN5061.9790.6253.2392.6258.2689.5784.2585.6464.4389.6116.2897.3819.8097.24
KNN10060.2990.8549.7093.5360.3889.0784.0185.4363.6089.7216.2897.0523.6096.82
Pre-train on ImageNet
-KNN1098.4852.1595.0256.9499.4853.7799.4738.9098.1150.4493.1590.2720.4097.13
KNN2098.4851.4195.0656.6199.4452.9299.5537.6198.1349.6494.5289.4422.6096.80
KNN5098.3250.4394.8656.2199.4051.8699.5935.8298.0448.5897.2688.2326.6096.25
KNN10098.4049.7695.0655.9099.4451.1599.5934.5998.1247.8598.6387.2431.2095.76
" + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.929, + 0.52, + 0.941 + ], + "angle": 0, + "content": "4997" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.941, + 0.509, + 0.953 + ], + "angle": 0, + "content": "13" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.114, + 0.215, + 0.882, + 0.243 + ], + "angle": 0, + "content": "Table 10: OOD detection performance for document classification (select OOD categories achieve the best performance across most of the models with different modalities)." + }, + { + "type": "table", + "bbox": [ + 0.12, + 0.247, + 0.88, + 0.786 + ], + "angle": 0, + "content": "
RoBERTaBaseID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
EmailResumeFile folderSci. publicationAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
Pre-train on pure-text data→ fine-tune on RVL-CDIP (ID)
86.13MSP96.2260.3890.6771.7293.8259.4793.8665.5193.6464.2791.8670.5793.0069.99
MaxLogit99.2166.5795.8073.6695.4766.8197.0965.6396.8968.1794.1977.1794.6074.69
Energy99.6066.5396.6473.5795.1466.8297.2165.3597.1568.0794.1977.4495.6074.90
KNN1083.7082.7769.0284.2888.3274.0686.1174.0281.7978.7843.0292.7472.0088.87
KNN2084.5082.3569.0684.2188.2073.7186.7274.0282.1278.5748.8492.3873.8088.31
KNN5084.9881.5768.8684.0688.0873.0187.0873.9482.2578.1454.6591.9275.4087.44
KNN10086.2580.8870.2683.8088.2872.4087.4473.8983.0677.7458.1491.5078.2086.68
Pre-train on pure-text data
-KNN1086.0975.6395.1258.6297.7159.7598.9550.5494.4761.1410.4798.4689.8063.01
KNN2086.2974.9295.0058.1497.7158.8899.0349.4994.5160.3612.7998.3590.8060.59
KNN5087.3273.5594.6457.5397.8357.5699.1548.1194.7359.1912.7998.1193.3056.61
KNN10089.2772.4894.2857.1297.9956.5299.1147.3795.1658.3711.6397.8994.3052.98
Pre-train on pure-text data→ fine-tune on RVL-CDIP (ID)
88.34MSP96.9060.5596.2059.1496.3155.7297.8255.1296.8157.6395.3580.4499.6052.82
MaxLogit98.9768.9797.6065.6495.6763.4298.6362.8797.7265.2397.6788.4299.7071.54
Energy99.4468.9697.9265.6395.8363.4298.7162.8397.9865.2197.6788.4699.9071.55
KNN1068.2888.7269.6283.3678.1785.0890.8874.9876.7483.0416.2896.9081.6086.94
KNN2068.0488.6170.1083.2277.5384.9290.7574.9576.6082.9216.2896.8481.8086.49
KNN5069.2888.2970.9882.9278.2984.4690.9674.8277.3882.6219.7796.5983.4085.71
KNN10069.2888.1571.3482.6978.4984.2190.4374.8677.3982.4822.0996.3883.9085.17
Pre-train on pure-text data
-KNN1097.4247.7795.7250.0997.6746.5899.5238.6197.5845.7645.3593.92100.0063.03
KNN2097.4646.9195.6049.8097.7146.0299.5238.2197.5745.2446.5193.77100.0061.92
KNN5097.5845.6895.5649.4597.7545.1999.5237.7297.6044.5150.0093.60100.0060.35
KNN10097.6644.7895.6049.1797.8744.6399.5637.5797.6744.0451.1693.48100.0058.89
Pre-train on ImageNet→ fine-tune on RVL-CDIP (ID)
85.25MSP60.5387.2669.5387.0027.8695.1394.0575.7962.9986.3091.7874.4027.8095.47
MaxLogit59.9889.2772.6188.0230.0495.4193.3975.3864.0087.0280.8279.8930.0095.29
Energy63.7189.1475.6487.5545.7194.1592.7775.0269.4686.4678.0881.0762.2093.44
KNN1072.4685.6885.6985.3068.6276.0196.1555.3580.7375.5936.9994.562.2099.37
KNN2076.1584.5588.6584.2266.1380.6796.5456.3181.8776.4438.3693.812.7099.28
KNN5080.3782.6192.0082.4960.9886.7796.9359.0682.5777.7347.9592.423.8099.11
KNN10084.7080.5495.1580.6451.2991.7897.1661.1982.0878.5450.6891.014.7098.91
Pre-train on ImageNet
-KNN1099.7240.9499.6521.5252.4791.0398.3345.4087.5449.7284.9384.3820.4097.12
KNN2099.6841.1899.6520.6850.6191.6398.4144.6587.0949.5486.3083.9423.4096.87
KNN5099.6441.5899.6519.4846.9792.3698.3743.4986.1649.2384.9383.7026.9096.43
KNN10099.6442.1999.6518.9844.9192.8498.3342.8685.6349.2284.9383.1229.2095.98
Pre-train on ImageNet→ fine-tune on RVL-CDIP (ID)
91.25MSP70.2381.8767.6885.3143.9792.6883.7879.4066.4284.8286.3078.2354.1091.62
MaxLogit54.7387.0446.5192.3017.2596.5190.8674.1152.3487.4982.1983.2034.4094.82
Energy54.0587.1144.3892.4916.3896.6391.2973.5951.5387.4684.9383.0733.8094.82
KNN1056.0890.6648.8092.8438.3193.3191.0266.9158.5585.9327.4096.033.3098.84
KNN2054.6190.9549.9892.6827.5895.2491.4468.5455.9086.8526.0396.354.0098.76
KNN5055.2590.6852.1592.3715.7597.2891.2571.6253.6087.9928.7796.104.9098.59
KNN10056.2090.3154.7592.179.1498.0091.1375.1152.8088.9030.1495.776.5098.35
Pre-train on ImageNet
-KNN1099.8443.5599.7620.6447.9293.2098.9137.5586.6148.7458.9093.881.6099.32
KNN2099.8444.4799.8018.3641.3194.1499.0336.4585.0048.3672.6092.692.6099.00
KNN5099.8845.2699.8017.9239.9794.3999.0336.7184.6748.5779.4591.973.7098.81
Pre-train on ImageNet→ fine-tune on RVL-CDIP (ID)
89.97MSP61.2585.8466.5785.0440.4493.1085.8481.8363.5286.4573.9780.6660.3090.41
MaxLogit53.0290.3755.7788.8619.9196.2592.3879.6955.2788.7976.7185.1650.6093.12
Energy51.7990.4955.0789.0317.5396.5392.6979.2054.2788.8179.4585.0150.1093.20
KNN1054.1391.1852.8691.1858.4987.4692.8865.9864.5983.9542.4795.0711.0097.94
KNN2054.2191.1853.1790.9950.6189.3593.0467.5262.7684.7643.8494.9813.1097.62
KNN5054.5391.0553.3390.7941.9592.8293.0072.0660.7086.6842.4794.7417.3097.12
KNN10054.6590.8154.1290.5630.7991.9098.7247.1088.2452.1995.8989.3122.0096.58
Pre-train on ImageNet
-KNN1099.8046.4699.6826.5058.6590.6198.7246.4089.2152.4987.6791.3919.9097.25
KNN2099.8046.0299.6525.6957.3091.0198.7246.4688.8752.3090.4190.8721.7097.01
KNN5099.8045.4899.6124.7655.1691.5298.7646.6988.3352.1194.5289.9924.3096.62
KNN10099.8045.3399.6524.4354.8191.9098.7247.1088.2452.1995.8989.3128.8096.27
" + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.929, + 0.52, + 0.941 + ], + "angle": 0, + "content": "4998" + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.941, + 0.51, + 0.953 + ], + "angle": 0, + "content": "14" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.131, + 0.089, + 0.867, + 0.102 + ], + "angle": 0, + "content": "Table 11: OOD detection performance for document classification (randomly select four categories as OOD)." + }, + { + "type": "table", + "bbox": [ + 0.121, + 0.106, + 0.88, + 0.646 + ], + "angle": 0, + "content": "
RoBERTaBaseID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
LetterHandwrittenAdvertisementMemoAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
Pre-train on pure-text data→ fine-tune on RVL-CDIP (ID)
88.86MSP70.2279.2150.1487.2484.6467.8091.4257.9974.1073.0695.3559.7594.3055.12
MaxLogit66.0487.5139.6592.5386.4777.0391.6771.8470.9682.23100.0077.8996.8071.96
Energy66.2087.5738.1992.5987.3577.0391.6771.8970.8582.27100.0077.9296.8071.96
KNN1062.6280.1960.9870.9075.6280.2485.8469.2071.2675.1394.1981.9990.4082.48
KNN2063.1880.1060.0771.1775.9080.0385.7268.8871.2275.0494.1981.7591.2081.89
KNN5063.7880.0057.3071.7076.3479.6785.8868.3870.8274.9494.1981.4591.8081.09
KNN10064.7779.9854.3371.9477.3779.3286.0867.8070.6474.7694.1981.2091.9080.47
Pre-train on pure-text data
-KNN1085.5359.9098.6121.7996.2156.7297.6958.3994.5149.2012.7998.0184.5065.73
KNN2085.4559.2798.7321.1996.2155.6397.9057.0594.5748.2812.7997.9186.1063.57
KNN5086.8057.9498.7720.4596.8954.1298.3055.3595.1946.9613.9597.6089.3059.64
KNN10088.4756.7198.8119.9796.8152.8998.1853.9395.5745.8813.9597.3891.1055.17
Pre-train on pure-text data→ fine-tune on RVL-CDIP (ID)
92.08MSP65.9669.5850.3877.9381.5260.8990.2154.2372.0265.6682.5660.1495.0050.90
MaxLogit62.1987.3544.6489.7979.9778.8488.3968.0868.8081.0280.2384.1994.3077.36
Energy61.2787.3543.6189.8179.1378.8588.1568.0868.0481.0280.2384.1994.3077.37
KNN1058.6579.5450.7771.8166.5683.4880.8775.1964.2177.5158.1492.7890.0077.76
KNN2057.8179.4351.4071.7267.0083.3581.1574.8664.3477.3458.1492.5789.7077.12
KNN5058.7779.3051.6071.6766.7283.1581.3174.3664.6077.1261.6392.2489.8076.17
KNN10061.3979.1652.7571.6167.8482.9381.7673.9165.9476.9062.7991.9989.8075.29
Pre-train on pure-text data
-KNN1099.4047.83100.0027.7598.2847.0393.2060.4097.7245.7546.5193.85100.0063.64
KNN2099.4447.33100.0027.4898.3246.4993.2460.2297.7545.3848.8493.70100.0062.79
KNN5099.4446.33100.0027.2398.4045.8593.4160.0597.8144.8651.1693.51100.0061.55
KNN10099.4445.67100.0027.3198.4445.2393.5359.9097.8544.5352.3393.40100.0060.31
Pre-train on ImageNet→ fine-tune on RVL-CDIP (ID)
87.80MSP70.5885.3555.2989.8864.2986.5471.1585.5865.3386.8454.7991.7077.2084.67
MaxLogit64.2587.4653.5990.7249.7090.6064.4588.7158.0089.3736.9995.1378.9086.86
Energy62.6687.6558.3390.3346.0091.2663.5689.0557.6489.5732.8895.6983.0087.05
KNN1090.9979.3756.3690.6472.4186.2089.1781.7477.2384.492.7499.3239.7093.70
KNN2092.1778.0047.4792.6168.2788.4290.8580.2374.6984.822.7499.2543.8093.08
KNN5094.3275.9628.4494.4965.6589.2792.7877.9170.3084.411.3798.9749.7092.09
KNN10095.5874.0227.2195.0760.4489.7894.2275.6369.3683.622.7498.6753.8091.10
Pre-train on ImageNet
-KNN1098.4642.2177.2981.4127.8791.1699.0843.4775.6864.5680.8289.9812.3098.17
KNN2098.6641.0076.7881.7029.2292.2799.0842.2975.9464.3283.5689.3014.1097.97
KNN5098.5839.5376.5881.8131.0192.0599.1240.8076.3263.5583.5688.5116.3097.61
KNN10098.6238.6277.1381.4932.6491.8499.1239.8676.8862.9583.5687.8019.5097.23
Pre-train on ImageNet→ fine-tune on RVL-CDIP (ID)
92.42MSP63.9687.0365.2188.1573.5679.7261.4088.4666.0385.8484.9374.3449.6092.49
MaxLogit56.4990.2275.3687.0072.6484.2644.2293.0162.1888.6272.6084.1629.1095.70
Energy57.4390.1177.0186.6073.4484.1743.7893.0662.9288.4873.9784.2528.0095.69
KNN1060.2790.1266.9090.7649.6689.1547.6792.6756.1290.6842.4794.287.2098.56
KNN2061.3290.0161.3791.3148.8390.3349.0092.5255.1391.0430.1495.568.8098.33
KNN5062.2289.7856.4491.5650.3489.5548.5292.3054.3890.8026.0395.7211.8097.97
KNN10062.6289.6054.9891.8550.7088.9347.6392.1853.9890.6430.1495.5413.9097.66
Pre-train on ImageNet
-KNN1099.1545.5786.0279.4432.4590.9899.5246.2079.2865.5524.6696.240.4099.78
KNN2099.1944.1186.8980.3533.4892.1999.6044.7979.7965.3627.4095.620.5099.73
KNN5099.2342.3987.9981.6636.7891.5999.6043.0780.9064.6843.8494.570.8099.63
KNN10099.1941.4689.0282.6340.6091.0599.6042.1482.1064.3252.0593.491.2099.53
Pre-train on ImageNet→ fine-tune on RVL-CDIP (ID)
91.03MSP69.6886.8169.6787.8872.2580.7869.3886.6170.2485.5267.1285.9758.5091.47
MaxLogit63.3589.2068.4088.5869.5884.3861.0889.9465.6088.0257.5389.4148.4093.04
Energy62.2289.2170.3488.4370.2684.3760.7590.0365.8988.0158.9089.4749.7093.03
KNN1068.1088.9954.9092.3053.4488.0558.1991.3458.6690.1738.3695.0222.9096.71
KNN2067.6188.9549.0192.8551.5389.2558.5991.1656.6890.5541.1094.4725.4096.35
KNN5067.2988.9142.5493.1553.9688.4358.7590.8855.6490.3442.4793.6029.9095.78
KNN10066.1988.9043.8093.1955.7187.7359.1190.6456.2090.1245.2192.8634.9095.27
Pre-train on ImageNet
-KNN1098.9041.9890.9677.1534.8790.6999.4041.2181.0362.7654.7994.2710.8098.47
KNN2098.9440.5491.6777.2036.8291.7199.4439.8581.7262.3264.3893.5712.7098.25
KNN5099.0738.7592.6176.9940.0091.1799.5238.1482.8061.2675.3492.4715.9097.87
KNN10099.1137.4393.2576.5643.3890.6899.5636.9383.8260.4082.1991.5218.9097.49
" + }, + { + "type": "table_caption", + "bbox": [ + 0.117, + 0.664, + 0.882, + 0.706 + ], + "angle": 0, + "content": "Table 12: OOD detection performance for document classification. All models are pre-trained on IIT-CDIP. For LayoutLM models, we adopt the checkpoints from the Huggingface model hub. For UDoc, we pre-train the model on our side. All models are fine-tuned on RVL-CDIP ID data." + }, + { + "type": "table", + "bbox": [ + 0.117, + 0.717, + 0.881, + 0.911 + ], + "angle": 0, + "content": "
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95
LayoutLMv1Base97.28MSP47.4874.9159.7468.7266.4065.3658.8969.1258.1369.5343.0277.1572.40
MaxLogit27.0692.3837.9791.5245.6588.3635.9291.2236.6590.8724.4294.9657.30
Energy27.0692.4037.9791.5445.6588.3635.9291.2336.6590.8824.4294.9757.30
KNN1020.8296.0935.3293.8240.0691.3428.6594.8031.2194.0117.4497.0049.80
KNN2021.7495.9336.2093.7741.4291.1230.4494.6132.4593.8617.4496.8251.70
KNN5024.3495.5638.2593.4143.9390.6933.6494.1935.0493.4623.2696.4453.80
KNN10025.5495.3039.1393.2045.1790.3534.7893.9936.1693.2125.5896.2454.70
LayoutLMv397.81MSP56.1670.8163.4467.1767.1665.3058.6069.5861.3468.2252.3372.7043.60
MaxLogit30.7089.1740.4288.1842.9884.0933.1288.2236.8087.4219.7794.5011.70
Energy30.7089.1840.4288.1842.9884.1033.1288.2336.8087.4219.7794.5111.70
KNN1021.7495.0335.6893.3832.8891.8618.5196.2627.2094.1311.6397.588.90
KNN2022.7494.9036.5693.2033.9691.6619.6496.1528.2293.9812.7997.4410.00
KNN5024.6294.6238.3792.7135.8391.3821.6395.9330.1193.6613.9597.2010.70
KNN10025.2294.3839.2992.3236.5591.0922.4895.7930.8893.4016.2897.0411.80
UDoc97.36MSP66.1365.7369.4364.0971.0363.2871.0663.2569.4164.0940.7078.4739.80
MaxLogit45.9682.1247.2186.3949.6483.1649.5983.1348.1083.702.3398.574.00
Energy45.9682.1247.2186.4049.6483.1649.5983.1348.1083.702.3398.604.00
KNN1030.0294.4741.2288.6641.9090.9936.6593.4837.4591.901.1699.135.50
KNN2031.1094.3641.9888.4442.1090.9038.0393.3538.3091.761.1699.046.90
KNN5033.9594.0743.3587.8944.0190.7240.7193.0640.5191.431.1698.847.40
KNN10034.8393.8443.7587.5145.0190.6141.9692.9041.3991.221.1698.728.30
" + }, + { + "type": "footer", + "bbox": [ + 0.482, + 0.929, + 0.52, + 0.94 + ], + "angle": 0, + "content": "4999" + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.941, + 0.509, + 0.953 + ], + "angle": 0, + "content": "15" + } + ] +] \ No newline at end of file diff --git a/2023/A Critical Analysis of Document Out-of-Distribution Detection/dc0b7121-5749-4d5e-b65f-34d4dd4df565_origin.pdf b/2023/A Critical Analysis of Document Out-of-Distribution Detection/dc0b7121-5749-4d5e-b65f-34d4dd4df565_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..575fafe99339f914c1bd91e76e97b5f7121a4968 --- /dev/null +++ b/2023/A Critical Analysis of Document Out-of-Distribution Detection/dc0b7121-5749-4d5e-b65f-34d4dd4df565_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:39813c1d6b0dc828bfba489a90a9b03c317e910fec3a20920c48ba8a8b812566 +size 12262040 diff --git a/2023/A Critical Analysis of Document Out-of-Distribution Detection/full.md b/2023/A Critical Analysis of Document Out-of-Distribution Detection/full.md new file mode 100644 index 0000000000000000000000000000000000000000..1130e1222e8dcca2aa313176415ab7f1df0472cf --- /dev/null +++ b/2023/A Critical Analysis of Document Out-of-Distribution Detection/full.md @@ -0,0 +1,500 @@ +# A Critical Analysis of Document Out-of-Distribution Detection + +Jiuxiang Gu $^{1*}$ Yifei Ming $^{2*†}$ Yi Zhou $^{3}$ Jason Kuen $^{1}$ +Vlad I. Morariu $^{1}$ Handong Zhao $^{1}$ Ruiyi Zhang $^{1}$ Nikolaos Barmpalios $^{1}$ +Anqi Liu $^{3}$ Yixuan Li $^{2}$ Tong Sun $^{1}$ Ani Nenkova $^{1}$ + +\(^{1}\)Adobe Research \(^{2}\)University of Wisconsin-Madison \(^{3}\)Johns Hopkins University \(^{1}\{jigu, kuen, morariu, hazhao, barmpali, ruizhang, tsun, nenkova\} @adobe.com \(^{2}\{alvinming, sharonli\} @cs.wisc.edu\) \(^{3}yzhou188@jhu.edu\) \(^{3}aliu@cs.jhu.edu\) + +# Abstract + +Large-scale pre-training is widely used in recent document understanding tasks. 
During deployment, models are expected to trigger a conservative fallback policy when they encounter out-of-distribution (OOD) samples, which highlights the importance of OOD detection.
Right: During inference time, an OOD score can be derived based on logits $g(x)$ or feature embeddings $z := h(x)$ . A document input $x$ is identified as OOD if its OOD score is below some threshold $\gamma$ . + +changing data distributions in machine-assisted medical document analysis systems (Velavan and Meyer, 2020). This motivates the need for reliable document understanding models against out-of-distribution (OOD) inputs. + +The goal of OOD detection is to categorize indistribution (ID) samples into one of the known categories and detect inputs that do not belong to any known classes at test time (Bendale and Boult, 2016). A plethora of OOD detection methods has been proposed for single-modal (image or text) inputs (Ge et al., 2017; Nalisnick et al., 2019; Oza and Patel, 2019; Tack et al., 2020; Hsu et al., 2020; Arora et al., 2021; Zhou et al., 2021; Xiao et al., 2020; Xu et al., 2021a; Li et al., 2021b; Shen et al., 2021; Jin et al., 2022; Zhou et al., 2022; Ming et al., 2022b,c; Podolskiy et al., 2021; Ren et al., 2023). Recent works (Fort et al., 2021; Esmaeilpour et al., 2022; Ming et al., 2022a; Ming and Li, 2023; Bitterwolf et al., 2023) also demonstrate promising OOD detection performance based on large-scale models pre-trained on text-image pairs, as pre-training enables models to learn powerful and transferable feature representations (Radford et al., 2021). However, it remains largely unexplored if existing findings in the OOD detection literature for images or texts can be naturally extended to the document + +domain. + +Multiple unique challenges exist for document OOD detection. Unlike natural images, texts, or image-text pairs, no captions can describe a document and images in documents rarely contain natural objects. Moreover, the spatial relationship of text blocks further differentiates multimodal learning in documents from multimodal learning in the vision-language domain (Lu et al., 2019; Li et al., 2020). 
In addition, while recent pre-training methods have demonstrated remarkable performance in downstream document understanding tasks (Xu et al., 2020, 2021b; Li et al., 2021a; Gu et al., 2022; Hong et al., 2022; Huang et al., 2022; Li et al., 2022; Wang et al., 2022a), existing pre-training datasets for documents are limited and lack diversity. This is in sharp contrast to common pretraining datasets for natural images. It remains underexplored whether existing OOD detection methods are reliable in the document domain and how pre-training impacts OOD reliability. + +In this work, we first present a comprehensive study to better understand OOD detection in the document domain through the following questions: (1) What is the role of document pre-training? How do pre-training datasets and tasks affect OOD detection performance? (2) Are existing OOD detection methods developed for natural images and texts transferrable to documents? (3) How does modality (textual, visual, and especially spatial information) affect OOD performance? In particular, we find that spatial information is critical for improving OOD reliability. Moreover, we propose a new spatial-aware adapter, a small learned module that can be inserted within a pre-trained language model such as RoBERTa (Liu et al., 2019). Our module is computationally efficient and significantly improves both ID classification and OOD detection performance (Sec. 5.2). Our contributions are summarized as follows: + +- We provide an extensive and in-depth study to investigate the impacts of pre-training, fine-tuning, model-modality, and OOD scoring functions on a broad spectrum of document OOD detection tasks. Our codebase will be open-sourced to facilitate future research. +- We present unique insights on document OOD detection. 
For example, we observe that distance-based OOD scores are consistently advantageous over logit-based scores, which is underexplored + +in the recent OOD detection literature on vision-language pre-trained models. + +- We further propose a spatial-aware adapter module for transformer-based language models, facilitating easy adaptation of pre-trained language models to the document domain. Extensive experiments confirm the effectiveness of our module across diverse types of OOD data. + +# 2 Preliminaries and Related Works + +# 2.1 Document Models and Pre-Training + +Large-scale pre-trained models gradually gain popularity in the document domain due to their success in producing generic representations from large-scale unlabeled corpora in vision and natural language processing (NLP) tasks (Devlin et al., 2018; Lu et al., 2019; Su et al., 2019; Schiappa et al., 2022). As documents contain both visual and textual information distributed spatially in semantic regions, document-specific models and pre-training objectives are often necessary, which are distinct from vision or language domains. + +We summarize common model structures for document pre-training in Fig. 2a. Specifically, LayoutLM (Xu et al., 2020) takes a sequence of Optical Character Recognition (OCR) (Smith, 2007) words and word bounding boxes as inputs. It extends BERT to learn contextualized word representations for document images through multitask learning. LayoutLMv2 (Xu et al., 2021b) improves on the prior work with new pre-training tasks to model the interaction among texts, layouts, and images. DocFormer (Appalaraju et al., 2021) adopts a CNN model to extract image grid features, fusing the spatial information as an inductive bias for the self-attention module. LayoutLMv3 (Huang et al., 2022) further enhances visual and spatial characteristics with masked image modeling and word-patch alignment tasks. 
Another line of work focuses on various granularities of documents, such as region-level text/image blocks. Examples of such models include SelfDoc (Li et al., 2021a), UDoc (Gu et al., 2021), and MGDoc (Wang et al., 2022b), which are pre-trained with a cross-modal encoder to capture the relationship between visual and textual features. These models incorporate spatial information by fusing position embeddings at the output layer of their encoders, instead of the input layer. Additionally, OCR-free models (Kim et al., 2022; Tang et al., 2023) tackle document understanding as a sequence generation problem, unifying multiple tasks through an image-to-sequence generation network. + +While these pre-trained models demonstrate promising performance on downstream applications, their robustness to different types of OOD data, the influence of pre-training and fine-tuning, and the value of different modalities (e.g., spatial, textual, and visual) for document OOD detection remain largely unexplored. + +# 2.2 Out-of-Distribution Detection + +OOD detection has been extensively studied for open-world multi-class classification with natural image and text inputs, where the goal is to derive an OOD score that separates OOD from ID samples. A plethora of methods have been proposed for deep neural networks, where the OOD scoring function is typically derived from logits (without softmax scaling) (Hendrycks et al., 2022), softmax outputs (Liang et al., 2018; Hsu et al., 2020; Huang and Li, 2021; Sun et al., 2021), gradients (Huang et al., 2021), or feature embeddings (Tack et al., 2020; Fort et al., 2021; Ming et al., 2023). Despite their impressive performance on natural images and texts, it remains underexplored whether these results transfer to the document domain. A recent work (Larson et al., 2022) studied OOD detection for documents but only explored a limited number of models and OOD detection methods.
The impacts of pre-training, fine-tuning, and spatial information remain unknown. In this work, we aim to provide a comprehensive and finer-grained analysis to shed light on the key factors for OOD robustness in the document domain. + +Notations. Following prior works on OOD detection with large-scale pre-trained models (Ming et al., 2022a; Ming and Li, 2023), the task of OOD detection is defined with respect to the downstream dataset, instead of the pre-training data, which is often hard to characterize. In document classification, we use $\mathcal{X}^{\mathrm{in}}$ and $\mathcal{Y}^{\mathrm{in}} = \{1,\dots ,K\}$ to denote the input and label space, respectively. Let $\mathcal{D}^{\mathrm{in}} = \{(x_i^{\mathrm{in}},y_i^{\mathrm{in}})\}_{i = 1}^N$ be the ID dataset, where $x\in \mathcal{X}^{\mathrm{in}}$ and $y^{\mathrm{in}}\in \mathcal{Y}^{\mathrm{in}}$ . Let $\mathcal{D}^{\mathrm{out}} = \{(x_i^{\mathrm{out}},y_i^{\mathrm{out}})\}_{i = 1}^M$ denote an OOD test set, where $y^{\mathrm{out}}\in \mathcal{Y}^{\mathrm{out}}$ and $\mathcal{Y}^{\mathrm{out}}\cap \mathcal{Y}^{\mathrm{in}} = \emptyset$ . We express the neural network model $f\coloneqq g\circ h$ as a composition of a feature extractor $h:\mathcal{X}\to \mathbb{R}^{d}$ and a classifier $g:\mathbb{R}^{d}\to \mathbb{R}^{K}$ , which maps the feature embedding of an input to $K$ real-valued numbers known as logits. During inference time, given an input $\pmb{x}$ , OOD detection can be formulated as: + +$$
+G_{\gamma}(\boldsymbol{x}; h, g) = \begin{cases} \mathrm{ID} & S(\boldsymbol{x}; h, g) \geq \gamma \\ \mathrm{OOD} & S(\boldsymbol{x}; h, g) < \gamma \end{cases},
+$$ + +where $S(\cdot)$ is a scoring function that measures OOD uncertainty. In practice, the threshold $\gamma$ is often chosen so that a high fraction of ID data (e.g., 95%) scores above it. + +OOD detection scores.
We focus on two major categories of computationally efficient OOD detection methods1: logit-based methods derive OOD scores from the logit layer of the model, while distance-based methods directly leverage feature embeddings, as shown in Fig. 1. We describe a few popular methods for each category as follows. + +- Logit-based: Maximum Softmax Probability (MSP) score (Hendrycks and Gimpel, 2017) $S_{\mathrm{MSP}} = \max_{i\in [K]}e^{f_i(\boldsymbol{x})} / \sum_{j = 1}^K e^{f_j(\boldsymbol{x})}$ naturally arises as a classic baseline as models often output lower softmax probabilities for OOD data; Energy score (Liu et al., 2020): $S_{\mathrm{Energy}} = \log \sum_{i\in [K]}e^{f_i(\boldsymbol{x})}$ utilizes the Helmholtz free energy of the data and theoretically aligns with the logarithm of the ID density; the simple MaxLogit score (Hendrycks et al., 2022): $S_{\mathrm{Maxlogit}} = \max_{i\in [K]}f_i(\boldsymbol{x})$ has demonstrated promising performance on large-scale natural image datasets. We select the above scores due to their simplicity and computational efficiency. In addition, recent studies demonstrate that such simple scores are particularly effective with large-scale pre-trained models in vision (Fort et al., 2021) and vision-language domains (Ming et al., 2022a; Bitterwolf et al., 2023). We complement previous studies and investigate their effectiveness for documents. + +- Distance-based: Distance-based methods directly leverage feature embeddings $\mathbf{z} = h(\mathbf{x})$ based on the idea that OOD inputs are relatively far away from ID clusters in the feature space, compared to ID inputs. Distance-based methods can be characterized as parametric and non-parametric. Parametric methods such as Mahalanobis score (Lee et al., 2018; Sehwag et al., 2021) assume ID embeddings follow class-conditional Gaussian distributions and use the Mahalanobis distance as the distance metric. 
On the other hand, non-parametric methods such as KNN+ (Sun et al., 2022) use cosine similarity as the distance metric. + +![](images/fa69a4a5a040426c3b4a5c6ea1a50dd5ffb253621aa2135ed5d8ea12ecf35d03.jpg) +(a) Illustration of common structures for document pretraining and classification. + +![](images/41c1a7bbaa1d47b5a7729e95f3246eef71d8d4b5ef2b773028c6fe2c610cc6a5.jpg) +(b) A detailed comparison of per-category accuracy on the RVL-CDIP test set. +Figure 2: (Left) Illustration of models for document pre-training and classification, with our proposed spatial-aware models in green blocks. Modality information is also shown atop each architecture. (Right) Evaluating fine-tuning performance for document classification of pre-trained models. Models are grouped into several categories (from left to right): language-only, vision-only, and multi-modal. For comparison, the performance of corresponding models in other groups is shown in gray. The average accuracy for each model is indicated in the parenthesis. + +Evaluation metrics. To evaluate OOD detection performance, we adopt the following commonly used metrics: the Area Under the Receiver Operating Characteristic (AUROC), False Positive Rate at $95\%$ Recall (FPR95), and the multi-class classification accuracy (ID Acc). + +# 3 Experimental Setup + +Models. Fig. 2a summarizes common structures for document pre-training and classification models2. While documents typically come in the form of images (Harley et al., 2015), an OCR system can be used to extract words and their coordinates from the input image. Therefore, models can use single-modal or multi-modal information. We categorize these models according to the input modalities into the following groups: (1) models using only visual features, (2) models using solely textual features, (3) models incorporating both visual and textual features, and (4) models integrating additional spatial (especially layout) information. Further details can be found in Appendix A. 
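The scoring functions and threshold rule reviewed in Sec. 2.2 can be summarized in a short, self-contained sketch. The following NumPy code is our illustration (not the paper's released codebase); `logits`, `z`, and `id_bank` are placeholder names for a model's logit outputs, test embeddings, and stored ID training embeddings:

```python
import numpy as np

def msp_score(logits):
    # Maximum Softmax Probability (Hendrycks and Gimpel, 2017)
    s = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(s) / np.exp(s).sum(axis=-1, keepdims=True)
    return p.max(axis=-1)

def energy_score(logits):
    # S_Energy = log sum_i exp(f_i(x))  (Liu et al., 2020)
    m = logits.max(axis=-1)
    return m + np.log(np.exp(logits - m[:, None]).sum(axis=-1))

def maxlogit_score(logits):
    # S_MaxLogit = max_i f_i(x)  (Hendrycks et al., 2022)
    return logits.max(axis=-1)

def knn_score(z, id_bank, k=10):
    # KNN+-style score (Sun et al., 2022): negative cosine distance to the
    # k-th nearest ID embedding; higher means more ID-like.
    z = z / np.linalg.norm(z, axis=-1, keepdims=True)
    bank = id_bank / np.linalg.norm(id_bank, axis=-1, keepdims=True)
    sims = z @ bank.T                    # cosine similarities to the ID bank
    kth = np.sort(sims, axis=-1)[:, -k]  # k-th largest similarity
    return kth - 1.0                     # -(1 - cos) = negative cosine distance

def mahalanobis_score(z, class_means, precision):
    # Parametric score (Lee et al., 2018): negative Mahalanobis distance
    # to the closest class-conditional Gaussian mean.
    d = z[:, None, :] - class_means[None, :, :]
    dist = np.einsum('nkd,de,nke->nk', d, precision, d)
    return -dist.min(axis=-1)

def ood_threshold(id_scores, tpr=0.95):
    # Choose gamma so that `tpr` of ID data scores above it (e.g., 95%).
    return np.quantile(id_scores, 1.0 - tpr)
```

An input is then flagged as OOD whenever its score falls below the threshold returned by `ood_threshold` on held-out ID data, matching the decision rule $G_{\gamma}$ in Sec. 2.2.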
+ +- Vision-only: Document classification can be viewed as a standard image classification problem. We consider ResNet-50 (He et al., 2016) and ViT (Fort et al., 2021) as exemplar document image classification models. We adopt two common pre-training settings: (1) only pre-trained on ImageNet (Deng et al., 2009) and (2) further pre-trained on IIT-CDIP (Lewis et al., 2006) with masked image modeling $(\mathrm{MIM})^3$ . After pretraining, we append a classifier for fine-tuning. + +- Text-only: Alternatively, we can view document classification as text classification since documents often contain text blocks. To this end, we use RoBERTa (Liu et al., 2019) and Longformer (Beltagy et al., 2020) as the backbones. RoBERTa can handle up to 512 input tokens while Longformer can handle up to 4,096 input tokens. We pre-train the language models with masked language modeling (MLM) on IIT-CDIP extracted text corpus. +- Text+Layout: Layout information plays a crucial role in the document domain, as shown in Fig. 3. To investigate the effect of layout information, we adopt LayoutLM as the backbone. We will show that spatial-aware models demonstrate promising OOD detection performance. However, such specialized models can be computationally expensive. Therefore, we propose a new spatial-aware adapter, a small learned module that can be inserted within a pre-trained language model such as RoBERTa and transforms it into a spatial-aware model, which is computationally efficient and competitive for both ID classification and OOD detection (Sec. 5.2). +- Vision+Text+Layout: For comprehensiveness, we consider LayoutLMv3 and UDoc, which are large and computationally intensive. Both models are pre-trained on the full IIT-CDIP for fairness. These models utilize different input granularities and modalities, including textual, visual, and spatial information for document tasks. + +Constructing ID and OOD datasets. 
We construct ID datasets from RVL-CDIP (Harley et al., 2015), where 12 out of 16 classes are selected as ID classes. Dataset details are in Appendix A. We consider two OOD scenarios: in-domain and out-domain, based on the content (e.g., words, background) and layout characteristics. + +- In-domain OOD: To determine the OOD categories, we analyzed the performance of recent document classification models on the RVL-CDIP test set. Fig. 2b shows the per-category test accuracy of various models. Naturally, for the classes the models perform poorly on, we may expect the models to detect such inputs as OOD instead of assigning a specific ID class with low confidence. We observe that the 4 categories (letter, form, scientific report, and presentation) result in the worst performance across most of the models with different modalities. We use these as OOD categories and construct the OOD datasets accordingly. The ID dataset is constructed from the remaining 12 categories, which we refer to as in-domain OOD datasets, as they are also sourced from RVL-CDIP. + +- Out-domain OOD: In the open-world setting, test inputs can have significantly different color schemes and layouts compared to ID samples. To mimic such scenarios, we use two public datasets as out-domain OOD test sets: NJU-Fudan Paper-Poster Dataset (Qiang et al., 2019) and CORD (Park et al., 2019). NJU-Fudan Paper-Poster Dataset contains scientific posters in digital PDF format4. CORD is a receipt understanding dataset with significantly different inputs compared to RVL-CDIP. As shown in Fig. 3, receipt images can be challenging and require models to handle not only textual but also visual and spatial information. + +We further support our domain selection using OTDD (Alvarez-Melis and Fusi, 2020), a flexible geometric method for comparing probability distributions, which enables us to compare any two datasets regardless of their label sets. 
We observe a clear gap between in-domain and out-domain data, which aligns with our data selection. Further details can be found in Appendix A.1. + +# 4 Analyzing OOD Reliability for Documents + +# 4.1 OOD Detection Without Fine-Tuning + +In this section, we begin by examining the influence of pre-training datasets on zero-shot OOD detection. For each model, we adopt the same pre-training objective while adjusting the amount of pre-training data. Specifically, we increase the data diversity by pre-training each model on 10%, 20%, 40%, and $100\%$ of randomly sampled data from the IIT-CDIP dataset (around 11M documents). After pre-training, we measure the OOD detection performance with the KNN+ score based on feature embeddings. + +We observe that: (1) for out-domain OOD data (Fig. 4a, right), increasing the amount of pre-training data can significantly improve the zero-shot OOD detection performance (w.o. fine-tuning) for models across different modalities. Our hypothesis is that pre-training with diverse data is beneficial for coarse-grained OOD detection, such as inputs from different domains (e.g., color schemes). (2) For in-domain OOD inputs, even increasing the amount of pre-training data by over $40\%$ provides negligible improvements (Fig. 4a, left). This suggests the necessity of fine-tuning for improving in-domain OOD detection performance (Fig. 6). + +We further explore a more restricted setting for zero-shot OOD detection where potential OOD categories are removed from the pre-training dataset IIT-CDIP. First, we use LayoutLM fine-tuned on RVL-CDIP to predict labels for all documents in IIT-CDIP. Fig. 4b summarizes the distribution of the predicted classes on IIT-CDIP. Next, we remove the "OOD" categories from IIT-CDIP and pre-train two models (RoBERTa and LayoutLM) with 10%, 20%, 40%, and $100\%$ of randomly sampled data from the filtered IIT-CDIP (dubbed IIT-CDIP$^{-}$), respectively.
The zero-shot OOD performance for in-domain and out-domain OOD is shown in Fig. 4c. For RoBERTa, we observe similar trends as in Fig. 4a, where increasing the amount of pre-training data improves zero-shot OOD detection performance for out-domain data. Meanwhile, the zero-shot performance of LayoutLM likewise benefits from a larger pre-training dataset. In particular, given the same amount of pre-training data, LayoutLM consistently outperforms RoBERTa for both in-domain and out-domain OOD detection, which suggests that spatial information can be essential for boosting the OOD reliability in the document domain. Motivated by the above observations, we dive deeper and analyze spatial-aware models next. + +![](images/607b7ed811f4520c90c87ebfa687f7795cf55fce27dd9493771989b802367bb3.jpg) +Figure 3: (Top) Examples of ID inputs sampled from RVL-CDIP. (Bottom) In-domain OOD from RVL-CDIP, and out-domain OOD from Scientific Poster and Receipts. + +![](images/3ff57b7eefe7bba1b923228264429ceba557d3580b56565eed51f383cfef3a6b.jpg) +(a) Pre-train on IIT-CDIP. + +![](images/533b8df0e97e947ab30e1ad933d79182cfc6cf6d62aeaf752e4904d98a066b43.jpg) +Figure 4: The impact of pre-training data on zero-shot OOD detection performance. IIT-CDIP $^{-}$ denotes the filtered pre-training data after removing the "OOD" categories. + +![](images/ffe6c9e679e1b4e6dd6a4e6537ee55d938507aacc027050f195aeadfe410b5.jpg) +(b) Analysis of IIT-CDIP. +(c) Pre-train on IIT-CDIP$^{-}$. + +While pre-trained models exhibit the capability to differentiate data from various domains as a result of being trained on a diverse range of data, achieving more precise separation for in-domain OOD inputs remains difficult. Given this observation, we further analyze the impacts of fine-tuning for OOD detection with fixed pre-training datasets in the next section.
By combining pre-trained models with a simple classifier and fine-tuning on RVL-CDIP (ID), we find that fine-tuning is advantageous in enhancing the OOD detection performance for both types of OOD samples. + +# 4.2 The Impact of Fine-Tuning on Document OOD Detection + +Recent document models are often pre-trained on a large-scale dataset and adapted to the target task via fine-tuning. To better understand the role of fine-tuning, we explore the following questions: 1) How does fine-tuning impact OOD reliability for in-domain and out-domain OOD inputs? 2) How does model modality impact the performance? + +We consider a wide range of models pretrained on pure-text/image data (e.g., ImageNet and Wikipedia) described in Appendix A.3. During fine-tuning, we combine pre-trained models with a simple classifier and fine-tune on RVL-CDIP (ID). For models before and after fine-tuning, we extract the final feature embeddings and use a distance-based method KNN+ (Sun et al., 2022) for OOD detection. The results are shown in Fig. 6. We observe the following trends. First, fine-tuning largely improves OOD detection performance for both in-domain and out-domain OOD data. The same trend holds broadly across models with different modalities. Second, the improvement of fine-tuning is less significant for out-domain OOD data. For example, on Receipt (out-domain OOD), the AUROC for pre-trained ViT model is 97.13, whereas fine-tuning only improves by $0.79\%$ . This suggests that pre-trained models do have the potential to separate data from different domains due to the diversity of data used for pre-training, while it remains hard for pre-trained models to perform finer-grained separation for in-domain OOD inputs. 
Therefore, fine-tuning is beneficial for improving OOD detection performance for both types of OOD + +![](images/0314be8bd1bca90ef5bdab4e487c0f9cd588fa77d31947c7b6755267540bb088.jpg) +Figure 5: Comparison between representative feature-based scores and logit-based scores for spatial-aware and non-spatial-aware models. Spatial-aware models are colored in blue. + +![](images/da5abc0540cd0333359b641c58b0abc25ce6593a20535e58c75ebeef705c6902.jpg) + +![](images/3219f54bf54a194e6e31bb1117eb506c76bf9ac3c3eaf77178f10401e1c64d55.jpg) +Figure 6: OOD detection performance for pre-trained models w. and w.o. fine-tuning. We use a distance-based method KNN+ as the OOD scoring function. Fine-tuning significantly improves performance for both in and out-domain OOD data. + +![](images/4e8f8a80434205a841194eab1c0f8c2ebcab57b5807dc1125b3cc39484f32d04.jpg) + +![](images/ede391d9002a39d7640601e6dd684305bd9e813cab27211ac6c309fc5244bd8d.jpg) + +samples. To further validate our conclusion, we consider two additional in-domain OOD settings for our analysis: (1) selecting the classes the model performs well on, as in-domain OOD categories; (2) randomly selecting classes as OOD categories (Appendix A.2). We find that fine-tuning improves OOD detection for both settings, further verifying our observations. + +Next, we take a closer look at the impact of model modality on out-domain OOD detection. As shown in Fig. 6 (mid and right), both vision and text-based models demonstrate strong reliability against scientific posters (OOD). However, vision-based models display stronger performance than text-based models for Receipts (OOD). This can be explained by the fact that ViT was first pre-trained on ImageNet while scientific posters and receipts contain diverse visual information such as colors and edges for vision models to utilize (see Fig. 3). 
On the other hand, although fine-tuning text-based models largely improves the detection performance compared to pre-trained counterparts, utilizing only textual information can be inherently limited for out-domain OOD detection. + +# 5 The Importance of Spatial-Awareness + +In previous sections, we mainly focus on mainstream text-based and vision-based models for in- and out-domain OOD detection. Next, we consider + +models tailored to document processing, which we refer to as spatial-aware models, such as LayoutLMv3 and UDoc. Given fine-tuned models, we compare the performance of logit-based and distance-based OOD scores. + +![](images/70240005a0abddaad70c02836e857ccc66b79ca6b40b31ee70d80ff8cd54ca25.jpg) +Figure 7: Illustration of our spatial-aware adapter for language models. We present 2 adapter designs (marked in green box): (1) insert the adapter into the word embedding layer during pre-training and fine-tuning; (2) insert the adapter into the output layer for fine-tuning only. For the first design, we freeze the word embedding layer and learn the adapter and transformer layers. + +# 5.1 Analysis of Spatial-Aware Models + +We summarize key comparisons in Fig. 5, where we use MSP and Energy as exemplar logit-based scores and $\mathrm{KNN + }$ as the distance-based score. Full results are in Appendix C. We can see that the simple KNN-based score (KNN+) consistently outperforms logit-based scores for both in-domain and + +out-domain OOD data across different models with different modalities. This is in contrast with recent works that investigate large-scale pre-trained models in the vision-language domain, where logit-based scores demonstrate strong OOD detection performance (Fort et al., 2021). As documents are distinct from natural image-text pairs, observations in the vision-language domain do not seamlessly translate to the document domain. Moreover, spatial-aware models demonstrate stronger OOD detection performance for both in and out-domain OOD. 
For example, with the best scoring function $(\mathrm{KNN}+)$ , LayoutLMv3 improves the average AUROC by $7.09\%$ for out-domain OOD and $7.54\%$ for in-domain OOD data compared to RoBERTa. This further highlights the value of spatial information for improving OOD robustness for documents. + +Despite the impressive improvements brought by spatial-aware models, acquiring a large-scale pretraining dataset that includes spatial information remains challenging. In contrast, there is a growing abundance of pre-trained language models that are based on textual data. This motivates us to explore the possibility of leveraging these pre-trained language models by training an adapter on a small dataset containing document-specific information. By adopting this approach, we can effectively utilize existing models while minimizing the time and cost required for training. + +# 5.2 Towards Effective Spatial-Aware Adapter + +During our investigation into the effects of model modality, pre-training, and fine-tuning on various types of OOD inputs, we find that spatial/layout information plays a critical role in the document domain. However, existing pre-training models such as LayoutLM series, SelfDoc, and UDoc do not fully leverage the benefits of well-pre-trained language models. This raises the question of whether a large-scale language model, such as RoBERTa, can be adapted to detect OOD documents effectively. In this section, we demonstrate that incorporating an adapter module that accounts for spatial information with transformer-based pre-trained models can achieve strong performance with minimal changes to the code. To the best of our knowledge, this is the first study to apply the adapter idea to documents. + +Spatial-aware adapter. Given a pre-trained language model such as RoBERTa, we propose an adapter that utilizes spatial information. 
We consider two potential designs: 1) The adapter is appended to the word embedding layer, denoted as Spatial-RoBERTa (pre), which requires both pre-training and fine-tuning. This architecture is illustrated in the top row of Fig. 7. 2) The adapter is appended to the final layer of the text encoder, denoted as Spatial-RoBERTa (post), which only requires fine-tuning as the model can utilize the pre-trained textual encoder, as shown in the bottom row of Fig. 7. + +![](images/a56533c927e6b36ed598d7f41760e9bed8ccba8b7f7add36b364da44c80e960a.jpg) +Figure 8: Comparison of OOD detection performance of Spatial-RoBERTa and RoBERTa. All models are initialized with public pre-trained checkpoints trained on purely textual data and further pre-trained on IIT-CDIP. The only difference is that Spatial-RoBERTa has an additional spatial-aware adapter and takes word bounding boxes as additional inputs. + +For Spatial-RoBERTa (pre), we freeze the word embedding layer during pre-training for several considerations: 1) word embeddings learned from a large-scale corpus already cover most of the words found in documents; 2) pre-training on documents without strong language dependency may not help improve word embeddings. For example, in semi-structured documents (e.g., forms, receipts), language dependencies are not as strong as in text-rich documents (e.g., letters, resumes), which may degrade the learned word representations. In practice, each word has a normalized bounding box $(x_0, y_0, x_1, y_1)$ , where $(x_0, y_0)$ / $(x_1, y_1)$ corresponds to the upper-left / lower-right corner of the bounding box. To encode positional information, we employ four position embedding layers, where each layer encodes one coordinate (e.g., $x_0$ ) and produces a corresponding position embedding. The special tokens ([CLS], [SEP], and [PAD]) are attached with an empty bounding box $(0, 0, 0, 0)$ . As depicted in the top row of Fig.
7, the spatial-aware word embeddings are formed by adding position embeddings to their corresponding word embeddings. + +For Spatial-RoBERTa (post), position embeddings are added through late fusion in the final hidden states during fine-tuning without affecting the pre-trained encoder. Our experiments demonstrate that introducing spatial-aware adapters during pre-training yields better results than only adding position embeddings during fine-tuning. For additional details, please refer to Appendix C. In the following, we focus on analyzing Spatial-RoBERTa (pre) and comparing both ID and OOD performance with that of the pure-text pre-trained RoBERTa. + +![](images/110e122fbc8ca49348f8f64f04e6d3a599adf26422b92890e02a3c70f70deefd.jpg) +Figure 9: Correlation between ID accuracy and OOD detection performance. For most models, ID accuracy is positively correlated with OOD detection performance. Language models with spatial-aware adapters (highlighted in blue) achieve significantly higher ID accuracy and stronger OOD robustness (in AUROC) compared to language models without adapters. Here, $(+)$ represents further pre-training on the IIT-CDIP dataset. + +![](images/7be12378d725b6bc34a985a57f9fd1ef7fb47644aece228bf26244f5557a6be5.jpg) + +![](images/b9e0ddf8d677ce9e17b38f1e9c17810e93ab472b943962c10f3fbfa493e46079.jpg) + +Spatial-RoBERTa significantly outperforms RoBERTa. To verify the effectiveness of Spatial-RoBERTa, we compare the OOD detection performance of pre-trained and fine-tuned models. The results are shown in Fig. 8, where OOD performance is based on KNN+ ($K = 10$). Full results can be seen in Table 6. Spatial-RoBERTa significantly improves the OOD detection performance, especially after fine-tuning. For example, compared to RoBERTa (base), Spatial-RoBERTa (base) improves AUROC significantly by $4.24\%$ averaged over four in-domain OOD datasets. This further confirms the importance of spatial information for OOD detection in the document domain.
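To make the first design concrete, the following is a minimal NumPy sketch of a spatial-aware adapter of the kind described above; the class name, table size, and embedding dimension are our illustrative choices, not the paper's exact configuration:

```python
import numpy as np

class SpatialAdapter:
    """Toy sketch of the spatial-aware adapter (design 1): four learned
    position-embedding tables, one per bounding-box coordinate
    (x0, y0, x1, y1), whose lookups are summed with the frozen word
    embeddings before the transformer layers. Coordinates are assumed
    normalized to integers in [0, 1000], as in LayoutLM-style models."""

    def __init__(self, n_positions=1001, dim=768, seed=0):
        rng = np.random.default_rng(seed)
        # one (randomly initialized) embedding table per box coordinate
        self.tables = [rng.normal(0.0, 0.02, (n_positions, dim))
                       for _ in range(4)]

    def __call__(self, word_embs, boxes):
        # word_embs: (seq_len, dim) frozen word embeddings
        # boxes: (seq_len, 4) integer boxes; special tokens ([CLS],
        # [SEP], [PAD]) use the empty box (0, 0, 0, 0)
        pos = sum(self.tables[c][boxes[:, c]] for c in range(4))
        return word_embs + pos  # spatial-aware word embeddings
```

In the full model, only the four tables and the transformer layers above them would receive gradients during pre-training, while the word embedding matrix stays frozen.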
+
+Spatial-RoBERTa is competitive for both ID classification and OOD detection. Beyond OOD detection performance, we also examine the multi-class ID classification accuracy and plot the two metrics for all models with different modalities in Fig. 9. We can clearly observe a positive correlation between ID accuracy and OOD detection performance (measured by AUROC) for both in-domain and out-domain OOD data. Moreover, spatial-aware models display superior ID accuracy and OOD robustness compared to text-only and vision-only models. Overall, Spatial-RoBERTa greatly improves upon RoBERTa and matches the performance of models with more complex and specialized architectures such as LayoutLM. Specifically, Spatial-RoBERTaLarge achieves 97.37 ID accuracy, which is even higher than LayoutLM (97.28) and UDoc (97.36).
+
+To summarize, our spatial-aware adapter effectively adapts pre-trained transformer-based text models to the document domain, improving both ID and OOD performance. In addition, by freezing the original word embeddings during pre-training, the models (Spatial-RoBERTaBase and Spatial-RoBERTaLarge) are parameter-efficient and thus reduce the training cost.
+
+# 6 Conclusions
+
+In this work, we provide a comprehensive and in-depth study of the impacts of pre-training, fine-tuning, model modality, and OOD scores on a broad variety of document OOD detection tasks. We present novel insights on document OOD detection that are either under-explored in, or stand in contrast to, OOD detection work based on vision-language models. In particular, we highlight that spatial information is critical for OOD detection in documents. We further propose a spatial-aware adapter as an add-on module to transformer-based models. Our module adapts pre-trained language models to the document domain. Extensive experiments on a broad range of datasets verify the effectiveness of our design.
We hope our work will inspire future research toward improving OOD robustness for reliable document understanding.
+
+# 7 Limitations
+
+In this work, our main focus is on OOD detection for document understanding, with a specific emphasis on the context of document classification. As OOD detection based on document pre-trained models remains largely underexplored, we believe establishing an in-depth and extensive study of OOD detection for document classification would be a valuable stepping stone towards more complex tasks. Apart from document classification, in Appendix B we also investigate OOD detection for two entity-level tasks: document entity recognition and document object detection. We leave a more comprehensive treatment for future work.
+
+# References
+
+David Alvarez-Melis and Nicolo Fusi. 2020. Geometric dataset distances via optimal transport. In NeurIPS.
+Srikar Appalaraju, Bhavan Jasani, Bhargava Urala Kota, Yusheng Xie, and R Manmatha. 2021. Docformer: End-to-end transformer for document understanding. In ICCV.
+Udit Arora, William Huang, and He He. 2021. Types of out-of-distribution texts and how to detect them. In EMNLP.
+Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.
+Abhijit Bendale and Terrance E Boult. 2016. Towards open set deep networks. In CVPR.
+Julian Bitterwolf, Maximilian Mueller, and Matthias Hein. 2023. In or out? fixing imagenet out-of-distribution detection evaluation. In ICML.
+Lei Cui, Yiheng Xu, Tengchao Lv, and Furu Wei. 2021. Document ai: Benchmarks, models and applications. arXiv preprint arXiv:2111.08609.
+Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In CVPR.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL.
+Xuefeng Du, Zhaoning Wang, Mu Cai, and Yixuan Li. 2022. Vos: Learning what you don't know by virtual outlier synthesis. In ICLR.
+Sepideh Esmaeilpour, Bing Liu, Eric Robertson, and Lei Shu. 2022. Zero-shot open set detection by extending clip. In AAAI.
+
+Stanislav Fort, Jie Ren, and Balaji Lakshminarayanan. 2021. Exploring the limits of out-of-distribution detection. In NeurIPS.
+ZongYuan Ge, Sergey Demyanov, Zetao Chen, and Rahul Garnavi. 2017. Generative openmax for multi-class open set classification. arXiv preprint arXiv:1707.07418.
+Jiuxiang Gu, Jason Kuen, Vlad I Morariu, Handong Zhao, Rajiv Jain, Nikolaos Barmpalios, Ani Nenkova, and Tong Sun. 2021. Unified pretraining framework for document understanding. In NeurIPS.
+Zhangxuan Gu, Changhua Meng, Ke Wang, Jun Lan, Weiqiang Wang, Ming Gu, and Liqing Zhang. 2022. Xylayoutlm: Towards layout-aware multimodal networks for visually-rich document understanding. In CVPR.
+Adam W Harley, Alex Ufkes, and Konstantinos G Derpanis. 2015. Evaluation of deep convolutional nets for document image classification and retrieval. In ICDAR.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR.
+Dan Hendrycks, Steven Basart, Mantas Mazeika, Mohammadreza Mostajabi, Jacob Steinhardt, and Dawn Song. 2022. Scaling out-of-distribution detection for real-world settings. In ICML.
+Dan Hendrycks and Kevin Gimpel. 2017. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In ICLR.
+Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, and Sungrae Park. 2022. Bros: A pre-trained language model focusing on text and layout for better key information extraction from documents. In AAAI.
+Yen-Chang Hsu, Yilin Shen, Hongxia Jin, and Zsolt Kira. 2020. Generalized odin: Detecting out-of-distribution image without learning from out-of-distribution data. In CVPR.
+Rui Huang, Andrew Geng, and Yixuan Li. 2021. On the importance of gradients for detecting distributional shifts in the wild. In NeurIPS.
+Rui Huang and Yixuan Li. 2021. Mos: Towards scaling out-of-distribution detection for large semantic space. In CVPR.
+Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, and Furu Wei. 2022. Layoutlmv3: Pre-training for document ai with unified text and image masking. In ACMMM.
+Guillaume Jaume, Hazim Kemal Ekenel, and Jean-Philippe Thiran. 2019. Funsd: A dataset for form understanding in noisy scanned documents. In ICDAR Workshop.
+
+Di Jin, Shuyang Gao, Seokhwan Kim, Yang Liu, and Dilek Hakkani-Tur. 2022. Towards textual out-of-domain detection without in-domain labels. TASLP.
+Geewook Kim, Teakgyu Hong, Moonbin Yim, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, and Seunghyun Park. 2022. Donut: Document understanding transformer without OCR.
+Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In ICLR.
+Stefan Larson, Gordon Lim, Yutong Ai, David Kuang, and Kevin Leach. 2022. Evaluating out-of-distribution performance on document image classifiers. In NeurIPS.
+Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. 2018. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In NeurIPS.
+D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard. 2006. Building a test collection for complex document information processing. In SIGIR.
+Gen Li, Nan Duan, Yuejian Fang, Daxin Jiang, and Ming Zhou. 2020. Unicoder-vl: A universal encoder for vision and language by cross-modal pre-training. In AAAI.
+Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, and Furu Wei. 2022. Dit: Self-supervised pretraining for document image transformer. In ACM MM.
+Peizhao Li, Jiuxiang Gu, Jason Kuen, Vlad Morariu, Handong Zhao, Rajiv Jain, Varun Manjunatha, and Hongfu Liu. 2021a. Selfdoc: Self-supervised document representation learning. In CVPR.
+Xiaoya Li, Jiwei Li, Xiaofei Sun, Chun Fan, Tianwei Zhang, Fei Wu, Yuxian Meng, and Jun Zhang. 2021b. kfolden: k-fold ensemble for out-of-distribution detection. In EMNLP.
+Shiyu Liang, Yixuan Li, and Rayadurgam Srikant. 2018. Enhancing the reliability of out-of-distribution image detection in neural networks. In ICLR.
+Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li. 2020. Energy-based out-of-distribution detection. In NeurIPS.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
+Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In NeurIPS.
+
+Yifei Ming, Ziyang Cai, Jiuxiang Gu, Yiyou Sun, Wei Li, and Yixuan Li. 2022a. Delving into out-of-distribution detection with vision-language representations. In NeurIPS.
+Yifei Ming, Ying Fan, and Yixuan Li. 2022b. Poem: Out-of-distribution detection with posterior sampling. In ICML.
+Yifei Ming and Yixuan Li. 2023. How does fine-tuning impact out-of-distribution detection for vision-language models? IJCV.
+Yifei Ming, Yiyou Sun, Ousmane Dia, and Yixuan Li. 2023. How to exploit hyperspherical embeddings for out-of-distribution detection? In ICLR.
+Yifei Ming, Hang Yin, and Yixuan Li. 2022c. On the impact of spurious correlation for out-of-distribution detection. In AAAI.
+Ajoy Mondal, Peter Lipps, and CV Jawahar. 2020. Iiit-ar-13k: A new dataset for graphical object detection in documents. In International Workshop on Document Analysis Systems.
+Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. 2019. Do deep generative models know what they don't know? In ICLR.
+Poojan Oza and Vishal M Patel. 2019. C2ae: Class conditioned auto-encoder for open-set recognition. In CVPR.
+Seunghyun Park, Seung Shin, Bado Lee, Junyeop Lee, Jaeheung Surh, Minjoon Seo, and Hwalsuk Lee. 2019. Cord: A consolidated receipt dataset for post-ocr parsing. In NeurIPS Workshop.
+Alexander Podolskiy, Dmitry Lipin, Andrey Bout, Ekaterina Artemova, and Irina Piontkovskaya. 2021. Revisiting mahalanobis distance for transformer-based out-of-domain detection. In AAAI.
+Yu-Ting Qiang, Yan-Wei Fu, Xiao Yu, Yan-Wen Guo, Zhi-Hua Zhou, and Leonid Sigal. 2019. Learning to generate posters of scientific papers by probabilistic graphical models. JCST.
+Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In ICML.
+Jie Ren, Jiaming Luo, Yao Zhao, Kundan Krishna, Mohammad Saleh, Balaji Lakshminarayanan, and Peter J Liu. 2023. Out-of-distribution detection and selective generation for conditional language models. In ICLR.
+Madeline C Schiappa, Yogesh S Rawat, Shruti Vyas, Vibhav Vineet, and Hamid Palangi. 2022. Multimodal robustness analysis against language and visual perturbations. In NeurIPS.
+
+Vikash Sehwag, Mung Chiang, and Prateek Mittal. 2021. Ssd: A unified framework for self-supervised outlier detection. In ICLR.
+Yilin Shen, Yen-Chang Hsu, Avik Ray, and Hongxia Jin. 2021. Enhancing the generalization for intent classification and out-of-domain detection in SLU. In ACL-IJCNLP.
+Ray Smith. 2007. An overview of the Tesseract OCR engine. In ICDAR.
+Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2019. Vl-bert: Pre-training of generic visual-linguistic representations. In ICLR.
+Yiyou Sun, Chuan Guo, and Yixuan Li. 2021. React: Out-of-distribution detection with rectified activations. In NeurIPS.
+Yiyou Sun, Yifei Ming, Xiaojin Zhu, and Yixuan Li. 2022. Out-of-distribution detection with deep nearest neighbors. In ICML.
+Jihoon Tack, Sangwoo Mo, Jongheon Jeong, and Jinwoo Shin. 2020. Csi: Novelty detection via contrastive learning on distributionally shifted instances. In NeurIPS.
+Zineng Tang, Ziyi Yang, Guoxin Wang, Yuwei Fang, Yang Liu, Chenguang Zhu, Michael Zeng, Cha Zhang, and Mohit Bansal. 2023. Unifying vision, text, and layout for universal document processing. In CVPR.
+Thirumalaisamy P Velavan and Christian G Meyer. 2020. The Covid-19 epidemic. Tropical medicine & international health, 25(3):278.
+Wenjin Wang, Zhengjie Huang, Bin Luo, Qianglong Chen, Qiming Peng, Yinxu Pan, Weichong Yin, Shikun Feng, Yu Sun, Dianhai Yu, et al. 2022a. mmlayout: Multi-grained multimodal transformer for document understanding. In ACMMM.
+Zilong Wang, Jiaxiang Gu, Chris Tensmeyer, Nikolaos Barmpalios, Ani Nenkova, Tong Sun, Jingbo Shang, and Vlad I Morariu. 2022b. Mgdoc: Pre-training with multi-granular hierarchy for document image understanding. In EMNLP.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.
+Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. 2019. Detectron2. https://github.com/facebookresearch/detectron2.
+Zhisheng Xiao, Qing Yan, and Yali Amit. 2020. Likelihood regret: An out-of-distribution detection score for variational auto-encoder. In NeurIPS.
+
+Keyang Xu, Tongzheng Ren, Shikun Zhang, Yihao Feng, and Caiming Xiong. 2021a. Unsupervised out-of-domain detection via pre-trained transformers. In ACL.
+Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, et al. 2021b. Layoutlmv2: Multi-modal pre-training for visually-rich document understanding. In ACL.
+Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. 2020.
Layoutlm: Pre-training of text and layout for document image understanding. In SIGKDD.
+Xu Zhong, Jianbin Tang, and Antonio Jimeno Yepes. 2019. Publaynet: largest dataset ever for document layout analysis. In ICDAR.
+Wenxuan Zhou, Fangyu Liu, and Muhao Chen. 2021. Contrastive out-of-distribution detection for pretrained transformers. In EMNLP.
+Yunhua Zhou, Peiju Liu, and Xipeng Qiu. 2022. KNN-contrastive learning for out-of-domain intent classification. In ACL.
+
+# A Dataset and Model Details
+
+# A.1 Datasets
+
+The full RVL-CDIP dataset consists of 320K/40K/40K training/validation/testing images under 16 categories. We select 12 of them as the ID (in-distribution) data. We employ the Google OCR engine to extract the text and layout information, which provides tokens, text blocks, and the corresponding bounding boxes.
+
+# A.2 Quantifying OOD Dataset Construction
+
+The distance between datasets can be measured via the Optimal Transport Dataset Distance (OTDD). We visualize the OTDD between the ID and the OOD (both in-domain and out-domain) data in Fig. 10a, where we highlight the in-domain OOD data in blue and the out-domain OOD data in green. Specifically, we randomly sample 1,000 images from each dataset and calculate the average distance between pairs of datasets. We can see a significant gap between the OTDD of in-domain OOD data and that of out-domain OOD data. To make the analysis more thorough, we consider two additional in-domain OOD settings: (1) selecting the classes on which the model performs best as OOD data; (2) randomly selecting classes as OOD data. The results are shown in Fig. 10b and Fig. 10c. We can see that the distance between ID and in-domain OOD data is similar to the original scheme (Fig. 10a). This suggests that most in-domain OOD categories are not far from the ID data.
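For intuition, the core optimal-transport computation can be sketched in a simplified, label-free form: with uniform weights and equal sample sizes, OT between two feature samples reduces to a minimum-cost matching. The full OTDD additionally folds label-to-label distances into the ground cost, so the sketch below (function and variable names are ours) is only a feature-level proxy:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def ot_feature_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Feature-level OT distance between two equal-size samples.

    With uniform marginals and equal sample sizes, optimal transport
    reduces to a minimum-cost bipartite matching over pairwise squared
    Euclidean costs (a simplified, label-free stand-in for OTDD).
    """
    cost = cdist(feats_a, feats_b, metric="sqeuclidean")
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one matching
    return float(cost[rows, cols].mean())

rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, size=(100, 8))   # stand-in for ID features
near = rng.normal(0.0, 1.0, size=(100, 8))  # stand-in for in-domain OOD features
far = rng.normal(5.0, 1.0, size=(100, 8))   # stand-in for out-domain OOD features

# Out-domain data sits much farther from ID than in-domain data.
print(ot_feature_distance(ref, near) < ot_feature_distance(ref, far))  # True
```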
+
+While this paper represents an initial endeavor, we hope that our work will serve as a stepping stone towards constructing more comprehensive and diverse OOD benchmarks in the document domain, akin to those available in the NLP and natural image domains.
+
+# A.3 Models and Training Details
+
+All models reported in Fig. 2b, except UDoc, are initialized with pre-trained weights from Huggingface and fine-tuned on the full RVL-CDIP training set. During fine-tuning, we train these models on RVL-CDIP with the cross-entropy loss. The models are optimized with the Adam optimizer (Kingma and Ba, 2014) for 30 epochs with a batch size of 50 and a learning rate of $2 \times 10^{-5}$ on 8 A100 GPUs.
+
+The following are the hyperparameters of the models used in our paper:
+
+# Text-only:
+
+- BERT and RoBERTa: We adopt RoBERTaBase (12 layers) and BERTBase (12 layers) as backbones and set the maximum sequence length to 512. For RoBERTa, the classifier consists of two linear layers followed by a tanh activation function.
+- LongformerBase: We also employ LongformerBase (12 layers) as the backbone and set the maximum sequence length to 4,096.
+
+# Vision-only:
+
+- ResNet50: We adopt ResNet50 pre-trained on ImageNet-1k as the backbone. We fine-tune the model at a resolution of $224 \times 224$.
+- ViT: We consider ViTBase (vit-base-patch16-224, pre-trained on ImageNet-21k) as the backbone and fine-tune at a resolution of $224 \times 224$.
+- SwinB: We also use the Swin Transformer (swin-base-patch4-window7-224-in22k, pre-trained on ImageNet-21k) as the backbone and fine-tune the model at a resolution of $224 \times 224$.
+
+# Text+Layout:
+
+- **LayoutLMv1:** This model employs the LayoutLM (layoutlm-base-uncased, 12 layers, pre-trained on IIT-CDIP) as the backbone. We set the maximum sequence length to 512.
+- Spatial-RoBERTaBase (Pre): This model attaches our spatial-aware adapter to the pre-trained RoBERTaBase model. The adapter is applied to the word embedding layer.
We freeze the pre-trained word embeddings and optimize the spatial-aware adapter and transformers. +- Spatial-RoBERTaBase (Post): Instead of inserting the spatial-aware adapter in the input layer, this model integrates the spatial-aware adapter at the output layer of the transformer. + +![](images/90af6de6831ddb1f6eb120fbda29199b32b303e1b9a862bfc4bdbf707ef2c2c9.jpg) +(a) OOD (Worst performance). + +![](images/dd2e9423677d9dc596b0519c26eb3f64df1deb8943d1f34f83ac9faeec506a27.jpg) +Figure 10: Visualization of optimal transport dataset distance for ID and OOD (in-domain and out-domain) datasets. We highlight the in-domain OOD data in blue and the out-domain OOD data in green. + +![](images/4d55b13de1f9a2cbe1a3e0dfc3ae97cf5463611f9a1f228ab8d7be62d03e1f0e.jpg) +(b) OOD (Best performance). +(c) OOD (Random selection). + +![](images/075807af9553c10e98933f376dc0e187355f594c655fb5afdd5be0b40c0edf76.jpg) + +![](images/14438878630e68d29777e5137aa31845596aed3bda2ad1df565207e6063ef4d2.jpg) +(a) RoBERTaBase (10%) + +![](images/a4f3a8b4ec2a2c7f337e06c72f660c8f915f92ce4076165a883f13ee07d9c79e.jpg) + +![](images/901da70de489a082be65312b3f3de0b01b9aa0f342d51cb4a59a8c4707eca283.jpg) +(b) RoBERTaBase (20%) + +![](images/e4836d6a18a3287fff4747411832a7273c9139688deecbd6e2ba33498a7c2c11.jpg) + +![](images/27cde30ec36119db3d2b1d13a779742c66fd0c75d4b18d51103a65553730bb77.jpg) + +![](images/3310b40122707b033c78dc92f8004821ba6350fbd59fbd099f0fb3a136065523.jpg) + +![](images/7bb3433e1b02ffca18eec3ccce6d720aaefda0f7829f8a45aff1c9efcc58fc61.jpg) + +![](images/b6bc09e143045d69ad07c9c1cb4350d135eaa51ce05c551d3adcc158703ee13e.jpg) + +![](images/fd7f589a65e340c0a995ffa12d8acfd4f3b78b2d36381737e7ebc9f714c8544a.jpg) +(e) $\mathrm{ViT_{Base}}$ (10%) + +![](images/a0eb1ce030971c848e4c5626a4f4fa7e369eac9f86805535dac8e5d8a872cc34.jpg) +(f) $\mathrm{ViT_{Base}}$ (20%) +Figure 11: Feature visualization for pre-trained (with different numbers of pre-training data) and fine-tuned models. 
We show both in-domain (RVL-CDIP) and out-domain (CORD) OOD datasets.
+
+![](images/3111f755a7d0ff1a749b26995277f72811df8e66e50a82c9549a54b80c3f4c86.jpg)
+(c) RoBERTaBase (40%)
+(g) $\mathrm{ViT_{Base}}$ (40%)
+
+![](images/fea5884ad3a01e9d91ea5681e5e1cf201c87eb9d4be6e37730e9ccd5374ae46f.jpg)
+(d) RoBERTaBase (100%)
+(h) $\mathrm{ViT_{Base}}$ (100%)
+
+![](images/8e18e6d992d98a892ba8037f96ab0a525b0b6088487dd5318b64b8782e511986.jpg)
+
+![](images/e4abd59173f9fc34556cee7805493b325a16fa764f5f4fa43d769b3e7844d2ec.jpg)
+
+![](images/faac36a23aa786e1a67155dfb68d9f1e4bc0aa8668956fbde9f449316527e24b.jpg)
+
+![](images/fe1eb8b820ce0708098b7e7e18c14a3c2bfb47f3907afdf75356b8fe35c93854.jpg)
+Figure 12: MSP, Energy, KNN, and Maha score histogram distributions of ID (blue) and OOD (green) inputs derived from fine-tuned ResNet-50, RoBERTa, and LayoutLMv3. The KNN scores calculated from both vision and language models naturally form smooth distributions. In contrast, MSP and Maha scores for both in- and out-of-distribution data concentrate on high values. Overall, our experiments show that using the feature space makes the scores more distinguishable between in- and out-of-distribution data and, as a result, enables more effective OOD detection.
+Figure 13: The network architectures in green blocks are our proposed models. We also show the modality information on top of each architecture.
+
+# Vision+Text+Layout:
+
+- LayoutLMv3: We use LayoutLMv3 (layoutlmv3-base, 12 layers, pre-trained on IIT-CDIP) as the backbone.
+- UDoc: We use a slight variant of UDoc whose only difference is the sentence encoder: we adopt a smaller version of the pre-trained sentence encoder (all-MiniLM-L6-v2, 6 layers) instead of the larger sentence encoder (bert-base-nli-mean-tokens, 12 layers).
+
+# B Beyond Document Classification
+
+In the main paper, we mainly focus on document classification to provide a thorough and in-depth analysis.
In this section, we go beyond document classification and explore OOD detection for two entity-level tasks in documents: document entity recognition and document object detection. It is natural to detect and recognize basic units in documents such as text, tables, and figures. Document entity recognition aims to predict the label for each semantic entity with given bounding boxes. Document object detection is an object detection task for document images. Specifically, we denote the input as $x$, the bounding box coordinates associated with object instances in the image as $\pmb{b} \in \mathbb{R}^4$, and use the model with parameters $\theta$ to model the bounding box regression $p_{\theta}(b|x)$ and the label classification $p_{\theta}(y|x, b)$. Given a test input $\hat{x}$, the OOD detection scoring function for entity detection and recognition can be unified as $S(\hat{x}, \hat{b})$, where $\hat{b}$ denotes the object instance predicted by the object detector. In particular, for document entity recognition, since the bounding boxes are provided, the OOD score can be simplified as $S(\hat{x}, \bar{b})$, where $\bar{b}$ is the given object instance.
+
+Document Object Detection. For document object detection, we use PubLayNet as the ID dataset and construct the OOD dataset from IIIT-AR-13K. Unlike PubLayNet, where the documents are scientific articles, IIIT-AR-13K is a dataset for graphical object detection in business documents (e.g., annual reports), so there exists an obvious domain gap. We select natural image as the OOD entity class and keep only the images that contain it. Two object detection models are considered in this paper: (1) a vanilla Faster-RCNN with a ResNet-50 visual backbone, and (2) Faster-RCNN with VOS (Du et al., 2022), a recent unknown-aware learning framework that improves OOD detection performance for natural images. Following the original paper, we use 1,000 samples for each ID class to estimate the class-conditional Gaussian statistics.
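For a single predicted box $\hat{b}$ (or a given box $\bar{b}$ in entity recognition), the score $S(\hat{x}, \hat{b})$ is computed from the class logits the model assigns to that box. A minimal sketch of two logit-based choices, MSP and (negative) energy, with helper names of our own:

```python
import numpy as np

def msp_score(box_logits: np.ndarray) -> float:
    """Maximum softmax probability of one box's class logits (higher = more ID)."""
    z = box_logits - box_logits.max()  # stabilized softmax
    probs = np.exp(z) / np.exp(z).sum()
    return float(probs.max())

def energy_score(box_logits: np.ndarray, temperature: float = 1.0) -> float:
    """Negative free energy, T * logsumexp(logits / T) (higher = more ID)."""
    z = box_logits / temperature
    m = z.max()
    return float(temperature * (m + np.log(np.exp(z - m).sum())))

# A confident (peaked) box vs. a flat, OOD-looking box over 4 entity classes.
id_logits = np.array([8.0, 0.5, -1.0, 0.2])
ood_logits = np.array([1.1, 0.9, 1.0, 0.8])
print(msp_score(id_logits) > msp_score(ood_logits))        # True
print(energy_score(id_logits) > energy_score(ood_logits))  # True
```

Thresholding either score on detected boxes flags likely-OOD entities while leaving confident ID detections untouched.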
The models are trained for 180k iterations with a base learning rate of 0.01 and a batch size of 8 using the Detectron2 framework (Wu et al., 2019). The performance of the models is measured using the mean average precision (mAP) at intersection-over-union (IoU) thresholds [0.50:0.95] of bounding boxes.
+
+Document Entity Recognition. For entity recognition, we construct ID and OOD datasets from FUNSD. Each semantic entity includes a list of words, a label, and a bounding box. The standard label set for this dataset contains four categories: question, answer, header, and other. In this paper, we select entities labeled as other or header as OOD data, and the entities belonging to the other three categories as ID. Instead of treating entity recognition as a named-entity recognition problem, we follow UDoc and solve this problem at the semantic region level. We replace the sentence encoder in UDoc with a smaller sentence encoder (all-MiniLM-L6-v2) from Huggingface (Wolf et al., 2019). We also have the following model variants to verify the effectiveness of the combination of modalities: textual-only, visual-only, textual+spatial, visual+spatial, and visual+textual+spatial.
+
+We provide details on datasets and models as follows.
+
+# B.1 Datasets
+
+The original FUNSD (Jaume et al., 2019) dataset contains 149 training and 50 testing images. For document entity recognition, we treat entities with the category other/header as OOD entities. After the split, if we consider other as OOD, we have a total of 8,330 ID and 1,019 OOD entities. Otherwise, if we consider header as OOD, we have 8,981 ID and 368 OOD entities in total.
+
+For document object detection, we consider PubLayNet (Zhong et al., 2019), which contains $336\mathrm{K} / 11\mathrm{K}$ training/validation images with 5 categories (text, title, list, figure, and table). The original IIIT-AR-13K (Mondal et al., 2020) contains five categories (table, figure, natural image, logo, and signature).
In this paper, considering the overlap between IIIT-AR-13K and PubLayNet, we select those images containing natural images as the OOD test set. After filtering, we obtain 2,880 OOD entities across 1,837 document images.
+
+We consider two ID settings in this experiment. (1) PubLayNet: This is the original PubLayNet dataset. We treat all the entities in training/validation images as ID entities. (2) PubLayNet+IIIT-AR-13K (ID): Considering the domain shift between the ID data (PubLayNet) and the OOD data (IIIT-AR-13K), we combine the PubLayNet training data with the IIIT-AR-13K images whose annotations overlap with PubLayNet's (table and figure) and train the object detection model on the combined set.
+
+# B.2 Models
+
+Fig. 13 illustrates the entity recognition models used in this paper. We model entities at the region level instead of the token level, as regions provide richer semantic information. As for the pre-trained model, we adopt UDoc (trained on IIT-CDIP) since it models inputs at the regional level. Based on the UDoc framework, we develop the following models.
+
+# Vision/Vision+Layout:
+
+- ResNet-50: This model is composed of the ResNet-50 from pre-trained UDoc. It adopts RoI pooling followed by a classifier to extract the entity features.
+- ResNet-50+Position: This model also adapts UDoc's pre-trained ResNet-50 for further improvement. It makes the RoI features spatially aware by adding position embeddings, which are mapped from the bounding boxes via a linear mapping layer.
+
+# Text/Text+Layout:
+
+- Sentence BERT: This model adopts the language branch of UDoc and appends the classifier to the output of the sentence encoder.
+
+![](images/f1c75d15ae21f76e4491afaf6a3fc5ad750c0f81e0fab7f2a57d7592f3089702.jpg)
+(b) OOD detection results from different object detection methods and models.
+Figure 14: Ablation on document entity recognition and object detection. Numbers are reported in FPR95.
+
+![](images/6a13b259845932a94df9bd7401b4c1fd5cb34fddcbd94e79d6e72b83660219b2.jpg)
+(a) Comparison of OOD detection methods on different models on two OOD classes: other and header.
+
+- Sentence BERT+Position: This model is similar to the above model but adds position embeddings to the sentence embeddings.
+
+# Vision+Text+Layout:
+
+- ResNet-50+Sentence BERT: This model follows the same framework as UDoc, but replaces the sentence encoder in the original design with a smaller sentence encoder (all-MiniLM-L6-v2).
+- SwinT+Sentence BERT: This model replaces the ResNet-50 visual backbone with a pre-trained tiny Swin Transformer (swin-tiny-patch4-window7-224) from Huggingface.
+
+All the models are fine-tuned with the cross-entropy loss for 100 epochs, using a learning rate of $10^{-5}$ and a batch size of 8 on an A100 GPU.
+
+# B.3 Summary of Observations
+
+We provide a summary of observations here and hope to inspire future work on a thorough investigation of OOD detection for entity-level tasks. To identify entity types, models should not only understand the words but also utilize spatial and visual information.
+
+For document entity recognition, the comparison of distance-based and logit-based OOD detection methods across different models is shown in Fig. 14a. More details are shown in Table 2. We see that models can better predict the entity type and also achieve better OOD robustness with the help of spatial information. Considering the weak language dependency between entities, it is not surprising that vision-based models achieve better performance than text-based models. In particular, UDoc with ResNet-50 achieves the best performance on both OOD test sets, illustrating that visual information plays a major role in increasing the discrimination of entities with similar semantics. For document object detection, we summarize our findings in Fig. 14b and describe them in more
We can see that the OOD detection performance is further improved by introducing document images from IIIT-AR-13K with the same ID annotations as training data. + +To provide more intuitions, in Fig. 15, we visualize the document entity recognition OOD detection results. In Fig. 16, we visualize the prediction on sample OOD images, using object detection models trained without VOS (top) and with VOS (bottom), respectively. We can see that vanilla Faster RCNN trained on PubLayNet produces false positives when applied to the OOD document images from IIIT-AR-13K. Table 1 shows that introducing the unknown-aware learning method optimized for both ID and OOD can reduce the FPR95 while preserving the mAP on the ID data. This experiment indicates that incorporating uncertainty estimation into the entity detection training procedure can improve the reliability of the document object detection system. + +# C Detailed Experimental Results + +- Table 2 corresponds to the results shown in Fig. 15 and Fig. 14a. +- Table 1 corresponds to the results shown in Fig. 16 and Fig. 14b. +- Table 3 and Table 7 correspond to the results shown in Fig. 4a. +- Table 4 and Table 5 correspond to the results shown in Fig. 4c. +- Table 6 corresponds to the results shown in Fig. 8 and Fig. 9. +- Table 9 and Table 8 correspond to the results shown in Fig. 6 and Fig. 9. +- Table 10 and Table 11 correspond to the analysis for Sec. 4 and Sec. 4.2. +- Table 12 corresponds to the results shown in Fig. 9. + +![](images/79de30db107355133e0f289c7f4db6feb3f9ac012befdf84ed5b4e4b131ef632.jpg) +Figure 15: Visualization of detected OOD entities on the form images. The top part shows the entities in blue are entities annotated as other. The bottom part shows the detected OOD entities (green). We also show failure cases on the right part. 
+
+![](images/757e27df2d9e25d1d977d414ed4a1f7dabd77e97d7ddf7ae1f976564f4d51dff.jpg)
+Figure 16: Visualization of objects detected on the OOD images (from IIIT-AR-13K) by a vanilla Faster-RCNN (top) and by Faster-RCNN with VOS (bottom). Objects in blue boxes are detected and classified as one of the ID classes. Detecting OOD objects (green) reduces false positives among the detections. We also visualize detected objects on the ID images. There is a clear difference between PubLayNet and IIIT-AR-13K: entities and annotations of natural images rarely exist in PubLayNet.
+
+Table 1: Comparison with different training and detection methods.
OOD metrics (FPR95, AUROC, AUPR) are computed on IIIT-AR-13K (natural image as OOD); mAP is computed on the PubLayNet ID data.

| Models | ID Dataset | OOD Score | FPR95↓ | AUROC↑ | AUPR↑ | mAP (ID)↑ |
| --- | --- | --- | --- | --- | --- | --- |
| Vanilla Faster-RCNN | PubLayNet | MSP | 74.33 | 79.12 | 98.41 | 92.6 |
| Vanilla Faster-RCNN | PubLayNet | Energy | 55.96 | 83.55 | 98.73 | 92.6 |
| Faster-RCNN with VOS | PubLayNet | MSP | 63.65 | 79.37 | 98.57 | 92.2 |
| Faster-RCNN with VOS | PubLayNet | Energy | 55.61 | 80.60 | 98.67 | 92.2 |
| Faster-RCNN with VOS | PubLayNet+IIIT-AR-13K (ID) | MSP | 56.57 | 82.94 | 98.59 | 92.4 |
| Faster-RCNN with VOS | PubLayNet+IIIT-AR-13K (ID) | Energy | 47.73 | 84.04 | 98.67 | 92.4 |
+ +Table 2: Comparison of different models in the FUNSD OOD setting. All models are initialized with UDoc pretrained on IIT-CDIP and fine-tuned on FUNSD data with ID entities. All values are percentages. S-BERT denotes Sentence-BERT. A lower FPR95 or a higher AUROC value indicates better performance. + +
Test F1MethodOther (OOD)IDHeader (OOD)IDTest F1MethodOther (OOD)IDHeader (OOD)ID
FPR95AUROCF1FPR95AUROCF1FPR95AUROCF1FPR95AUROCF1
ResNet-5075.15KNN1059.4779.1481.7963.97ResNet-50+Position75.82KNN1073.2173.1990.2261.42
KNN2069.9778.1581.2563.66KNN2072.9173.4488.0461.54
KNN5084.4977.4082.6162.86KNN5075.9674.4382.8860.93
KNN10097.9477.0877.6584.2461.6278.04KNN10079.6974.8583.7059.3977.98
KNN20097.8477.1594.2959.74KNN20086.0675.1491.5857.42
KNN40097.1576.0994.8457.53KNN40087.9374.9295.9255.37
MSP50.5475.8075.8276.55MSP77.8267.6084.2466.58
MaxLogit52.4073.7073.6476.72MaxLogit76.9467.0584.2465.41
Energy52.5073.7075.8276.55Energy76.6466.9384.5164.98
S-BERT77.15KNN1093.7248.4492.6660.99S-BERT+Position82.69KNN1097.4541.2493.7562.38
KNN2093.9247.6592.9359.00KNN2097.5539.9193.4861.51
KNN5093.6248.9493.2157.90KNN5097.1539.5692.3961.76
KNN10093.9248.7993.2155.07KNN10097.0641.6791.8560.99
KNN20093.9247.8582.1293.4852.8682.41KNN20096.5741.8587.0859.0887.01
KNN40094.1146.2195.3849.86KNN40097.2540.8390.2254.03
MSP93.6254.9194.2952.14MSP88.4261.1190.7659.58
MaxLogit93.7254.7594.5756.51MaxLogit89.7060.1988.8660.92
Energy93.2354.8893.2158.22Energy90.4859.6189.9561.12
ResNet-50+S-BERT89.11KNN1045.9387.8553.8087.97SwinT+S-BERT86.00KNN1063.3083.6481.5264.08
KNN2053.5886.7155.7187.06KNN2066.7382.5381.5261.50
KNN5073.2184.3662.7785.49KNN5070.1780.2182.3457.77
KNN10089.7083.0169.0283.60KNN10083.9177.7183.1554.97
KNN20096.6681.9093.1375.5480.8593.18KNN20095.3975.7990.8250.5790.40
KNN40098.8281.0091.5877.42KNN40096.7675.4999.7347.45
MSP45.4487.8267.3972.85MSP69.2870.7080.7152.02
MaxLogit45.5390.5863.0472.39MaxLogit67.1274.4181.7952.77
Energy45.5390.5763.8672.37Energy67.2274.4181.7952.77
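The KNN-k rows in Table 2 use a distance-based OOD score in the style of Sun et al. (2022): features are L2-normalized, and the negative distance to the k-th nearest ID training feature serves as the score. A small illustrative sketch using brute-force search (a real implementation would use an approximate-nearest-neighbor library):

```python
import math

def l2_normalize(v):
    """Scale a feature vector to unit L2 norm (zero vectors pass through)."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def knn_score(feature, id_features, k):
    """Negative distance to the k-th nearest ID feature; higher = more ID-like."""
    q = l2_normalize(feature)
    dists = sorted(math.dist(q, l2_normalize(f)) for f in id_features)
    return -dists[k - 1]
```

Larger k smooths the score over a bigger neighborhood but, as the KNN10 vs. KNN400 rows suggest, FPR95 can degrade once k exceeds the typical size of an ID cluster.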
+ +Table 3: OOD detection performance for document classification with different amounts of pre-training data from IIT-CDIP. ID Acc denotes the accuracy obtained on the ID test data. We report the KNN-based scores for both pre-trained and fine-tuned models. Sci. Poster denotes document images converted from the NJU-Fudan Paper-Poster Dataset. Receipt denotes receipt images collected from the CORD receipt understanding dataset. For the in-domain OOD test data, we also report the averaged scores. + +
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
RoBERTaBase(10%)Pre-train on 10% IIT-CDIP→ fine-tune on RVL-CDIP ID data
90.59MSP92.7569.2492.2166.9394.6565.4092.0070.0992.9067.9296.5166.9399.1052.90
MaxLogit98.3677.8597.2378.5198.7672.8498.8678.0898.3076.82100.0078.69100.0063.74
Energy98.6077.8197.5578.4998.9672.7998.9478.0098.5176.77100.0078.68100.0063.70
GradNorm98.0479.2697.0776.8598.5672.8398.6280.5598.0777.37100.0085.23100.0064.10
KNN1063.2188.1865.8188.0573.0284.6367.7488.9267.4587.4469.7788.4990.5084.44
KNN2063.5388.0765.8987.9072.7584.4867.3388.8167.3887.3268.6088.1391.1084.09
KNN5064.1787.8966.9787.7773.3484.2367.2188.6067.9287.1272.0987.4791.6083.59
KNN10064.4987.6467.7887.5573.4683.9467.2988.3768.2686.8872.0986.8391.5083.21
Pre-train on 10% IIT-CDIP (no fine-tune)
-KNN1088.0766.9492.1366.6294.1361.9094.4054.5792.1862.5167.4487.0462.1084.94
KNN2088.5966.0292.6565.2594.1360.8394.7253.7992.5261.4777.9185.3864.6083.86
KNN5089.7564.4093.5363.1294.3758.9895.1752.3393.2059.7183.7282.9769.2082.29
KNN10090.2362.9493.8561.2894.4157.4595.1351.2893.4058.2483.7280.9170.1081.05
RoBERTaBase(20%)Pre-train on 20% IIT-CDIP→ fine-tune on RVL-CDIP ID data
90.71MSP94.2868.0294.4665.9896.0162.9894.8165.9894.8965.7495.3563.5599.1054.99
MaxLogit97.3677.8297.1979.1698.4072.6498.3477.6897.8276.82100.0077.3699.6066.63
Energy98.0477.8097.4379.1598.7672.6198.5877.6498.2076.80100.0077.3299.6066.61
GradNorm97.3680.6896.8376.0498.4473.2997.8981.3797.6377.85100.0086.1899.5067.49
KNN1063.5788.3067.0687.0673.6683.9273.0987.8069.3486.7769.7788.0187.6083.81
KNN2063.8588.2067.4686.9073.9483.7872.9387.7069.5486.6469.7787.6388.3083.53
KNN5063.8988.0267.5486.7174.3883.5572.2487.4669.5186.4370.9387.0988.2083.12
KNN10064.8587.8167.6286.4574.9083.2572.6587.2470.0086.1972.0986.6588.3082.89
Pre-train on 20% IIT-CDIP (no fine-tune)
-KNN1087.1568.2790.8866.8992.2662.3995.0153.0291.3262.6443.0292.2957.0087.67
KNN2087.3167.3592.0465.5491.5461.4094.9752.3391.4661.6647.6791.1862.6086.61
KNN5088.3965.7192.6963.4592.1859.5795.2550.9792.1359.9256.9889.6465.7085.20
KNN10088.8364.2093.1361.6192.2257.9995.4549.9592.4158.4458.1488.3666.9084.17
RoBERTaBase(40%)Pre-train on 40% IIT-CDIP→ fine-tune on RVL-CDIP ID data
90.76MSP92.6770.0993.9365.6995.0563.1995.5065.5494.2966.1395.3563.6395.4064.97
MaxLogit98.0878.7297.8779.8598.4471.6398.3075.4198.1776.4098.8478.0798.9075.65
Energy98.4878.6997.9179.8398.6871.6198.5075.4098.3976.38100.0078.0498.5075.60
GradNorm98.0481.0397.4776.7398.4472.7797.4079.1197.8477.41100.0087.4797.6077.12
KNN1060.5788.7968.8686.3675.2683.5573.9087.1269.6586.4667.4489.9072.7089.49
KNN2061.3788.7269.0686.2475.4683.4373.4687.0069.8486.3568.6089.6673.5089.25
KNN5062.2188.5269.1886.0875.6683.2173.4286.7170.1286.1370.9389.2074.7088.89
KNN10063.7788.3069.7985.8476.0282.9374.1986.4670.9485.8874.4288.8475.3088.69
Pre-train on 40% IIT-CDIP (no fine-tune)
-KNN1085.7169.0890.8468.6890.4662.5294.7651.7690.4463.0125.5895.8357.3088.60
KNN2085.2768.2191.6467.4889.7461.3294.8151.0190.3662.0029.0795.2262.3087.61
KNN5086.1966.6092.2165.5490.3059.3594.9349.6090.9160.2741.8694.3266.8086.25
KNN10087.1965.0492.5763.8390.5057.7495.0948.4491.3458.7645.3593.6668.3085.14
RoBERTaBase(100%)Pre-train on 100% IIT-CDIP (no fine-tune)
-KNN1084.4370.2090.2068.5490.9863.1894.7252.1690.0863.5227.9194.1046.0091.37
KNN2084.5169.3091.2867.3590.3861.9694.7251.4390.2262.5133.7293.3951.5090.55
KNN5085.6767.7591.9265.3590.8259.7994.8949.7790.8260.6639.5392.2856.7089.32
KNN10086.5566.0892.9763.4691.4658.0095.4148.3991.6058.9844.1991.2961.6088.18
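All tables report FPR95 (the false positive rate on OOD samples at the threshold that accepts 95% of ID samples) and AUROC. Both can be computed directly from the two score lists; a minimal sketch under the convention that higher scores mean more ID-like:

```python
def fpr_at_95_tpr(id_scores, ood_scores):
    """FPR of OOD samples at the threshold accepting 95% of ID samples."""
    thresh = sorted(id_scores, reverse=True)[int(0.95 * len(id_scores)) - 1]
    return sum(s >= thresh for s in ood_scores) / len(ood_scores)

def auroc(id_scores, ood_scores):
    """Probability that a random ID sample outscores a random OOD sample
    (ties counted as 0.5); threshold-free ranking quality."""
    wins = sum(1.0 if i > o else 0.5 if i == o else 0.0
               for i in id_scores for o in ood_scores)
    return wins / (len(id_scores) * len(ood_scores))
```

This pairwise AUROC is quadratic in the number of samples; for large test sets one would sort the scores once instead, but the definition above is the one the reported numbers reflect.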
+ +Table 4: OOD detection performance for document classification with different amounts of pre-training data from IIT-CDIP$^{-}$ (pseudo-OOD categories removed). + +
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
RoBERTaBase(10%)Pre-train on 10% IIT-CDIP→ fine-tune on RVL-CDIP ID data
90.62MSP90.0769.0089.9268.8692.5864.1691.0766.7890.9167.2096.5154.4796.7059.63
MaxLogit97.7678.4097.7180.5898.6471.2698.7076.3898.2076.66100.0073.5199.8073.32
Energy98.1678.3597.7580.5598.8471.2098.9076.3298.4176.60100.0073.4699.8073.31
GradNorm97.6879.9297.2779.4298.5671.3198.5079.4498.0077.52100.0082.6299.6075.85
KNN1065.8587.8966.6988.1275.9882.8274.5586.8570.7786.4287.2185.1683.9087.91
KNN2066.3387.8066.8588.0475.9482.7073.9486.7570.7686.3287.2184.6383.6087.71
KNN5066.7787.6667.3088.0076.0282.4973.6686.5270.9486.1788.3783.7383.9087.34
KNN10067.2587.4267.7487.8476.1882.1873.9986.2671.2985.9289.5382.8583.9086.98
Pre-train on 10% IIT-CDIP$^{-}$ (no fine-tune)
-KNN1086.3565.4885.7470.8492.9459.5593.1456.6289.5463.1229.0795.4287.6083.13
KNN2086.8764.4887.1469.6893.3058.4193.3055.9190.1562.1237.2194.7588.0081.44
KNN5087.7562.7388.9967.8093.5056.5493.7554.5291.0060.4047.6793.7190.3078.97
KNN10088.4361.1789.5966.0593.6254.9193.9953.4091.4158.8848.8493.0991.5077.00
RoBERTaBase(20%)Pre-train on 20% IIT-CDIP→ fine-tune on RVL-CDIP ID data
90.65MSP96.0467.5894.9068.3296.0564.9296.2368.6295.8067.36100.0061.4998.7056.38
MaxLogit97.9676.9297.5980.6898.4872.3198.7477.7298.1976.91100.0075.9199.5069.21
Energy98.1676.8998.2380.6598.8872.2699.0777.6798.5876.87100.0075.8999.5069.18
GradNorm97.8478.2397.3178.5798.0071.4498.4680.0397.9077.07100.0085.8099.0069.54
KNN1066.0587.6067.7087.9473.4283.1073.5087.9670.1786.6577.9190.1990.1084.32
KNN2066.1787.5068.3887.8373.9082.9373.6687.8270.5386.5277.9189.8489.8084.13
KNN5067.2187.2668.4687.7374.1882.6373.6687.5870.8886.3079.0789.2489.6083.80
KNN10068.7886.9869.1487.5375.5082.3074.2787.3671.9286.0482.5688.6889.8083.59
Pre-train on 20% IIT-CDIP$^{-}$ (no fine-tune)
-KNN1085.6366.1085.1770.3492.5860.2993.4356.8589.2063.4030.2395.7283.2083.84
KNN2086.3165.1785.9869.1393.3059.0993.4756.0589.7762.3634.8895.0884.9082.16
KNN5087.3163.5087.6367.1193.3857.1794.1654.6090.6260.6044.1994.0787.5079.74
KNN10087.8362.0688.2765.3193.6255.6594.3253.5691.0159.1448.8493.4888.8077.77
RoBERTaBase(40%)Pre-train on 40% IIT-CDIP→ fine-tune on RVL-CDIP ID data
90.72MSP93.8468.8693.6967.6295.4163.9194.2065.2594.2866.4196.5163.3298.9054.02
MaxLogit97.1678.5696.8780.1898.6871.8498.5874.4497.8276.26100.0076.7299.1065.41
Energy97.4078.5397.1580.1798.6871.7998.7874.3998.0076.22100.0076.6799.5065.39
GradNorm97.2480.5996.9578.0198.5272.1298.3477.1697.7676.97100.0086.9499.7067.46
KNN1066.8987.9168.5886.9077.6182.3176.5885.3972.4185.6375.5889.4586.4084.23
KNN2067.5787.8068.9086.7977.7782.1976.3085.2272.6485.5080.2389.1786.8083.85
KNN5067.9787.5869.6786.6778.0181.9876.6684.8573.0885.2780.2388.6387.2083.21
KNN10069.4687.3471.2386.4779.0181.7277.4884.5774.3085.0282.5688.1988.0082.72
Pre-train on 40% IIT-CDIP$^{-}$ (no fine-tune)
-KNN1088.7966.1488.3568.9293.5060.3095.5451.0991.5461.6137.2195.3755.9091.90
KNN2089.5965.0789.8067.6193.8959.1095.5850.1792.2160.4946.5194.4161.5091.00
KNN5090.5963.3991.6465.6893.7757.3595.6648.6392.9258.7653.4993.0666.4089.72
KNN10091.1961.7992.3763.9093.6655.7895.6247.4293.2157.2265.1291.9968.3088.72
RoBERTaBase(100%)Pre-train on 100% IIT-CDIP→ fine-tune on RVL-CDIP ID data
90.74MSP94.1268.2494.2966.1895.9363.8395.2165.6694.8965.9898.8459.2596.5065.42
MaxLogit97.2478.1597.1980.2798.3672.1698.3875.8297.7976.60100.0073.2899.3075.58
Energy97.3278.1397.5180.2698.6472.1298.7075.7898.0476.57100.0073.2799.6075.52
GradNorm97.1680.0797.3977.8698.4071.8398.0579.0897.7577.21100.0086.3299.4073.52
KNN1066.8187.8669.6786.9177.4982.6074.5986.2872.1485.9181.4087.7476.9088.49
KNN2066.7387.7570.3186.7877.8982.5175.2886.1372.5585.7981.4087.4377.5088.39
KNN5067.2587.5470.5986.6277.8582.3275.4185.8472.7885.5883.7286.8577.8088.23
KNN10068.1387.3471.4786.3978.0582.0876.1485.6073.4585.3583.7286.3978.5088.21
Pre-train on 100% IIT-CDIP$^{-}$ (no fine-tune)
-KNN1087.9566.4484.4972.3495.0158.4796.2349.0790.9261.5831.4096.1941.6094.78
KNN2088.9165.3985.7071.2595.3357.1996.5948.0691.6360.4734.8895.5048.4094.12
KNN5090.5963.6987.1469.4595.5354.9397.0846.2692.5858.5843.0294.5155.2093.05
KNN10091.7562.0888.5567.8595.8953.0597.2044.8193.3556.9550.0093.6061.1092.04
+ +Table 5: OOD detection performance for document classification with different amounts of pre-training data from IIT-CDIP$^{-}$ (pseudo-OOD categories removed). + +
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
LayoutLMBase(10%)Pre-train on 10% IIT-CDIP→ fine-tune on RVL-CDIP ID data
95.89MSP42.4376.3156.0569.3954.3170.2547.0073.9349.9572.4743.0276.5544.1075.68
MaxLogit41.9191.2755.0489.3354.1985.2044.9790.9349.0389.1838.3794.2741.3091.38
Energy41.8391.2954.9289.3554.1185.2245.0190.9748.9789.2138.3794.2941.1091.42
GradNorm39.1591.8054.0486.9351.8886.0542.4991.6546.8989.1138.3791.7941.4091.82
KNN1031.6394.2546.5290.9846.7790.4940.8392.7941.4492.1324.4295.9530.3095.66
KNN2032.0394.1146.6590.8947.0190.3241.6092.6341.8291.9926.7495.7631.8095.44
KNN5034.3993.7549.3490.4649.3689.9444.5292.2344.4091.6033.7295.3333.2095.38
KNN10036.1593.4751.2790.1951.3689.6546.6391.9946.3591.3233.7295.1035.1095.16
Pre-train on 10% IIT-CDIP$^{-}$ (no fine-tune)
-KNN1090.9572.3094.6665.4990.9472.3894.4067.3292.7469.3748.8491.5656.0075.08
KNN2091.5970.5494.9863.9191.6670.7494.8165.9593.2667.7853.4990.4157.6073.51
KNN5093.0767.7695.5461.2492.7868.2795.2564.0194.1665.3255.8188.3758.5071.06
KNN10093.5565.4195.9059.1393.1066.1995.5462.4194.5263.2867.4486.4460.2069.09
LayoutLMBase(20%)Pre-train on 20% IIT-CDIP→ fine-tune on RVL-CDIP ID data
95.84MSP49.2076.7861.5170.1362.3769.4955.5273.6457.1572.5150.0077.9950.7075.90
MaxLogit41.0391.5754.0088.4556.4285.7047.0090.1949.6188.9838.3793.6241.8090.56
Energy40.9591.6053.7688.4756.1985.7246.7990.2249.4289.0038.3793.6541.7090.59
GradNorm37.1591.8954.1684.9953.0386.2843.9590.9447.0788.5240.7090.4142.4090.91
KNN1031.6394.1747.6990.2947.4990.5040.5492.9241.8491.9731.4095.6534.5095.15
KNN2032.5594.0347.8990.2248.3290.3440.9192.7642.4291.8433.7295.4535.4094.97
KNN5035.7193.6749.7489.8251.0489.9944.1292.3945.1591.4736.0595.0136.2094.92
KNN10036.7593.3850.3089.6051.6889.7144.9792.1745.9291.2236.0594.7336.5094.71
Pre-train on 20% IIT-CDIP$^{-}$ (no fine-tune)
-KNN1090.3975.2579.5979.4393.1472.4197.1266.9990.0673.5250.0091.3624.7096.34
KNN2090.6373.7580.4778.5193.8170.5897.1665.5490.5272.1055.8189.9126.9095.94
KNN5091.6771.1982.5676.9094.4567.8297.3662.9891.5169.7267.4487.2929.1095.31
KNN10091.9569.1983.7375.5595.3365.3797.3660.8492.0967.7474.4284.7830.3094.75
LayoutLMBase(40%)Pre-train on 40% IIT-CDIP→ fine-tune on RVL-CDIP ID data
96.01MSP51.7675.7662.3969.6363.3768.7554.2274.0357.9472.0455.8171.6942.5080.56
MaxLogit42.0391.2954.2489.4757.3084.4445.6690.0249.8188.8052.3393.0833.0092.89
Energy41.8791.3154.2089.4957.2684.4745.5090.0549.7188.8352.3393.1332.5092.92
GradNorm38.1991.6653.6486.8555.0385.6643.1891.4547.5188.9052.3392.3934.6092.95
KNN1031.4794.4347.1390.6348.2090.4538.1193.3041.2392.2027.9195.7824.7096.09
KNN2032.5994.2947.6190.5549.6090.2739.2593.1442.2692.0632.5695.6025.5095.95
KNN5034.8793.9349.5090.1052.1189.8742.2992.7544.6991.6638.3795.1626.4095.95
KNN10036.5593.6550.3889.8253.5589.5743.7192.5146.0591.3943.0294.8927.7095.77
Pre-train on 40% IIT-CDIP$^{-}$ (no fine-tune)
-KNN1087.0780.4471.7683.7286.7582.3196.1076.3685.4280.7175.5884.965.9098.24
KNN2088.9579.0374.9382.3188.9981.1196.7175.0187.4079.3680.2382.567.2097.93
KNN5091.4777.2380.3991.7891.7879.7597.4072.6090.2677.3787.2178.199.0097.92
KNN10090.7575.2784.7777.4891.7478.3197.1670.2691.1075.3389.5374.1114.2097.49
LayoutLMBase(100%)Pre-train on 100% IIT-CDIP→ fine-tune on RVL-CDIP ID data
96.38MSP43.4376.1257.2169.1658.3868.5646.1474.7651.2972.1538.3778.6728.3083.78
MaxLogit35.1991.2950.2288.9853.1984.5439.9890.7144.6488.8824.4296.3921.4095.57
Energy35.2391.3250.2289.0053.1984.5539.9890.7344.6588.9024.4296.4421.4095.58
GradNorm30.3092.5448.6188.1848.9686.5836.1692.6341.0189.9819.7796.7119.2096.35
KNN1026.5094.9543.4791.6945.0990.9534.0993.8637.2992.8619.7797.3917.8096.37
KNN2027.2294.8344.0791.5845.4190.7934.6293.7137.8392.7319.7797.2218.4096.26
KNN5029.4694.4946.2891.1247.6990.4537.5093.3340.2392.3517.4497.0418.7096.80
KNN10032.1594.2648.1790.8550.6490.2140.3893.1242.8392.1119.7796.8820.7096.74
Pre-train on 100% IIT-CDIP$^{-}$ (no fine-tune)
-KNN1078.7481.6774.4580.8680.5383.7195.0177.3382.1880.8938.3794.6217.7096.12
KNN2082.3980.1377.8679.3183.4882.7595.4575.9384.8079.5344.1993.4214.6096.13
KNN5086.0377.6582.8076.6086.9181.3096.1073.0787.9677.1654.6591.099.6097.21
KNN10089.1175.5188.0374.0890.6279.7896.7170.4391.1274.9566.2888.5018.0096.82
+ +Table 6: OOD detection performance for document classification. Spatial-RoBERTaBase (Pre) or SRBase (Pre) denotes applying the spatial-aware adapter in the word embedding layer. Spatial-RoBERTaBase (Post) or SRBase (Post) denotes applying the spatial-aware adapter at the output layer. + +
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
RoBERTaBaseFine-tune on RVL-CDIP (ID)
90.19MSP91.1973.7090.8473.4991.8271.5391.0372.3591.2272.7793.0280.9497.6074.59
MaxLogit96.8879.0496.8779.3898.0475.8598.5477.4597.5877.93100.0082.7699.4079.99
Energy97.4878.9697.2379.3198.4075.7199.0777.2598.0477.81100.0082.7199.2080.06
KNN1053.2088.9458.5088.6261.3786.2563.7288.2959.2088.0222.0996.5268.6092.47
KNN2053.4488.8158.9088.5061.6586.0763.6088.1559.4087.8827.9196.3871.7092.02
KNN5053.8488.5259.4288.4262.0185.8164.1687.8059.8687.6432.5696.0774.3091.37
KNN10055.5688.1060.6788.2063.6985.4164.7787.4261.1787.2834.8895.6776.5090.81
No fine-tune
-KNN1093.1163.5288.1566.3494.5766.9298.4253.3793.5662.5425.5895.9986.0072.99
KNN2092.9963.1888.3965.7894.5766.0898.4252.1093.5961.7826.7495.7187.3070.44
KNN5092.6762.4189.3164.7294.1764.7498.3450.0793.6260.4826.7495.0290.8066.04
KNN10092.6761.5789.5963.5794.0163.4598.1748.3393.6159.2329.0794.3492.8061.62
SRBase(Pre)Pre-train on IIT-CDIP → fine-tune on RVL-CDIP (ID)
97.11MSP46.8074.5254.6470.5856.2669.7254.3070.7453.0071.3944.1975.7957.2069.23
MaxLogit39.4388.6446.4889.9249.9685.7548.3087.6646.0487.9933.7293.4250.6088.70
Energy39.4388.6646.4889.9450.0085.7648.3087.6746.0588.0133.7293.4550.6088.71
KNN1031.9194.4142.1992.6546.6589.3142.0992.6540.7192.2610.4797.4552.1092.93
KNN2032.3194.2842.5992.6447.0189.2143.4392.5341.3492.1611.6397.3153.3092.80
KNN5034.3993.9943.8392.3649.0488.9345.4192.1943.1791.8712.7997.0153.1092.51
KNN10035.1593.7644.2792.1549.4888.6546.1491.9743.7691.6315.1296.8149.7092.44
Pre-train on IIT-CDIP (no fine-tune)
-KNN1078.8278.9279.9973.8977.6981.3291.4876.5282.0077.6610.4798.0887.3080.89
KNN2079.7477.9582.6472.1779.8180.4092.1375.1183.5876.4116.2897.6092.1076.94
KNN5080.4276.8785.1369.6282.1278.9392.9873.0185.1674.6122.0996.6695.2070.53
KNN10081.4375.7086.9067.1983.4077.1293.3871.0786.2872.7727.9195.8696.6064.56
SRBase(Post)Fine-tune on RVL-CDIP (ID)
97.10MSP58.0578.3776.4665.4465.8075.0061.8177.5965.5374.1054.6581.6593.5052.85
MaxLogit49.2089.8272.3680.2857.8287.2852.5290.0457.9886.8634.8894.8891.6073.37
Energy47.5689.8771.9680.3056.5887.3251.1890.1056.8286.9034.8895.0491.3073.39
KNN1037.4393.3764.0886.8349.4489.8246.9292.1749.4790.5526.7496.3890.1080.21
KNN2038.2793.2565.3386.5250.8089.6648.0991.9950.6290.3526.7496.2391.2079.57
KNN5040.4392.9867.3886.0252.8389.3850.6591.5852.8289.9926.7495.8992.1078.48
KNN10041.9992.7767.9485.6253.8789.1751.2291.3353.7689.7229.0795.6792.6077.68
SRLarge(Pre)Pre-train on IIT-CDIP → fine-tune on RVL-CDIP (ID)
97.37MSP62.3767.8271.2763.3672.8762.5470.2563.8469.1964.3976.7460.6167.0065.48
MaxLogit33.3990.1539.2589.8742.3088.1237.0591.6638.0089.9531.4092.4127.7094.23
Energy33.3990.1639.2589.8842.3088.1337.0591.6638.0089.9631.4092.4227.7094.22
KNN1028.1894.4742.4393.0137.4391.7431.1394.7234.7993.4925.5896.2418.6096.28
KNN2028.7894.3242.4392.9038.0791.5832.0294.5535.3393.3425.5896.0218.6096.33
KNN5030.2293.9543.7192.6940.0691.2634.5494.1037.1393.0026.7495.5221.4096.14
KNN10030.8693.7144.1192.5640.6691.0535.4793.8837.7892.8026.7495.2221.7096.11
Pre-train on IIT-CDIP (no fine-tune)
-KNN1068.4980.4388.2369.8371.7583.1188.1173.3279.1476.6775.5884.3649.8092.02
KNN2071.7478.7790.2467.4175.6681.3889.0471.1481.6774.6881.4081.5562.2090.29
KNN5075.4676.4992.8163.8280.1778.7290.4267.8484.7271.7282.5677.1578.2087.49
KNN10077.6274.5994.4260.9483.1676.2591.8065.3086.7569.2784.8873.3488.2084.96
+ +Table 7: OOD detection performance for document classification with different amounts of pre-training data from IIT-CDIP. + +
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
VITBase(10%)Pre-train on 10% IIT-CDIP→ fine-tune on RVL-CDIP (ID)
94.89MSP55.8088.3748.6191.3863.9383.8355.5288.5555.9688.0352.0589.6034.1095.04
MaxLogit50.3691.5137.7794.3062.3787.9753.6992.1151.0591.4738.3694.2428.6096.06
Energy50.5691.4837.0894.3363.4987.8955.1992.0051.5891.4238.3694.2929.4095.96
GradNorm55.5679.7545.9684.7966.9274.0758.4481.0756.7279.9247.9582.0434.9091.68
KNN1050.4092.6043.5193.9251.6090.5474.4788.8755.0091.4820.5597.199.2098.21
KNN2049.8092.7040.3894.4353.3990.2674.7288.7754.5791.5423.2996.9810.4098.05
KNN5046.7292.8934.2795.2456.0789.9274.5588.4552.9091.6227.4096.5612.8097.80
KNN10045.4892.8929.3395.6757.6289.5675.0488.2551.8791.5930.1496.2115.0097.57
Pre-train on IIT-CDIP (no fine-tune)
-KNN1098.9243.0897.6749.0099.5254.4199.3540.2698.8646.6993.1592.516.9098.06
KNN2098.8842.4797.7548.5799.5253.7599.3539.5698.8846.0994.5292.248.6097.91
KNN5098.8041.7097.8348.0499.5252.9199.3538.6298.8845.3295.8991.8010.6097.66
KNN10098.7641.2097.7947.7099.4852.3299.3538.0198.8444.8198.6391.3114.5097.41
VITBase(20%)Pre-train on 20% IIT-CDIP→ fine-tune on RVL-CDIP (ID)
94.62MSP54.3689.0151.6391.3164.5785.2360.5188.6757.7788.5660.2789.3444.2093.73
MaxLogit44.3292.1638.2194.1864.9287.6358.5691.3351.5091.3245.2192.6339.7094.36
Energy44.3692.1737.8994.2466.5687.5160.3991.2252.3091.2846.5892.6241.5094.18
GradNorm90.5154.9292.0451.6794.2945.4198.1332.3693.7446.0995.8940.4489.7059.01
KNN1052.2092.5845.8493.7353.7990.7577.8487.0257.4291.0217.8197.3316.9097.40
KNN2051.6092.6643.5594.1555.6390.4678.0486.7957.2091.0219.1897.0619.4097.11
KNN5050.1292.8639.9894.8258.0290.1878.7786.5456.7291.1019.1896.6323.1096.68
KNN10048.0492.9134.7595.2860.3889.8878.9886.4255.5491.1220.5596.2726.2096.35
Pre-train on IIT-CDIP (no fine-tune)
-KNN1098.1641.1397.5147.1299.4853.0599.3138.7998.6245.0294.5291.808.0097.41
KNN2098.1240.7197.5146.7999.4852.5299.3138.3198.6044.5894.5291.488.7097.25
KNN5098.0440.1097.5546.3199.4851.8499.3937.6398.6243.9795.8991.0111.5096.99
KNN10098.0039.7497.5545.9899.4851.3499.3937.2698.6043.5897.2690.5514.6096.70
VITBase(40%)Pre-train on 40% IIT-CDIP→ fine-tune on RVL-CDIP (ID)
94.63MSP55.4888.6552.2791.5464.4985.5258.0889.2057.5888.7367.1284.6245.8093.82
MaxLogit47.1291.7440.0694.0961.0588.6856.5792.0151.2091.6369.8689.8132.9095.46
Energy47.1291.7339.9494.1062.3388.6258.6091.8852.0091.5869.8689.6532.7095.44
GradNorm47.0085.7641.9089.6460.6981.3753.7387.0650.8385.9664.3881.1234.0092.93
KNN1053.2892.1348.3392.9946.4592.2075.6188.8755.9291.5534.2595.536.8098.56
KNN2052.7692.2445.8893.5748.1291.9574.8488.7555.4091.6332.8895.217.8098.36
KNN5051.2892.5240.9494.5150.5291.7075.0888.4654.4691.8035.6294.6710.9098.04
KNN10050.3292.6236.1695.1253.3591.3675.9388.2453.9491.8439.7394.2513.6097.76
Pre-train on IIT-CDIP (no fine-tune)
-KNN1097.5640.6097.0346.2899.2453.7699.1539.6298.2445.0682.1992.021.0099.59
KNN2097.5640.0096.9545.8699.2453.1899.1539.1298.2244.5482.1991.631.0099.55
KNN5097.5639.2496.9945.2099.2452.3999.1538.4998.2443.8386.3091.071.0099.50
KNN10097.6038.7897.0344.7999.2451.7699.1538.1598.2643.3790.4190.671.2099.45
VITBase(100%)Pre-train on 100% IIT-CDIP→ fine-tune on RVL-CDIP (ID)
94.79MSP54.2888.8049.1491.8064.6084.4558.8588.7856.7288.4661.6489.4441.0094.27
MaxLogit44.9692.1338.0194.5263.9787.9756.4991.8150.8691.6168.4990.6534.6095.26
Energy45.7292.1138.0194.5565.8487.8657.9191.7051.8791.5672.6090.4134.8095.14
GradNorm48.7284.2144.3687.5063.4978.0756.2584.7953.2083.6460.2782.9635.6091.24
KNN1045.1693.1439.1394.6251.6890.8573.5888.8152.3991.8650.6893.0910.4098.04
KNN2044.8893.1436.6495.0453.3590.5974.2788.6752.2891.8650.6892.6712.0097.81
KNN5043.6793.1931.1895.6056.7490.2975.2888.4951.7291.8957.5392.2315.6097.45
KNN10043.6393.1527.5295.9458.7490.0276.1888.3851.5291.8761.6492.0118.9097.18
Pre-train on IIT-CDIP (no fine-tune)
-KNN1097.0442.3593.9750.1797.4152.6898.0143.1996.6147.1012.3397.473.1098.38
KNN2097.1641.9994.0149.9697.8152.0198.0942.7396.7746.6715.0796.953.0098.31
KNN5096.9641.6294.3449.5698.0051.2098.0542.2496.8446.1621.9296.082.7098.18
KNN10097.0041.4894.9049.3198.1250.6598.1342.0397.0445.8736.9995.292.3098.27
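Tables 3-8 also report GradNorm (Huang et al., 2021), which scores a sample by the L1 norm of the gradient of the KL divergence between the softmax output and the uniform distribution, taken with respect to the weights of the last linear layer. For a single linear layer that gradient is the outer product (softmax(z) - u) h^T, so its L1 norm factorizes and no backward pass is needed. A sketch under that simplification (variable names are ours):

```python
import math

def gradnorm_score(logits, penultimate):
    """GradNorm sketch: ||softmax(z) - u||_1 * ||h||_1, where u is the uniform
    distribution and h the penultimate-layer features. Higher = more ID-like,
    since confident (ID) predictions sit far from uniform and give large
    gradients."""
    c = len(logits)
    m = max(logits)  # stable softmax
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    p_gap = sum(abs(e / s - 1.0 / c) for e in exps)
    return p_gap * sum(abs(h) for h in penultimate)
```

Note that a perfectly uniform prediction scores exactly zero, the most OOD-like value under this score.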
+ +Table 8: OOD detection performance for document classification. Longformer$_{4096}$ denotes the original model from the Hugging Face model hub. Longformer$_{4096}$ (+) denotes the same model with additional pre-training on IIT-CDIP. + +
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
Longformer4096Fine-tune on RVL-CDIP (ID)
90.71MSP95.0064.3295.6262.1795.8960.5393.9566.8995.1263.4888.3777.5098.6054.72
MaxLogit97.1272.8497.0775.2298.2470.3995.8277.5797.0674.0090.7086.6299.6068.10
Energy97.4872.8297.3575.2198.3670.3796.5977.5697.4473.9991.8686.6399.8068.08
KNN1058.4588.2165.6586.8867.8083.9956.7889.5362.1787.1527.9196.0182.1086.31
KNN2058.9788.0465.5786.6068.1283.8057.3589.3462.5086.9429.0795.8282.6085.93
KNN5060.2587.6466.5786.2568.9183.4158.8188.9663.6486.5630.2395.4682.7085.27
KNN10061.9787.1968.1485.8170.1582.9560.4788.6065.1886.1434.8895.0482.8084.75
No fine-tune
-KNN1098.0455.4597.6359.9798.7651.7598.1353.1698.1455.0870.9388.69100.0064.97
KNN2098.1255.1997.6759.6498.8051.2798.1752.7198.1954.7070.9388.51100.0064.08
KNN5098.0054.8297.6359.1398.8050.5798.3052.0798.1854.1573.2688.29100.0062.82
KNN10097.9254.4897.6758.6298.8450.0098.3451.6298.1953.6874.4288.14100.0061.70
Longformer4096 (+)Pre-train on IIT-CDIP→fine-tune on RVL-CDIP (ID)
91.13MSP95.2064.0895.6261.3896.0559.4794.4863.1395.3462.0290.7067.2698.0055.52
MaxLogit96.9675.4196.5476.0397.8970.1596.7174.5697.0274.04100.0078.6599.7072.88
Energy97.2875.4096.5476.0398.2870.1497.1674.5597.3274.03100.0078.5999.7072.86
KNN1058.7389.2566.2187.5772.0383.7663.6888.7265.1687.3248.8494.7886.4087.84
KNN2058.6189.1865.9787.4571.6783.6963.3988.6164.9187.2348.8494.6285.3087.70
KNN5061.1788.9666.9787.2972.8383.4765.8388.3366.7087.0155.8194.2585.2087.39
KNN10061.7388.7966.9387.1173.3083.2466.1588.1567.0386.8255.8194.0084.7087.21
Pre-train on IIT-CDIP (no fine-tune)
-KNN1095.4861.4098.0753.6697.7355.5598.6648.7097.4954.8381.4091.1297.4046.27
KNN2095.5660.9297.9552.9597.4954.9798.5048.2197.3854.2684.8890.6297.5045.55
KNN5095.6059.9497.9551.7797.4153.9798.6247.2997.4053.2487.2189.9598.2044.18
KNN10095.6059.0497.9950.7497.2152.9998.5846.5197.3452.3288.3789.5298.5043.09
+ +Table 9: OOD detection performance for document classification. All models are pre-trained on ImageNet. + +
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
ResNet-50Pre-train on ImageNet→ fine-tune on RVL-CDIP (ID)
91.12MSP64.4987.8755.8990.9466.6087.3177.8880.8766.2286.7551.1692.7663.1090.36
MaxLogit64.8988.5947.9792.8165.4087.5277.5681.8763.9687.7041.8694.6254.0093.29
Energy67.0988.3047.8192.8666.6887.2478.5381.7565.0387.5439.5394.7348.5093.68
KNN1073.3886.8267.9887.4671.3187.8492.9077.7476.3984.966.9899.125.2098.98
KNN2074.9086.4166.2987.7973.8287.2193.9576.5177.2484.486.9898.965.5098.85
KNN5076.6686.0466.4188.4878.2986.3995.5074.7679.2283.925.8198.685.9098.70
KNN10077.5485.6165.4188.9982.1685.4396.2373.3780.3383.356.9898.346.3098.51
Pre-train on ImageNet
-KNN1096.9651.1494.6251.7598.7653.8499.5937.6097.4848.5883.5685.0020.8097.00
KNN2096.9650.3794.3451.5498.9252.9899.5936.6097.4547.8783.5684.4922.7096.71
KNN5096.9249.2994.2951.3099.0051.8499.5935.1597.4546.9083.5684.0326.7096.21
KNN10097.1248.6094.5451.2599.1651.1199.5534.3697.5946.3382.1983.3129.4095.67
SwinBasePre-train on ImageNet→ fine-tune on RVL-CDIP (ID)
95.74MSP47.6488.0949.9088.1158.2283.1450.2888.9051.5187.0649.3291.3136.5093.63
MaxLogit42.3993.1142.4793.4558.6288.7945.9093.1847.3492.1350.6892.5032.2095.65
Energy43.1593.0542.9593.4059.0288.7046.7193.0747.9692.0652.0592.3833.6095.49
KNN1049.4492.8246.7392.8742.9092.5772.6988.4552.9491.6816.4496.736.1098.30
KNN2048.8492.9543.2793.5144.5392.3272.2888.3552.2391.7817.8196.527.4098.10
KNN5046.4493.2639.2594.5747.4192.0973.3487.8751.6191.9526.0396.158.6097.80
KNN10043.7693.4235.0395.2950.0891.7275.7787.4251.1691.9628.7795.9411.3097.55
Pre-train on ImageNet
-KNN1098.5652.7595.0655.1499.3658.8599.8041.8698.2052.1565.7593.262.1099.35
KNN2098.4451.8695.1854.7299.3257.8899.8040.6698.1851.2868.4992.522.6099.22
KNN5098.5250.6995.3854.1399.1656.6199.7639.0198.2050.1178.0891.143.4098.99
KNN10098.7249.9695.6653.8099.1655.8499.7638.1698.3249.4479.4589.894.3098.77
VITBasePre-train on ImageNet→ fine-tune on RVL-CDIP (ID)
94.38MSP56.8189.1452.1991.8067.4884.2659.9088.7759.1088.4947.6792.9859.5091.99
MaxLogit50.7691.3744.6093.7568.0486.9455.1591.8154.6490.9740.7094.2052.4093.16
Energy51.1691.3144.5293.7569.4386.8156.0991.7755.3090.9138.3794.1153.2093.11
KNN1062.5790.1257.7390.9153.6790.3684.5086.1964.6289.4012.7997.9613.0097.92
KNN2063.0190.2456.0191.5155.0390.0284.3886.0164.6189.4415.1297.7614.9097.67
KNN5061.9790.6253.2392.6258.2689.5784.2585.6464.4389.6116.2897.3819.8097.24
KNN10060.2990.8549.7093.5360.3889.0784.0185.4363.6089.7216.2897.0523.6096.82
Pre-train on ImageNet
-KNN1098.4852.1595.0256.9499.4853.7799.4738.9098.1150.4493.1590.2720.4097.13
KNN2098.4851.4195.0656.6199.4452.9299.5537.6198.1349.6494.5289.4422.6096.80
KNN5098.3250.4394.8656.2199.4051.8699.5935.8298.0448.5897.2688.2326.6096.25
KNN10098.4049.7695.0655.9099.4451.1599.5934.5998.1247.8598.6387.2431.2095.76
+ +Table 10: OOD detection performance for document classification (OOD categories selected to achieve the best performance across most of the models with different modalities). + +
RoBERTaBaseID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
EmailResumeFile folderSci. publicationAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
Pre-train on pure-text data→ fine-tune on RVL-CDIP (ID)
86.13MSP96.2260.3890.6771.7293.8259.4793.8665.5193.6464.2791.8670.5793.0069.99
MaxLogit99.2166.5795.8073.6695.4766.8197.0965.6396.8968.1794.1977.1794.6074.69
Energy99.6066.5396.6473.5795.1466.8297.2165.3597.1568.0794.1977.4495.6074.90
KNN1083.7082.7769.0284.2888.3274.0686.1174.0281.7978.7843.0292.7472.0088.87
KNN2084.5082.3569.0684.2188.2073.7186.7274.0282.1278.5748.8492.3873.8088.31
KNN5084.9881.5768.8684.0688.0873.0187.0873.9482.2578.1454.6591.9275.4087.44
KNN10086.2580.8870.2683.8088.2872.4087.4473.8983.0677.7458.1491.5078.2086.68
Pre-train on pure-text data
-KNN1086.0975.6395.1258.6297.7159.7598.9550.5494.4761.1410.4798.4689.8063.01
KNN2086.2974.9295.0058.1497.7158.8899.0349.4994.5160.3612.7998.3590.8060.59
Each cell: FPR95 / AUROC.

| ID Acc | Method | In-domain 1 | In-domain 2 | In-domain 3 | In-domain 4 | Average | Out-domain 1 | Out-domain 2 |
|---|---|---|---|---|---|---|---|---|
| | KNN50 | 87.32 / 73.55 | 94.64 / 57.53 | 97.83 / 57.56 | 99.15 / 48.11 | 94.73 / 59.19 | 12.79 / 98.11 | 93.30 / 56.61 |
| | KNN100 | 89.27 / 72.48 | 94.28 / 57.12 | 97.99 / 56.52 | 99.11 / 47.37 | 95.16 / 58.37 | 11.63 / 97.89 | 94.30 / 52.98 |
| Pre-train on pure-text data → fine-tune on RVL-CDIP (ID) | | | | | | | | |
| 88.34 | MSP | 96.90 / 60.55 | 96.20 / 59.14 | 96.31 / 55.72 | 97.82 / 55.12 | 96.81 / 57.63 | 95.35 / 80.44 | 99.60 / 52.82 |
| | MaxLogit | 98.97 / 68.97 | 97.60 / 65.64 | 95.67 / 63.42 | 98.63 / 62.87 | 97.72 / 65.23 | 97.67 / 88.42 | 99.70 / 71.54 |
| | Energy | 99.44 / 68.96 | 97.92 / 65.63 | 95.83 / 63.42 | 98.71 / 62.83 | 97.98 / 65.21 | 97.67 / 88.46 | 99.90 / 71.55 |
| | KNN10 | 68.28 / 88.72 | 69.62 / 83.36 | 78.17 / 85.08 | 90.88 / 74.98 | 76.74 / 83.04 | 16.28 / 96.90 | 81.60 / 86.94 |
| | KNN20 | 68.04 / 88.61 | 70.10 / 83.22 | 77.53 / 84.92 | 90.75 / 74.95 | 76.60 / 82.92 | 16.28 / 96.84 | 81.80 / 86.49 |
| | KNN50 | 69.28 / 88.29 | 70.98 / 82.92 | 78.29 / 84.46 | 90.96 / 74.82 | 77.38 / 82.62 | 19.77 / 96.59 | 83.40 / 85.71 |
| | KNN100 | 69.28 / 88.15 | 71.34 / 82.69 | 78.49 / 84.21 | 90.43 / 74.86 | 77.39 / 82.48 | 22.09 / 96.38 | 83.90 / 85.17 |
| Pre-train on pure-text data | | | | | | | | |
| - | KNN10 | 97.42 / 47.77 | 95.72 / 50.09 | 97.67 / 46.58 | 99.52 / 38.61 | 97.58 / 45.76 | 45.35 / 93.92 | 100.00 / 63.03 |
| | KNN20 | 97.46 / 46.91 | 95.60 / 49.80 | 97.71 / 46.02 | 99.52 / 38.21 | 97.57 / 45.24 | 46.51 / 93.77 | 100.00 / 61.92 |
| | KNN50 | 97.58 / 45.68 | 95.56 / 49.45 | 97.75 / 45.19 | 99.52 / 37.72 | 97.60 / 44.51 | 50.00 / 93.60 | 100.00 / 60.35 |
| | KNN100 | 97.66 / 44.78 | 95.60 / 49.17 | 97.87 / 44.63 | 99.56 / 37.57 | 97.67 / 44.04 | 51.16 / 93.48 | 100.00 / 58.89 |
| Pre-train on ImageNet → fine-tune on RVL-CDIP (ID) | | | | | | | | |
| 85.25 | MSP | 60.53 / 87.26 | 69.53 / 87.00 | 27.86 / 95.13 | 94.05 / 75.79 | 62.99 / 86.30 | 91.78 / 74.40 | 27.80 / 95.47 |
| | MaxLogit | 59.98 / 89.27 | 72.61 / 88.02 | 30.04 / 95.41 | 93.39 / 75.38 | 64.00 / 87.02 | 80.82 / 79.89 | 30.00 / 95.29 |
| | Energy | 63.71 / 89.14 | 75.64 / 87.55 | 45.71 / 94.15 | 92.77 / 75.02 | 69.46 / 86.46 | 78.08 / 81.07 | 62.20 / 93.44 |
| | KNN10 | 72.46 / 85.68 | 85.69 / 85.30 | 68.62 / 76.01 | 96.15 / 55.35 | 80.73 / 75.59 | 36.99 / 94.56 | 2.20 / 99.37 |
| | KNN20 | 76.15 / 84.55 | 88.65 / 84.22 | 66.13 / 80.67 | 96.54 / 56.31 | 81.87 / 76.44 | 38.36 / 93.81 | 2.70 / 99.28 |
| | KNN50 | 80.37 / 82.61 | 92.00 / 82.49 | 60.98 / 86.77 | 96.93 / 59.06 | 82.57 / 77.73 | 47.95 / 92.42 | 3.80 / 99.11 |
| | KNN100 | 84.70 / 80.54 | 95.15 / 80.64 | 51.29 / 91.78 | 97.16 / 61.19 | 82.08 / 78.54 | 50.68 / 91.01 | 4.70 / 98.91 |
| Pre-train on ImageNet | | | | | | | | |
| - | KNN10 | 99.72 / 40.94 | 99.65 / 21.52 | 52.47 / 91.03 | 98.33 / 45.40 | 87.54 / 49.72 | 84.93 / 84.38 | 20.40 / 97.12 |
| | KNN20 | 99.68 / 41.18 | 99.65 / 20.68 | 50.61 / 91.63 | 98.41 / 44.65 | 87.09 / 49.54 | 86.30 / 83.94 | 23.40 / 96.87 |
| | KNN50 | 99.64 / 41.58 | 99.65 / 19.48 | 46.97 / 92.36 | 98.37 / 43.49 | 86.16 / 49.23 | 84.93 / 83.70 | 26.90 / 96.43 |
| | KNN100 | 99.64 / 42.19 | 99.65 / 18.98 | 44.91 / 92.84 | 98.33 / 42.86 | 85.63 / 49.22 | 84.93 / 83.12 | 29.20 / 95.98 |
| Pre-train on ImageNet → fine-tune on RVL-CDIP (ID) | | | | | | | | |
| 91.25 | MSP | 70.23 / 81.87 | 67.68 / 85.31 | 43.97 / 92.68 | 83.78 / 79.40 | 66.42 / 84.82 | 86.30 / 78.23 | 54.10 / 91.62 |
| | MaxLogit | 54.73 / 87.04 | 46.51 / 92.30 | 17.25 / 96.51 | 90.86 / 74.11 | 52.34 / 87.49 | 82.19 / 83.20 | 34.40 / 94.82 |
| | Energy | 54.05 / 87.11 | 44.38 / 92.49 | 16.38 / 96.63 | 91.29 / 73.59 | 51.53 / 87.46 | 84.93 / 83.07 | 33.80 / 94.82 |
| | KNN10 | 56.08 / 90.66 | 48.80 / 92.84 | 38.31 / 93.31 | 91.02 / 66.91 | 58.55 / 85.93 | 27.40 / 96.03 | 3.30 / 98.84 |
| | KNN20 | 54.61 / 90.95 | 49.98 / 92.68 | 27.58 / 95.24 | 91.44 / 68.54 | 55.90 / 86.85 | 26.03 / 96.35 | 4.00 / 98.76 |
| | KNN50 | 55.25 / 90.68 | 52.15 / 92.37 | 15.75 / 97.28 | 91.25 / 71.62 | 53.60 / 87.99 | 28.77 / 96.10 | 4.90 / 98.59 |
| | KNN100 | 56.20 / 90.31 | 54.75 / 92.17 | 9.14 / 98.00 | 91.13 / 75.11 | 52.80 / 88.90 | 30.14 / 95.77 | 6.50 / 98.35 |
| Pre-train on ImageNet | | | | | | | | |
| - | KNN10 | 99.84 / 43.55 | 99.76 / 20.64 | 47.92 / 93.20 | 98.91 / 37.55 | 86.61 / 48.74 | 58.90 / 93.88 | 1.60 / 99.32 |
| | KNN20 | 99.84 / 44.47 | 99.80 / 18.36 | 41.31 / 94.14 | 99.03 / 36.45 | 85.00 / 48.36 | 72.60 / 92.69 | 2.60 / 99.00 |
| | KNN50 | 99.88 / 45.26 | 99.80 / 17.92 | 39.97 / 94.39 | 99.03 / 36.71 | 84.67 / 48.57 | 79.45 / 91.97 | 3.70 / 98.81 |
| Pre-train on ImageNet → fine-tune on RVL-CDIP (ID) | | | | | | | | |
| 89.97 | MSP | 61.25 / 85.84 | 66.57 / 85.04 | 40.44 / 93.10 | 85.84 / 81.83 | 63.52 / 86.45 | 73.97 / 80.66 | 60.30 / 90.41 |
| | MaxLogit | 53.02 / 90.37 | 55.77 / 88.86 | 19.91 / 96.25 | 92.38 / 79.69 | 55.27 / 88.79 | 76.71 / 85.16 | 50.60 / 93.12 |
| | Energy | 51.79 / 90.49 | 55.07 / 89.03 | 17.53 / 96.53 | 92.69 / 79.20 | 54.27 / 88.81 | 79.45 / 85.01 | 50.10 / 93.20 |
| | KNN10 | 54.13 / 91.18 | 52.86 / 91.18 | 58.49 / 87.46 | 92.88 / 65.98 | 64.59 / 83.95 | 42.47 / 95.07 | 11.00 / 97.94 |
| | KNN20 | 54.21 / 91.18 | 53.17 / 90.99 | 50.61 / 89.35 | 93.04 / 67.52 | 62.76 / 84.76 | 43.84 / 94.98 | 13.10 / 97.62 |
| | KNN50 | 54.53 / 91.05 | 53.33 / 90.79 | 41.95 / 92.82 | 93.00 / 72.06 | 60.70 / 86.68 | 42.47 / 94.74 | 17.30 / 97.12 |
| | KNN100 | 54.65 / 90.81 | 54.12 / 90.56 | 30.79 / 91.90 | 98.72 / 47.10 | 88.24 / 52.19 | 95.89 / 89.31 | 22.00 / 96.58 |
| Pre-train on ImageNet | | | | | | | | |
| - | KNN10 | 99.80 / 46.46 | 99.68 / 26.50 | 58.65 / 90.61 | 98.72 / 46.40 | 89.21 / 52.49 | 87.67 / 91.39 | 19.90 / 97.25 |
| | KNN20 | 99.80 / 46.02 | 99.65 / 25.69 | 57.30 / 91.01 | 98.72 / 46.46 | 88.87 / 52.30 | 90.41 / 90.87 | 21.70 / 97.01 |
| | KNN50 | 99.80 / 45.48 | 99.61 / 24.76 | 55.16 / 91.52 | 98.76 / 46.69 | 88.33 / 52.11 | 94.52 / 89.99 | 24.30 / 96.62 |
| | KNN100 | 99.80 / 45.33 | 99.65 / 24.43 | 54.81 / 91.90 | 98.72 / 47.10 | 88.24 / 52.19 | 95.89 / 89.31 | 28.80 / 96.27 |
Table 11: OOD detection performance for document classification (randomly select four categories as OOD).
RoBERTa-Base. Each cell: FPR95 / AUROC. In-domain OOD datasets: Letter, Handwritten, Advertisement, Memo; out-domain OOD datasets: Sci. Poster, Receipt.

| ID Acc | Method | Letter | Handwritten | Advertisement | Memo | Average | Sci. Poster | Receipt |
|---|---|---|---|---|---|---|---|---|
| Pre-train on pure-text data → fine-tune on RVL-CDIP (ID) | | | | | | | | |
| 88.86 | MSP | 70.22 / 79.21 | 50.14 / 87.24 | 84.64 / 67.80 | 91.42 / 57.99 | 74.10 / 73.06 | 95.35 / 59.75 | 94.30 / 55.12 |
| | MaxLogit | 66.04 / 87.51 | 39.65 / 92.53 | 86.47 / 77.03 | 91.67 / 71.84 | 70.96 / 82.23 | 100.00 / 77.89 | 96.80 / 71.96 |
| | Energy | 66.20 / 87.57 | 38.19 / 92.59 | 87.35 / 77.03 | 91.67 / 71.89 | 70.85 / 82.27 | 100.00 / 77.92 | 96.80 / 71.96 |
| | KNN10 | 62.62 / 80.19 | 60.98 / 70.90 | 75.62 / 80.24 | 85.84 / 69.20 | 71.26 / 75.13 | 94.19 / 81.99 | 90.40 / 82.48 |
| | KNN20 | 63.18 / 80.10 | 60.07 / 71.17 | 75.90 / 80.03 | 85.72 / 68.88 | 71.22 / 75.04 | 94.19 / 81.75 | 91.20 / 81.89 |
| | KNN50 | 63.78 / 80.00 | 57.30 / 71.70 | 76.34 / 79.67 | 85.88 / 68.38 | 70.82 / 74.94 | 94.19 / 81.45 | 91.80 / 81.09 |
| | KNN100 | 64.77 / 79.98 | 54.33 / 71.94 | 77.37 / 79.32 | 86.08 / 67.80 | 70.64 / 74.76 | 94.19 / 81.20 | 91.90 / 80.47 |
| Pre-train on pure-text data | | | | | | | | |
| - | KNN10 | 85.53 / 59.90 | 98.61 / 21.79 | 96.21 / 56.72 | 97.69 / 58.39 | 94.51 / 49.20 | 12.79 / 98.01 | 84.50 / 65.73 |
| | KNN20 | 85.45 / 59.27 | 98.73 / 21.19 | 96.21 / 55.63 | 97.90 / 57.05 | 94.57 / 48.28 | 12.79 / 97.91 | 86.10 / 63.57 |
| | KNN50 | 86.80 / 57.94 | 98.77 / 20.45 | 96.89 / 54.12 | 98.30 / 55.35 | 95.19 / 46.96 | 13.95 / 97.60 | 89.30 / 59.64 |
| | KNN100 | 88.47 / 56.71 | 98.81 / 19.97 | 96.81 / 52.89 | 98.18 / 53.93 | 95.57 / 45.88 | 13.95 / 97.38 | 91.10 / 55.17 |
| Pre-train on pure-text data → fine-tune on RVL-CDIP (ID) | | | | | | | | |
| 92.08 | MSP | 65.96 / 69.58 | 50.38 / 77.93 | 81.52 / 60.89 | 90.21 / 54.23 | 72.02 / 65.66 | 82.56 / 60.14 | 95.00 / 50.90 |
| | MaxLogit | 62.19 / 87.35 | 44.64 / 89.79 | 79.97 / 78.84 | 88.39 / 68.08 | 68.80 / 81.02 | 80.23 / 84.19 | 94.30 / 77.36 |
| | Energy | 61.27 / 87.35 | 43.61 / 89.81 | 79.13 / 78.85 | 88.15 / 68.08 | 68.04 / 81.02 | 80.23 / 84.19 | 94.30 / 77.37 |
| | KNN10 | 58.65 / 79.54 | 50.77 / 71.81 | 66.56 / 83.48 | 80.87 / 75.19 | 64.21 / 77.51 | 58.14 / 92.78 | 90.00 / 77.76 |
| | KNN20 | 57.81 / 79.43 | 51.40 / 71.72 | 67.00 / 83.35 | 81.15 / 74.86 | 64.34 / 77.34 | 58.14 / 92.57 | 89.70 / 77.12 |
| | KNN50 | 58.77 / 79.30 | 51.60 / 71.67 | 66.72 / 83.15 | 81.31 / 74.36 | 64.60 / 77.12 | 61.63 / 92.24 | 89.80 / 76.17 |
| | KNN100 | 61.39 / 79.16 | 52.75 / 71.61 | 67.84 / 82.93 | 81.76 / 73.91 | 65.94 / 76.90 | 62.79 / 91.99 | 89.80 / 75.29 |
| Pre-train on pure-text data | | | | | | | | |
| - | KNN10 | 99.40 / 47.83 | 100.00 / 27.75 | 98.28 / 47.03 | 93.20 / 60.40 | 97.72 / 45.75 | 46.51 / 93.85 | 100.00 / 63.64 |
| | KNN20 | 99.44 / 47.33 | 100.00 / 27.48 | 98.32 / 46.49 | 93.24 / 60.22 | 97.75 / 45.38 | 48.84 / 93.70 | 100.00 / 62.79 |
| | KNN50 | 99.44 / 46.33 | 100.00 / 27.23 | 98.40 / 45.85 | 93.41 / 60.05 | 97.81 / 44.86 | 51.16 / 93.51 | 100.00 / 61.55 |
| | KNN100 | 99.44 / 45.67 | 100.00 / 27.31 | 98.44 / 45.23 | 93.53 / 59.90 | 97.85 / 44.53 | 52.33 / 93.40 | 100.00 / 60.31 |
| Pre-train on ImageNet → fine-tune on RVL-CDIP (ID) | | | | | | | | |
| 87.80 | MSP | 70.58 / 85.35 | 55.29 / 89.88 | 64.29 / 86.54 | 71.15 / 85.58 | 65.33 / 86.84 | 54.79 / 91.70 | 77.20 / 84.67 |
| | MaxLogit | 64.25 / 87.46 | 53.59 / 90.72 | 49.70 / 90.60 | 64.45 / 88.71 | 58.00 / 89.37 | 36.99 / 95.13 | 78.90 / 86.86 |
| | Energy | 62.66 / 87.65 | 58.33 / 90.33 | 46.00 / 91.26 | 63.56 / 89.05 | 57.64 / 89.57 | 32.88 / 95.69 | 83.00 / 87.05 |
| | KNN10 | 90.99 / 79.37 | 56.36 / 90.64 | 72.41 / 86.20 | 89.17 / 81.74 | 77.23 / 84.49 | 2.74 / 99.32 | 39.70 / 93.70 |
| | KNN20 | 92.17 / 78.00 | 47.47 / 92.61 | 68.27 / 88.42 | 90.85 / 80.23 | 74.69 / 84.82 | 2.74 / 99.25 | 43.80 / 93.08 |
| | KNN50 | 94.32 / 75.96 | 28.44 / 94.49 | 65.65 / 89.27 | 92.78 / 77.91 | 70.30 / 84.41 | 1.37 / 98.97 | 49.70 / 92.09 |
| | KNN100 | 95.58 / 74.02 | 27.21 / 95.07 | 60.44 / 89.78 | 94.22 / 75.63 | 69.36 / 83.62 | 2.74 / 98.67 | 53.80 / 91.10 |
| Pre-train on ImageNet | | | | | | | | |
| - | KNN10 | 98.46 / 42.21 | 77.29 / 81.41 | 27.87 / 91.16 | 99.08 / 43.47 | 75.68 / 64.56 | 80.82 / 89.98 | 12.30 / 98.17 |
| | KNN20 | 98.66 / 41.00 | 76.78 / 81.70 | 29.22 / 92.27 | 99.08 / 42.29 | 75.94 / 64.32 | 83.56 / 89.30 | 14.10 / 97.97 |
| | KNN50 | 98.58 / 39.53 | 76.58 / 81.81 | 31.01 / 92.05 | 99.12 / 40.80 | 76.32 / 63.55 | 83.56 / 88.51 | 16.30 / 97.61 |
| | KNN100 | 98.62 / 38.62 | 77.13 / 81.49 | 32.64 / 91.84 | 99.12 / 39.86 | 76.88 / 62.95 | 83.56 / 87.80 | 19.50 / 97.23 |
| Pre-train on ImageNet → fine-tune on RVL-CDIP (ID) | | | | | | | | |
| 92.42 | MSP | 63.96 / 87.03 | 65.21 / 88.15 | 73.56 / 79.72 | 61.40 / 88.46 | 66.03 / 85.84 | 84.93 / 74.34 | 49.60 / 92.49 |
| | MaxLogit | 56.49 / 90.22 | 75.36 / 87.00 | 72.64 / 84.26 | 44.22 / 93.01 | 62.18 / 88.62 | 72.60 / 84.16 | 29.10 / 95.70 |
| | Energy | 57.43 / 90.11 | 77.01 / 86.60 | 73.44 / 84.17 | 43.78 / 93.06 | 62.92 / 88.48 | 73.97 / 84.25 | 28.00 / 95.69 |
| | KNN10 | 60.27 / 90.12 | 66.90 / 90.76 | 49.66 / 89.15 | 47.67 / 92.67 | 56.12 / 90.68 | 42.47 / 94.28 | 7.20 / 98.56 |
| | KNN20 | 61.32 / 90.01 | 61.37 / 91.31 | 48.83 / 90.33 | 49.00 / 92.52 | 55.13 / 91.04 | 30.14 / 95.56 | 8.80 / 98.33 |
| | KNN50 | 62.22 / 89.78 | 56.44 / 91.56 | 50.34 / 89.55 | 48.52 / 92.30 | 54.38 / 90.80 | 26.03 / 95.72 | 11.80 / 97.97 |
| | KNN100 | 62.62 / 89.60 | 54.98 / 91.85 | 50.70 / 88.93 | 47.63 / 92.18 | 53.98 / 90.64 | 30.14 / 95.54 | 13.90 / 97.66 |
| Pre-train on ImageNet | | | | | | | | |
| - | KNN10 | 99.15 / 45.57 | 86.02 / 79.44 | 32.45 / 90.98 | 99.52 / 46.20 | 79.28 / 65.55 | 24.66 / 96.24 | 0.40 / 99.78 |
| | KNN20 | 99.19 / 44.11 | 86.89 / 80.35 | 33.48 / 92.19 | 99.60 / 44.79 | 79.79 / 65.36 | 27.40 / 95.62 | 0.50 / 99.73 |
| | KNN50 | 99.23 / 42.39 | 87.99 / 81.66 | 36.78 / 91.59 | 99.60 / 43.07 | 80.90 / 64.68 | 43.84 / 94.57 | 0.80 / 99.63 |
| | KNN100 | 99.19 / 41.46 | 89.02 / 82.63 | 40.60 / 91.05 | 99.60 / 42.14 | 82.10 / 64.32 | 52.05 / 93.49 | 1.20 / 99.53 |
| Pre-train on ImageNet → fine-tune on RVL-CDIP (ID) | | | | | | | | |
| 91.03 | MSP | 69.68 / 86.81 | 69.67 / 87.88 | 72.25 / 80.78 | 69.38 / 86.61 | 70.24 / 85.52 | 67.12 / 85.97 | 58.50 / 91.47 |
| | MaxLogit | 63.35 / 89.20 | 68.40 / 88.58 | 69.58 / 84.38 | 61.08 / 89.94 | 65.60 / 88.02 | 57.53 / 89.41 | 48.40 / 93.04 |
| | Energy | 62.22 / 89.21 | 70.34 / 88.43 | 70.26 / 84.37 | 60.75 / 90.03 | 65.89 / 88.01 | 58.90 / 89.47 | 49.70 / 93.03 |
| | KNN10 | 68.10 / 88.99 | 54.90 / 92.30 | 53.44 / 88.05 | 58.19 / 91.34 | 58.66 / 90.17 | 38.36 / 95.02 | 22.90 / 96.71 |
| | KNN20 | 67.61 / 88.95 | 49.01 / 92.85 | 51.53 / 89.25 | 58.59 / 91.16 | 56.68 / 90.55 | 41.10 / 94.47 | 25.40 / 96.35 |
| | KNN50 | 67.29 / 88.91 | 42.54 / 93.15 | 53.96 / 88.43 | 58.75 / 90.88 | 55.64 / 90.34 | 42.47 / 93.60 | 29.90 / 95.78 |
| | KNN100 | 66.19 / 88.90 | 43.80 / 93.19 | 55.71 / 87.73 | 59.11 / 90.64 | 56.20 / 90.12 | 45.21 / 92.86 | 34.90 / 95.27 |
| Pre-train on ImageNet | | | | | | | | |
| - | KNN10 | 98.90 / 41.98 | 90.96 / 77.15 | 34.87 / 90.69 | 99.40 / 41.21 | 81.03 / 62.76 | 54.79 / 94.27 | 10.80 / 98.47 |
| | KNN20 | 98.94 / 40.54 | 91.67 / 77.20 | 36.82 / 91.71 | 99.44 / 39.85 | 81.72 / 62.32 | 64.38 / 93.57 | 12.70 / 98.25 |
| | KNN50 | 99.07 / 38.75 | 92.61 / 76.99 | 40.00 / 91.17 | 99.52 / 38.14 | 82.80 / 61.26 | 75.34 / 92.47 | 15.90 / 97.87 |
| | KNN100 | 99.11 / 37.43 | 93.25 / 76.56 | 43.38 / 90.68 | 99.56 / 36.93 | 83.82 / 60.40 | 82.19 / 91.52 | 18.90 / 97.49 |
Table 12: OOD detection performance for document classification. All models are pre-trained on IIT-CDIP. For LayoutLM models, we adopt the checkpoints from the Huggingface model hub. For UDoc, we pre-train the model on our side. All models are fine-tuned on RVL-CDIP ID data.
Each cell: FPR95 / AUROC. In-domain OOD datasets: Sci. Report, Presentation, Form, Letter; out-domain OOD datasets: Sci. Poster, Receipt (FPR95 only).

| Model | ID Acc | Method | Sci. Report | Presentation | Form | Letter | Average | Sci. Poster | Receipt (FPR95) |
|---|---|---|---|---|---|---|---|---|---|
| LayoutLMv1 (Base) | 97.28 | MSP | 47.48 / 74.91 | 59.74 / 68.72 | 66.40 / 65.36 | 58.89 / 69.12 | 58.13 / 69.53 | 43.02 / 77.15 | 72.40 |
| | | MaxLogit | 27.06 / 92.38 | 37.97 / 91.52 | 45.65 / 88.36 | 35.92 / 91.22 | 36.65 / 90.87 | 24.42 / 94.96 | 57.30 |
| | | Energy | 27.06 / 92.40 | 37.97 / 91.54 | 45.65 / 88.36 | 35.92 / 91.23 | 36.65 / 90.88 | 24.42 / 94.97 | 57.30 |
| | | KNN10 | 20.82 / 96.09 | 35.32 / 93.82 | 40.06 / 91.34 | 28.65 / 94.80 | 31.21 / 94.01 | 17.44 / 97.00 | 49.80 |
| | | KNN20 | 21.74 / 95.93 | 36.20 / 93.77 | 41.42 / 91.12 | 30.44 / 94.61 | 32.45 / 93.86 | 17.44 / 96.82 | 51.70 |
| | | KNN50 | 24.34 / 95.56 | 38.25 / 93.41 | 43.93 / 90.69 | 33.64 / 94.19 | 35.04 / 93.46 | 23.26 / 96.44 | 53.80 |
| | | KNN100 | 25.54 / 95.30 | 39.13 / 93.20 | 45.17 / 90.35 | 34.78 / 93.99 | 36.16 / 93.21 | 25.58 / 96.24 | 54.70 |
| LayoutLMv3 | 97.81 | MSP | 56.16 / 70.81 | 63.44 / 67.17 | 67.16 / 65.30 | 58.60 / 69.58 | 61.34 / 68.22 | 52.33 / 72.70 | 43.60 |
| | | MaxLogit | 30.70 / 89.17 | 40.42 / 88.18 | 42.98 / 84.09 | 33.12 / 88.22 | 36.80 / 87.42 | 19.77 / 94.50 | 11.70 |
| | | Energy | 30.70 / 89.18 | 40.42 / 88.18 | 42.98 / 84.10 | 33.12 / 88.23 | 36.80 / 87.42 | 19.77 / 94.51 | 11.70 |
| | | KNN10 | 21.74 / 95.03 | 35.68 / 93.38 | 32.88 / 91.86 | 18.51 / 96.26 | 27.20 / 94.13 | 11.63 / 97.58 | 8.90 |
| | | KNN20 | 22.74 / 94.90 | 36.56 / 93.20 | 33.96 / 91.66 | 19.64 / 96.15 | 28.22 / 93.98 | 12.79 / 97.44 | 10.00 |
| | | KNN50 | 24.62 / 94.62 | 38.37 / 92.71 | 35.83 / 91.38 | 21.63 / 95.93 | 30.11 / 93.66 | 13.95 / 97.20 | 10.70 |
| | | KNN100 | 25.22 / 94.38 | 39.29 / 92.32 | 36.55 / 91.09 | 22.48 / 95.79 | 30.88 / 93.40 | 16.28 / 97.04 | 11.80 |
| UDoc (ResNet50) | 97.36 | MSP | 66.13 / 65.73 | 69.43 / 64.09 | 71.03 / 63.28 | 71.06 / 63.25 | 69.41 / 64.09 | 40.70 / 78.47 | 39.80 |
| | | MaxLogit | 45.96 / 82.12 | 47.21 / 86.39 | 49.64 / 83.16 | 49.59 / 83.13 | 48.10 / 83.70 | 2.33 / 98.57 | 4.00 |
| | | Energy | 45.96 / 82.12 | 47.21 / 86.40 | 49.64 / 83.16 | 49.59 / 83.13 | 48.10 / 83.70 | 2.33 / 98.60 | 4.00 |
| | | KNN10 | 30.02 / 94.47 | 41.22 / 88.66 | 41.90 / 90.99 | 36.65 / 93.48 | 37.45 / 91.90 | 1.16 / 99.13 | 5.50 |
| | | KNN20 | 31.10 / 94.36 | 41.98 / 88.44 | 42.10 / 90.90 | 38.03 / 93.35 | 38.30 / 91.76 | 1.16 / 99.04 | 6.90 |
| | | KNN50 | 33.95 / 94.07 | 43.35 / 87.89 | 44.01 / 90.72 | 40.71 / 93.06 | 40.51 / 91.43 | 1.16 / 98.84 | 7.40 |
| | | KNN100 | 34.83 / 93.84 | 43.75 / 87.51 | 45.01 / 90.61 | 41.96 / 92.90 | 41.39 / 91.22 | 1.16 / 98.72 | 8.30 |
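The methods compared in the tables above (MSP, MaxLogit, Energy, and KNN-k) are standard post-hoc OOD scores computed from a classifier's logits or feature embeddings. Below is a minimal NumPy sketch of how each score is typically computed; the function names and the threshold helper are illustrative, not the paper's implementation, and an input is flagged as OOD when its score falls below a threshold γ.

```python
import numpy as np

def msp_score(logits):
    # Maximum softmax probability: higher means more in-distribution.
    shifted = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(shifted)
    probs /= probs.sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)

def maxlogit_score(logits):
    # Largest raw logit, without softmax scaling.
    return logits.max(axis=-1)

def energy_score(logits, temperature=1.0):
    # Negative free energy: T * logsumexp(logits / T), computed stably.
    z = logits / temperature
    m = z.max(axis=-1)
    return temperature * (m + np.log(np.exp(z - m[..., None]).sum(axis=-1)))

def knn_score(feats, id_bank, k=10):
    # Negative L2 distance to the k-th nearest ID training feature,
    # with all features L2-normalized first (distance-based score).
    feats = feats / np.linalg.norm(feats, axis=-1, keepdims=True)
    id_bank = id_bank / np.linalg.norm(id_bank, axis=-1, keepdims=True)
    dists = np.linalg.norm(feats[:, None, :] - id_bank[None, :, :], axis=-1)
    kth = np.sort(dists, axis=1)[:, k - 1]
    return -kth

def is_ood(score, gamma):
    # Flag an input as OOD when its score falls below the threshold gamma.
    return score < gamma
```

The varying k in the KNN rows corresponds to the `k` parameter of `knn_score`: a larger k looks at a farther ID neighbor, which shifts the FPR95/AUROC trade-off seen across the KNN10 to KNN100 rows.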
\ No newline at end of file diff --git a/2023/A Critical Analysis of Document Out-of-Distribution Detection/images.zip b/2023/A Critical Analysis of Document Out-of-Distribution Detection/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..1004eda4cc44fc6dadf4cf125072918f53932f9f --- /dev/null +++ b/2023/A Critical Analysis of Document Out-of-Distribution Detection/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fae49a6b79688751cf169ad3edcb3527c47d908b42e1c4ca2a5acd95acfa458b +size 3990064 diff --git a/2023/A Critical Analysis of Document Out-of-Distribution Detection/layout.json b/2023/A Critical Analysis of Document Out-of-Distribution Detection/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..9822770d679c208213b9b70f359911622a94e020 --- /dev/null +++ b/2023/A Critical Analysis of Document Out-of-Distribution Detection/layout.json @@ -0,0 +1,14801 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 102, + 76, + 489, + 92 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 102, + 76, + 489, + 92 + ], + "spans": [ + { + "bbox": [ + 102, + 76, + 489, + 92 + ], + "type": "text", + "content": "A Critical Analysis of Document Out-of-Distribution Detection" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 100, + 105, + 495, + 147 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 100, + 105, + 495, + 147 + ], + "spans": [ + { + "bbox": [ + 100, + 105, + 495, + 147 + ], + "type": "text", + "content": "Jiuxiang Gu" + }, + { + "bbox": [ + 100, + 105, + 495, + 147 + ], + "type": "inline_equation", + "content": "^{1*}" + }, + { + "bbox": [ + 100, + 105, + 495, + 147 + ], + "type": "text", + "content": " Yifei Ming" + }, + { + "bbox": [ + 100, + 105, + 495, + 147 + ], + "type": "inline_equation", + "content": "^{2*†}" + }, + { + "bbox": [ + 100, + 105, + 495, + 147 + ], + "type": "text", + "content": " Yi Zhou" + }, + { + "bbox": [ + 
100, + 105, + 495, + 147 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 100, + 105, + 495, + 147 + ], + "type": "text", + "content": " Jason Kuen" + }, + { + "bbox": [ + 100, + 105, + 495, + 147 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 100, + 105, + 495, + 147 + ], + "type": "text", + "content": " \nVlad I. Morariu" + }, + { + "bbox": [ + 100, + 105, + 495, + 147 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 100, + 105, + 495, + 147 + ], + "type": "text", + "content": " Handong Zhao" + }, + { + "bbox": [ + 100, + 105, + 495, + 147 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 100, + 105, + 495, + 147 + ], + "type": "text", + "content": " Ruiyi Zhang" + }, + { + "bbox": [ + 100, + 105, + 495, + 147 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 100, + 105, + 495, + 147 + ], + "type": "text", + "content": " Nikolaos Barmpalios" + }, + { + "bbox": [ + 100, + 105, + 495, + 147 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 100, + 105, + 495, + 147 + ], + "type": "text", + "content": " \nAnqi Liu" + }, + { + "bbox": [ + 100, + 105, + 495, + 147 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 100, + 105, + 495, + 147 + ], + "type": "text", + "content": " Yixuan Li" + }, + { + "bbox": [ + 100, + 105, + 495, + 147 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 100, + 105, + 495, + 147 + ], + "type": "text", + "content": " Tong Sun" + }, + { + "bbox": [ + 100, + 105, + 495, + 147 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 100, + 105, + 495, + 147 + ], + "type": "text", + "content": " Ani Nenkova" + }, + { + "bbox": [ + 100, + 105, + 495, + 147 + ], + "type": "inline_equation", + "content": "^{1}" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 87, + 147, + 509, + 190 + ], + "type": "text", + 
"angle": 0, + "lines": [ + { + "bbox": [ + 87, + 147, + 509, + 190 + ], + "spans": [ + { + "bbox": [ + 87, + 147, + 509, + 190 + ], + "type": "text", + "content": "\\(^{1}\\)Adobe Research \\(^{2}\\)University of Wisconsin-Madison \\(^{3}\\)Johns Hopkins University \\(^{1}\\{jigu, kuen, morariu, hazhao, barmpali, ruizhang, tsun, nenkova\\} @adobe.com \\(^{2}\\{alvinming, sharonli\\} @cs.wisc.edu\\) \\(^{3}yzhou188@jhu.edu\\) \\(^{3}aliu@cs.jhu.edu\\)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 155, + 212, + 202, + 224 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 155, + 212, + 202, + 224 + ], + "spans": [ + { + "bbox": [ + 155, + 212, + 202, + 224 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 84, + 238, + 274, + 572 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 238, + 274, + 572 + ], + "spans": [ + { + "bbox": [ + 84, + 238, + 274, + 572 + ], + "type": "text", + "content": "Large-scale pre-training is widely used in recent document understanding tasks. During deployment, one may expect that models should trigger a conservative fallback policy when encountering out-of-distribution (OOD) samples, which highlights the importance of OOD detection. However, most existing OOD detection methods focus on single-modal inputs such as images or texts. While documents are multimodal in nature, it is underexplored if and how multi-modal information in documents can be exploited for OOD detection. In this work, we first provide a systematic and in-depth analysis on OOD detection for document understanding models. We study the effects of model modality, pre-training, and fine-tuning across various types of OOD inputs. In particular, we find that spatial information is critical for document OOD detection. 
To better exploit spatial information, we propose a spatial-aware adapter, which serves as a parameter-efficient add-on module to adapt transformer-based language models to the document domain. Extensive experiments show that adding the spatial-aware adapter significantly improves the OOD detection performance compared to directly using the language model and achieves superior performance compared to competitive baselines." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 68, + 585, + 154, + 597 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 585, + 154, + 597 + ], + "spans": [ + { + "bbox": [ + 68, + 585, + 154, + 597 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 607, + 291, + 743 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 607, + 291, + 743 + ], + "spans": [ + { + "bbox": [ + 67, + 607, + 291, + 743 + ], + "type": "text", + "content": "The recent success of large-scale pre-training has propelled the widespread deployment of deep learning models in the document domain, where model predictions are used to help humans make decisions in various applications such as tax form processing and medical reports analysis. However, models are typically pre-trained on data collected from the web but deployed in an environment with distributional shifts (Cui et al., 2021). 
For instance, the outbreak of COVID-19 has led to continually" + } + ] + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 303, + 211, + 525, + 290 + ], + "blocks": [ + { + "bbox": [ + 303, + 211, + 525, + 290 + ], + "lines": [ + { + "bbox": [ + 303, + 211, + 525, + 290 + ], + "spans": [ + { + "bbox": [ + 303, + 211, + 525, + 290 + ], + "type": "image", + "image_path": "eac539accce9516c780df033975daeca77327f8b5dffae47796be839d97f480a.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 297, + 526, + 381 + ], + "lines": [ + { + "bbox": [ + 302, + 297, + 526, + 381 + ], + "spans": [ + { + "bbox": [ + 302, + 297, + 526, + 381 + ], + "type": "text", + "content": "Figure 1: Illustration of OOD detection for document classification. The pre-training and fine-tuning pipelines are shown on the top left and bottom left, respectively. Right: During inference time, an OOD score can be derived based on logits " + }, + { + "bbox": [ + 302, + 297, + 526, + 381 + ], + "type": "inline_equation", + "content": "g(x)" + }, + { + "bbox": [ + 302, + 297, + 526, + 381 + ], + "type": "text", + "content": " or feature embeddings " + }, + { + "bbox": [ + 302, + 297, + 526, + 381 + ], + "type": "inline_equation", + "content": "z := h(x)" + }, + { + "bbox": [ + 302, + 297, + 526, + 381 + ], + "type": "text", + "content": ". A document input " + }, + { + "bbox": [ + 302, + 297, + 526, + 381 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 302, + 297, + 526, + 381 + ], + "type": "text", + "content": " is identified as OOD if its OOD score is below some threshold " + }, + { + "bbox": [ + 302, + 297, + 526, + 381 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 302, + 297, + 526, + 381 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 406, + 525, + 473 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 406, + 525, + 473 + ], + "spans": [ + { + "bbox": [ + 302, + 406, + 525, + 473 + ], + "type": "text", + "content": "changing data distributions in machine-assisted medical document analysis systems (Velavan and Meyer, 2020). This motivates the need for reliable document understanding models against out-of-distribution (OOD) inputs." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 476, + 526, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 476, + 526, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 476, + 526, + 772 + ], + "type": "text", + "content": "The goal of OOD detection is to categorize indistribution (ID) samples into one of the known categories and detect inputs that do not belong to any known classes at test time (Bendale and Boult, 2016). A plethora of OOD detection methods has been proposed for single-modal (image or text) inputs (Ge et al., 2017; Nalisnick et al., 2019; Oza and Patel, 2019; Tack et al., 2020; Hsu et al., 2020; Arora et al., 2021; Zhou et al., 2021; Xiao et al., 2020; Xu et al., 2021a; Li et al., 2021b; Shen et al., 2021; Jin et al., 2022; Zhou et al., 2022; Ming et al., 2022b,c; Podolskiy et al., 2021; Ren et al., 2023). Recent works (Fort et al., 2021; Esmaeilpour et al., 2022; Ming et al., 2022a; Ming and Li, 2023; Bitterwolf et al., 2023) also demonstrate promising OOD detection performance based on large-scale models pre-trained on text-image pairs, as pre-training enables models to learn powerful and transferable feature representations (Radford et al., 2021). 
However, it remains largely unexplored if existing findings in the OOD detection literature for images or texts can be naturally extended to the document" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 84, + 750, + 162, + 760 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 750, + 162, + 760 + ], + "spans": [ + { + "bbox": [ + 84, + 750, + 162, + 760 + ], + "type": "text", + "content": "* Equal contribution" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 84, + 760, + 280, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 760, + 280, + 772 + ], + "spans": [ + { + "bbox": [ + 84, + 760, + 280, + 772 + ], + "type": "text", + "content": "† Work done during the internship at Adobe Research" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "text", + "content": "4973" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 129, + 795, + 464, + 818 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 129, + 795, + 464, + 818 + ], + "spans": [ + { + "bbox": [ + 129, + 795, + 464, + 818 + ], + "type": "text", + "content": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 4973-4999 December 6-10, 2023 ©2023 Association for Computational Linguistics" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 72, + 106, + 83 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 72, + 106, + 83 + ], + "spans": [ + { + "bbox": [ + 67, + 72, + 106, + 83 + ], + "type": "text", + "content": "domain." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 85, + 291, + 356 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 85, + 291, + 356 + ], + "spans": [ + { + "bbox": [ + 67, + 85, + 291, + 356 + ], + "type": "text", + "content": "Multiple unique challenges exist for document OOD detection. Unlike natural images, texts, or image-text pairs, no captions can describe a document and images in documents rarely contain natural objects. Moreover, the spatial relationship of text blocks further differentiates multimodal learning in documents from multimodal learning in the vision-language domain (Lu et al., 2019; Li et al., 2020). In addition, while recent pre-training methods have demonstrated remarkable performance in downstream document understanding tasks (Xu et al., 2020, 2021b; Li et al., 2021a; Gu et al., 2022; Hong et al., 2022; Huang et al., 2022; Li et al., 2022; Wang et al., 2022a), existing pre-training datasets for documents are limited and lack diversity. This is in sharp contrast to common pretraining datasets for natural images. It remains underexplored whether existing OOD detection methods are reliable in the document domain and how pre-training impacts OOD reliability." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 357, + 291, + 613 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 357, + 291, + 613 + ], + "spans": [ + { + "bbox": [ + 69, + 357, + 291, + 613 + ], + "type": "text", + "content": "In this work, we first present a comprehensive study to better understand OOD detection in the document domain through the following questions: (1) What is the role of document pre-training? How do pre-training datasets and tasks affect OOD detection performance? (2) Are existing OOD detection methods developed for natural images and texts transferrable to documents? (3) How does modality (textual, visual, and especially spatial information) affect OOD performance? 
In particular, we find that spatial information is critical for improving OOD reliability. Moreover, we propose a new spatial-aware adapter, a small learned module that can be inserted within a pre-trained language model such as RoBERTa (Liu et al., 2019). Our module is computationally efficient and significantly improves both ID classification and OOD detection performance (Sec. 5.2). Our contributions are summarized as follows:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 626, + 291, + 772 + ], + "type": "list", + "angle": 0, + "index": 5, + "blocks": [ + { + "bbox": [ + 67, + 626, + 290, + 706 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 626, + 290, + 706 + ], + "spans": [ + { + "bbox": [ + 67, + 626, + 290, + 706 + ], + "type": "text", + "content": "- We provide an extensive and in-depth study to investigate the impacts of pre-training, fine-tuning, model-modality, and OOD scoring functions on a broad spectrum of document OOD detection tasks. Our codebase will be open-sourced to facilitate future research." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 719, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 719, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 719, + 291, + 772 + ], + "type": "text", + "content": "- We present unique insights on document OOD detection. For example, we observe that distance-based OOD scores are consistently advantageous over logit-based scores, which is underexplored" + } + ] + } + ], + "index": 4 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 311, + 71, + 526, + 98 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 71, + 526, + 98 + ], + "spans": [ + { + "bbox": [ + 311, + 71, + 526, + 98 + ], + "type": "text", + "content": "in the recent OOD detection literature on vision-language pre-trained models." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 303, + 111, + 527, + 191 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 111, + 527, + 191 + ], + "spans": [ + { + "bbox": [ + 303, + 111, + 527, + 191 + ], + "type": "text", + "content": "- We further propose a spatial-aware adapter module for transformer-based language models, facilitating easy adaptation of pre-trained language models to the document domain. Extensive experiments confirm the effectiveness of our module across diverse types of OOD data." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 303, + 203, + 495, + 216 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 203, + 495, + 216 + ], + "spans": [ + { + "bbox": [ + 303, + 203, + 495, + 216 + ], + "type": "text", + "content": "2 Preliminaries and Related Works" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 303, + 226, + 499, + 240 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 226, + 499, + 240 + ], + "spans": [ + { + "bbox": [ + 303, + 226, + 499, + 240 + ], + "type": "text", + "content": "2.1 Document Models and Pre-Training" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 245, + 526, + 393 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 245, + 526, + 393 + ], + "spans": [ + { + "bbox": [ + 302, + 245, + 526, + 393 + ], + "type": "text", + "content": "Large-scale pre-trained models gradually gain popularity in the document domain due to their success in producing generic representations from large-scale unlabeled corpora in vision and natural language processing (NLP) tasks (Devlin et al., 2018; Lu et al., 2019; Su et al., 2019; Schiappa et al., 2022). As documents contain both visual and textual information distributed spatially in semantic regions, document-specific models and pre-training objectives are often necessary, which are distinct from vision or language domains." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 395, + 526, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 395, + 526, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 395, + 526, + 772 + ], + "type": "text", + "content": "We summarize common model structures for document pre-training in Fig. 2a. Specifically, LayoutLM (Xu et al., 2020) takes a sequence of Optical Character Recognition (OCR) (Smith, 2007) words and word bounding boxes as inputs. It extends BERT to learn contextualized word representations for document images through multitask learning. LayoutLMv2 (Xu et al., 2021b) improves on the prior work with new pre-training tasks to model the interaction among texts, layouts, and images. DocFormer (Appalaraju et al., 2021) adopts a CNN model to extract image grid features, fusing the spatial information as an inductive bias for the self-attention module. LayoutLMv3 (Huang et al., 2022) further enhances visual and spatial characteristics with masked image modeling and word-patch alignment tasks. Another line of work focuses on various granularities of documents, such as region-level text/image blocks. Examples of such models include SelfDoc (Li et al., 2021a), UDoc (Gu et al., 2021), and MGDoc (Wang et al., 2022b), which are pre-trained with a cross-modal encoder to capture the relationship between visual and textual features. These models incorporate spatial information by fusing position embeddings at the output layer of their encoders, instead of the input layer. 
Additionally, OCR-free models (Kim et al., 2022; Tang et al., 2023) tackle document understanding as a se" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 310, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 310, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 310, + 791 + ], + "type": "text", + "content": "4974" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 291, + 98 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 291, + 98 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 291, + 98 + ], + "type": "text", + "content": "quence generation problem, unifying multiple tasks through an image-to-sequence generation network." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 99, + 291, + 194 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 99, + 291, + 194 + ], + "spans": [ + { + "bbox": [ + 67, + 99, + 291, + 194 + ], + "type": "text", + "content": "While these pre-trained models demonstrate promising performance on downstream applications, their robustness to different types of OOD data, the influence of pre-training and fine-tuning, and the value of different modalities (e.g. spatial, textual, and visual) for document OOD detection remain largely unexplored." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 204, + 235, + 216 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 204, + 235, + 216 + ], + "spans": [ + { + "bbox": [ + 67, + 204, + 235, + 216 + ], + "type": "text", + "content": "2.2 Out-of-Distribution Detection" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 222, + 291, + 534 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 222, + 291, + 534 + ], + "spans": [ + { + "bbox": [ + 67, + 222, + 291, + 534 + ], + "type": "text", + "content": "OOD detection has been extensively studied for open-world multi-class classification with natural image and text inputs, where the goal is to derive an OOD score that separates OOD from ID samples. A plethora of methods are proposed for deep neural networks, where the OOD scoring function is typically derived based on logits (without softmax scaling) (Hendrycks et al., 2022), softmax outputs (Liang et al., 2018; Hsu et al., 2020; Huang and Li, 2021; Sun et al., 2021), gradients (Huang et al., 2021), and feature embeddings (Tack et al., 2020; Fort et al., 2021; Ming et al., 2023). Despite their impressive performance on natural images and texts, it is underexplored if the results are transferrable to the document domain. A recent work (Larson et al., 2022) studied OOD detection for documents but only explored a limited number of models and OOD detection methods. The impacts of pre-training, fine-tuning, and spatial information remain unknown. In this work, we aim to provide a comprehensive and finer-grained analysis to shed light on the key factors for OOD robustness in the document domain." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "text", + "content": "Notations. 
Following prior works on OOD detection with large-scale pre-trained models (Ming et al., 2022a; Ming and Li, 2023), the task of OOD detection is defined with respect to the downstream dataset, instead of the pre-training data which is often hard to characterize. In document classification, we use " + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "inline_equation", + "content": "\\mathcal{X}^{\\mathrm{in}}" + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "inline_equation", + "content": "\\mathcal{Y}^{\\mathrm{in}} = \\{1,\\dots ,K\\}" + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "text", + "content": " to denote the input and label space, respectively. Let " + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "inline_equation", + "content": "\\mathcal{D}^{\\mathrm{in}} = \\{(x_i^{\\mathrm{in}},y_i^{\\mathrm{in}})\\}_{i = 1}^N" + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "text", + "content": " be the ID dataset, where " + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "inline_equation", + "content": "x\\in \\mathcal{X}^{\\mathrm{in}}" + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "inline_equation", + "content": "y^{\\mathrm{in}}\\in \\mathcal{Y}^{\\mathrm{in}}" + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "text", + "content": ". 
Let " + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "inline_equation", + "content": "\\mathcal{D}^{\\mathrm{out}} = \\{(x_i^{\\mathrm{out}},y_i^{\\mathrm{out}})\\}_{i = 1}^M" + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "text", + "content": " denote an OOD test set where " + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "inline_equation", + "content": "y^{\\mathrm{out}}\\in \\mathcal{Y}^{\\mathrm{out}}" + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "inline_equation", + "content": "\\mathcal{Y}^{\\mathrm{out}}\\cap \\mathcal{Y}^{\\mathrm{in}} = \\emptyset" + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "text", + "content": ". We express the neural network model " + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "inline_equation", + "content": "f\\coloneqq g\\circ h" + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "text", + "content": " as a composition of a feature extractor " + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "inline_equation", + "content": "h:\\mathcal{X}\\to \\mathbb{R}^{d}" + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "text", + "content": " and a classifier " + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "inline_equation", + "content": "g:\\mathbb{R}^{d}\\to \\mathbb{R}^{K}" + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "text", + "content": " which maps the feature embedding of an input to " + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "text", + "content": " real-valued numbers known as logits. 
During inference time, given an input " + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "inline_equation", + "content": "\\pmb{x}" + }, + { + "bbox": [ + 67, + 543, + 291, + 773 + ], + "type": "text", + "content": ", OOD detection" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 302, + 71, + 401, + 83 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 401, + 83 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 401, + 83 + ], + "type": "text", + "content": "can be formulated as:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 321, + 87, + 506, + 121 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 87, + 506, + 121 + ], + "spans": [ + { + "bbox": [ + 321, + 87, + 506, + 121 + ], + "type": "interline_equation", + "content": "G_{\\gamma}(\\boldsymbol{x}; h, g) = \\left\\{ \\begin{array}{ll} \\mathrm{ID} & S(\\boldsymbol{x}; h, g) \\geq \\gamma \\\\ \\mathrm{OOD} & S(\\boldsymbol{x}; h, g) < \\gamma \\end{array} \\right.,", + "image_path": "7de8e393312d6a15da0cdc324392d3297e083f70dc7e3f260796f4ddff94948b.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 126, + 525, + 180 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 126, + 525, + 180 + ], + "spans": [ + { + "bbox": [ + 302, + 126, + 525, + 180 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 302, + 126, + 525, + 180 + ], + "type": "inline_equation", + "content": "S(\\cdot)" + }, + { + "bbox": [ + 302, + 126, + 525, + 180 + ], + "type": "text", + "content": " is a scoring function that measures OOD uncertainty. In practice, the threshold " + }, + { + "bbox": [ + 302, + 126, + 525, + 180 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 302, + 126, + 525, + 180 + ], + "type": "text", + "content": " is often chosen so that a high fraction of ID data (e.g., 95%) is above the threshold." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 187, + 526, + 282 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 187, + 526, + 282 + ], + "spans": [ + { + "bbox": [ + 302, + 187, + 526, + 282 + ], + "type": "text", + "content": "OOD detection scores. We focus on two major categories of computationally efficient OOD detection methods1: logit-based methods derive OOD scores from the logit layer of the model, while distance-based methods directly leverage feature embeddings, as shown in Fig. 1. We describe a few popular methods for each category as follows." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 287, + 527, + 558 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 287, + 527, + 558 + ], + "spans": [ + { + "bbox": [ + 302, + 287, + 527, + 558 + ], + "type": "text", + "content": "- Logit-based: Maximum Softmax Probability (MSP) score (Hendrycks and Gimpel, 2017) " + }, + { + "bbox": [ + 302, + 287, + 527, + 558 + ], + "type": "inline_equation", + "content": "S_{\\mathrm{MSP}} = \\max_{i\\in [K]}e^{f_i(\\boldsymbol{x})} / \\sum_{j = 1}^K e^{f_j(\\boldsymbol{x})}" + }, + { + "bbox": [ + 302, + 287, + 527, + 558 + ], + "type": "text", + "content": " naturally arises as a classic baseline as models often output lower softmax probabilities for OOD data; Energy score (Liu et al., 2020): " + }, + { + "bbox": [ + 302, + 287, + 527, + 558 + ], + "type": "inline_equation", + "content": "S_{\\mathrm{Energy}} = \\log \\sum_{i\\in [K]}e^{f_i(\\boldsymbol{x})}" + }, + { + "bbox": [ + 302, + 287, + 527, + 558 + ], + "type": "text", + "content": " utilizes the Helmholtz free energy of the data and theoretically aligns with the logarithm of the ID density; the simple MaxLogit score (Hendrycks et al., 2022): " + }, + { + "bbox": [ + 302, + 287, + 527, + 558 + ], + "type": "inline_equation", + "content": "S_{\\mathrm{Maxlogit}} = \\max_{i\\in [K]}f_i(\\boldsymbol{x})" + }, + { + "bbox": [ + 302, + 287, + 
527, + 558 + ], + "type": "text", + "content": " has demonstrated promising performance on large-scale natural image datasets. We select the above scores due to their simplicity and computational efficiency. In addition, recent studies demonstrate that such simple scores are particularly effective with large-scale pre-trained models in vision (Fort et al., 2021) and vision-language domains (Ming et al., 2022a; Bitterwolf et al., 2023). We complement previous studies and investigate their effectiveness for documents." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 303, + 559, + 526, + 747 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 559, + 526, + 747 + ], + "spans": [ + { + "bbox": [ + 303, + 559, + 526, + 747 + ], + "type": "text", + "content": "- Distance-based: Distance-based methods directly leverage feature embeddings " + }, + { + "bbox": [ + 303, + 559, + 526, + 747 + ], + "type": "inline_equation", + "content": "\\mathbf{z} = h(\\mathbf{x})" + }, + { + "bbox": [ + 303, + 559, + 526, + 747 + ], + "type": "text", + "content": " based on the idea that OOD inputs are relatively far away from ID clusters in the feature space, compared to ID inputs. Distance-based methods can be characterized as parametric and non-parametric. Parametric methods such as Mahalanobis score (Lee et al., 2018; Sehwag et al., 2021) assume ID embeddings follow class-conditional Gaussian distributions and use the Mahalanobis distance as the distance metric. On the other hand, non-parametric methods such as KNN+ (Sun et al., 2022) use cosine similarity as the distance metric." 
+ } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 751, + 525, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 525, + 772 + ], + "type": "text", + "content": "1We also investigate gradient-based methods such as Grad-Norm (Huang et al., 2021) in Appendix C." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "text", + "content": "4975" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 71, + 68, + 292, + 168 + ], + "blocks": [ + { + "bbox": [ + 71, + 68, + 292, + 168 + ], + "lines": [ + { + "bbox": [ + 71, + 68, + 292, + 168 + ], + "spans": [ + { + "bbox": [ + 71, + 68, + 292, + 168 + ], + "type": "image", + "image_path": "fa69a4a5a040426c3b4a5c6ea1a50dd5ffb253621aa2135ed5d8ea12ecf35d03.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 70, + 172, + 293, + 192 + ], + "lines": [ + { + "bbox": [ + 70, + 172, + 293, + 192 + ], + "spans": [ + { + "bbox": [ + 70, + 172, + 293, + 192 + ], + "type": "text", + "content": "(a) Illustration of common structures for document pretraining and classification." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 299, + 69, + 518, + 169 + ], + "blocks": [ + { + "bbox": [ + 299, + 69, + 518, + 169 + ], + "lines": [ + { + "bbox": [ + 299, + 69, + 518, + 169 + ], + "spans": [ + { + "bbox": [ + 299, + 69, + 518, + 169 + ], + "type": "image", + "image_path": "41c1a7bbaa1d47b5a7729e95f3246eef71d8d4b5ef2b773028c6fe2c610cc6a5.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 298, + 172, + 518, + 191 + ], + "lines": [ + { + "bbox": [ + 298, + 172, + 518, + 191 + ], + "spans": [ + { + "bbox": [ + 298, + 172, + 518, + 191 + ], + "type": "text", + "content": "(b) A detailed comparison of per-category accuracy on the RVL-CDIP test set." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 67, + 200, + 525, + 260 + ], + "lines": [ + { + "bbox": [ + 67, + 200, + 525, + 260 + ], + "spans": [ + { + "bbox": [ + 67, + 200, + 525, + 260 + ], + "type": "text", + "content": "Figure 2: (Left) Illustration of models for document pre-training and classification, with our proposed spatial-aware models in green blocks. Modality information is also shown atop each architecture. (Right) Evaluating fine-tuning performance for document classification of pre-trained models. Models are grouped into several categories (from left to right): language-only, vision-only, and multi-modal. For comparison, the performance of corresponding models in other groups is shown in gray. The average accuracy for each model is indicated in the parenthesis." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 280, + 290, + 362 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 280, + 290, + 362 + ], + "spans": [ + { + "bbox": [ + 67, + 280, + 290, + 362 + ], + "type": "text", + "content": "Evaluation metrics. 
To evaluate OOD detection performance, we adopt the following commonly used metrics: the Area Under the Receiver Operating Characteristic (AUROC), False Positive Rate at " + }, + { + "bbox": [ + 67, + 280, + 290, + 362 + ], + "type": "inline_equation", + "content": "95\\%" + }, + { + "bbox": [ + 67, + 280, + 290, + 362 + ], + "type": "text", + "content": " Recall (FPR95), and the multi-class classification accuracy (ID Acc)." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 371, + 191, + 386 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 371, + 191, + 386 + ], + "spans": [ + { + "bbox": [ + 67, + 371, + 191, + 386 + ], + "type": "text", + "content": "3 Experimental Setup" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 392, + 291, + 581 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 392, + 291, + 581 + ], + "spans": [ + { + "bbox": [ + 67, + 392, + 291, + 581 + ], + "type": "text", + "content": "Models. Fig. 2a summarizes common structures for document pre-training and classification models2. While documents typically come in the form of images (Harley et al., 2015), an OCR system can be used to extract words and their coordinates from the input image. Therefore, models can use single-modal or multi-modal information. We categorize these models according to the input modalities into the following groups: (1) models using only visual features, (2) models using solely textual features, (3) models incorporating both visual and textual features, and (4) models integrating additional spatial (especially layout) information. Further details can be found in Appendix A." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 589, + 291, + 724 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 589, + 291, + 724 + ], + "spans": [ + { + "bbox": [ + 67, + 589, + 291, + 724 + ], + "type": "text", + "content": "- Vision-only: Document classification can be viewed as a standard image classification problem. We consider ResNet-50 (He et al., 2016) and ViT (Fort et al., 2021) as exemplar document image classification models. We adopt two common pre-training settings: (1) only pre-trained on ImageNet (Deng et al., 2009) and (2) further pre-trained on IIT-CDIP (Lewis et al., 2006) with masked image modeling " + }, + { + "bbox": [ + 67, + 589, + 291, + 724 + ], + "type": "inline_equation", + "content": "(\\mathrm{MIM})^3" + }, + { + "bbox": [ + 67, + 589, + 291, + 724 + ], + "type": "text", + "content": ". After pretraining, we append a classifier for fine-tuning." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 303, + 280, + 526, + 731 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 303, + 280, + 526, + 417 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 280, + 526, + 417 + ], + "spans": [ + { + "bbox": [ + 303, + 280, + 526, + 417 + ], + "type": "text", + "content": "- Text-only: Alternatively, we can view document classification as text classification since documents often contain text blocks. To this end, we use RoBERTa (Liu et al., 2019) and Longformer (Beltagy et al., 2020) as the backbones. RoBERTa can handle up to 512 input tokens while Longformer can handle up to 4,096 input tokens. We pre-train the language models with masked language modeling (MLM) on IIT-CDIP extracted text corpus." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 303, + 432, + 526, + 620 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 432, + 526, + 620 + ], + "spans": [ + { + "bbox": [ + 303, + 432, + 526, + 620 + ], + "type": "text", + "content": "- Text+Layout: Layout information plays a crucial role in the document domain, as shown in Fig. 3. To investigate the effect of layout information, we adopt LayoutLM as the backbone. We will show that spatial-aware models demonstrate promising OOD detection performance. However, such specialized models can be computationally expensive. Therefore, we propose a new spatial-aware adapter, a small learned module that can be inserted within a pre-trained language model such as RoBERTa and transforms it into a spatial-aware model, which is computationally efficient and competitive for both ID classification and OOD detection (Sec. 5.2)." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 303, + 637, + 526, + 731 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 637, + 526, + 731 + ], + "spans": [ + { + "bbox": [ + 303, + 637, + 526, + 731 + ], + "type": "text", + "content": "- Vision+Text+Layout: For comprehensiveness, we consider LayoutLMv3 and UDoc, which are large and computationally intensive. Both models are pre-trained on the full IIT-CDIP for fairness. These models utilize different input granularities and modalities, including textual, visual, and spatial information for document tasks." 
+ } + ] + } + ], + "index": 11 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 67, + 729, + 290, + 761 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 729, + 290, + 761 + ], + "spans": [ + { + "bbox": [ + 67, + 729, + 290, + 761 + ], + "type": "inline_equation", + "content": "{}^{2}" + }, + { + "bbox": [ + 67, + 729, + 290, + 761 + ], + "type": "text", + "content": " Apart from document classification, in the Appendix B, we also investigate OOD detection for two entity-level tasks: document entity recognition and document object detection." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 80, + 761, + 289, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 761, + 289, + 772 + ], + "spans": [ + { + "bbox": [ + 80, + 761, + 289, + 772 + ], + "type": "text", + "content": "Note that the document classification dataset we used in" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 741, + 525, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 525, + 772 + ], + "type": "text", + "content": "this paper, RVL-CDIP (Harley et al., 2015), is a subset of IIT-CDIP. Hence, unless otherwise specified, the IIT-CDIP pre-training data used in this paper excludes RVL-CDIP." 
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "text", + "content": "4976" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 293, + 167 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 293, + 167 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 293, + 167 + ], + "type": "text", + "content": "Constructing ID and OOD datasets. We construct ID datasets from RVL-CDIP (Harley et al., 2015), where 12 out of 16 classes are selected as ID classes. Dataset details are in Appendix A. We consider two OOD scenarios: in-domain and out-domain, based on the content (e.g., words, background) and layout characteristics." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 68, + 184, + 292, + 415 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 184, + 292, + 415 + ], + "spans": [ + { + "bbox": [ + 68, + 184, + 292, + 415 + ], + "type": "text", + "content": "- In-domain OOD: To determine the OOD categories, we analyzed the performance of recent document classification models on the RVL-CDIP test set. Fig. 2b shows the per-category test accuracy of various models. Naturally, for the classes the models perform poorly on, we may expect the models to detect such inputs as OOD instead of assigning a specific ID class with low confidence. We observe that the 4 categories (letter, form, scientific report, and presentation) result in the worst performance across most of the models with different modalities. We use these as OOD categories and construct the OOD datasets accordingly. The ID dataset is constructed from the remaining 12 categories, which we refer to as in-domain OOD datasets, as they are also sourced from RVL-CDIP." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 68, + 430, + 293, + 620 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 430, + 293, + 620 + ], + "spans": [ + { + "bbox": [ + 68, + 430, + 293, + 620 + ], + "type": "text", + "content": "- Out-domain OOD: In the open-world setting, test inputs can have significantly different color schemes and layouts compared to ID samples. To mimic such scenarios, we use two public datasets as out-domain OOD test sets: NJU-Fudan Paper-Poster Dataset (Qiang et al., 2019) and CORD (Park et al., 2019). NJU-Fudan Paper-Poster Dataset contains scientific posters in digital PDF format4. CORD is a receipt understanding dataset with significantly different inputs compared to RVL-CDIP. As shown in Fig. 3, receipt images can be challenging and require models to handle not only textual but also visual and spatial information." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 638, + 292, + 747 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 638, + 292, + 747 + ], + "spans": [ + { + "bbox": [ + 67, + 638, + 292, + 747 + ], + "type": "text", + "content": "We further support our domain selection using OTDD (Alvarez-Melis and Fusi, 2020), a flexible geometric method for comparing probability distributions, which enables us to compare any two datasets regardless of their label sets. We observe a clear gap between in-domain and out-domain data, which aligns with our data selection. Further details can be found in Appendix A.1." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 302, + 70, + 481, + 97 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 70, + 481, + 97 + ], + "spans": [ + { + "bbox": [ + 302, + 70, + 481, + 97 + ], + "type": "text", + "content": "4 Analyzing OOD Reliability for Documents" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 302, + 105, + 505, + 119 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 105, + 505, + 119 + ], + "spans": [ + { + "bbox": [ + 302, + 105, + 505, + 119 + ], + "type": "text", + "content": "4.1 OOD Detection Without Fine-Tuning" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 122, + 527, + 258 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 122, + 527, + 258 + ], + "spans": [ + { + "bbox": [ + 302, + 122, + 527, + 258 + ], + "type": "text", + "content": "In this section, we begin by examining the influence of pre-training datasets on zero-shot OOD detection. For each model, we adopt the same pretraining objective while adjusting the amount of pre-training data. Specifically, we increase the data diversity by appending 10, 20, 40, and " + }, + { + "bbox": [ + 302, + 122, + 527, + 258 + ], + "type": "inline_equation", + "content": "100\\%" + }, + { + "bbox": [ + 302, + 122, + 527, + 258 + ], + "type": "text", + "content": " of randomly sampled data from IIT-CDIP dataset (around 11M) and pre-train each model. After pre-training, we measure the OOD detection performance with " + }, + { + "bbox": [ + 302, + 122, + 527, + 258 + ], + "type": "inline_equation", + "content": "\\mathrm{KNN + }" + }, + { + "bbox": [ + 302, + 122, + 527, + 258 + ], + "type": "text", + "content": " score based on feature embeddings." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 259, + 527, + 434 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 259, + 527, + 434 + ], + "spans": [ + { + "bbox": [ + 302, + 259, + 527, + 434 + ], + "type": "text", + "content": "We observe that: (1) for out-domain OOD data (Fig. 4a, right), increasing the amount of pretraining data can significantly improve the zero-shot OOD detection performance (w.o. fine-tuning) for models across different modalities. Our hypothesis is that pre-training with diverse data is beneficial for coarse-grained OOD detection, such as inputs from different domains (e.g., color schemes). (2) For in-domain OOD inputs, even increasing the amount of pre-training data by over " + }, + { + "bbox": [ + 302, + 259, + 527, + 434 + ], + "type": "inline_equation", + "content": "40\\%" + }, + { + "bbox": [ + 302, + 259, + 527, + 434 + ], + "type": "text", + "content": " provides negligible improvements (Fig. 4a, left). This suggests the necessity of fine-tuning for improving in-domain OOD detection performance (Fig. 6)." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 435, + 527, + 745 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 435, + 527, + 745 + ], + "spans": [ + { + "bbox": [ + 302, + 435, + 527, + 745 + ], + "type": "text", + "content": "We further explore a more restricted setting for zero-shot OOD detection where potential OOD categories are removed from the pre-training dataset IIT-CDIP. First, we use LayoutLM fine-tuned on RVL-CDIP to predict labels for all documents in IIT-CDIP. Fig. 4b summarizes the distribution of the predicted classes on IIT-CDIP. 
Next, we remove the \"OOD\" categories from IIT-CDIP and pretrain two models (RoBERTa and LayoutLM) with 10, 20, 40, and " + }, + { + "bbox": [ + 302, + 435, + 527, + 745 + ], + "type": "inline_equation", + "content": "100\\%" + }, + { + "bbox": [ + 302, + 435, + 527, + 745 + ], + "type": "text", + "content": " of randomly sampled data from the filtered IIT-CDIP (dubbed IIT- " + }, + { + "bbox": [ + 302, + 435, + 527, + 745 + ], + "type": "inline_equation", + "content": "\\mathrm{CDIP^{-}}" + }, + { + "bbox": [ + 302, + 435, + 527, + 745 + ], + "type": "text", + "content": "), respectively. The zero-shot OOD performance for in-domain and out-domain OOD is shown in Fig. " + }, + { + "bbox": [ + 302, + 435, + 527, + 745 + ], + "type": "inline_equation", + "content": "4c^{5}" + }, + { + "bbox": [ + 302, + 435, + 527, + 745 + ], + "type": "text", + "content": ". For RoBERTa, we observe similar trends as in Fig. 4a, where increasing the amount of pretraining data improves zero-shot OOD detection performance for out-domain data. However, the zero-shot performance of LayoutLM benefits from a larger pre-training dataset. In particular, given the same amount of pre-training data, LayoutLM consistently outperforms RoBERTa for both in-domain and out-domain OOD detection, which suggests that spatial information can be essential" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 751, + 525, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 525, + 772 + ], + "type": "text", + "content": "5Note that we do not show " + }, + { + "bbox": [ + 302, + 751, + 525, + 772 + ], + "type": "inline_equation", + "content": "0\\%" + }, + { + "bbox": [ + 302, + 751, + 525, + 772 + ], + "type": "text", + "content": " in Fig. 4c since we pre-train LayoutLM from scratch." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 80, + 760, + 287, + 772 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 760, + 287, + 772 + ], + "spans": [ + { + "bbox": [ + 80, + 760, + 287, + 772 + ], + "type": "text", + "content": "Extracted using https://github.com/pymupdf/PyMuPDF" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "text", + "content": "4977" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 70, + 68, + 526, + 208 + ], + "blocks": [ + { + "bbox": [ + 70, + 68, + 526, + 208 + ], + "lines": [ + { + "bbox": [ + 70, + 68, + 526, + 208 + ], + "spans": [ + { + "bbox": [ + 70, + 68, + 526, + 208 + ], + "type": "image", + "image_path": "607b7ed811f4520c90c87ebfa687f7795cf55fce27dd9493771989b802367bb3.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 219, + 525, + 245 + ], + "lines": [ + { + "bbox": [ + 67, + 219, + 525, + 245 + ], + "spans": [ + { + "bbox": [ + 67, + 219, + 525, + 245 + ], + "type": "text", + "content": "Figure 3: (Top) Examples of ID inputs sampled from RVL-CDIP. (Bottom) In-domain OOD from RVL-CDIP, and out-domain OOD from Scientific Poster and Receipts." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 73, + 257, + 238, + 354 + ], + "blocks": [ + { + "bbox": [ + 73, + 257, + 238, + 354 + ], + "lines": [ + { + "bbox": [ + 73, + 257, + 238, + 354 + ], + "spans": [ + { + "bbox": [ + 73, + 257, + 238, + 354 + ], + "type": "image", + "image_path": "3ff57b7eefe7bba1b923228264429ceba557d3580b56565eed51f383cfef3a6b.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 108, + 363, + 202, + 374 + ], + "lines": [ + { + "bbox": [ + 108, + 363, + 202, + 374 + ], + "spans": [ + { + "bbox": [ + 108, + 363, + 202, + 374 + ], + "type": "text", + "content": "(a) Pre-train on IIT-CDIP." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 240, + 256, + 352, + 356 + ], + "blocks": [ + { + "bbox": [ + 240, + 256, + 352, + 356 + ], + "lines": [ + { + "bbox": [ + 240, + 256, + 352, + 356 + ], + "spans": [ + { + "bbox": [ + 240, + 256, + 352, + 356 + ], + "type": "image", + "image_path": "533b8df0e97e947ab30e1ad933d79182cfc6cf6d62aeaf752e4904d98a066b43.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 383, + 525, + 408 + ], + "lines": [ + { + "bbox": [ + 67, + 383, + 525, + 408 + ], + "spans": [ + { + "bbox": [ + 67, + 383, + 525, + 408 + ], + "type": "text", + "content": "Figure 4: The impact of pre-training data on zero-shot OOD detection performance. IIT-CDIP" + }, + { + "bbox": [ + 67, + 383, + 525, + 408 + ], + "type": "inline_equation", + "content": "^{-}" + }, + { + "bbox": [ + 67, + 383, + 525, + 408 + ], + "type": "text", + "content": " denotes the filtered pre-training data after removing the \"OOD\" categories." 
+ } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 356, + 259, + 520, + 353 + ], + "blocks": [ + { + "bbox": [ + 249, + 363, + 343, + 374 + ], + "lines": [ + { + "bbox": [ + 249, + 363, + 343, + 374 + ], + "spans": [ + { + "bbox": [ + 249, + 363, + 343, + 374 + ], + "type": "text", + "content": "(b) Analysis of IIT-CDIP." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 356, + 259, + 520, + 353 + ], + "lines": [ + { + "bbox": [ + 356, + 259, + 520, + 353 + ], + "spans": [ + { + "bbox": [ + 356, + 259, + 520, + 353 + ], + "type": "image", + "image_path": "ffe6c9e679e1b4e6dd6a4e6537ee55d3a938507aacc027050f195aeadfe410b5.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 387, + 363, + 489, + 373 + ], + "lines": [ + { + "bbox": [ + 387, + 363, + 489, + 373 + ], + "spans": [ + { + "bbox": [ + 387, + 363, + 489, + 373 + ], + "type": "text", + "content": "(c) Pre-train on IIT-CDIP-." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 428, + 290, + 470 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 428, + 290, + 470 + ], + "spans": [ + { + "bbox": [ + 67, + 428, + 290, + 470 + ], + "type": "text", + "content": "for boosting the OOD reliability in the document domain. Motivated by the above observations, we dive deeper and analyze spatial-aware models next." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 470, + 291, + 633 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 470, + 291, + 633 + ], + "spans": [ + { + "bbox": [ + 67, + 470, + 291, + 633 + ], + "type": "text", + "content": "Pre-trained models exhibit the capability to differentiate data from various domains as a result of being trained on a diverse range of data. 
We observe that achieving more precise separation for in-domain OOD inputs remains difficult. Given this observation, we further analyze the impacts of fine-tuning for OOD detection with fixed pretraining datasets in the next section. By combining pre-trained models with a simple classifier and fine-tuning on RVL-CDIP (ID), we find that fine-tuning is advantageous in enhancing the OOD detection performance for both types of OOD samples." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 67, + 645, + 287, + 671 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 645, + 287, + 671 + ], + "spans": [ + { + "bbox": [ + 67, + 645, + 287, + 671 + ], + "type": "text", + "content": "4.2 The Impact of Fine-Tuning on Document OOD Detection" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 67, + 678, + 290, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 678, + 290, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 678, + 290, + 773 + ], + "type": "text", + "content": "Recent document models are often pre-trained on a large-scale dataset and adapted to the target task via fine-tuning. To better understand the role of fine-tuning, we explore the following questions: 1) How does fine-tuning impact OOD reliability for in-domain and out-domain OOD inputs? 2) How does model modality impact the performance?" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 428, + 526, + 767 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 428, + 526, + 767 + ], + "spans": [ + { + "bbox": [ + 302, + 428, + 526, + 767 + ], + "type": "text", + "content": "We consider a wide range of models pretrained on pure-text/image data (e.g., ImageNet and Wikipedia) described in Appendix A.3. During fine-tuning, we combine pre-trained models with a simple classifier and fine-tune on RVL-CDIP (ID). 
For models before and after fine-tuning, we extract the final feature embeddings and use a distance-based method KNN+ (Sun et al., 2022) for OOD detection. The results are shown in Fig. 6. We observe the following trends. First, fine-tuning largely improves OOD detection performance for both in-domain and out-domain OOD data. The same trend holds broadly across models with different modalities. Second, the improvement of fine-tuning is less significant for out-domain OOD data. For example, on Receipt (out-domain OOD), the AUROC for pre-trained ViT model is 97.13, whereas fine-tuning only improves by " + }, + { + "bbox": [ + 302, + 428, + 526, + 767 + ], + "type": "inline_equation", + "content": "0.79\\%" + }, + { + "bbox": [ + 302, + 428, + 526, + 767 + ], + "type": "text", + "content": ". This suggests that pre-trained models do have the potential to separate data from different domains due to the diversity of data used for pre-training, while it remains hard for pre-trained models to perform finer-grained separation for in-domain OOD inputs. 
Therefore, fine-tuning is beneficial for improving OOD detection performance for both types of OOD" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "text", + "content": "4978" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 69, + 68, + 295, + 153 + ], + "blocks": [ + { + "bbox": [ + 69, + 68, + 295, + 153 + ], + "lines": [ + { + "bbox": [ + 69, + 68, + 295, + 153 + ], + "spans": [ + { + "bbox": [ + 69, + 68, + 295, + 153 + ], + "type": "image", + "image_path": "0314be8bd1bca90ef5bdab4e487c0f9cd588fa77d31947c7b6755267540bb088.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 161, + 525, + 186 + ], + "lines": [ + { + "bbox": [ + 67, + 161, + 525, + 186 + ], + "spans": [ + { + "bbox": [ + 67, + 161, + 525, + 186 + ], + "type": "text", + "content": "Figure 5: Comparison between representative feature-based scores and logit-based scores for spatial-aware and non-spatial-aware models. Spatial-aware models are colored in blue." 
+ } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 299, + 68, + 524, + 153 + ], + "blocks": [ + { + "bbox": [ + 299, + 68, + 524, + 153 + ], + "lines": [ + { + "bbox": [ + 299, + 68, + 524, + 153 + ], + "spans": [ + { + "bbox": [ + 299, + 68, + 524, + 153 + ], + "type": "image", + "image_path": "da5abc0540cd0333359b641c58b0abc25ce6593a20535e58c75ebeef705c6902.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 70, + 196, + 223, + 299 + ], + "blocks": [ + { + "bbox": [ + 70, + 196, + 223, + 299 + ], + "lines": [ + { + "bbox": [ + 70, + 196, + 223, + 299 + ], + "spans": [ + { + "bbox": [ + 70, + 196, + 223, + 299 + ], + "type": "image", + "image_path": "3219f54bf54a194e6e31bb1117eb506c76bf9ac3c3eaf77178f10401e1c64d55.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 310, + 525, + 346 + ], + "lines": [ + { + "bbox": [ + 67, + 310, + 525, + 346 + ], + "spans": [ + { + "bbox": [ + 67, + 310, + 525, + 346 + ], + "type": "text", + "content": "Figure 6: OOD detection performance for pre-trained models w. and w.o. fine-tuning. We use a distance-based method KNN+ as the OOD scoring function. Fine-tuning significantly improves performance for both in and out-domain OOD data." 
+ } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 227, + 196, + 373, + 299 + ], + "blocks": [ + { + "bbox": [ + 227, + 196, + 373, + 299 + ], + "lines": [ + { + "bbox": [ + 227, + 196, + 373, + 299 + ], + "spans": [ + { + "bbox": [ + 227, + 196, + 373, + 299 + ], + "type": "image", + "image_path": "4e8f8a80434205a841194eab1c0f8c2ebcab57b5807dc1125b3cc39484f32d04.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 378, + 196, + 524, + 299 + ], + "blocks": [ + { + "bbox": [ + 378, + 196, + 524, + 299 + ], + "lines": [ + { + "bbox": [ + 378, + 196, + 524, + 299 + ], + "spans": [ + { + "bbox": [ + 378, + 196, + 524, + 299 + ], + "type": "image", + "image_path": "ede391d9002a39d7640601e6dd684305bd9e813cab27211ac6c309fc5244bd8d.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 367, + 290, + 474 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 367, + 290, + 474 + ], + "spans": [ + { + "bbox": [ + 67, + 367, + 290, + 474 + ], + "type": "text", + "content": "samples. To further validate our conclusion, we consider two additional in-domain OOD settings for our analysis: (1) selecting the classes the model performs well on, as in-domain OOD categories; (2) randomly selecting classes as OOD categories (Appendix A.2). We find that fine-tuning improves OOD detection for both settings, further verifying our observations." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 477, + 291, + 693 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 477, + 291, + 693 + ], + "spans": [ + { + "bbox": [ + 67, + 477, + 291, + 693 + ], + "type": "text", + "content": "Next, we take a closer look at the impact of model modality on out-domain OOD detection. As shown in Fig. 
6 (mid and right), both vision and text-based models demonstrate strong reliability against scientific posters (OOD). However, vision-based models display stronger performance than text-based models for Receipts (OOD). This can be explained by the fact that ViT was first pre-trained on ImageNet while scientific posters and receipts contain diverse visual information such as colors and edges for vision models to utilize (see Fig. 3). On the other hand, although fine-tuning text-based models largely improves the detection performance compared to pre-trained counterparts, utilizing only textual information can be inherently limited for out-domain OOD detection." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 67, + 708, + 282, + 722 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 708, + 282, + 722 + ], + "spans": [ + { + "bbox": [ + 67, + 708, + 282, + 722 + ], + "type": "text", + "content": "5 The Importance of Spatial-Awareness" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 733, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 733, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 733, + 291, + 772 + ], + "type": "text", + "content": "In previous sections, we mainly focus on mainstream text-based and vision-based models for in- and out-domain OOD detection. Next, we consider" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 367, + 526, + 433 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 367, + 526, + 433 + ], + "spans": [ + { + "bbox": [ + 302, + 367, + 526, + 433 + ], + "type": "text", + "content": "models tailored to document processing, which we refer to as spatial-aware models, such as LayoutLMv3 and UDoc. Given fine-tuned models, we compare the performance of logit-based and distance-based OOD scores." 
+ } + ] + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 305, + 443, + 523, + 555 + ], + "blocks": [ + { + "bbox": [ + 305, + 443, + 523, + 555 + ], + "lines": [ + { + "bbox": [ + 305, + 443, + 523, + 555 + ], + "spans": [ + { + "bbox": [ + 305, + 443, + 523, + 555 + ], + "type": "image", + "image_path": "70240005a0abddaad70c02836e857ccc66b79ca6b40b31ee70d80ff8cd54ca25.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 565, + 525, + 650 + ], + "lines": [ + { + "bbox": [ + 302, + 565, + 525, + 650 + ], + "spans": [ + { + "bbox": [ + 302, + 565, + 525, + 650 + ], + "type": "text", + "content": "Figure 7: Illustration of our spatial-aware adapter for language models. We present 2 adapter designs (marked in green box): (1) insert the adapter into the word embedding layer during pre-training and fine-tuning; (2) insert the adapter into the output layer for fine-tuning only. For the first design, we freeze the word embedding layer and learn the adapter and transformer layers." + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 674, + 489, + 687 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 674, + 489, + 687 + ], + "spans": [ + { + "bbox": [ + 302, + 674, + 489, + 687 + ], + "type": "text", + "content": "5.1 Analysis of Spatial-Aware Models" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 692, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 692, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 692, + 525, + 772 + ], + "type": "text", + "content": "We summarize key comparisons in Fig. 
5, where we use MSP and Energy as exemplar logit-based scores and " + }, + { + "bbox": [ + 302, + 692, + 525, + 772 + ], + "type": "inline_equation", + "content": "\\mathrm{KNN + }" + }, + { + "bbox": [ + 302, + 692, + 525, + 772 + ], + "type": "text", + "content": " as the distance-based score. Full results are in Appendix C. We can see that the simple KNN-based score (KNN+) consistently outperforms logit-based scores for both in-domain and" + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 791 + ], + "type": "text", + "content": "4979" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "bbox": [ + 66, + 71, + 293, + 301 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 71, + 293, + 301 + ], + "spans": [ + { + "bbox": [ + 66, + 71, + 293, + 301 + ], + "type": "text", + "content": "out-domain OOD data across different models with different modalities. This is in contrast with recent works that investigate large-scale pre-trained models in the vision-language domain, where logit-based scores demonstrate strong OOD detection performance (Fort et al., 2021). As documents are distinct from natural image-text pairs, observations in the vision-language domain do not seamlessly translate to the document domain. Moreover, spatial-aware models demonstrate stronger OOD detection performance for both in and out-domain OOD. 
For example, with the best scoring function " + }, + { + "bbox": [ + 66, + 71, + 293, + 301 + ], + "type": "inline_equation", + "content": "(\\mathrm{KNN}+)" + }, + { + "bbox": [ + 66, + 71, + 293, + 301 + ], + "type": "text", + "content": ", LayoutLMv3 improves the average AUROC by " + }, + { + "bbox": [ + 66, + 71, + 293, + 301 + ], + "type": "inline_equation", + "content": "7.09\\%" + }, + { + "bbox": [ + 66, + 71, + 293, + 301 + ], + "type": "text", + "content": " for out-domain OOD and " + }, + { + "bbox": [ + 66, + 71, + 293, + 301 + ], + "type": "inline_equation", + "content": "7.54\\%" + }, + { + "bbox": [ + 66, + 71, + 293, + 301 + ], + "type": "text", + "content": " for in-domain OOD data compared to RoBERTa. This further highlights the value of spatial information for improving OOD robustness for documents." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 66, + 302, + 291, + 464 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 302, + 291, + 464 + ], + "spans": [ + { + "bbox": [ + 66, + 302, + 291, + 464 + ], + "type": "text", + "content": "Despite the impressive improvements brought by spatial-aware models, acquiring a large-scale pretraining dataset that includes spatial information remains challenging. In contrast, there is a growing abundance of pre-trained language models that are based on textual data. This motivates us to explore the possibility of leveraging these pre-trained language models by training an adapter on a small dataset containing document-specific information. By adopting this approach, we can effectively utilize existing models while minimizing the time and cost required for training." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 475, + 290, + 488 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 475, + 290, + 488 + ], + "spans": [ + { + "bbox": [ + 67, + 475, + 290, + 488 + ], + "type": "text", + "content": "5.2 Towards Effective Spatial-Aware Adapter" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 66, + 493, + 291, + 709 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 493, + 291, + 709 + ], + "spans": [ + { + "bbox": [ + 66, + 493, + 291, + 709 + ], + "type": "text", + "content": "During our investigation into the effects of model modality, pre-training, and fine-tuning on various types of OOD inputs, we find that spatial/layout information plays a critical role in the document domain. However, existing pre-training models such as LayoutLM series, SelfDoc, and UDoc do not fully leverage the benefits of well-pre-trained language models. This raises the question of whether a large-scale language model, such as RoBERTa, can be adapted to detect OOD documents effectively. In this section, we demonstrate that incorporating an adapter module that accounts for spatial information with transformer-based pre-trained models can achieve strong performance with minimal changes to the code. To the best of our knowledge, this is the first study to apply the adapter idea to documents." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 66, + 719, + 293, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 719, + 293, + 775 + ], + "spans": [ + { + "bbox": [ + 66, + 719, + 293, + 775 + ], + "type": "text", + "content": "Spatial-aware adapter. Given a pre-trained language model such as RoBERTa, we propose an adapter that utilizes spatial information. 
We consider two potential designs: 1) the adapter is ap-" + } + ] + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 305, + 69, + 524, + 162 + ], + "blocks": [ + { + "bbox": [ + 305, + 69, + 524, + 162 + ], + "lines": [ + { + "bbox": [ + 305, + 69, + 524, + 162 + ], + "spans": [ + { + "bbox": [ + 305, + 69, + 524, + 162 + ], + "type": "image", + "image_path": "a56533c927e6b36ed598d7f41760e9bed8ccba8b7f7add36b364da44c80e960a.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 170, + 527, + 255 + ], + "lines": [ + { + "bbox": [ + 302, + 170, + 527, + 255 + ], + "spans": [ + { + "bbox": [ + 302, + 170, + 527, + 255 + ], + "type": "text", + "content": "Figure 8: Comparison of OOD detection performance of Spatial-RoBERTa and RoBERTa. All models are initialized with public pre-trained checkpoints trained on purely textual data and further pre-trained on IIT-CDIP. The only difference is that Spatial-RoBERTa has an additional spatial-aware adapter and takes word bounding boxes as additional inputs." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 281, + 526, + 402 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 281, + 526, + 402 + ], + "spans": [ + { + "bbox": [ + 302, + 281, + 526, + 402 + ], + "type": "text", + "content": "pended to the word embedding layer, denoted as Spatial-RoBERTa (pre), which requires both pre-training and fine-tuning. This architecture is illustrated in the top row of Fig. 7. 2) The adapter is appended to the final layer of the text encoder, denoted as Spatial-RoBERTa (post), which only requires fine-tuning as the model can utilize the pre-trained textual encoder, as shown in the bottom row of Fig. 7." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 405, + 526, + 729 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 405, + 526, + 729 + ], + "spans": [ + { + "bbox": [ + 302, + 405, + 526, + 729 + ], + "type": "text", + "content": "For Spatial-RoBERTa (pre), we freeze the word embedding layer during pre-training for several considerations: 1) word embeddings learned from large-scale corpus already cover most of those words from documents; 2) pre-training on documents without strong language dependency may not help improve word embeddings. For example, in semi-structured documents (e.g., forms, receipts), language dependencies are not as strong as in text-rich documents (e.g., letters, resumes), which may degenerate the learned word representations. In practice, each word has a normalized bounding box " + }, + { + "bbox": [ + 302, + 405, + 526, + 729 + ], + "type": "inline_equation", + "content": "(x_0, y_0, x_1, y_1)" + }, + { + "bbox": [ + 302, + 405, + 526, + 729 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 302, + 405, + 526, + 729 + ], + "type": "inline_equation", + "content": "(x_0, y_0) / (x_1, y_1)" + }, + { + "bbox": [ + 302, + 405, + 526, + 729 + ], + "type": "text", + "content": " corresponds to the position of the upper left / lower right in the bounding box. To encode positional information, we employ four position embedding layers, where each layer= encodes one coordinate " + }, + { + "bbox": [ + 302, + 405, + 526, + 729 + ], + "type": "inline_equation", + "content": "(e.g., x_0)" + }, + { + "bbox": [ + 302, + 405, + 526, + 729 + ], + "type": "text", + "content": " and produces a corresponding position embedding. The special tokens ([CLS], [SEP], and [PAD]) are attached with an empty bounding box " + }, + { + "bbox": [ + 302, + 405, + 526, + 729 + ], + "type": "inline_equation", + "content": "(0, 0, 0, 0)" + }, + { + "bbox": [ + 302, + 405, + 526, + 729 + ], + "type": "text", + "content": ". 
As depicted in the top row of Fig. 7, the spatial-aware word embeddings are formed by adding position embeddings to their corresponding word embeddings." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 733, + 527, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 733, + 527, + 773 + ], + "spans": [ + { + "bbox": [ + 302, + 733, + 527, + 773 + ], + "type": "text", + "content": "For Spatial-RoBERTa (post), position embeddings are added through late fusion in the final hidden states during fine-tuning without affecting the" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 310, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 310, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 310, + 791 + ], + "type": "text", + "content": "4980" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 70, + 68, + 226, + 178 + ], + "blocks": [ + { + "bbox": [ + 70, + 68, + 226, + 178 + ], + "lines": [ + { + "bbox": [ + 70, + 68, + 226, + 178 + ], + "spans": [ + { + "bbox": [ + 70, + 68, + 226, + 178 + ], + "type": "image", + "image_path": "110e122fbc8ca49348f8f64f04e6d3a599adf26422b92890e02a3c70f70deefd.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 193, + 526, + 242 + ], + "lines": [ + { + "bbox": [ + 67, + 193, + 526, + 242 + ], + "spans": [ + { + "bbox": [ + 67, + 193, + 526, + 242 + ], + "type": "text", + "content": "Figure 9: Correlation between ID accuracy and OOD detection performance. For most models, ID accuracy is positively correlated with OOD detection performance. Language models with spatial-aware adapters (highlighted in blue) achieve significantly higher ID accuracy and stronger OOD robustness (in AUROC) compared to language models without adapters. 
Here, " + }, + { + "bbox": [ + 67, + 193, + 526, + 242 + ], + "type": "inline_equation", + "content": "(+)" + }, + { + "bbox": [ + 67, + 193, + 526, + 242 + ], + "type": "text", + "content": " represents further pre-training on the IIT-CDIP dataset." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 221, + 68, + 375, + 178 + ], + "blocks": [ + { + "bbox": [ + 221, + 68, + 375, + 178 + ], + "lines": [ + { + "bbox": [ + 221, + 68, + 375, + 178 + ], + "spans": [ + { + "bbox": [ + 221, + 68, + 375, + 178 + ], + "type": "image", + "image_path": "7be12378d725b6bc34a985a57f9fd1ef7fb47644aece228bf26244f5557a6be5.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 375, + 68, + 524, + 178 + ], + "blocks": [ + { + "bbox": [ + 375, + 68, + 524, + 178 + ], + "lines": [ + { + "bbox": [ + 375, + 68, + 524, + 178 + ], + "spans": [ + { + "bbox": [ + 375, + 68, + 524, + 178 + ], + "type": "image", + "image_path": "b9e0ddf8d677ce9e17b38f1e9c17810e93ab472b943962c10f3fbfa493e46079.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 263, + 291, + 371 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 263, + 291, + 371 + ], + "spans": [ + { + "bbox": [ + 67, + 263, + 291, + 371 + ], + "type": "text", + "content": "pre-trained encoder. Our experiments demonstrate that introducing spatial-aware adapters during pretraining yields better results than only adding position embeddings during fine-tuning. For additional details, please refer to Appendix C. In the following, we focus on analyzing Spatial-RoBERTa (pre) and comparing both ID and OOD performance with that of the pure-text pre-trained RoBERTa." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 379, + 291, + 567 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 379, + 291, + 567 + ], + "spans": [ + { + "bbox": [ + 67, + 379, + 291, + 567 + ], + "type": "text", + "content": "Spatial-RoBERTa significantly outperforms RoBERTa. To verify the effectiveness of Spatial-RoBERTa, we compare the OOD detection performance of pre-trained and fine-tuned models. The results are shown in Fig. 8, where OOD performance is based on " + }, + { + "bbox": [ + 67, + 379, + 291, + 567 + ], + "type": "inline_equation", + "content": "\\mathrm{KNN + (K = 10)}" + }, + { + "bbox": [ + 67, + 379, + 291, + 567 + ], + "type": "text", + "content": ". Full results can be seen in Table 6. Spatial-RoBERTa significantly improves the OOD detection performance, especially after fine-tuning. For example, compared to RoBERTa (base), Spatial-RoBERTa (base) improves AUROC significantly by " + }, + { + "bbox": [ + 67, + 379, + 291, + 567 + ], + "type": "inline_equation", + "content": "4.24\\%" + }, + { + "bbox": [ + 67, + 379, + 291, + 567 + ], + "type": "text", + "content": " averaged over four in-domain OOD datasets. This further confirms the importance of spatial information for OOD detection in the document domain." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 575, + 291, + 724 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 575, + 291, + 724 + ], + "spans": [ + { + "bbox": [ + 67, + 575, + 291, + 724 + ], + "type": "text", + "content": "Spatial-RoBERTa is competitive for both ID classification and OOD detection. Beyond OOD detection performance, we also examine the multi-class ID classification accuracy and plot the two metrics for all models with different modalities in Fig. 9. We can clearly observe a positive correlation between ID accuracy and OOD detection performance (measured by AUROC) for both in-domain and out-domain OOD data. 
Moreover, spatial-aware models display superior ID accuracy and OOD robustness compared to text-only and" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 262, + 526, + 356 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 262, + 526, + 356 + ], + "spans": [ + { + "bbox": [ + 302, + 262, + 526, + 356 + ], + "type": "text", + "content": "vision-only models. Overall, Spatial-RoBERTa greatly improves upon RoBERTa and matches the performance of models with more complex and specialized architectures such as LayoutLM. Specifically, Spatial-RoBERTa (large) achieves 97.37 ID accuracy, which is even higher than LayoutLM (97.28) and UDoc (97.36)." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 358, + 527, + 465 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 358, + 527, + 465 + ], + "spans": [ + { + "bbox": [ + 302, + 358, + 527, + 465 + ], + "type": "text", + "content": "To summarize, our spatial-aware adapter effectively adapts pre-trained transformer-based text models to the document domain, improving both ID and OOD performance. In addition, by freezing the original word embeddings during pre-training, the models (Spatial-RoBERTa (base) and Spatial-RoBERTa (large)) are parameter-efficient and thus reduce the training cost." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 476, + 387, + 487 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 476, + 387, + 487 + ], + "spans": [ + { + "bbox": [ + 302, + 476, + 387, + 487 + ], + "type": "text", + "content": "6 Conclusions" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 301, + 497, + 526, + 727 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 497, + 526, + 727 + ], + "spans": [ + { + "bbox": [ + 301, + 497, + 526, + 727 + ], + "type": "text", + "content": "In this work, we provide a comprehensive and in-depth study on the impacts of pre-training, finetuning, model-modality, and OOD scores on a broad variety of document OOD detection tasks. We present novel insights on document OOD detection, which are under-explored or in contrast with OOD detection works based on vision-language models. In particular, we highlight that spatial information is critical for OOD detection in documents. We further propose a spatial-aware adapter as an add-on module to transformer-based models. Our module adapts pre-trained language models to the document domain. Extensive experiments on a broad range of datasets verify the effectiveness of our design. We hope our work will inspire future research toward improving OOD robustness for reliable document understanding." + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 67, + 731, + 291, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 731, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 731, + 291, + 772 + ], + "type": "text", + "content": "Spatial-RoBERTaBase (pre) incorporates position information during both pre-training and fine-tuning, while Spatial-RoBERTaBase (post) only inserts the adapter into the output layer for fine-tuning." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "text", + "content": "4981" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 68, + 71, + 149, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 71, + 149, + 83 + ], + "spans": [ + { + "bbox": [ + 68, + 71, + 149, + 83 + ], + "type": "text", + "content": "7 Limitations" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 93, + 291, + 269 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 93, + 291, + 269 + ], + "spans": [ + { + "bbox": [ + 67, + 93, + 291, + 269 + ], + "type": "text", + "content": "In this work, our main focus is on OOD detection for document understanding, with a specific emphasis on the context of document classification. As OOD detection based on document pre-trained models remains largely underexplored, we believe establishing an in-depth and extensive study of OOD detection for document classification would be a valuable stepping stone towards more complex tasks. Apart from document classification, in the Appendix B, we also investigate OOD detection for two entity-level tasks: document entity recognition and document object detection. We leave a more comprehensive treatment for future works." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 68, + 291, + 127, + 304 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 291, + 127, + 304 + ], + "spans": [ + { + "bbox": [ + 68, + 291, + 127, + 304 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 311, + 290, + 772 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 69, + 311, + 289, + 334 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 311, + 289, + 334 + ], + "spans": [ + { + "bbox": [ + 69, + 311, + 289, + 334 + ], + "type": "text", + "content": "David Alvarez-Melis and Nicolo Fusi. 2020. Geometric dataset distances via optimal transport. In NeurIPS." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 343, + 290, + 386 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 343, + 290, + 386 + ], + "spans": [ + { + "bbox": [ + 69, + 343, + 290, + 386 + ], + "type": "text", + "content": "Srikar Appalaraju, Bhavan Jasani, Bhargava Urala Kota, Yusheng Xie, and R Manmatha. 2021. Docformer: End-to-end transformer for document understanding. In ICCV." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 396, + 289, + 429 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 396, + 289, + 429 + ], + "spans": [ + { + "bbox": [ + 69, + 396, + 289, + 429 + ], + "type": "text", + "content": "Udit Arora, William Huang, and He He. 2021. Types of out-of-distribution texts and how to detect them. In EMNLP." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 439, + 289, + 473 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 439, + 289, + 473 + ], + "spans": [ + { + "bbox": [ + 69, + 439, + 289, + 473 + ], + "type": "text", + "content": "Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 482, + 289, + 505 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 482, + 289, + 505 + ], + "spans": [ + { + "bbox": [ + 69, + 482, + 289, + 505 + ], + "type": "text", + "content": "Abhijit Bendale and Terrance E Boult. 2016. Towards open set deep networks. In CVPR." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 514, + 290, + 547 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 514, + 290, + 547 + ], + "spans": [ + { + "bbox": [ + 69, + 514, + 290, + 547 + ], + "type": "text", + "content": "Julian Bitterwolf, Maximilian Mueller, and Matthias Hein. 2023. In or out? fixing imagenet out-of-distribution detection evaluation. In ICML." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 556, + 290, + 590 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 556, + 290, + 590 + ], + "spans": [ + { + "bbox": [ + 69, + 556, + 290, + 590 + ], + "type": "text", + "content": "Lei Cui, Yiheng Xu, Tengchao Lv, and Furu Wei. 2021. Document ai: Benchmarks, models and applications. arXiv preprint arXiv:2111.08609." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 69, + 599, + 289, + 633 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 599, + 289, + 633 + ], + "spans": [ + { + "bbox": [ + 69, + 599, + 289, + 633 + ], + "type": "text", + "content": "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In CVPR." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 69, + 642, + 289, + 687 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 642, + 289, + 687 + ], + "spans": [ + { + "bbox": [ + 69, + 642, + 289, + 687 + ], + "type": "text", + "content": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. 
Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 69, + 696, + 289, + 729 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 696, + 289, + 729 + ], + "spans": [ + { + "bbox": [ + 69, + 696, + 289, + 729 + ], + "type": "text", + "content": "Xuefeng Du, Zhaoning Wang, Mu Cai, and Yixuan Li. 2022. Vos: Learning what you don't know by virtual outlier synthesis. In ICLR." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 69, + 739, + 289, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 739, + 289, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 739, + 289, + 772 + ], + "type": "text", + "content": "Sepideh Esmaeilpour, Bing Liu, Eric Robertson, and Lei Shu. 2022. Zero-shot open set detection by extending clip. In AAAI." + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 525, + 772 + ], + "type": "list", + "angle": 0, + "index": 29, + "blocks": [ + { + "bbox": [ + 304, + 72, + 525, + 105 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 72, + 525, + 105 + ], + "spans": [ + { + "bbox": [ + 304, + 72, + 525, + 105 + ], + "type": "text", + "content": "Stanislav Fort, Jie Ren, and Balaji Lakshminarayanan. 2021. Exploring the limits of out-of-distribution detection. In NeurIPS." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 114, + 524, + 157 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 114, + 524, + 157 + ], + "spans": [ + { + "bbox": [ + 304, + 114, + 524, + 157 + ], + "type": "text", + "content": "ZongYuan Ge, Sergey Demyanov, Zetao Chen, and Rahul Garnavi. 2017. Generative openmax for multi-class open set classification. arXiv preprint arXiv:1707.07418."
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 166, + 524, + 210 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 166, + 524, + 210 + ], + "spans": [ + { + "bbox": [ + 304, + 166, + 524, + 210 + ], + "type": "text", + "content": "Jiuxiang Gu, Jason Kuen, Vlad I Morariu, Handong Zhao, Rajiv Jain, Nikolaos Barmpalios, Ani Nenkova, and Tong Sun. 2021. Unified pretraining framework for document understanding. In NeurIPS." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 218, + 525, + 271 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 218, + 525, + 271 + ], + "spans": [ + { + "bbox": [ + 304, + 218, + 525, + 271 + ], + "type": "text", + "content": "Zhangxuan Gu, Changhua Meng, Ke Wang, Jun Lan, Weiqiang Wang, Ming Gu, and Liqing Zhang. 2022. Xylayoutlm: Towards layout-aware multimodal networks for visually-rich document understanding. In CVPR." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 280, + 525, + 324 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 280, + 525, + 324 + ], + "spans": [ + { + "bbox": [ + 304, + 280, + 525, + 324 + ], + "type": "text", + "content": "Adam W Harley, Alex Ufkes, and Konstantinos G Derpanis. 2015. Evaluation of deep convolutional nets for document image classification and retrieval. In ICDAR." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 333, + 525, + 378 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 333, + 525, + 378 + ], + "spans": [ + { + "bbox": [ + 304, + 333, + 525, + 378 + ], + "type": "text", + "content": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR."
+ } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 304, + 386, + 525, + 429 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 386, + 525, + 429 + ], + "spans": [ + { + "bbox": [ + 304, + 386, + 525, + 429 + ], + "type": "text", + "content": "Dan Hendrycks, Steven Basart, Mantas Mazeika, Mohammadreza Mostajabi, Jacob Steinhardt, and Dawn Song. 2022. Scaling out-of-distribution detection for real-world settings. In ICML." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 304, + 438, + 525, + 470 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 438, + 525, + 470 + ], + "spans": [ + { + "bbox": [ + 304, + 438, + 525, + 470 + ], + "type": "text", + "content": "Dan Hendrycks and Kevin Gimpel. 2017. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In ICLR." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 304, + 478, + 525, + 533 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 478, + 525, + 533 + ], + "spans": [ + { + "bbox": [ + 304, + 478, + 525, + 533 + ], + "type": "text", + "content": "Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, and Sungrae Park. 2022. Bros: A pre-trained language model focusing on text and layout for better key information extraction from documents. In AAAI." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 304, + 541, + 525, + 585 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 541, + 525, + 585 + ], + "spans": [ + { + "bbox": [ + 304, + 541, + 525, + 585 + ], + "type": "text", + "content": "Yen-Chang Hsu, Yilin Shen, Hongxia Jin, and Zsolt Kira. 2020. Generalized odin: Detecting out-of-distribution image without learning from out-of-distribution data. In CVPR." 
+ } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 304, + 593, + 524, + 626 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 593, + 524, + 626 + ], + "spans": [ + { + "bbox": [ + 304, + 593, + 524, + 626 + ], + "type": "text", + "content": "Rui Huang, Andrew Geng, and Yixuan Li. 2021. On the importance of gradients for detecting distributional shifts in the wild. In NeurIPS." + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 304, + 634, + 525, + 667 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 634, + 525, + 667 + ], + "spans": [ + { + "bbox": [ + 304, + 634, + 525, + 667 + ], + "type": "text", + "content": "Rui Huang and Yixuan Li. 2021. Mos: Towards scaling out-of-distribution detection for large semantic space. In CVPR." + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 304, + 676, + 525, + 719 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 676, + 525, + 719 + ], + "spans": [ + { + "bbox": [ + 304, + 676, + 525, + 719 + ], + "type": "text", + "content": "Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, and Furu Wei. 2022. Layoutlmv3: Pre-training for document ai with unified text and image masking. In ACMMM." + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 304, + 728, + 525, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 728, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 304, + 728, + 525, + 772 + ], + "type": "text", + "content": "Guillaume Jaume, Hazim Kemal Ekenel, and Jean-Philippe Thiran. 2019. Funsd: A dataset for form understanding in noisy scanned documents. In ICDAR Workshop."
+ } + ] + } + ], + "index": 28 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "text", + "content": "4982" + } + ] + } + ], + "index": 30 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 290, + 772 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 69, + 72, + 290, + 105 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 72, + 290, + 105 + ], + "spans": [ + { + "bbox": [ + 69, + 72, + 290, + 105 + ], + "type": "text", + "content": "Di Jin, Shuyang Gao, Seokhwan Kim, Yang Liu, and Dilek Hakkani-Tur. 2022. Towards textual out-of-domain detection without in-domain labels. TASLP." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 115, + 289, + 159 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 115, + 289, + 159 + ], + "spans": [ + { + "bbox": [ + 69, + 115, + 289, + 159 + ], + "type": "text", + "content": "Geewook Kim, Teakgyu Hong, Moonbin Yim, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, and Seunghyun Park. 2022. Donut: Document understanding transformer without OCR." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 169, + 289, + 191 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 169, + 289, + 191 + ], + "spans": [ + { + "bbox": [ + 69, + 169, + 289, + 191 + ], + "type": "text", + "content": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In ICLR."
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 200, + 289, + 244 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 200, + 289, + 244 + ], + "spans": [ + { + "bbox": [ + 69, + 200, + 289, + 244 + ], + "type": "text", + "content": "Stefan Larson, Gordon Lim, Yutong Ai, David Kuang, and Kevin Leach. 2022. Evaluating out-of-distribution performance on document image classifiers. In NeurIPS." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 254, + 289, + 298 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 254, + 289, + 298 + ], + "spans": [ + { + "bbox": [ + 69, + 254, + 289, + 298 + ], + "type": "text", + "content": "Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. 2018. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In NeurIPS." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 308, + 289, + 352 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 308, + 289, + 352 + ], + "spans": [ + { + "bbox": [ + 69, + 308, + 289, + 352 + ], + "type": "text", + "content": "D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard. 2006. Building a test collection for complex document information processing. In SIGIR." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 362, + 289, + 406 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 362, + 289, + 406 + ], + "spans": [ + { + "bbox": [ + 69, + 362, + 289, + 406 + ], + "type": "text", + "content": "Gen Li, Nan Duan, Yuejian Fang, Daxin Jiang, and Ming Zhou. 2020. Unicoder-vl: A universal encoder for vision and language by cross-modal pre-training. In AAAI." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 416, + 289, + 460 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 416, + 289, + 460 + ], + "spans": [ + { + "bbox": [ + 69, + 416, + 289, + 460 + ], + "type": "text", + "content": "Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, and Furu Wei. 2022. Dit: Self-supervised pretraining for document image transformer. In ACM MM." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 470, + 289, + 514 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 470, + 289, + 514 + ], + "spans": [ + { + "bbox": [ + 69, + 470, + 289, + 514 + ], + "type": "text", + "content": "Peizhao Li, Jiuxiang Gu, Jason Kuen, Vlad Morariu, Handong Zhao, Rajiv Jain, Varun Manjunatha, and Hongfu Liu. 2021a. Selfdoc: Self-supervised document representation learning. In CVPR." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 523, + 289, + 567 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 523, + 289, + 567 + ], + "spans": [ + { + "bbox": [ + 69, + 523, + 289, + 567 + ], + "type": "text", + "content": "Xiaoya Li, Jiwei Li, Xiaofei Sun, Chun Fan, Tianwei Zhang, Fei Wu, Yuxian Meng, and Jun Zhang. 2021b. kfolden: k-fold ensemble for out-of-distribution detection. In EMNLP." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 69, + 577, + 289, + 610 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 577, + 289, + 610 + ], + "spans": [ + { + "bbox": [ + 69, + 577, + 289, + 610 + ], + "type": "text", + "content": "Shiyu Liang, Yixuan Li, and Rayadurgam Srikant. 2018. Enhancing the reliability of out-of-distribution image detection in neural networks. In ICLR." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 69, + 620, + 289, + 652 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 620, + 289, + 652 + ], + "spans": [ + { + "bbox": [ + 69, + 620, + 289, + 652 + ], + "type": "text", + "content": "Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li. 2020. Energy-based out-of-distribution detection. In NeurIPS." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 69, + 663, + 289, + 719 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 663, + 289, + 719 + ], + "spans": [ + { + "bbox": [ + 69, + 663, + 289, + 719 + ], + "type": "text", + "content": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 69, + 728, + 289, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 728, + 289, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 728, + 289, + 772 + ], + "type": "text", + "content": "Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In NeurIPS." + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 524, + 772 + ], + "type": "list", + "angle": 0, + "index": 29, + "blocks": [ + { + "bbox": [ + 305, + 72, + 524, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 72, + 524, + 116 + ], + "spans": [ + { + "bbox": [ + 305, + 72, + 524, + 116 + ], + "type": "text", + "content": "Yifei Ming, Ziyang Cai, Jiuxiang Gu, Yiyou Sun, Wei Li, and Yixuan Li. 2022a. Delving into out-of-distribution detection with vision-language representations. In NeurIPS."
+ } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 125, + 524, + 158 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 125, + 524, + 158 + ], + "spans": [ + { + "bbox": [ + 304, + 125, + 524, + 158 + ], + "type": "text", + "content": "Yifei Ming, Ying Fan, and Yixuan Li. 2022b. Poem: Out-of-distribution detection with posterior sampling. In ICML. PMLR." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 167, + 524, + 200 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 167, + 524, + 200 + ], + "spans": [ + { + "bbox": [ + 304, + 167, + 524, + 200 + ], + "type": "text", + "content": "Yifei Ming and Yixuan Li. 2023. How does fine-tuning impact out-of-distribution detection for vision-language models? IJCV." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 209, + 524, + 242 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 209, + 524, + 242 + ], + "spans": [ + { + "bbox": [ + 304, + 209, + 524, + 242 + ], + "type": "text", + "content": "Yifei Ming, Yiyou Sun, Ousmane Dia, and Yixuan Li. 2023. How to exploit hyperspherical embeddings for out-of-distribution detection? In ICLR." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 251, + 524, + 284 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 251, + 524, + 284 + ], + "spans": [ + { + "bbox": [ + 304, + 251, + 524, + 284 + ], + "type": "text", + "content": "Yifei Ming, Hang Yin, and Yixuan Li. 2022c. On the impact of spurious correlation for out-of-distribution detection. In AAAI." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 293, + 524, + 338 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 293, + 524, + 338 + ], + "spans": [ + { + "bbox": [ + 304, + 293, + 524, + 338 + ], + "type": "text", + "content": "Ajoy Mondal, Peter Lipps, and CV Jawahar. 2020. 
Iiit-ar-13k: a new dataset for graphical object detection in documents. In International Workshop on Document Analysis Systems." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 304, + 346, + 524, + 390 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 346, + 524, + 390 + ], + "spans": [ + { + "bbox": [ + 304, + 346, + 524, + 390 + ], + "type": "text", + "content": "Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. 2019. Do deep generative models know what they don't know? In ICLR." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 304, + 398, + 524, + 432 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 398, + 524, + 432 + ], + "spans": [ + { + "bbox": [ + 304, + 398, + 524, + 432 + ], + "type": "text", + "content": "Poojan Oza and Vishal M Patel. 2019. C2ae: Class conditioned auto-encoder for open-set recognition. In CVPR." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 304, + 440, + 524, + 486 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 440, + 524, + 486 + ], + "spans": [ + { + "bbox": [ + 304, + 440, + 524, + 486 + ], + "type": "text", + "content": "Seunghyun Park, Seung Shin, Bado Lee, Junyeop Lee, Jaeheung Surh, Minjoon Seo, and Hwalsuk Lee. 2019. Cord: A consolidated receipt dataset for post-ocr parsing. In NeurIPS Workshop." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 304, + 494, + 524, + 538 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 494, + 524, + 538 + ], + "spans": [ + { + "bbox": [ + 304, + 494, + 524, + 538 + ], + "type": "text", + "content": "Alexander Podolskiy, Dmitry Lipin, Andrey Bout, Ekaterina Artemova, and Irina Piontkovskaya. 2021. Revisiting mahalanobis distance for transformer-based out-of-domain detection. In AAAI."
+ } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 304, + 546, + 524, + 591 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 546, + 524, + 591 + ], + "spans": [ + { + "bbox": [ + 304, + 546, + 524, + 591 + ], + "type": "text", + "content": "Yu-Ting Qiang, Yan-Wei Fu, Xiao Yu, Yan-Wen Guo, Zhi-Hua Zhou, and Leonid Sigal. 2019. Learning to generate posters of scientific papers by probabilistic graphical models. JCST." + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 304, + 599, + 524, + 655 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 599, + 524, + 655 + ], + "spans": [ + { + "bbox": [ + 304, + 599, + 524, + 655 + ], + "type": "text", + "content": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In ICML." + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 304, + 663, + 524, + 719 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 663, + 524, + 719 + ], + "spans": [ + { + "bbox": [ + 304, + 663, + 524, + 719 + ], + "type": "text", + "content": "Jie Ren, Jiaming Luo, Yao Zhao, Kundan Krishna, Mohammad Saleh, Balaji Lakshminarayanan, and Peter J Liu. 2023. Out-of-distribution detection and selective generation for conditional language models. In ICLR." + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 304, + 728, + 524, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 728, + 524, + 772 + ], + "spans": [ + { + "bbox": [ + 304, + 728, + 524, + 772 + ], + "type": "text", + "content": "Madeline C Schiappa, Yogesh S Rawat, Shruti Vyas, Vibhav Vineet, and Hamid Palangi. 2022. Multimodal robustness analysis against language and visual perturbations. In NeurIPS." 
+ } + ] + } + ], + "index": 28 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "text", + "content": "4983" + } + ] + } + ], + "index": 30 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 289, + 772 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 69, + 72, + 289, + 105 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 72, + 289, + 105 + ], + "spans": [ + { + "bbox": [ + 69, + 72, + 289, + 105 + ], + "type": "text", + "content": "Vikash Sehwag, Mung Chiang, and Prateek Mittal. 2021. Ssd: A unified framework for self-supervised outlier detection. In ICLR." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 114, + 289, + 158 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 114, + 289, + 158 + ], + "spans": [ + { + "bbox": [ + 69, + 114, + 289, + 158 + ], + "type": "text", + "content": "Yilin Shen, Yen-Chang Hsu, Avik Ray, and Hongxia Jin. 2021. Enhancing the generalization for intent classification and out-of-domain detection in SLU. In ACL-IJCNLP." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 167, + 289, + 190 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 167, + 289, + 190 + ], + "spans": [ + { + "bbox": [ + 69, + 167, + 289, + 190 + ], + "type": "text", + "content": "Ray Smith. 2007. An overview of the tesseract OCR engine. In ICDAR."
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 198, + 289, + 232 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 198, + 289, + 232 + ], + "spans": [ + { + "bbox": [ + 69, + 198, + 289, + 232 + ], + "type": "text", + "content": "Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2019. Vl-bert: Pre-training of generic visual-linguistic representations. In ICLR." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 240, + 289, + 273 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 240, + 289, + 273 + ], + "spans": [ + { + "bbox": [ + 69, + 240, + 289, + 273 + ], + "type": "text", + "content": "Yiyou Sun, Chuan Guo, and Yixuan Li. 2021. React: Out-of-distribution detection with rectified activations. In NeurIPS." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 282, + 289, + 315 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 282, + 289, + 315 + ], + "spans": [ + { + "bbox": [ + 69, + 282, + 289, + 315 + ], + "type": "text", + "content": "Yiyou Sun, Yifei Ming, Xiaojin Zhu, and Yixuan Li. 2022. Out-of-distribution detection with deep nearest neighbors. In ICML." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 324, + 289, + 368 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 324, + 289, + 368 + ], + "spans": [ + { + "bbox": [ + 69, + 324, + 289, + 368 + ], + "type": "text", + "content": "Jihoon Tack, Sangwoo Mo, Jongheon Jeong, and Jinwoo Shin. 2020. Csi: Novelty detection via contrastive learning on distributionally shifted instances. In NeurIPS."
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 377, + 289, + 432 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 377, + 289, + 432 + ], + "spans": [ + { + "bbox": [ + 69, + 377, + 289, + 432 + ], + "type": "text", + "content": "Zineng Tang, Ziyi Yang, Guoxin Wang, Yuwei Fang, Yang Liu, Chenguang Zhu, Michael Zeng, Cha Zhang, and Mohit Bansal. 2023. Unifying vision, text, and layout for universal document processing. In CVPR." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 440, + 289, + 474 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 440, + 289, + 474 + ], + "spans": [ + { + "bbox": [ + 69, + 440, + 289, + 474 + ], + "type": "text", + "content": "Thirumalaisamy P Velavan and Christian G Meyer. 2020. The Covid-19 epidemic. Tropical medicine & international health, 25(3):278." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 483, + 289, + 538 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 483, + 289, + 538 + ], + "spans": [ + { + "bbox": [ + 69, + 483, + 289, + 538 + ], + "type": "text", + "content": "Wenjin Wang, Zhengjie Huang, Bin Luo, Qianglong Chen, Qiming Peng, Yinxu Pan, Weichong Yin, Shikun Feng, Yu Sun, Dianhai Yu, et al. 2022a. mmlayout: Multi-grained multimodal transformer for document understanding. In ACMMM." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 69, + 547, + 289, + 602 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 547, + 289, + 602 + ], + "spans": [ + { + "bbox": [ + 69, + 547, + 289, + 602 + ], + "type": "text", + "content": "Zilong Wang, Jiaxiang Gu, Chris Tensmeyer, Nikolaos Barmpalios, Ani Nenkova, Tong Sun, Jingbo Shang, and Vlad I Morariu. 2022b. Mgdoc: Pre-training with multi-granular hierarchy for document image understanding. In EMNLP." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 69, + 611, + 289, + 676 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 611, + 289, + 676 + ], + "spans": [ + { + "bbox": [ + 69, + 611, + 289, + 676 + ], + "type": "text", + "content": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 69, + 686, + 289, + 729 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 686, + 289, + 729 + ], + "spans": [ + { + "bbox": [ + 69, + 686, + 289, + 729 + ], + "type": "text", + "content": "Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. 2019. Detectron2. https://github.com/facebookresearch/detectron2." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 69, + 738, + 289, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 738, + 289, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 738, + 289, + 772 + ], + "type": "text", + "content": "Zhisheng Xiao, Qing Yan, and Yali Amit. 2020. Likelihood regret: An out-of-distribution detection score for variational auto-encoder. In NeurIPS." + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 524, + 357 + ], + "type": "list", + "angle": 0, + "index": 21, + "blocks": [ + { + "bbox": [ + 304, + 72, + 524, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 72, + 524, + 116 + ], + "spans": [ + { + "bbox": [ + 304, + 72, + 524, + 116 + ], + "type": "text", + "content": "Keyang Xu, Tongzheng Ren, Shikun Zhang, Yihao Feng, and Caiming Xiong. 2021a. Unsupervised out-of-domain detection via pre-trained transformers. In ACL."
+ } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 125, + 524, + 180 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 125, + 524, + 180 + ], + "spans": [ + { + "bbox": [ + 304, + 125, + 524, + 180 + ], + "type": "text", + "content": "Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, et al. 2021b. Layoutlmv2: Multi-modal pre-training for visually-rich document understanding. In ACL." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 189, + 524, + 232 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 189, + 524, + 232 + ], + "spans": [ + { + "bbox": [ + 304, + 189, + 524, + 232 + ], + "type": "text", + "content": "Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. 2020. Layoutlm: Pre-training of text and layout for document image understanding. In SIGKDD." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 241, + 524, + 275 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 241, + 524, + 275 + ], + "spans": [ + { + "bbox": [ + 304, + 241, + 524, + 275 + ], + "type": "text", + "content": "Xu Zhong, Jianbin Tang, and Antonio Jimeno Yepes. 2019. Publaynet: largest dataset ever for document layout analysis. In ICDAR." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 282, + 524, + 316 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 282, + 524, + 316 + ], + "spans": [ + { + "bbox": [ + 304, + 282, + 524, + 316 + ], + "type": "text", + "content": "Wenxuan Zhou, Fangyu Liu, and Muhao Chen. 2021. Contrastive out-of-distribution detection for pretrained transformers. In EMNLP."
+ } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 324, + 524, + 357 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 324, + 524, + 357 + ], + "spans": [ + { + "bbox": [ + 304, + 324, + 524, + 357 + ], + "type": "text", + "content": "Yunhua Zhou, Peiju Liu, and Xipeng Qiu. 2022. KNN-contrastive learning for out-of-domain intent classification. In ACL." + } + ] + } + ], + "index": 20 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "text", + "content": "4984" + } + ] + } + ], + "index": 22 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "bbox": [ + 68, + 71, + 226, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 71, + 226, + 83 + ], + "spans": [ + { + "bbox": [ + 68, + 71, + 226, + 83 + ], + "type": "text", + "content": "A Dataset and Model Details" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 68, + 95, + 138, + 105 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 95, + 138, + 105 + ], + "spans": [ + { + "bbox": [ + 68, + 95, + 138, + 105 + ], + "type": "text", + "content": "A.1 Datasets" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 112, + 290, + 208 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 112, + 290, + 208 + ], + "spans": [ + { + "bbox": [ + 67, + 112, + 290, + 208 + ], + "type": "text", + "content": "The full RVL-CDIP dataset consists of 320K/40K/40K training/validation/testing images under 16 categories. We select 12 of them as the ID (In-domain) data. We employ the Google OCR engine to extract the text and layout information, which provides tokens, text blocks and the corresponding bounding boxes." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 219, + 284, + 232 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 219, + 284, + 232 + ], + "spans": [ + { + "bbox": [ + 67, + 219, + 284, + 232 + ], + "type": "text", + "content": "A.2 Quantifying OOD Dataset Construction" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 238, + 291, + 507 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 238, + 291, + 507 + ], + "spans": [ + { + "bbox": [ + 67, + 238, + 291, + 507 + ], + "type": "text", + "content": "The distance between datasets can be measured via Optimal Transport Dataset Distance (OTDD)" + }, + { + "bbox": [ + 67, + 238, + 291, + 507 + ], + "type": "inline_equation", + "content": "^{8}" + }, + { + "bbox": [ + 67, + 238, + 291, + 507 + ], + "type": "text", + "content": ". We visualize the OTDD distance between ID and the OOD (both in-domain and out-domain) data in Fig. 10a, where we highlight the in-domain OOD data in blue and the out-domain OOD data in green. Specifically, we randomly sample 1000 images from each dataset and calculate the average distance between pairs of datasets. We can see a significant gap between the OTDD of in-domain OOD data and out-domain OOD data. To make the analysis more thorough, we consider two additional in-domain OOD settings: (1) select the classes on which the model performs well as OOD data; (2) randomly select classes as OOD data. The results are shown in Fig. 10b and Fig. 10c. We can see that the distance between ID and in-domain OOD is similar to the original scheme (Fig. 10a). This suggests that most in-domain OOD categories are not far from ID data." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 510, + 291, + 591 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 510, + 291, + 591 + ], + "spans": [ + { + "bbox": [ + 67, + 510, + 291, + 591 + ], + "type": "text", + "content": "While this paper represents an initial endeavor, we hope that our work will serve as a stepping stone towards constructing more comprehensive and diverse OOD benchmarks in the document domain, akin to those available in the NLP and natural image domain." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 68, + 602, + 230, + 616 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 602, + 230, + 616 + ], + "spans": [ + { + "bbox": [ + 68, + 602, + 230, + 616 + ], + "type": "text", + "content": "A.3 Models and Training Details" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 621, + 291, + 729 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 621, + 291, + 729 + ], + "spans": [ + { + "bbox": [ + 67, + 621, + 291, + 729 + ], + "type": "text", + "content": "All models reported in Fig. 2b, except UDoc, are initialized with pre-trained weights from Huggingface and fine-tuned on the full RVL-CDIP training set. During fine-tuning, we train these models on RVL-CDIP with the cross-entropy loss. The models were optimized with Adam optimizer (Kingma and Ba, 2014) for 30 epochs with a batch size of 50 and a learning rate of " + }, + { + "bbox": [ + 67, + 621, + 291, + 729 + ], + "type": "inline_equation", + "content": "2 \\times 10^{-5}" + }, + { + "bbox": [ + 67, + 621, + 291, + 729 + ], + "type": "text", + "content": " on 8 A100 GPUs." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 71, + 526, + 98 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 98 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 98 + ], + "type": "text", + "content": "The following are the hyperparameters of the models used in our paper:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 303, + 109, + 354, + 121 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 109, + 354, + 121 + ], + "spans": [ + { + "bbox": [ + 303, + 109, + 354, + 121 + ], + "type": "text", + "content": "Text-only:" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 316, + 134, + 526, + 280 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 316, + 134, + 525, + 216 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 134, + 525, + 216 + ], + "spans": [ + { + "bbox": [ + 316, + 134, + 525, + 216 + ], + "type": "text", + "content": "- BERT and RoBERTa: We adopt RoBERTaBase (12 layers) and BERTBase (12 layers) as backbones and set the maximum sequence length to 512. For RoBERTa, the classifier consists of two linear layers followed by a tanh activation function." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 316, + 228, + 526, + 280 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 228, + 526, + 280 + ], + "spans": [ + { + "bbox": [ + 316, + 228, + 526, + 280 + ], + "type": "text", + "content": "- LongformerBase: We also employ LongformerBase (12 layers) as the backbone and set the maximum sequence length to 4,096." 
+ } + ] + } + ], + "index": 11 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 303, + 295, + 363, + 307 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 295, + 363, + 307 + ], + "spans": [ + { + "bbox": [ + 303, + 295, + 363, + 307 + ], + "type": "text", + "content": "Vision-only:" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 316, + 320, + 525, + 506 + ], + "type": "list", + "angle": 0, + "index": 17, + "blocks": [ + { + "bbox": [ + 316, + 320, + 525, + 360 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 320, + 525, + 360 + ], + "spans": [ + { + "bbox": [ + 316, + 320, + 525, + 360 + ], + "type": "text", + "content": "- ResNet50: We adopt ResNet50 pre-trained on ImageNet-1k as the backbone. We fine-tune the model at a resolution of " + }, + { + "bbox": [ + 316, + 320, + 525, + 360 + ], + "type": "inline_equation", + "content": "224 \\times 224" + }, + { + "bbox": [ + 316, + 320, + 525, + 360 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 373, + 525, + 426 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 373, + 525, + 426 + ], + "spans": [ + { + "bbox": [ + 316, + 373, + 525, + 426 + ], + "type": "text", + "content": "- ViT: We consider ViTBase (vit-base-patch16-224, pre-trained on ImageNet-21k) as the backbone and fine-tune at a resolution of " + }, + { + "bbox": [ + 316, + 373, + 525, + 426 + ], + "type": "inline_equation", + "content": "224 \\times 224" + }, + { + "bbox": [ + 316, + 373, + 525, + 426 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 439, + 525, + 506 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 439, + 525, + 506 + ], + "spans": [ + { + "bbox": [ + 316, + 439, + 525, + 506 + ], + "type": "text", + "content": "- SwinB: We also use the Swin Transformer (swin-base-patch4-window7-224-in22k, pretrained on ImageNet-21k) as the backbone and fine-tune the model at a resolution of " + }, + { + "bbox": [ + 316, + 439, + 525, + 506 + ], + "type": "inline_equation", + "content": "224 \\times 224" + }, + { + "bbox": [ + 316, + 439, + 525, + 506 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 16 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 303, + 520, + 369, + 533 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 520, + 369, + 533 + ], + "spans": [ + { + "bbox": [ + 303, + 520, + 369, + 533 + ], + "type": "text", + "content": "Text+Layout:" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 546, + 526, + 772 + ], + "type": "list", + "angle": 0, + "index": 22, + "blocks": [ + { + "bbox": [ + 316, + 546, + 525, + 599 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 546, + 525, + 599 + ], + "spans": [ + { + "bbox": [ + 316, + 546, + 525, + 599 + ], + "type": "text", + "content": "- LayoutLMv1: This model employs LayoutLM (layoutlm-base-uncased, 12 layers, pre-trained on IIT-CDIP) as the backbone. We set the maximum sequence length to 512." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 612, + 526, + 705 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 612, + 526, + 705 + ], + "spans": [ + { + "bbox": [ + 316, + 612, + 526, + 705 + ], + "type": "text", + "content": "- Spatial-RoBERTaBase (Pre): This model combines our spatial-aware adapter with the pretrained RoBERTaBase model. The adapter is applied to the word embedding layer. 
We freeze the pre-trained word embeddings and optimize the spatial-aware adapter and transformers." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 719, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 719, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 316, + 719, + 525, + 772 + ], + "type": "text", + "content": "- Spatial-RoBERTaBase (Post): Instead of inserting the spatial-aware adapter in the input layer, this model integrates the spatial-aware adapter at the output layer of the transformer." + } + ] + } + ], + "index": 21 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 80, + 738, + 266, + 750 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 738, + 266, + 750 + ], + "spans": [ + { + "bbox": [ + 80, + 738, + 266, + 750 + ], + "type": "text", + "content": "7https://cloud.google.com/vision/docs/ocr" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 81, + 750, + 235, + 761 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 750, + 235, + 761 + ], + "spans": [ + { + "bbox": [ + 81, + 750, + 235, + 761 + ], + "type": "text", + "content": "8https://github.com/microsoft/otdd" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 81, + 761, + 217, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 761, + 217, + 772 + ], + "spans": [ + { + "bbox": [ + 81, + 761, + 217, + 772 + ], + "type": "text", + "content": "9https://huggingface.co/models" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 286, + 781, + 309, + 790 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 790 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 790 + ], + "type": "text", + "content": "4985" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 294, + 791, + 299, + 800 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 
791, + 299, + 800 + ], + "spans": [ + { + "bbox": [ + 294, + 791, + 299, + 800 + ], + "type": "text", + "content": "1" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 70, + 136, + 207, + 237 + ], + "blocks": [ + { + "bbox": [ + 70, + 136, + 207, + 237 + ], + "lines": [ + { + "bbox": [ + 70, + 136, + 207, + 237 + ], + "spans": [ + { + "bbox": [ + 70, + 136, + 207, + 237 + ], + "type": "image", + "image_path": "90af6de6831ddb1f6eb120fbda29199b32b303e1b9a862bfc4bdbf707ef2c2c9.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 81, + 245, + 193, + 257 + ], + "lines": [ + { + "bbox": [ + 81, + 245, + 193, + 257 + ], + "spans": [ + { + "bbox": [ + 81, + 245, + 193, + 257 + ], + "type": "text", + "content": "(a) OOD (Worst performance)." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 209, + 137, + 346, + 237 + ], + "blocks": [ + { + "bbox": [ + 209, + 137, + 346, + 237 + ], + "lines": [ + { + "bbox": [ + 209, + 137, + 346, + 237 + ], + "spans": [ + { + "bbox": [ + 209, + 137, + 346, + 237 + ], + "type": "image", + "image_path": "dd2e9423677d9dc596b0519c26eb3f64df1deb8943d1f34f83ac9faeec506a27.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 266, + 525, + 290 + ], + "lines": [ + { + "bbox": [ + 67, + 266, + 525, + 290 + ], + "spans": [ + { + "bbox": [ + 67, + 266, + 525, + 290 + ], + "type": "text", + "content": "Figure 10: Visualization of optimal transport dataset distance for ID and OOD (in-domain and out-domain) datasets. We highlight the in-domain OOD data in blue and the out-domain OOD data in green." 
+ } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 347, + 137, + 484, + 237 + ], + "blocks": [ + { + "bbox": [ + 223, + 245, + 330, + 257 + ], + "lines": [ + { + "bbox": [ + 223, + 245, + 330, + 257 + ], + "spans": [ + { + "bbox": [ + 223, + 245, + 330, + 257 + ], + "type": "text", + "content": "(b) OOD (Best performance)." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 347, + 137, + 484, + 237 + ], + "lines": [ + { + "bbox": [ + 347, + 137, + 484, + 237 + ], + "spans": [ + { + "bbox": [ + 347, + 137, + 484, + 237 + ], + "type": "image", + "image_path": "4d55b13de1f9a2cbe1a3e0dfc3ae97cf5463611f9a1f228ab8d7be62d03e1f0e.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 361, + 245, + 468, + 257 + ], + "lines": [ + { + "bbox": [ + 361, + 245, + 468, + 257 + ], + "spans": [ + { + "bbox": [ + 361, + 245, + 468, + 257 + ], + "type": "text", + "content": "(c) OOD (Random selection)." 
+ } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 76, + 432, + 186, + 484 + ], + "blocks": [ + { + "bbox": [ + 76, + 432, + 186, + 484 + ], + "lines": [ + { + "bbox": [ + 76, + 432, + 186, + 484 + ], + "spans": [ + { + "bbox": [ + 76, + 432, + 186, + 484 + ], + "type": "image", + "image_path": "075807af9553c10e98933f376dc0e187355f594c655fb5afdd5be0b40c0edf76.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 76, + 485, + 185, + 539 + ], + "blocks": [ + { + "bbox": [ + 76, + 485, + 185, + 539 + ], + "lines": [ + { + "bbox": [ + 76, + 485, + 185, + 539 + ], + "spans": [ + { + "bbox": [ + 76, + 485, + 185, + 539 + ], + "type": "image", + "image_path": "14438878630e68d29777e5137aa31845596aed3bda2ad1df565207e6063ef4d2.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 85, + 542, + 174, + 553 + ], + "lines": [ + { + "bbox": [ + 85, + 542, + 174, + 553 + ], + "spans": [ + { + "bbox": [ + 85, + 542, + 174, + 553 + ], + "type": "text", + "content": "(a) RoBERTaBase (10%)" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 187, + 432, + 296, + 485 + ], + "blocks": [ + { + "bbox": [ + 187, + 432, + 296, + 485 + ], + "lines": [ + { + "bbox": [ + 187, + 432, + 296, + 485 + ], + "spans": [ + { + "bbox": [ + 187, + 432, + 296, + 485 + ], + "type": "image", + "image_path": "a4f3a8b4ec2a2c7f337e06c72f660c8f915f92ce4076165a883f13ee07d9c79e.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 187, + 485, + 295, + 539 + ], + "blocks": [ + { + "bbox": [ + 187, + 485, + 295, + 539 + ], + "lines": [ + { + "bbox": [ + 187, + 485, + 295, + 539 + ], + "spans": [ + { + "bbox": [ + 187, + 485, + 295, + 539 + ], + 
"type": "image", + "image_path": "901da70de489a082be65312b3f3de0b01b9aa0f342d51cb4a59a8c4707eca283.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 196, + 542, + 286, + 553 + ], + "lines": [ + { + "bbox": [ + 196, + 542, + 286, + 553 + ], + "spans": [ + { + "bbox": [ + 196, + 542, + 286, + 553 + ], + "type": "text", + "content": "(b) RoBERTaBase (20%)" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_caption" + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 299, + 432, + 407, + 485 + ], + "blocks": [ + { + "bbox": [ + 299, + 432, + 407, + 485 + ], + "lines": [ + { + "bbox": [ + 299, + 432, + 407, + 485 + ], + "spans": [ + { + "bbox": [ + 299, + 432, + 407, + 485 + ], + "type": "image", + "image_path": "e4836d6a18a3287fff4747411832a7273c9139688deecbd6e2ba33498a7c2c11.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + } + ], + "index": 13 + }, + { + "type": "image", + "bbox": [ + 299, + 485, + 407, + 539 + ], + "blocks": [ + { + "bbox": [ + 299, + 485, + 407, + 539 + ], + "lines": [ + { + "bbox": [ + 299, + 485, + 407, + 539 + ], + "spans": [ + { + "bbox": [ + 299, + 485, + 407, + 539 + ], + "type": "image", + "image_path": "27cde30ec36119db3d2b1d13a779742c66fd0c75d4b18d51103a65553730bb77.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 409, + 432, + 518, + 485 + ], + "blocks": [ + { + "bbox": [ + 409, + 432, + 518, + 485 + ], + "lines": [ + { + "bbox": [ + 409, + 432, + 518, + 485 + ], + "spans": [ + { + "bbox": [ + 409, + 432, + 518, + 485 + ], + "type": "image", + "image_path": "3310b40122707b033c78dc92f8004821ba6350fbd59fbd099f0fb3a136065523.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + } + ], + "index": 16 + }, + { + "type": "image", + "bbox": [ + 409, + 485, + 518, + 539 + ], + "blocks": [ + { + "bbox": [ + 409, + 485, + 518, + 539 + ], + 
"lines": [ + { + "bbox": [ + 409, + 485, + 518, + 539 + ], + "spans": [ + { + "bbox": [ + 409, + 485, + 518, + 539 + ], + "type": "image", + "image_path": "7bb3433e1b02ffca18eec3ccce6d720aaefda0f7829f8a45aff1c9efcc58fc61.jpg" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_body" + } + ], + "index": 17 + }, + { + "type": "image", + "bbox": [ + 76, + 555, + 185, + 604 + ], + "blocks": [ + { + "bbox": [ + 76, + 555, + 185, + 604 + ], + "lines": [ + { + "bbox": [ + 76, + 555, + 185, + 604 + ], + "spans": [ + { + "bbox": [ + 76, + 555, + 185, + 604 + ], + "type": "image", + "image_path": "b6bc09e143045d69ad07c9c1cb4350d135eaa51ce05c551d3adcc158703ee13e.jpg" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_body" + } + ], + "index": 19 + }, + { + "type": "image", + "bbox": [ + 76, + 604, + 185, + 660 + ], + "blocks": [ + { + "bbox": [ + 76, + 604, + 185, + 660 + ], + "lines": [ + { + "bbox": [ + 76, + 604, + 185, + 660 + ], + "spans": [ + { + "bbox": [ + 76, + 604, + 185, + 660 + ], + "type": "image", + "image_path": "fd7f589a65e340c0a995ffa12d8acfd4f3b78b2d36381737e7ebc9f714c8544a.jpg" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 95, + 662, + 162, + 674 + ], + "lines": [ + { + "bbox": [ + 95, + 662, + 162, + 674 + ], + "spans": [ + { + "bbox": [ + 95, + 662, + 162, + 674 + ], + "type": "text", + "content": "(e) " + }, + { + "bbox": [ + 95, + 662, + 162, + 674 + ], + "type": "inline_equation", + "content": "\\mathrm{ViT_{Base}}" + }, + { + "bbox": [ + 95, + 662, + 162, + 674 + ], + "type": "text", + "content": " (10%)" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_caption" + } + ], + "index": 20 + }, + { + "type": "image", + "bbox": [ + 187, + 555, + 295, + 660 + ], + "blocks": [ + { + "bbox": [ + 187, + 555, + 295, + 660 + ], + "lines": [ + { + "bbox": [ + 187, + 555, + 295, + 660 + ], + "spans": [ + { + "bbox": [ + 187, + 555, + 295, + 660 + ], + "type": "image", + "image_path": 
"a0eb1ce030971c848e4c5626a4f4fa7e369eac9f86805535dac8e5d8a872cc34.jpg" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 207, + 662, + 273, + 674 + ], + "lines": [ + { + "bbox": [ + 207, + 662, + 273, + 674 + ], + "spans": [ + { + "bbox": [ + 207, + 662, + 273, + 674 + ], + "type": "text", + "content": "(f) " + }, + { + "bbox": [ + 207, + 662, + 273, + 674 + ], + "type": "inline_equation", + "content": "\\mathrm{ViT_{Base}}" + }, + { + "bbox": [ + 207, + 662, + 273, + 674 + ], + "type": "text", + "content": " (20%)" + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 67, + 677, + 525, + 702 + ], + "lines": [ + { + "bbox": [ + 67, + 677, + 525, + 702 + ], + "spans": [ + { + "bbox": [ + 67, + 677, + 525, + 702 + ], + "type": "text", + "content": "Figure 11: Feature visualization for pre-trained (with different numbers of pre-training data) and fine-tuned models. We show both in-domain (RVL-CDIP) and out-domain (CORD) OOD datasets." 
+ } + ] + } + ], + "index": 28, + "angle": 0, + "type": "image_caption" + } + ], + "index": 22 + }, + { + "type": "image", + "bbox": [ + 298, + 555, + 406, + 660 + ], + "blocks": [ + { + "bbox": [ + 307, + 542, + 396, + 553 + ], + "lines": [ + { + "bbox": [ + 307, + 542, + 396, + 553 + ], + "spans": [ + { + "bbox": [ + 307, + 542, + 396, + 553 + ], + "type": "text", + "content": "(c) RoBERTaBase (40%)" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 298, + 555, + 406, + 660 + ], + "lines": [ + { + "bbox": [ + 298, + 555, + 406, + 660 + ], + "spans": [ + { + "bbox": [ + 298, + 555, + 406, + 660 + ], + "type": "image", + "image_path": "3111f755a7d0ff1a749b26995277f72811df8e66e50a82c9549a54b80c3f4c86.jpg" + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 317, + 662, + 385, + 674 + ], + "lines": [ + { + "bbox": [ + 317, + 662, + 385, + 674 + ], + "spans": [ + { + "bbox": [ + 317, + 662, + 385, + 674 + ], + "type": "text", + "content": "(g) " + }, + { + "bbox": [ + 317, + 662, + 385, + 674 + ], + "type": "inline_equation", + "content": "\\mathrm{ViT_{Base}}" + }, + { + "bbox": [ + 317, + 662, + 385, + 674 + ], + "type": "text", + "content": " (40%)" + } + ] + } + ], + "index": 25, + "angle": 0, + "type": "image_caption" + } + ], + "index": 24 + }, + { + "type": "image", + "bbox": [ + 409, + 555, + 518, + 660 + ], + "blocks": [ + { + "bbox": [ + 415, + 542, + 510, + 553 + ], + "lines": [ + { + "bbox": [ + 415, + 542, + 510, + 553 + ], + "spans": [ + { + "bbox": [ + 415, + 542, + 510, + 553 + ], + "type": "text", + "content": "(d) RoBERTaBase (100%)" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 409, + 555, + 518, + 660 + ], + "lines": [ + { + "bbox": [ + 409, + 555, + 518, + 660 + ], + "spans": [ + { + "bbox": [ + 409, + 555, + 518, + 660 + ], + "type": "image", + "image_path": 
"fea5884ad3a01e9d91ea5681e5e1cf201c87eb9d4be6e37730e9ccd5374ae46f.jpg" + } + ] + } + ], + "index": 26, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 426, + 662, + 498, + 674 + ], + "lines": [ + { + "bbox": [ + 426, + 662, + 498, + 674 + ], + "spans": [ + { + "bbox": [ + 426, + 662, + 498, + 674 + ], + "type": "text", + "content": "(h) " + }, + { + "bbox": [ + 426, + 662, + 498, + 674 + ], + "type": "inline_equation", + "content": "\\mathrm{ViT_{Base}}" + }, + { + "bbox": [ + 426, + 662, + 498, + 674 + ], + "type": "text", + "content": " (100%)" + } + ] + } + ], + "index": 27, + "angle": 0, + "type": "image_caption" + } + ], + "index": 26 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "text", + "content": "4986" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 293, + 792, + 300, + 801 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 792, + 300, + 801 + ], + "spans": [ + { + "bbox": [ + 293, + 792, + 300, + 801 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 30 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 70, + 63, + 196, + 327 + ], + "blocks": [ + { + "bbox": [ + 70, + 63, + 196, + 327 + ], + "lines": [ + { + "bbox": [ + 70, + 63, + 196, + 327 + ], + "spans": [ + { + "bbox": [ + 70, + 63, + 196, + 327 + ], + "type": "image", + "image_path": "8e18e6d992d98a892ba8037f96ab0a525b0b6088487dd5318b64b8782e511986.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 231, + 63, + 359, + 325 + ], + "blocks": [ + { + "bbox": [ + 231, + 63, + 359, + 325 + ], + "lines": [ + { + "bbox": [ + 231, + 63, + 359, + 325 + ], + "spans": [ + { + "bbox": [ 
+ 231, + 63, + 359, + 325 + ], + "type": "image", + "image_path": "e4abd59173f9fc34556cee7805493b325a16fa764f5f4fa43d769b3e7844d2ec.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 395, + 63, + 522, + 324 + ], + "blocks": [ + { + "bbox": [ + 395, + 63, + 522, + 324 + ], + "lines": [ + { + "bbox": [ + 395, + 63, + 522, + 324 + ], + "spans": [ + { + "bbox": [ + 395, + 63, + 522, + 324 + ], + "type": "image", + "image_path": "faac36a23aa786e1a67155dfb68d9f1e4bc0aa8668956fbde9f449316527e24b.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 70, + 407, + 289, + 507 + ], + "blocks": [ + { + "bbox": [ + 66, + 333, + 525, + 405 + ], + "lines": [ + { + "bbox": [ + 66, + 333, + 525, + 405 + ], + "spans": [ + { + "bbox": [ + 66, + 333, + 525, + 405 + ], + "type": "text", + "content": "Figure 12: MSP, Energy, KNN, and Maha score histogram distributions of ID (blue) and OOD (green) inputs derived from fine-tuned ResNet-50, RoBERTa, and LayoutLMv3. The KNN scores calculated from both vision and language models naturally form smooth distributions. In contrast, MSP and Maha scores for both in- and out-of-distribution data concentrate on high values. Overall, our experiments show that using the feature space makes the scores more distinguishable between in- and out-of-distribution data and, as a result, enables more effective OOD detection." 
+ } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 70, + 407, + 289, + 507 + ], + "lines": [ + { + "bbox": [ + 70, + 407, + 289, + 507 + ], + "spans": [ + { + "bbox": [ + 70, + 407, + 289, + 507 + ], + "type": "image", + "image_path": "fe1eb8b820ce0708098b7e7e18c14a3c2bfb47f3907afdf75356b8fe35c93854.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 518, + 290, + 555 + ], + "lines": [ + { + "bbox": [ + 67, + 518, + 290, + 555 + ], + "spans": [ + { + "bbox": [ + 67, + 518, + 290, + 555 + ], + "type": "text", + "content": "Figure 13: The network architectures in green blocks are our proposed models. We also show the modality information on top of each architecture." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "bbox": [ + 68, + 584, + 169, + 597 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 584, + 169, + 597 + ], + "spans": [ + { + "bbox": [ + 68, + 584, + 169, + 597 + ], + "type": "text", + "content": "Vision+Text+Layout:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 80, + 624, + 291, + 772 + ], + "type": "list", + "angle": 0, + "index": 9, + "blocks": [ + { + "bbox": [ + 80, + 624, + 290, + 664 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 624, + 290, + 664 + ], + "spans": [ + { + "bbox": [ + 80, + 624, + 290, + 664 + ], + "type": "text", + "content": "- LayoutLMv3: We use LayoutLMv3 (layoutlmv3-base, 12 layers, pre-trained on IIT-CDIP) as the backbone." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 80, + 692, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 692, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 80, + 692, + 291, + 772 + ], + "type": "text", + "content": "- UDoc: We use a slight variant of UDoc with the only difference in the sentence encoder, where we adopt a smaller version of the pretrained sentence encoder (all-MiniLM-L6-v2, 6 layers) instead of the larger sentence encoder (bert-base-nli-mean-tokens, 12 layers)." + } + ] + } + ], + "index": 8 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 302, + 412, + 491, + 426 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 412, + 491, + 426 + ], + "spans": [ + { + "bbox": [ + 302, + 412, + 491, + 426 + ], + "type": "text", + "content": "B Beyond Document Classification" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 448, + 526, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 448, + 526, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 448, + 526, + 772 + ], + "type": "text", + "content": "In the main paper, we mainly focus on document classification to provide a thorough and in-depth analysis. In this section, we go beyond document classification and explore OOD detection for two entity-level tasks in documents: document entity recognition and document object detection. It is natural to detect and recognize basic units in documents such as text, tables, and figures. Document entity recognition aims to predict the label for each semantic entity with given bounding boxes. Document object detection is an object detection task for document images. 
Specifically, we denote the input as " + }, + { + "bbox": [ + 302, + 448, + 526, + 772 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 302, + 448, + 526, + 772 + ], + "type": "text", + "content": ", the bounding box coordinates associated with object instances in the image as " + }, + { + "bbox": [ + 302, + 448, + 526, + 772 + ], + "type": "inline_equation", + "content": "\\pmb{b} \\in \\mathbb{R}^4" + }, + { + "bbox": [ + 302, + 448, + 526, + 772 + ], + "type": "text", + "content": ", and use the model with parameters " + }, + { + "bbox": [ + 302, + 448, + 526, + 772 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 302, + 448, + 526, + 772 + ], + "type": "text", + "content": " to model the bounding box regression " + }, + { + "bbox": [ + 302, + 448, + 526, + 772 + ], + "type": "inline_equation", + "content": "p_{\\theta}(b|x)" + }, + { + "bbox": [ + 302, + 448, + 526, + 772 + ], + "type": "text", + "content": " and the label classification " + }, + { + "bbox": [ + 302, + 448, + 526, + 772 + ], + "type": "inline_equation", + "content": "p_{\\theta}(y|x, b)" + }, + { + "bbox": [ + 302, + 448, + 526, + 772 + ], + "type": "text", + "content": ". Given a test input " + }, + { + "bbox": [ + 302, + 448, + 526, + 772 + ], + "type": "inline_equation", + "content": "\\hat{x}" + }, + { + "bbox": [ + 302, + 448, + 526, + 772 + ], + "type": "text", + "content": ", the OOD detection scoring function for entity detection and recognition can be unified as " + }, + { + "bbox": [ + 302, + 448, + 526, + 772 + ], + "type": "inline_equation", + "content": "S(\\hat{x}, \\hat{b})" + }, + { + "bbox": [ + 302, + 448, + 526, + 772 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 302, + 448, + 526, + 772 + ], + "type": "inline_equation", + "content": "\\hat{b}" + }, + { + "bbox": [ + 302, + 448, + 526, + 772 + ], + "type": "text", + "content": " denotes the object instance predicted by the object detector. 
In particular, for document entity recognition, since the bounding boxes are provided, the OOD score can be simplified as " + }, + { + "bbox": [ + 302, + 448, + 526, + 772 + ], + "type": "inline_equation", + "content": "S(\\hat{x}, \\bar{b})" + }, + { + "bbox": [ + 302, + 448, + 526, + 772 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 302, + 448, + 526, + 772 + ], + "type": "inline_equation", + "content": "\\bar{b}" + }, + { + "bbox": [ + 302, + 448, + 526, + 772 + ], + "type": "text", + "content": " is the given object instance." + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 789 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 789 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 789 + ], + "type": "text", + "content": "4987" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 293, + 791, + 299, + 800 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 791, + 299, + 800 + ], + "spans": [ + { + "bbox": [ + 293, + 791, + 299, + 800 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "bbox": [ + 66, + 71, + 293, + 383 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 71, + 293, + 383 + ], + "spans": [ + { + "bbox": [ + 66, + 71, + 293, + 383 + ], + "type": "text", + "content": "Document Object Detection. For document object detection, we use PubLayNet as the ID dataset and construct the OOD dataset from IIIT-AR-13K. Unlike PubLayNet, where the documents are scientific articles, IIIT-AR-13K is a dataset for graphical object detection in business documents (e.g., annual reports), thus there exists an obvious domain gap. We select natural images as the OOD entity and filter images that contain the OOD entity. 
Two object detection models are considered in this paper: (1) Vanilla Faster-RCNN with ResNet-50 visual backbone, and (2) Faster-RCNN with VOS (Du et al., 2022), a recent unknown-aware learning framework to improve OOD detection performance for natural images. Following the original paper, we use 1,000 samples for each ID class to estimate the class-conditional Gaussian statistics. The models are trained for 180k iterations with a base learning rate of 0.01 and a batch size of 8 using the Detectron2 framework (Wu et al., 2019). The performance of the models is measured using the mean average precision (mAP) at intersection over union (IoU) [0.50:0.95] of bounding boxes." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 66, + 390, + 291, + 646 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 390, + 291, + 646 + ], + "spans": [ + { + "bbox": [ + 66, + 390, + 291, + 646 + ], + "type": "text", + "content": "Document Entity Recognition. For entity recognition, we construct ID and OOD datasets from FUNSD. Each semantic entity includes a list of words, a label, and a bounding box. The standard label set for this dataset contains four categories: question, answer, header, and other. In this paper, we select entities labeled as other or header as OOD data, and the entities belonging to the other three categories as ID. Instead of treating entity recognition as a named-entity recognition problem, we follow UDoc and solve this problem at the semantic region level. We replace the sentence encoder in UDoc with a smaller sentence encoder (all-MiniLM-L6-v2" + }, + { + "bbox": [ + 66, + 390, + 291, + 646 + ], + "type": "inline_equation", + "content": "^{10}" + }, + { + "bbox": [ + 66, + 390, + 291, + 646 + ], + "type": "text", + "content": ") from Huggingface (Wolf et al., 2019). 
We also have the following model variants to verify the effectiveness of the combination of modalities: textual-only, visual-only, textual+spatial, visual+spatial, and visual+textual+spatial." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 647, + 290, + 673 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 647, + 290, + 673 + ], + "spans": [ + { + "bbox": [ + 67, + 647, + 290, + 673 + ], + "type": "text", + "content": "We provide details on datasets and models as follows." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 682, + 138, + 694 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 682, + 138, + 694 + ], + "spans": [ + { + "bbox": [ + 67, + 682, + 138, + 694 + ], + "type": "text", + "content": "B.1 Datasets" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 700, + 290, + 755 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 700, + 290, + 755 + ], + "spans": [ + { + "bbox": [ + 67, + 700, + 290, + 755 + ], + "type": "text", + "content": "The original FUNSD (Jaume et al., 2019) dataset contains 149 training and 50 testing images. For document entity recognition, we treat entities with the category other/anchor as OOD entities. After" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 302, + 71, + 526, + 125 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 125 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 125 + ], + "type": "text", + "content": "the split, if we consider other as OOD, we have a total of 8,330 ID and 1,019 OOD entities. Otherwise, if we consider header as OOD, we have 8,981 ID and 368 OOD entities in total." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 301, + 126, + 527, + 275 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 126, + 527, + 275 + ], + "spans": [ + { + "bbox": [ + 301, + 126, + 527, + 275 + ], + "type": "text", + "content": "For document object detection, we consider PubLayNet (Zhong et al., 2019), which contains " + }, + { + "bbox": [ + 301, + 126, + 527, + 275 + ], + "type": "inline_equation", + "content": "336\\mathrm{K} / 11\\mathrm{K}" + }, + { + "bbox": [ + 301, + 126, + 527, + 275 + ], + "type": "text", + "content": " training/validation images with 5 categories (text, title, list, fig., and table). The original IIIT-AR-13K (Mondal et al., 2020) contains 5 categories (table, fig., natural image, logo, and signature). In this paper, considering the overlap between IIIT-AR-13K and PubLayNet, we select those images containing natural images as the OOD test set. After filtering, we obtain 2,880 OOD entities across 1,837 document images." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 301, + 275, + 527, + 397 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 275, + 527, + 397 + ], + "spans": [ + { + "bbox": [ + 301, + 275, + 527, + 397 + ], + "type": "text", + "content": "We consider two ID datasets in this experiment. (1) PubLayNet: This is the original PubLayNet dataset. We treat all the entities in training/validation images as ID entities. (2) PubLayNet+IIIT-AR-13K (ID): Considering the domain shift between the ID data (PubLayNet) and the OOD data (IIIT-AR-13K), we combine the PubLayNet training data with the images from IIIT-AR-13K with overlapping annotations (table and figure) and train the object detection model."
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 407, + 367, + 419 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 407, + 367, + 419 + ], + "spans": [ + { + "bbox": [ + 302, + 407, + 367, + 419 + ], + "type": "text", + "content": "B.2 Models" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 426, + 527, + 521 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 426, + 527, + 521 + ], + "spans": [ + { + "bbox": [ + 302, + 426, + 527, + 521 + ], + "type": "text", + "content": "Fig. 13 illustrates the entity recognition models used in this paper. We consider the entities on regions instead of tokens, as regions provide richer semantic information. As for the pre-trained model, we adopt UDoc (trained on IIT-CDIP) since it models inputs at the regional level. Based on the UDoc framework, we develop the following models." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 303, + 527, + 411, + 540 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 527, + 411, + 540 + ], + "spans": [ + { + "bbox": [ + 303, + 527, + 411, + 540 + ], + "type": "text", + "content": "Vision/Vision+Layout:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 316, + 551, + 526, + 698 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 316, + 551, + 525, + 605 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 551, + 525, + 605 + ], + "spans": [ + { + "bbox": [ + 316, + 551, + 525, + 605 + ], + "type": "text", + "content": "- ResNet-50: This model is composed of the ResNet-50 from pre-trained UDoc. It adopts the RoI pooling followed by a classifier to extract the entity features." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 316, + 616, + 526, + 698 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 616, + 526, + 698 + ], + "spans": [ + { + "bbox": [ + 316, + 616, + 526, + 698 + ], + "type": "text", + "content": "- ResNet-50+Position: This model also adapts UDoc's pre-trained ResNet-50 for further improvement. It makes the RoI features spatially aware by adding position embeddings, which are mapped from the bounding boxes via a linear mapping layer." + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 303, + 708, + 393, + 722 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 708, + 393, + 722 + ], + "spans": [ + { + "bbox": [ + 303, + 708, + 393, + 722 + ], + "type": "text", + "content": "Text/Text+Layout:" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 733, + 526, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 733, + 526, + 772 + ], + "spans": [ + { + "bbox": [ + 316, + 733, + 526, + 772 + ], + "type": "text", + "content": "- Sentence BERT: This model adopts the language branch of UDoc and appends the classifier to the output of the sentence encoder." 
+ } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 78, + 760, + 285, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 78, + 760, + 285, + 772 + ], + "spans": [ + { + "bbox": [ + 78, + 760, + 285, + 772 + ], + "type": "text", + "content": "10https://huggingface.co/sentence-transformers" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 286, + 781, + 309, + 790 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 790 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 790 + ], + "type": "text", + "content": "4988" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 293, + 791, + 300, + 800 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 791, + 300, + 800 + ], + "spans": [ + { + "bbox": [ + 293, + 791, + 300, + 800 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 89, + 69, + 295, + 142 + ], + "blocks": [ + { + "bbox": [ + 89, + 69, + 295, + 142 + ], + "lines": [ + { + "bbox": [ + 89, + 69, + 295, + 142 + ], + "spans": [ + { + "bbox": [ + 89, + 69, + 295, + 142 + ], + "type": "image", + "image_path": "f1c75d15ae21f76e4491afaf6a3fc5ad750c0f81e0fab7f2a57d7592f3089702.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 295, + 143, + 501, + 163 + ], + "lines": [ + { + "bbox": [ + 295, + 143, + 501, + 163 + ], + "spans": [ + { + "bbox": [ + 295, + 143, + 501, + 163 + ], + "type": "text", + "content": "(b) OOD detection results from different object detection methods and models." 
+ } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 82, + 168, + 509, + 180 + ], + "lines": [ + { + "bbox": [ + 82, + 168, + 509, + 180 + ], + "spans": [ + { + "bbox": [ + 82, + 168, + 509, + 180 + ], + "type": "text", + "content": "Figure 14: Ablation on document entity recognition and object detection. Numbers are reported in FPR95." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 297, + 69, + 500, + 142 + ], + "blocks": [ + { + "bbox": [ + 88, + 144, + 295, + 163 + ], + "lines": [ + { + "bbox": [ + 88, + 144, + 295, + 163 + ], + "spans": [ + { + "bbox": [ + 88, + 144, + 295, + 163 + ], + "type": "text", + "content": "(a) Comparison of OOD detection methods on different models on two OOD classes: other and header." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 297, + 69, + 500, + 142 + ], + "lines": [ + { + "bbox": [ + 297, + 69, + 500, + 142 + ], + "spans": [ + { + "bbox": [ + 297, + 69, + 500, + 142 + ], + "type": "image", + "image_path": "6a13b259845932a94df9bd7401b4c1fd5cb34fddcbd94e79d6e72b83660219b2.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 81, + 188, + 289, + 227 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 188, + 289, + 227 + ], + "spans": [ + { + "bbox": [ + 81, + 188, + 289, + 227 + ], + "type": "text", + "content": "- Sentence BERT+Position: This model is close to the above model but adds position embeddings to the sentence embeddings." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 68, + 236, + 168, + 249 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 236, + 168, + 249 + ], + "spans": [ + { + "bbox": [ + 68, + 236, + 168, + 249 + ], + "type": "text", + "content": "Vision+Text+Layout:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 81, + 257, + 290, + 400 + ], + "type": "list", + "angle": 0, + "index": 9, + "blocks": [ + { + "bbox": [ + 81, + 257, + 290, + 324 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 257, + 290, + 324 + ], + "spans": [ + { + "bbox": [ + 81, + 257, + 290, + 324 + ], + "type": "text", + "content": "- ResNet-50+sentence BERT: This model follows the same framework as UDoc, but replaces the sentence encoder in their original design with a smaller sentence encoder (all-MiniLM-L6-v2)." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 81, + 333, + 290, + 400 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 333, + 290, + 400 + ], + "spans": [ + { + "bbox": [ + 81, + 333, + 290, + 400 + ], + "type": "text", + "content": "- SwinT+Sentence BERT: This model replaces the ResNet-50 visual backbone with a pre-trained tiny Swin Transformer (swin-tiny-patch4-window7-224) from Huggingface." + } + ] + } + ], + "index": 8 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 67, + 409, + 289, + 448 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 409, + 289, + 448 + ], + "spans": [ + { + "bbox": [ + 67, + 409, + 289, + 448 + ], + "type": "text", + "content": "All the models are fine-tuned with the cross-entropy loss for 100 epochs, using a learning rate of " + }, + { + "bbox": [ + 67, + 409, + 289, + 448 + ], + "type": "inline_equation", + "content": "10^{-5}" + }, + { + "bbox": [ + 67, + 409, + 289, + 448 + ], + "type": "text", + "content": " and a batch size of 8 on an A100 GPU." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 68, + 458, + 220, + 470 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 458, + 220, + 470 + ], + "spans": [ + { + "bbox": [ + 68, + 458, + 220, + 470 + ], + "type": "text", + "content": "B.3 Summary of Observations" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 67, + 476, + 290, + 555 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 476, + 290, + 555 + ], + "spans": [ + { + "bbox": [ + 67, + 476, + 290, + 555 + ], + "type": "text", + "content": "We provide a summary of observations here and hope to inspire future work on a thorough investigation of OOD detection for entity-level tasks. To identify entity types, models should not only understand the words but also utilize spatial and visual information." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 67, + 557, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 557, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 557, + 291, + 772 + ], + "type": "text", + "content": "For document entity recognition, the comparison of distance-based and logit-based OOD detection methods with different models is shown in Fig. 14a. More details are shown in Table 2. We see that models can better predict the entity type and also achieve better OOD robustness with the help of spatial information. Considering the weak language dependency between entities, it is not surprising that vision-based models achieve better performance than text-based models. In particular, UDoc with ResNet-50 achieves the best performance on two OOD test sets, illustrating that visual information plays a major role in increasing the discrimination of entities with similar semantics. For document object detection, we summarize our findings in Fig. 
14b and describe them in more" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 301, + 188, + 525, + 241 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 188, + 525, + 241 + ], + "spans": [ + { + "bbox": [ + 301, + 188, + 525, + 241 + ], + "type": "text", + "content": "detail in Table 1. We can see that the OOD detection performance is further improved by introducing document images from IIIT-AR-13K with the same ID annotations as training data." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 242, + 526, + 458 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 242, + 526, + 458 + ], + "spans": [ + { + "bbox": [ + 302, + 242, + 526, + 458 + ], + "type": "text", + "content": "To provide more intuitions, in Fig. 15, we visualize the document entity recognition OOD detection results. In Fig. 16, we visualize the prediction on sample OOD images, using object detection models trained without VOS (top) and with VOS (bottom), respectively. We can see that vanilla Faster RCNN trained on PubLayNet produces false positives when applied to the OOD document images from IIIT-AR-13K. Table 1 shows that introducing the unknown-aware learning method optimized for both ID and OOD can reduce the FPR95 while preserving the mAP on the ID data. This experiment indicates that incorporating uncertainty estimation into the entity detection training procedure can improve the reliability of the document object detection system." 
+ } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 468, + 483, + 481 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 468, + 483, + 481 + ], + "spans": [ + { + "bbox": [ + 302, + 468, + 483, + 481 + ], + "type": "text", + "content": "C Detailed Experimental Results" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 302, + 489, + 524, + 763 + ], + "type": "list", + "angle": 0, + "index": 25, + "blocks": [ + { + "bbox": [ + 302, + 489, + 524, + 516 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 489, + 524, + 516 + ], + "spans": [ + { + "bbox": [ + 302, + 489, + 524, + 516 + ], + "type": "text", + "content": "- Table 2 corresponds to the results shown in Fig. 15 and Fig. 14a." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 302, + 524, + 524, + 550 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 524, + 524, + 550 + ], + "spans": [ + { + "bbox": [ + 302, + 524, + 524, + 550 + ], + "type": "text", + "content": "- Table 1 corresponds to the results shown in Fig. 16 and Fig. 14b." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 302, + 560, + 524, + 587 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 560, + 524, + 587 + ], + "spans": [ + { + "bbox": [ + 302, + 560, + 524, + 587 + ], + "type": "text", + "content": "- Table 3 and Table 7 correspond to the results shown in Fig. 4a." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 302, + 596, + 524, + 622 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 596, + 524, + 622 + ], + "spans": [ + { + "bbox": [ + 302, + 596, + 524, + 622 + ], + "type": "text", + "content": "- Table 4 and Table 5 correspond to the results shown in Fig. 4c." 
+ } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 302, + 631, + 524, + 657 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 631, + 524, + 657 + ], + "spans": [ + { + "bbox": [ + 302, + 631, + 524, + 657 + ], + "type": "text", + "content": "- Table 6 corresponds to the results shown in Fig. 8 and Fig. 9." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 302, + 666, + 524, + 692 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 666, + 524, + 692 + ], + "spans": [ + { + "bbox": [ + 302, + 666, + 524, + 692 + ], + "type": "text", + "content": "- Table 9 and Table 8 correspond to the results shown in Fig. 6 and Fig. 9." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 302, + 702, + 524, + 727 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 702, + 524, + 727 + ], + "spans": [ + { + "bbox": [ + 302, + 702, + 524, + 727 + ], + "type": "text", + "content": "- Table 10 and Table 11 correspond to the analysis for Sec. 4 and Sec. 4.2." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 302, + 737, + 524, + 763 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 737, + 524, + 763 + ], + "spans": [ + { + "bbox": [ + 302, + 737, + 524, + 763 + ], + "type": "text", + "content": "- Table 12 corresponds to the results shown in Fig. 9." 
+ } + ] + } + ], + "index": 24 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 789 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 789 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 789 + ], + "type": "text", + "content": "4989" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 293, + 791, + 299, + 800 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 791, + 299, + 800 + ], + "spans": [ + { + "bbox": [ + 293, + 791, + 299, + 800 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 27 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 16 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 69, + 110, + 526, + 232 + ], + "blocks": [ + { + "bbox": [ + 69, + 110, + 526, + 232 + ], + "lines": [ + { + "bbox": [ + 69, + 110, + 526, + 232 + ], + "spans": [ + { + "bbox": [ + 69, + 110, + 526, + 232 + ], + "type": "image", + "image_path": "79de30db107355133e0f289c7f4db6feb3f9ac012befdf84ed5b4e4b131ef632.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 243, + 525, + 280 + ], + "lines": [ + { + "bbox": [ + 67, + 243, + 525, + 280 + ], + "spans": [ + { + "bbox": [ + 67, + 243, + 525, + 280 + ], + "type": "text", + "content": "Figure 15: Visualization of detected OOD entities on the form images. The top part shows the entities in blue are entities annotated as other. The bottom part shows the detected OOD entities (green). We also show failure cases on the right part." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 70, + 354, + 526, + 496 + ], + "blocks": [ + { + "bbox": [ + 70, + 354, + 526, + 496 + ], + "lines": [ + { + "bbox": [ + 70, + 354, + 526, + 496 + ], + "spans": [ + { + "bbox": [ + 70, + 354, + 526, + 496 + ], + "type": "image", + "image_path": "757e27df2d9e25d1d977d414ed4a1f7dabd77e97d7ddf7ae1f976564f4d51dff.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 507, + 525, + 569 + ], + "lines": [ + { + "bbox": [ + 67, + 507, + 525, + 569 + ], + "spans": [ + { + "bbox": [ + 67, + 507, + 525, + 569 + ], + "type": "text", + "content": "Figure 16: Visualization of detected objects on the OOD images (from IIIT-AR-13K) by a vanilla Faster-RCNN (top) and Faster-RCNN with VOS (bottom) is shown. Objects in blue boxes are detected and classified as one of the ID classes. The detected OOD objects (green) reduce false positives among detected objects. We also visualize detected objects on the ID images. There is a clear difference between PubLayNet and IIIT-AR-13K – entities and annotations of natural images rarely exist in PubLayNet." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 93, + 663, + 501, + 731 + ], + "blocks": [ + { + "bbox": [ + 160, + 643, + 431, + 655 + ], + "lines": [ + { + "bbox": [ + 160, + 643, + 431, + 655 + ], + "spans": [ + { + "bbox": [ + 160, + 643, + 431, + 655 + ], + "type": "text", + "content": "Table 1: Comparison with different training and detection methods." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 93, + 663, + 501, + 731 + ], + "lines": [ + { + "bbox": [ + 93, + 663, + 501, + 731 + ], + "spans": [ + { + "bbox": [ + 93, + 663, + 501, + 731 + ], + "type": "table", + "html": "
Models | ID Dataset | OOD Score | IIIT-AR-13K (Natural Image as OOD) | PubLayNet (ID)
 |  |  | FPR95 | AUROC | AUPR | mAP
Vanilla Faster-RCNN | PubLayNet | MSP | 74.33 | 79.12 | 98.41 | 92.6
 |  | Energy | 55.96 | 83.55 | 98.73 |
Faster-RCNN with VOS | PubLayNet | MSP | 63.65 | 79.37 | 98.57 | 92.2
 |  | Energy | 55.61 | 80.60 | 98.67 |
Faster-RCNN with VOS | PubLayNet+IIIT-AR-13K (ID) | MSP | 56.57 | 82.94 | 98.59 | 92.4
 |  | Energy | 47.73 | 84.04 | 98.67 |
", + "image_path": "0511f8c6067c2f827c3480a3419f496313da30351f2b0634acc6e898e0dfa613.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_body" + } + ], + "index": 5 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 310, + 791 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 310, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 310, + 791 + ], + "type": "text", + "content": "4990" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 293, + 792, + 301, + 801 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 792, + 301, + 801 + ], + "spans": [ + { + "bbox": [ + 293, + 792, + 301, + 801 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 17 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 70, + 343, + 523, + 540 + ], + "blocks": [ + { + "bbox": [ + 67, + 297, + 526, + 333 + ], + "lines": [ + { + "bbox": [ + 67, + 297, + 526, + 333 + ], + "spans": [ + { + "bbox": [ + 67, + 297, + 526, + 333 + ], + "type": "text", + "content": "Table 2: Comparison with different models on FUNSD OOD setting. All models are initialized with UDoc pretrained on IIT-CDIP and fine-tuned on FUNSD data with ID entities. All values are percentages. S-BERT deontes Sentence BERT. A lower FPR95 or a higher AUROC value indicates better performance." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 70, + 343, + 523, + 540 + ], + "lines": [ + { + "bbox": [ + 70, + 343, + 523, + 540 + ], + "spans": [ + { + "bbox": [ + 70, + 343, + 523, + 540 + ], + "type": "table", + "html": "
Test F1MethodOther (OOD)IDHeader (OOD)IDTest F1MethodOther (OOD)IDHeader (OOD)ID
FPR95AUROCF1FPR95AUROCF1FPR95AUROCF1FPR95AUROCF1
ResNet-5075.15KNN1059.4779.1481.7963.97ResNet-50+Position75.82KNN1073.2173.1990.2261.42
KNN2069.9778.1581.2563.66KNN2072.9173.4488.0461.54
KNN5084.4977.4082.6162.86KNN5075.9674.4382.8860.93
KNN10097.9477.0877.6584.2461.6278.04KNN10079.6974.8583.7059.3977.98
KNN20097.8477.1594.2959.74KNN20086.0675.1491.5857.42
KNN40097.1576.0994.8457.53KNN40087.9374.9295.9255.37
MSP50.5475.8075.8276.55MSP77.8267.6084.2466.58
MaxLogit52.4073.7073.6476.72MaxLogit76.9467.0584.2465.41
Energy52.5073.7075.8276.55Energy76.6466.9384.5164.98
S-BERT77.15KNN1093.7248.4492.6660.99S-BERT+Position82.69KNN1097.4541.2493.7562.38
KNN2093.9247.6592.9359.00KNN2097.5539.9193.4861.51
KNN5093.6248.9493.2157.90KNN5097.1539.5692.3961.76
KNN10093.9248.7993.2155.07KNN10097.0641.6791.8560.99
KNN20093.9247.8582.1293.4852.8682.41KNN20096.5741.8587.0859.0887.01
KNN40094.1146.2195.3849.86KNN40097.2540.8390.2254.03
MSP93.6254.9194.2952.14MSP88.4261.1190.7659.58
MaxLogit93.7254.7594.5756.51MaxLogit89.7060.1988.8660.92
Energy93.2354.8893.2158.22Energy90.4859.6189.9561.12
ResNet-50+S-BERT89.11KNN1045.9387.8553.8087.97SwinT+S-BERT86.00KNN1063.3083.6481.5264.08
KNN2053.5886.7155.7187.06KNN2066.7382.5381.5261.50
KNN5073.2184.3662.7785.49KNN5070.1780.2182.3457.77
KNN10089.7083.0169.0283.60KNN10083.9177.7183.1554.97
KNN20096.6681.9093.1375.5480.8593.18KNN20095.3975.7990.8250.5790.40
KNN40098.8281.0091.5877.42KNN40096.7675.4999.7347.45
MSP45.4487.8267.3972.85MSP69.2870.7080.7152.02
MaxLogit45.5390.5863.0472.39MaxLogit67.1274.4181.7952.77
Energy45.5390.5763.8672.37Energy67.2274.4181.7952.77
", + "image_path": "16d5fd7d1510a755ac826d23417646f23cd095917fef5c34cdeb5d4b84a26747.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 308, + 791 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 308, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 308, + 791 + ], + "type": "text", + "content": "4991" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 294, + 791, + 299, + 800 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 791, + 299, + 800 + ], + "spans": [ + { + "bbox": [ + 294, + 791, + 299, + 800 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 3 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 18 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 70, + 254, + 523, + 648 + ], + "blocks": [ + { + "bbox": [ + 67, + 192, + 526, + 252 + ], + "lines": [ + { + "bbox": [ + 67, + 192, + 526, + 252 + ], + "spans": [ + { + "bbox": [ + 67, + 192, + 526, + 252 + ], + "type": "text", + "content": "Table 3: OOD detection performance for document classification with different number of pre-training data from IIT-CDIP. ID (Acc) denotes the ID accuracy obtained by testing on ID test data. We report the KNN-based scores for both pre-trained and fine-tuned models. Sci. Poster denotes the document images converted from NJU-Fudan Paper-Poster Dataset. Receipt denotes the receipt images collected from the CORD receipt understanding dataset. For in-domain OOD test data, we also report the averaged scores." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 70, + 254, + 523, + 648 + ], + "lines": [ + { + "bbox": [ + 70, + 254, + 523, + 648 + ], + "spans": [ + { + "bbox": [ + 70, + 254, + 523, + 648 + ], + "type": "table", + "html": "
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
RoBERTaBase(10%)Pre-train on 10% IIT-CDIP→ fine-tune on RVL-CDIP ID data
90.59MSP92.7569.2492.2166.9394.6565.4092.0070.0992.9067.9296.5166.9399.1052.90
MaxLogit98.3677.8597.2378.5198.7672.8498.8678.0898.3076.82100.0078.69100.0063.74
Energy98.6077.8197.5578.4998.9672.7998.9478.0098.5176.77100.0078.68100.0063.70
GradNorm98.0479.2697.0776.8598.5672.8398.6280.5598.0777.37100.0085.23100.0064.10
KNN1063.2188.1865.8188.0573.0284.6367.7488.9267.4587.4469.7788.4990.5084.44
KNN2063.5388.0765.8987.9072.7584.4867.3388.8167.3887.3268.6088.1391.1084.09
KNN5064.1787.8966.9787.7773.3484.2367.2188.6067.9287.1272.0987.4791.6083.59
KNN10064.4987.6467.7887.5573.4683.9467.2988.3768.2686.8872.0986.8391.5083.21
Pre-train on 10% IIT-CDIP (no fine-tune)
-KNN1088.0766.9492.1366.6294.1361.9094.4054.5792.1862.5167.4487.0462.1084.94
KNN2088.5966.0292.6565.2594.1360.8394.7253.7992.5261.4777.9185.3864.6083.86
KNN5089.7564.4093.5363.1294.3758.9895.1752.3393.2059.7183.7282.9769.2082.29
KNN10090.2362.9493.8561.2894.4157.4595.1351.2893.4058.2483.7280.9170.1081.05
RoBERTaBase(20%)Pre-train on 20% IIT-CDIP→ fine-tune on RVL-CDIP ID data
90.71MSP94.2868.0294.4665.9896.0162.9894.8165.9894.8965.7495.3563.5599.1054.99
MaxLogit97.3677.8297.1979.1698.4072.6498.3477.6897.8276.82100.0077.3699.6066.63
Energy98.0477.8097.4379.1598.7672.6198.5877.6498.2076.80100.0077.3299.6066.61
GradNorm97.3680.6896.8376.0498.4473.2997.8981.3797.6377.85100.0086.1899.5067.49
KNN1063.5788.3067.0687.0673.6683.9273.0987.8069.3486.7769.7788.0187.6083.81
KNN2063.8588.2067.4686.9073.9483.7872.9387.7069.5486.6469.7787.6388.3083.53
KNN5063.8988.0267.5486.7174.3883.5572.2487.4669.5186.4370.9387.0988.2083.12
KNN10064.8587.8167.6286.4574.9083.2572.6587.2470.0086.1972.0986.6588.3082.89
Pre-train on 20% IIT-CDIP (no fine-tune)
-KNN1087.1568.2790.8866.8992.2662.3995.0153.0291.3262.6443.0292.2957.0087.67
KNN2087.3167.3592.0465.5491.5461.4094.9752.3391.4661.6647.6791.1862.6086.61
KNN5088.3965.7192.6963.4592.1859.5795.2550.9792.1359.9256.9889.6465.7085.20
KNN10088.8364.2093.1361.6192.2257.9995.4549.9592.4158.4458.1488.3666.9084.17
RoBERTaBase(40%)Pre-train on 40% IIT-CDIP→ fine-tune on RVL-CDIP ID data
90.76MSP92.6770.0993.9365.6995.0563.1995.5065.5494.2966.1395.3563.6395.4064.97
MaxLogit98.0878.7297.8779.8598.4471.6398.3075.4198.1776.4098.8478.0798.9075.65
Energy98.4878.6997.9179.8398.6871.6198.5075.4098.3976.38100.0078.0498.5075.60
GradNorm98.0481.0397.4776.7398.4472.7797.4079.1197.8477.41100.0087.4797.6077.12
KNN1060.5788.7968.8686.3675.2683.5573.9087.1269.6586.4667.4489.9072.7089.49
KNN2061.3788.7269.0686.2475.4683.4373.4687.0069.8486.3568.6089.6673.5089.25
KNN5062.2188.5269.1886.0875.6683.2173.4286.7170.1286.1370.9389.2074.7088.89
KNN10063.7788.3069.7985.8476.0282.9374.1986.4670.9485.8874.4288.8475.3088.69
Pre-train on 40% IIT-CDIP (no fine-tune)
-KNN1085.7169.0890.8468.6890.4662.5294.7651.7690.4463.0125.5895.8357.3088.60
KNN2085.2768.2191.6467.4889.7461.3294.8151.0190.3662.0029.0795.2262.3087.61
KNN5086.1966.6092.2165.5490.3059.3594.9349.6090.9160.2741.8694.3266.8086.25
KNN10087.1965.0492.5763.8390.5057.7495.0948.4491.3458.7645.3593.6668.3085.14
RoBERTaBase(100%)Pre-train on 100% IIT-CDIP (no fine-tune)
-KNN1084.4370.2090.2068.5490.9863.1894.7252.1690.0863.5227.9194.1046.0091.37
KNN2084.5169.3091.2867.3590.3861.9694.7251.4390.2262.5133.7293.3951.5090.55
KNN5085.6767.7591.9265.3590.8259.7994.8949.7790.8260.6639.5392.2856.7089.32
KNN10086.5566.0892.9763.4691.4658.0095.4148.3991.6058.9844.1991.2961.6088.18
", + "image_path": "c6b89cfd04e3cb218611d45ffadc1c0050c27654b45279623d1c37153149e2c7.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "text", + "content": "4992" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 293, + 791, + 300, + 800 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 791, + 300, + 800 + ], + "spans": [ + { + "bbox": [ + 293, + 791, + 300, + 800 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 3 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 19 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 70, + 235, + 523, + 631 + ], + "blocks": [ + { + "bbox": [ + 68, + 208, + 524, + 232 + ], + "lines": [ + { + "bbox": [ + 68, + 208, + 524, + 232 + ], + "spans": [ + { + "bbox": [ + 68, + 208, + 524, + 232 + ], + "type": "text", + "content": "Table 4: OOD detection performance for document classification with different number of pre-training data from IIT-CDIP" + }, + { + "bbox": [ + 68, + 208, + 524, + 232 + ], + "type": "inline_equation", + "content": "^{-}" + }, + { + "bbox": [ + 68, + 208, + 524, + 232 + ], + "type": "text", + "content": " (remove pseudo OOD categories)." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 70, + 235, + 523, + 631 + ], + "lines": [ + { + "bbox": [ + 70, + 235, + 523, + 631 + ], + "spans": [ + { + "bbox": [ + 70, + 235, + 523, + 631 + ], + "type": "table", + "html": "
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
RoBERTaBase(10%)Pre-train on 10% IIT-CDIP → fine-tune on RVL-CDIP ID data
90.62MSP90.0769.0089.9268.8692.5864.1691.0766.7890.9167.2096.5154.4796.7059.63
MaxLogit97.7678.4097.7180.5898.6471.2698.7076.3898.2076.66100.0073.5199.8073.32
Energy98.1678.3597.7580.5598.8471.2098.9076.3298.4176.60100.0073.4699.8073.31
GradNorm97.6879.9297.2779.4298.5671.3198.5079.4498.0077.52100.0082.6299.6075.85
KNN1065.8587.8966.6988.1275.9882.8274.5586.8570.7786.4287.2185.1683.9087.91
KNN2066.3387.8066.8588.0475.9482.7073.9486.7570.7686.3287.2184.6383.6087.71
KNN5066.7787.6667.3088.0076.0282.4973.6686.5270.9486.1788.3783.7383.9087.34
KNN10067.2587.4267.7487.8476.1882.1873.9986.2671.2985.9289.5382.8583.9086.98
Pre-train on 10% IIT-CDIP (no fine-tune)
-KNN1086.3565.4885.7470.8492.9459.5593.1456.6289.5463.1229.0795.4287.6083.13
KNN2086.8764.4887.1469.6893.3058.4193.3055.9190.1562.1237.2194.7588.0081.44
KNN5087.7562.7388.9967.8093.5056.5493.7554.5291.0060.4047.6793.7190.3078.97
KNN10088.4361.1789.5966.0593.6254.9193.9953.4091.4158.8848.8493.0991.5077.00
RoBERTaBase(20%)Pre-train on 20% IIT-CDIP → fine-tune on RVL-CDIP ID data
90.65MSP96.0467.5894.9068.3296.0564.9296.2368.6295.8067.36100.0061.4998.7056.38
MaxLogit97.9676.9297.5980.6898.4872.3198.7477.7298.1976.91100.0075.9199.5069.21
Energy98.1676.8998.2380.6598.8872.2699.0777.6798.5876.87100.0075.8999.5069.18
GradNorm97.8478.2397.3178.5798.0071.4498.4680.0397.9077.07100.0085.8099.0069.54
KNN1066.0587.6067.7087.9473.4283.1073.5087.9670.1786.6577.9190.1990.1084.32
KNN2066.1787.5068.3887.8373.9082.9373.6687.8270.5386.5277.9189.8489.8084.13
KNN5067.2187.2668.4687.7374.1882.6373.6687.5870.8886.3079.0789.2489.6083.80
KNN10068.7886.9869.1487.5375.5082.3074.2787.3671.9286.0482.5688.6889.8083.59
Pre-train on 20% IIT-CDIP (no fine-tune)
-KNN1085.6366.1085.1770.3492.5860.2993.4356.8589.2063.4030.2395.7283.2083.84
KNN2086.3165.1785.9869.1393.3059.0993.4756.0589.7762.3634.8895.0884.9082.16
KNN5087.3163.5087.6367.1193.3857.1794.1654.6090.6260.6044.1994.0787.5079.74
KNN10087.8362.0688.2765.3193.6255.6594.3253.5691.0159.1448.8493.4888.8077.77
RoBERTaBase(40%)Pre-train on 40% IIT-CDIP → fine-tune on RVL-CDIP ID data
90.72MSP93.8468.8693.6967.6295.4163.9194.2065.2594.2866.4196.5163.3298.9054.02
MaxLogit97.1678.5696.8780.1898.6871.8498.5874.4497.8276.26100.0076.7299.1065.41
Energy97.4078.5397.1580.1798.6871.7998.7874.3998.0076.22100.0076.6799.5065.39
GradNorm97.2480.5996.9578.0198.5272.1298.3477.1697.7676.97100.0086.9499.7067.46
KNN1066.8987.9168.5886.9077.6182.3176.5885.3972.4185.6375.5889.4586.4084.23
KNN2067.5787.8068.9086.7977.7782.1976.3085.2272.6485.5080.2389.1786.8083.85
KNN5067.9787.5869.6786.6778.0181.9876.6684.8573.0885.2780.2388.6387.2083.21
KNN10069.4687.3471.2386.4779.0181.7277.4884.5774.3085.0282.5688.1988.0082.72
Pre-train on 40% IIT-CDIP (no fine-tune)
-KNN1088.7966.1488.3568.9293.5060.3095.5451.0991.5461.6137.2195.3755.9091.90
KNN2089.5965.0789.8067.6193.8959.1095.5850.1792.2160.4946.5194.4161.5091.00
KNN5090.5963.3991.6465.6893.7757.3595.6648.6392.9258.7653.4993.0666.4089.72
KNN10091.1961.7992.3763.9093.6655.7895.6247.4293.2157.2265.1291.9968.3088.72
RoBERTaBase(100%)Pre-train on 100% IIT-CDIP → fine-tune on RVL-CDIP ID data
90.74MSP94.1268.2494.2966.1895.9363.8395.2165.6694.8965.9898.8459.2596.5065.42
MaxLogit97.2478.1597.1980.2798.3672.1698.3875.8297.7976.60100.0073.2899.3075.58
Energy97.3278.1397.5180.2698.6472.1298.7075.7898.0476.57100.0073.2799.6075.52
GradNorm97.1680.0797.3977.8698.4071.8398.0579.0897.7577.21100.0086.3299.4073.52
KNN1066.8187.8669.6786.9177.4982.6074.5986.2872.1485.9181.4087.7476.9088.49
KNN2066.7387.7570.3186.7877.8982.5175.2886.1372.5585.7981.4087.4377.5088.39
KNN5067.2587.5470.5986.6277.8582.3275.4185.8472.7885.5883.7286.8577.8088.23
KNN10068.1387.3471.4786.3978.0582.0876.1485.6073.4585.3583.7286.3978.5088.21
Pre-train on 100% IIT-CDIP (no fine-tune)
-KNN1087.9566.4484.4972.3495.0158.4796.2349.0790.9261.5831.4096.1941.6094.78
KNN2088.9165.3985.7071.2595.3357.1996.5948.0691.6360.4734.8895.5048.4094.12
KNN5090.5963.6987.1469.4595.5354.9397.0846.2692.5858.5843.0294.5155.2093.05
KNN10091.7562.0888.5567.8595.8953.0597.2044.8193.3556.9550.0093.6061.1092.04
", + "image_path": "204895392e092d124f022fad04e5d83832c689e536184ac6c9274a9de6ac8afd.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "text", + "content": "4993" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 293, + 791, + 300, + 800 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 791, + 300, + 800 + ], + "spans": [ + { + "bbox": [ + 293, + 791, + 300, + 800 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 3 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 20 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 70, + 235, + 523, + 631 + ], + "blocks": [ + { + "bbox": [ + 68, + 208, + 524, + 232 + ], + "lines": [ + { + "bbox": [ + 68, + 208, + 524, + 232 + ], + "spans": [ + { + "bbox": [ + 68, + 208, + 524, + 232 + ], + "type": "text", + "content": "Table 5: OOD detection performance for document classification with different number of pre-training data from IIT-CDIP" + }, + { + "bbox": [ + 68, + 208, + 524, + 232 + ], + "type": "inline_equation", + "content": "^{-}" + }, + { + "bbox": [ + 68, + 208, + 524, + 232 + ], + "type": "text", + "content": " (remove pseudo OOD categories)." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 70, + 235, + 523, + 631 + ], + "lines": [ + { + "bbox": [ + 70, + 235, + 523, + 631 + ], + "spans": [ + { + "bbox": [ + 70, + 235, + 523, + 631 + ], + "type": "table", + "html": "
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
LayoutLMBase(10%)Pre-train on 10% IIT-CDIP → fine-tune on RVL-CDIP ID data
95.89MSP42.4376.3156.0569.3954.3170.2547.0073.9349.9572.4743.0276.5544.1075.68
MaxLogit41.9191.2755.0489.3354.1985.2044.9790.9349.0389.1838.3794.2741.3091.38
Energy41.8391.2954.9289.3554.1185.2245.0190.9748.9789.2138.3794.2941.1091.42
GradNorm39.1591.8054.0486.9351.8886.0542.4991.6546.8989.1138.3791.7941.4091.82
KNN1031.6394.2546.5290.9846.7790.4940.8392.7941.4492.1324.4295.9530.3095.66
KNN2032.0394.1146.6590.8947.0190.3241.6092.6341.8291.9926.7495.7631.8095.44
KNN5034.3993.7549.3490.4649.3689.9444.5292.2344.4091.6033.7295.3333.2095.38
KNN10036.1593.4751.2790.1951.3689.6546.6391.9946.3591.3233.7295.1035.1095.16
Pre-train on 10% IIT-CDIP (no fine-tune)
-KNN1090.9572.3094.6665.4990.9472.3894.4067.3292.7469.3748.8491.5656.0075.08
KNN2091.5970.5494.9863.9191.6670.7494.8165.9593.2667.7853.4990.4157.6073.51
KNN5093.0767.7695.5461.2492.7868.2795.2564.0194.1665.3255.8188.3758.5071.06
KNN10093.5565.4195.9059.1393.1066.1995.5462.4194.5263.2867.4486.4460.2069.09
LayoutLMBase(20%)Pre-train on 20% IIT-CDIP → fine-tune on RVL-CDIP ID data
95.84MSP49.2076.7861.5170.1362.3769.4955.5273.6457.1572.5150.0077.9950.7075.90
MaxLogit41.0391.5754.0088.4556.4285.7047.0090.1949.6188.9838.3793.6241.8090.56
Energy40.9591.6053.7688.4756.1985.7246.7990.2249.4289.0038.3793.6541.7090.59
GradNorm37.1591.8954.1684.9953.0386.2843.9590.9447.0788.5240.7090.4142.4090.91
KNN1031.6394.1747.6990.2947.4990.5040.5492.9241.8491.9731.4095.6534.5095.15
KNN2032.5594.0347.8990.2248.3290.3440.9192.7642.4291.8433.7295.4535.4094.97
KNN5035.7193.6749.7489.8251.0489.9944.1292.3945.1591.4736.0595.0136.2094.92
KNN10036.7593.3850.3089.6051.6889.7144.9792.1745.9291.2236.0594.7336.5094.71
Pre-train on 20% IIT-CDIP (no fine-tune)
-KNN1090.3975.2579.5979.4393.1472.4197.1266.9990.0673.5250.0091.3624.7096.34
KNN2090.6373.7580.4778.5193.8170.5897.1665.5490.5272.1055.8189.9126.9095.94
KNN5091.6771.1982.5676.9094.4567.8297.3662.9891.5169.7267.4487.2929.1095.31
KNN10091.9569.1983.7375.5595.3365.3797.3660.8492.0967.7474.4284.7830.3094.75
LayoutLMBase(40%)Pre-train on 40% IIT-CDIP → fine-tune on RVL-CDIP ID data
96.01MSP51.7675.7662.3969.6363.3768.7554.2274.0357.9472.0455.8171.6942.5080.56
MaxLogit42.0391.2954.2489.4757.3084.4445.6690.0249.8188.8052.3393.0833.0092.89
Energy41.8791.3154.2089.4957.2684.4745.5090.0549.7188.8352.3393.1332.5092.92
GradNorm38.1991.6653.6486.8555.0385.6643.1891.4547.5188.9052.3392.3934.6092.95
KNN1031.4794.4347.1390.6348.2090.4538.1193.3041.2392.2027.9195.7824.7096.09
KNN2032.5994.2947.6190.5549.6090.2739.2593.1442.2692.0632.5695.6025.5095.95
KNN5034.8793.9349.5090.1052.1189.8742.2992.7544.6991.6638.3795.1626.4095.95
KNN10036.5593.6550.3889.8253.5589.5743.7192.5146.0591.3943.0294.8927.7095.77
Pre-train on 40% IIT-CDIP (no fine-tune)
-KNN1087.0780.4471.7683.7286.7582.3196.1076.3685.4280.7175.5884.965.9098.24
KNN2088.9579.0374.9382.3188.9981.1196.7175.0187.4079.3680.2382.567.2097.93
KNN5091.4777.2380.3991.7891.7879.7597.4072.6090.2677.3787.2178.199.0097.92
KNN10090.7575.2784.7777.4891.7478.3197.1670.2691.1075.3389.5374.1114.2097.49
LayoutLMBase(100%)Pre-train on 100% IIT-CDIP → fine-tune on RVL-CDIP ID data
96.38MSP43.4376.1257.2169.1658.3868.5646.1474.7651.2972.1538.3778.6728.3083.78
MaxLogit35.1991.2950.2288.9853.1984.5439.9890.7144.6488.8824.4296.3921.4095.57
Energy35.2391.3250.2289.0053.1984.5539.9890.7344.6588.9024.4296.4421.4095.58
GradNorm30.3092.5448.6188.1848.9686.5836.1692.6341.0189.9819.7796.7119.2096.35
KNN1026.5094.9543.4791.6945.0990.9534.0993.8637.2992.8619.7797.3917.8096.37
KNN2027.2294.8344.0791.5845.4190.7934.6293.7137.8392.7319.7797.2218.4096.26
KNN5029.4694.4946.2891.1247.6990.4537.5093.3340.2392.3517.4497.0418.7096.80
KNN10032.1594.2648.1790.8550.6490.2140.3893.1242.8392.1119.7796.8820.7096.74
Pre-train on 100% IIT-CDIP (no fine-tune)
-KNN1078.7481.6774.4580.8680.5383.7195.0177.3382.1880.8938.3794.6217.7096.12
KNN2082.3980.1377.8679.3183.4882.7595.4575.9384.8079.5344.1993.4214.6096.13
KNN5086.0377.6582.8076.6086.9181.3096.1073.0787.9677.1654.6591.099.6097.21
KNN10089.1175.5188.0374.0890.6279.7896.7170.4391.1274.9566.2888.5018.0096.82
", + "image_path": "05509342fd5b8a801359f737df2a09b4a8d8b605c435ef673e73e751f9fef88b.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "text", + "content": "4994" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 292, + 791, + 303, + 801 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 292, + 791, + 303, + 801 + ], + "spans": [ + { + "bbox": [ + 292, + 791, + 303, + 801 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 3 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 21 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 70, + 275, + 523, + 611 + ], + "blocks": [ + { + "bbox": [ + 67, + 230, + 524, + 267 + ], + "lines": [ + { + "bbox": [ + 67, + 230, + 524, + 267 + ], + "spans": [ + { + "bbox": [ + 67, + 230, + 524, + 267 + ], + "type": "text", + "content": "Table 6: OOD detection performance for document classification. Spatial-RoBERTaBase (Pre) or SRBase (Pre) denotes applying the spatial-aware adapter in the word embedding layer. Spatial-RoBERTaBase (Post) or SRBase (Post) denotes applying the spatial-aware adaptor at the output layer." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 70, + 275, + 523, + 611 + ], + "lines": [ + { + "bbox": [ + 70, + 275, + 523, + 611 + ], + "spans": [ + { + "bbox": [ + 70, + 275, + 523, + 611 + ], + "type": "table", + "html": "
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
RoBERTaBaseFine-tune on RVL-CDIP (ID)
90.19MSP91.1973.7090.8473.4991.8271.5391.0372.3591.2272.7793.0280.9497.6074.59
MaxLogit96.8879.0496.8779.3898.0475.8598.5477.4597.5877.93100.0082.7699.4079.99
Energy97.4878.9697.2379.3198.4075.7199.0777.2598.0477.81100.0082.7199.2080.06
KNN1053.2088.9458.5088.6261.3786.2563.7288.2959.2088.0222.0996.5268.6092.47
KNN2053.4488.8158.9088.5061.6586.0763.6088.1559.4087.8827.9196.3871.7092.02
KNN5053.8488.5259.4288.4262.0185.8164.1687.8059.8687.6432.5696.0774.3091.37
KNN10055.5688.1060.6788.2063.6985.4164.7787.4261.1787.2834.8895.6776.5090.81
No fine-tune
-KNN1093.1163.5288.1566.3494.5766.9298.4253.3793.5662.5425.5895.9986.0072.99
KNN2092.9963.1888.3965.7894.5766.0898.4252.1093.5961.7826.7495.7187.3070.44
KNN5092.6762.4189.3164.7294.1764.7498.3450.0793.6260.4826.7495.0290.8066.04
KNN10092.6761.5789.5963.5794.0163.4598.1748.3393.6159.2329.0794.3492.8061.62
SRBase(Pre)Pre-train on IIT-CDIP → fine-tune on RVL-CDIP (ID)
97.11MSP46.8074.5254.6470.5856.2669.7254.3070.7453.0071.3944.1975.7957.2069.23
MaxLogit39.4388.6446.4889.9249.9685.7548.3087.6646.0487.9933.7293.4250.6088.70
Energy39.4388.6646.4889.9450.0085.7648.3087.6746.0588.0133.7293.4550.6088.71
KNN1031.9194.4142.1992.6546.6589.3142.0992.6540.7192.2610.4797.4552.1092.93
KNN2032.3194.2842.5992.6447.0189.2143.4392.5341.3492.1611.6397.3153.3092.80
KNN5034.3993.9943.8392.3649.0488.9345.4192.1943.1791.8712.7997.0153.1092.51
KNN10035.1593.7644.2792.1549.4888.6546.1491.9743.7691.6315.1296.8149.7092.44
Pre-train on IIT-CDIP (no fine-tune)
-KNN1078.8278.9279.9973.8977.6981.3291.4876.5282.0077.6610.4798.0887.3080.89
KNN2079.7477.9582.6472.1779.8180.4092.1375.1183.5876.4116.2897.6092.1076.94
KNN5080.4276.8785.1369.6282.1278.9392.9873.0185.1674.6122.0996.6695.2070.53
KNN10081.4375.7086.9067.1983.4077.1293.3871.0786.2872.7727.9195.8696.6064.56
SRBase(Post)Fine-tune on RVL-CDIP (ID)
97.10MSP58.0578.3776.4665.4465.8075.0061.8177.5965.5374.1054.6581.6593.5052.85
MaxLogit49.2089.8272.3680.2857.8287.2852.5290.0457.9886.8634.8894.8891.6073.37
Energy47.5689.8771.9680.3056.5887.3251.1890.1056.8286.9034.8895.0491.3073.39
KNN1037.4393.3764.0886.8349.4489.8246.9292.1749.4790.5526.7496.3890.1080.21
KNN2038.2793.2565.3386.5250.8089.6648.0991.9950.6290.3526.7496.2391.2079.57
KNN5040.4392.9867.3886.0252.8389.3850.6591.5852.8289.9926.7495.8992.1078.48
KNN10041.9992.7767.9485.6253.8789.1751.2291.3353.7689.7229.0795.6792.6077.68
SRLarge(Pre)Pre-train on IIT-CDIP → fine-tune on RVL-CDIP (ID)
97.37MSP62.3767.8271.2763.3672.8762.5470.2563.8469.1964.3976.7460.6167.0065.48
MaxLogit33.3990.1539.2589.8742.3088.1237.0591.6638.0089.9531.4092.4127.7094.23
Energy33.3990.1639.2589.8842.3088.1337.0591.6638.0089.9631.4092.4227.7094.22
KNN1028.1894.4742.4393.0137.4391.7431.1394.7234.7993.4925.5896.2418.6096.28
KNN2028.7894.3242.4392.9038.0791.5832.0294.5535.3393.3425.5896.0218.6096.33
KNN5030.2293.9543.7192.6940.0691.2634.5494.1037.1393.0026.7495.5221.4096.14
KNN10030.8693.7144.1192.5640.6691.0535.4793.8837.7892.8026.7495.2221.7096.11
Pre-train on IIT-CDIP (no fine-tune)
-KNN1068.4980.4388.2369.8371.7583.1188.1173.3279.1476.6775.5884.3649.8092.02
KNN2071.7478.7790.2467.4175.6681.3889.0471.1481.6774.6881.4081.5562.2090.29
KNN5075.4676.4992.8163.8280.1778.7290.4267.8484.7271.7282.5677.1578.2087.49
KNN10077.6274.5994.4260.9483.1676.2591.8065.3086.7569.2784.8873.3488.2084.96
", + "image_path": "bc869ff4eda0c378d004e66b37464d033470ececb1140899cca5cfc5e6b25b64.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "text", + "content": "4995" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 292, + 791, + 302, + 801 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 292, + 791, + 302, + 801 + ], + "spans": [ + { + "bbox": [ + 292, + 791, + 302, + 801 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 3 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 22 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 71, + 114, + 523, + 507 + ], + "blocks": [ + { + "bbox": [ + 67, + 81, + 525, + 104 + ], + "lines": [ + { + "bbox": [ + 67, + 81, + 525, + 104 + ], + "spans": [ + { + "bbox": [ + 67, + 81, + 525, + 104 + ], + "type": "text", + "content": "Table 7: OOD detection performance for document classification with the different number of pre-training data from IIT-CDIP." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 71, + 114, + 523, + 507 + ], + "lines": [ + { + "bbox": [ + 71, + 114, + 523, + 507 + ], + "spans": [ + { + "bbox": [ + 71, + 114, + 523, + 507 + ], + "type": "table", + "html": "
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
ViTBase(10%)Pre-train on 10% IIT-CDIP → fine-tune on RVL-CDIP (ID)
94.89MSP55.8088.3748.6191.3863.9383.8355.5288.5555.9688.0352.0589.6034.1095.04
MaxLogit50.3691.5137.7794.3062.3787.9753.6992.1151.0591.4738.3694.2428.6096.06
Energy50.5691.4837.0894.3363.4987.8955.1992.0051.5891.4238.3694.2929.4095.96
GradNorm55.5679.7545.9684.7966.9274.0758.4481.0756.7279.9247.9582.0434.9091.68
KNN1050.4092.6043.5193.9251.6090.5474.4788.8755.0091.4820.5597.199.2098.21
KNN2049.8092.7040.3894.4353.3990.2674.7288.7754.5791.5423.2996.9810.4098.05
KNN5046.7292.8934.2795.2456.0789.9274.5588.4552.9091.6227.4096.5612.8097.80
KNN10045.4892.8929.3395.6757.6289.5675.0488.2551.8791.5930.1496.2115.0097.57
Pre-train on IIT-CDIP (no fine-tune)
-KNN1098.9243.0897.6749.0099.5254.4199.3540.2698.8646.6993.1592.516.9098.06
KNN2098.8842.4797.7548.5799.5253.7599.3539.5698.8846.0994.5292.248.6097.91
KNN5098.8041.7097.8348.0499.5252.9199.3538.6298.8845.3295.8991.8010.6097.66
KNN10098.7641.2097.7947.7099.4852.3299.3538.0198.8444.8198.6391.3114.5097.41
ViTBase(20%)Pre-train on 20% IIT-CDIP → fine-tune on RVL-CDIP (ID)
94.62MSP54.3689.0151.6391.3164.5785.2360.5188.6757.7788.5660.2789.3444.2093.73
MaxLogit44.3292.1638.2194.1864.9287.6358.5691.3351.5091.3245.2192.6339.7094.36
Energy44.3692.1737.8994.2466.5687.5160.3991.2252.3091.2846.5892.6241.5094.18
GradNorm90.5154.9292.0451.6794.2945.4198.1332.3693.7446.0995.8940.4489.7059.01
KNN1052.2092.5845.8493.7353.7990.7577.8487.0257.4291.0217.8197.3316.9097.40
KNN2051.6092.6643.5594.1555.6390.4678.0486.7957.2091.0219.1897.0619.4097.11
KNN5050.1292.8639.9894.8258.0290.1878.7786.5456.7291.1019.1896.6323.1096.68
KNN10048.0492.9134.7595.2860.3889.8878.9886.4255.5491.1220.5596.2726.2096.35
Pre-train on IIT-CDIP (no fine-tune)
-KNN1098.1641.1397.5147.1299.4853.0599.3138.7998.6245.0294.5291.808.0097.41
KNN2098.1240.7197.5146.7999.4852.5299.3138.3198.6044.5894.5291.488.7097.25
KNN5098.0440.1097.5546.3199.4851.8499.3937.6398.6243.9795.8991.0111.5096.99
KNN10098.0039.7497.5545.9899.4851.3499.3937.2698.6043.5897.2690.5514.6096.70
ViTBase(40%)Pre-train on 40% IIT-CDIP → fine-tune on RVL-CDIP (ID)
94.63MSP55.4888.6552.2791.5464.4985.5258.0889.2057.5888.7367.1284.6245.8093.82
MaxLogit47.1291.7440.0694.0961.0588.6856.5792.0151.2091.6369.8689.8132.9095.46
Energy47.1291.7339.9494.1062.3388.6258.6091.8852.0091.5869.8689.6532.7095.44
GradNorm47.0085.7641.9089.6460.6981.3753.7387.0650.8385.9664.3881.1234.0092.93
KNN1053.2892.1348.3392.9946.4592.2075.6188.8755.9291.5534.2595.536.8098.56
KNN2052.7692.2445.8893.5748.1291.9574.8488.7555.4091.6332.8895.217.8098.36
KNN5051.2892.5240.9494.5150.5291.7075.0888.4654.4691.8035.6294.6710.9098.04
KNN10050.3292.6236.1695.1253.3591.3675.9388.2453.9491.8439.7394.2513.6097.76
Pre-train on IIT-CDIP (no fine-tune)
-KNN1097.5640.6097.0346.2899.2453.7699.1539.6298.2445.0682.1992.021.0099.59
KNN2097.5640.0096.9545.8699.2453.1899.1539.1298.2244.5482.1991.631.0099.55
KNN5097.5639.2496.9945.2099.2452.3999.1538.4998.2443.8386.3091.071.0099.50
KNN10097.6038.7897.0344.7999.2451.7699.1538.1598.2643.3790.4190.671.2099.45
ViTBase(100%)Pre-train on 100% IIT-CDIP → fine-tune on RVL-CDIP (ID)
94.79MSP54.2888.8049.1491.8064.6084.4558.8588.7856.7288.4661.6489.4441.0094.27
MaxLogit44.9692.1338.0194.5263.9787.9756.4991.8150.8691.6168.4990.6534.6095.26
Energy45.7292.1138.0194.5565.8487.8657.9191.7051.8791.5672.6090.4134.8095.14
GradNorm48.7284.2144.3687.5063.4978.0756.2584.7953.2083.6460.2782.9635.6091.24
KNN1045.1693.1439.1394.6251.6890.8573.5888.8152.3991.8650.6893.0910.4098.04
KNN2044.8893.1436.6495.0453.3590.5974.2788.6752.2891.8650.6892.6712.0097.81
KNN5043.6793.1931.1895.6056.7490.2975.2888.4951.7291.8957.5392.2315.6097.45
KNN10043.6393.1527.5295.9458.7490.0276.1888.3851.5291.8761.6492.0118.9097.18
Pre-train on IIT-CDIP (no fine-tune)
-KNN1097.0442.3593.9750.1797.4152.6898.0143.1996.6147.1012.3397.473.1098.38
KNN2097.1641.9994.0149.9697.8152.0198.0942.7396.7746.6715.0796.953.0098.31
KNN5096.9641.6294.3449.5698.0051.2098.0542.2496.8446.1621.9296.082.7098.18
KNN10097.0041.4894.9049.3198.1250.6598.1342.0397.0445.8736.9995.292.3098.27
", + "image_path": "4f7daab9e9d0e41bdd5cd49901dd20393e84ed097dd7d1931e69a6d1d192428b.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "type": "table", + "bbox": [ + 71, + 564, + 523, + 759 + ], + "blocks": [ + { + "bbox": [ + 67, + 537, + 525, + 560 + ], + "lines": [ + { + "bbox": [ + 67, + 537, + 525, + 560 + ], + "spans": [ + { + "bbox": [ + 67, + 537, + 525, + 560 + ], + "type": "text", + "content": "Table 8: OOD detection performance for document classification. Longformer" + }, + { + "bbox": [ + 67, + 537, + 525, + 560 + ], + "type": "inline_equation", + "content": "_{4096}" + }, + { + "bbox": [ + 67, + 537, + 525, + 560 + ], + "type": "text", + "content": " denotes the original model adopted from the Huggingface model hub. Longformer" + }, + { + "bbox": [ + 67, + 537, + 525, + 560 + ], + "type": "inline_equation", + "content": "_{4096}" + }, + { + "bbox": [ + 67, + 537, + 525, + 560 + ], + "type": "text", + "content": " (+) denotes the additional pre-training on IIT-CDIP." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 71, + 564, + 523, + 759 + ], + "lines": [ + { + "bbox": [ + 71, + 564, + 523, + 759 + ], + "spans": [ + { + "bbox": [ + 71, + 564, + 523, + 759 + ], + "type": "table", + "html": "
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
Longformer4096Fine-tune on RVL-CDIP (ID)
90.71MSP95.0064.3295.6262.1795.8960.5393.9566.8995.1263.4888.3777.5098.6054.72
MaxLogit97.1272.8497.0775.2298.2470.3995.8277.5797.0674.0090.7086.6299.6068.10
Energy97.4872.8297.3575.2198.3670.3796.5977.5697.4473.9991.8686.6399.8068.08
KNN1058.4588.2165.6586.8867.8083.9956.7889.5362.1787.1527.9196.0182.1086.31
KNN2058.9788.0465.5786.6068.1283.8057.3589.3462.5086.9429.0795.8282.6085.93
KNN5060.2587.6466.5786.2568.9183.4158.8188.9663.6486.5630.2395.4682.7085.27
KNN10061.9787.1968.1485.8170.1582.9560.4788.6065.1886.1434.8895.0482.8084.75
No fine-tune
-KNN1098.0455.4597.6359.9798.7651.7598.1353.1698.1455.0870.9388.69100.0064.97
KNN2098.1255.1997.6759.6498.8051.2798.1752.7198.1954.7070.9388.51100.0064.08
KNN5098.0054.8297.6359.1398.8050.5798.3052.0798.1854.1573.2688.29100.0062.82
KNN10097.9254.4897.6758.6298.8450.0098.3451.6298.1953.6874.4288.14100.0061.70
Longformer4096 (+)Pre-train on IIT-CDIP → fine-tune on RVL-CDIP (ID)
91.13MSP95.2064.0895.6261.3896.0559.4794.4863.1395.3462.0290.7067.2698.0055.52
MaxLogit96.9675.4196.5476.0397.8970.1596.7174.5697.0274.04100.0078.6599.7072.88
Energy97.2875.4096.5476.0398.2870.1497.1674.5597.3274.03100.0078.5999.7072.86
KNN1058.7389.2566.2187.5772.0383.7663.6888.7265.1687.3248.8494.7886.4087.84
KNN2058.6189.1865.9787.4571.6783.6963.3988.6164.9187.2348.8494.6285.3087.70
KNN5061.1788.9666.9787.2972.8383.4765.8388.3366.7087.0155.8194.2585.2087.39
KNN10061.7388.7966.9387.1173.3083.2466.1588.1567.0386.8255.8194.0084.7087.21
Pre-train on IIT-CDIP (no fine-tune)
-KNN1095.4861.4098.0753.6697.7355.5598.6648.7097.4954.8381.4091.1297.4046.27
KNN2095.5660.9297.9552.9597.4954.9798.5048.2197.3854.2684.8890.6297.5045.55
KNN5095.6059.9497.9551.7797.4153.9798.6247.2997.4053.2487.2189.9598.2044.18
KNN10095.6059.0497.9950.7497.2152.9998.5846.5197.3452.3288.3789.5298.5043.09
", + "image_path": "9ebecc600f08575451f828abb7727de28b96970541821c23288df6966b5ba861.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 310, + 801 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 310, + 801 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 310, + 801 + ], + "type": "text", + "content": "4996 12" + } + ] + } + ], + "index": 4 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 23 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 71, + 289, + 523, + 572 + ], + "blocks": [ + { + "bbox": [ + 82, + 269, + 509, + 280 + ], + "lines": [ + { + "bbox": [ + 82, + 269, + 509, + 280 + ], + "spans": [ + { + "bbox": [ + 82, + 269, + 509, + 280 + ], + "type": "text", + "content": "Table 9: OOD detection performance for document classification. All models are pre-trained on ImageNet." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 71, + 289, + 523, + 572 + ], + "lines": [ + { + "bbox": [ + 71, + 289, + 523, + 572 + ], + "spans": [ + { + "bbox": [ + 71, + 289, + 523, + 572 + ], + "type": "table", + "html": "
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
ResNet-50Pre-train on ImageNet → fine-tune on RVL-CDIP (ID)
91.12MSP64.4987.8755.8990.9466.6087.3177.8880.8766.2286.7551.1692.7663.1090.36
MaxLogit64.8988.5947.9792.8165.4087.5277.5681.8763.9687.7041.8694.6254.0093.29
Energy67.0988.3047.8192.8666.6887.2478.5381.7565.0387.5439.5394.7348.5093.68
KNN1073.3886.8267.9887.4671.3187.8492.9077.7476.3984.966.9899.125.2098.98
KNN2074.9086.4166.2987.7973.8287.2193.9576.5177.2484.486.9898.965.5098.85
KNN5076.6686.0466.4188.4878.2986.3995.5074.7679.2283.925.8198.685.9098.70
KNN10077.5485.6165.4188.9982.1685.4396.2373.3780.3383.356.9898.346.3098.51
Pre-train on ImageNet
-KNN1096.9651.1494.6251.7598.7653.8499.5937.6097.4848.5883.5685.0020.8097.00
KNN2096.9650.3794.3451.5498.9252.9899.5936.6097.4547.8783.5684.4922.7096.71
KNN5096.9249.2994.2951.3099.0051.8499.5935.1597.4546.9083.5684.0326.7096.21
KNN10097.1248.6094.5451.2599.1651.1199.5534.3697.5946.3382.1983.3129.4095.67
SwinBasePre-train on ImageNet → fine-tune on RVL-CDIP (ID)
95.74MSP47.6488.0949.9088.1158.2283.1450.2888.9051.5187.0649.3291.3136.5093.63
MaxLogit42.3993.1142.4793.4558.6288.7945.9093.1847.3492.1350.6892.5032.2095.65
Energy43.1593.0542.9593.4059.0288.7046.7193.0747.9692.0652.0592.3833.6095.49
KNN1049.4492.8246.7392.8742.9092.5772.6988.4552.9491.6816.4496.736.1098.30
KNN2048.8492.9543.2793.5144.5392.3272.2888.3552.2391.7817.8196.527.4098.10
KNN5046.4493.2639.2594.5747.4192.0973.3487.8751.6191.9526.0396.158.6097.80
KNN10043.7693.4235.0395.2950.0891.7275.7787.4251.1691.9628.7795.9411.3097.55
Pre-train on ImageNet
-KNN1098.5652.7595.0655.1499.3658.8599.8041.8698.2052.1565.7593.262.1099.35
KNN2098.4451.8695.1854.7299.3257.8899.8040.6698.1851.2868.4992.522.6099.22
KNN5098.5250.6995.3854.1399.1656.6199.7639.0198.2050.1178.0891.143.4098.99
KNN10098.7249.9695.6653.8099.1655.8499.7638.1698.3249.4479.4589.894.3098.77
ViTBasePre-train on ImageNet → fine-tune on RVL-CDIP (ID)
94.38MSP56.8189.1452.1991.8067.4884.2659.9088.7759.1088.4947.6792.9859.5091.99
MaxLogit50.7691.3744.6093.7568.0486.9455.1591.8154.6490.9740.7094.2052.4093.16
Energy51.1691.3144.5293.7569.4386.8156.0991.7755.3090.9138.3794.1153.2093.11
KNN1062.5790.1257.7390.9153.6790.3684.5086.1964.6289.4012.7997.9613.0097.92
KNN2063.0190.2456.0191.5155.0390.0284.3886.0164.6189.4415.1297.7614.9097.67
KNN5061.9790.6253.2392.6258.2689.5784.2585.6464.4389.6116.2897.3819.8097.24
KNN10060.2990.8549.7093.5360.3889.0784.0185.4363.6089.7216.2897.0523.6096.82
Pre-train on ImageNet
-KNN1098.4852.1595.0256.9499.4853.7799.4738.9098.1150.4493.1590.2720.4097.13
KNN2098.4851.4195.0656.6199.4452.9299.5537.6198.1349.6494.5289.4422.6096.80
KNN5098.3250.4394.8656.2199.4051.8699.5935.8298.0448.5897.2688.2326.6096.25
KNN10098.4049.7695.0655.9099.4451.1599.5934.5998.1247.8598.6387.2431.2095.76
", + "image_path": "1c93ee91c8abe7cbd433f9a636c79a1085d54fca3364e9ccd6c0fb87e359e1ac.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "text", + "content": "4997" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 291, + 791, + 302, + 801 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 791, + 302, + 801 + ], + "spans": [ + { + "bbox": [ + 291, + 791, + 302, + 801 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 3 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 24 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 71, + 207, + 523, + 661 + ], + "blocks": [ + { + "bbox": [ + 67, + 180, + 524, + 204 + ], + "lines": [ + { + "bbox": [ + 67, + 180, + 524, + 204 + ], + "spans": [ + { + "bbox": [ + 67, + 180, + 524, + 204 + ], + "type": "text", + "content": "Table 10: OOD detection performance for document classification (select OOD categories achieve the best performance across most of the models with different modalities)." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 71, + 207, + 523, + 661 + ], + "lines": [ + { + "bbox": [ + 71, + 207, + 523, + 661 + ], + "spans": [ + { + "bbox": [ + 71, + 207, + 523, + 661 + ], + "type": "table", + "html": "
REBERTaBaseID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
EmailResumeFile folderSci. publicationAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
Pre-train on pure-text data→ fine-tune on RVL-CDIP (ID)
86.13MSP96.2260.3890.6771.7293.8259.4793.8665.5193.6464.2791.8670.5793.0069.99
MaxLogit99.2166.5795.8073.6695.4766.8197.0965.6396.8968.1794.1977.1794.6074.69
Energy99.6066.5396.6473.5795.1466.8297.2165.3597.1568.0794.1977.4495.6074.90
KNN1083.7082.7769.0284.2888.3274.0686.1174.0281.7978.7843.0292.7472.0088.87
KNN2084.5082.3569.0684.2188.2073.7186.7274.0282.1278.5748.8492.3873.8088.31
KNN5084.9881.5768.8684.0688.0873.0187.0873.9482.2578.1454.6591.9275.4087.44
KNN10086.2580.8870.2683.8088.2872.4087.4473.8983.0677.7458.1491.5078.2086.68
Pre-train on pure-text data
-KNN1086.0975.6395.1258.6297.7159.7598.9550.5494.4761.1410.4798.4689.8063.01
KNN2086.2974.9295.0058.1497.7158.8899.0349.4994.5160.3612.7998.3590.8060.59
KNN5087.3273.5594.6457.5397.8357.5699.1548.1194.7359.1912.7998.1193.3056.61
KNN10089.2772.4894.2857.1297.9956.5299.1147.3795.1658.3711.6397.8994.3052.98
Pre-train on pure-text data→ fine-tune on RVL-CDIP (ID)
88.34MSP96.9060.5596.2059.1496.3155.7297.8255.1296.8157.6395.3580.4499.6052.82
MaxLogit98.9768.9797.6065.6495.6763.4298.6362.8797.7265.2397.6788.4299.7071.54
Energy99.4468.9697.9265.6395.8363.4298.7162.8397.9865.2197.6788.4699.9071.55
KNN1068.2888.7269.6283.3678.1785.0890.8874.9876.7483.0416.2896.9081.6086.94
KNN2068.0488.6170.1083.2277.5384.9290.7574.9576.6082.9216.2896.8481.8086.49
KNN5069.2888.2970.9882.9278.2984.4690.9674.8277.3882.6219.7796.5983.4085.71
KNN10069.2888.1571.3482.6978.4984.2190.4374.8677.3982.4822.0996.3883.9085.17
Pre-train on pure-text data
-KNN1097.4247.7795.7250.0997.6746.5899.5238.6197.5845.7645.3593.92100.0063.03
KNN2097.4646.9195.6049.8097.7146.0299.5238.2197.5745.2446.5193.77100.0061.92
KNN5097.5845.6895.5649.4597.7545.1999.5237.7297.6044.5150.0093.60100.0060.35
KNN10097.6644.7895.6049.1797.8744.6399.5637.5797.6744.0451.1693.48100.0058.89
Pre-train on ImageNet→ fine-tune on RVL-CDIP (ID)
85.25MSP60.5387.2669.5387.0027.8695.1394.0575.7962.9986.3091.7874.4027.8095.47
MaxLogit59.9889.2772.6188.0230.0495.4193.3975.3864.0087.0280.8279.8930.0095.29
Energy63.7189.1475.6487.5545.7194.1592.7775.0269.4686.4678.0881.0762.2093.44
KNN1072.4685.6885.6985.3068.6276.0196.1555.3580.7375.5936.9994.562.2099.37
KNN2076.1584.5588.6584.2266.1380.6796.5456.3181.8776.4438.3693.812.7099.28
KNN5080.3782.6192.0082.4960.9886.7796.9359.0682.5777.7347.9592.423.8099.11
KNN10084.7080.5495.1580.6451.2991.7897.1661.1982.0878.5450.6891.014.7098.91
Pre-train on ImageNet
-KNN1099.7240.9499.6521.5252.4791.0398.3345.4087.5449.7284.9384.3820.4097.12
KNN2099.6841.1899.6520.6850.6191.6398.4144.6587.0949.5486.3083.9423.4096.87
KNN5099.6441.5899.6519.4846.9792.3698.3743.4986.1649.2384.9383.7026.9096.43
KNN10099.6442.1999.6518.9844.9192.8498.3342.8685.6349.2284.9383.1229.2095.98
Pre-train on ImageNet→ fine-tune on RVL-CDIP (ID)
91.25MSP70.2381.8767.6885.3143.9792.6883.7879.4066.4284.8286.3078.2354.1091.62
MaxLogit54.7387.0446.5192.3017.2596.5190.8674.1152.3487.4982.1983.2034.4094.82
Energy54.0587.1144.3892.4916.3896.6391.2973.5951.5387.4684.9383.0733.8094.82
KNN1056.0890.6648.8092.8438.3193.3191.0266.9158.5585.9327.4096.033.3098.84
KNN2054.6190.9549.9892.6827.5895.2491.4468.5455.9086.8526.0396.354.0098.76
KNN5055.2590.6852.1592.3715.7597.2891.2571.6253.6087.9928.7796.104.9098.59
KNN10056.2090.3154.7592.179.1498.0091.1375.1152.8088.9030.1495.776.5098.35
Pre-train on ImageNet
-KNN1099.8443.5599.7620.6447.9293.2098.9137.5586.6148.7458.9093.881.6099.32
KNN2099.8444.4799.8018.3641.3194.1499.0336.4585.0048.3672.6092.692.6099.00
KNN5099.8845.2699.8017.9239.9794.3999.0336.7184.6748.5779.4591.973.7098.81
Pre-train on ImageNet→ fine-tune on RVL-CDIP (ID)
89.97MSP61.2585.8466.5785.0440.4493.1085.8481.8363.5286.4573.9780.6660.3090.41
MaxLogit53.0290.3755.7788.8619.9196.2592.3879.6955.2788.7976.7185.1650.6093.12
Energy51.7990.4955.0789.0317.5396.5392.6979.2054.2788.8179.4585.0150.1093.20
KNN1054.1391.1852.8691.1858.4987.4692.8865.9864.5983.9542.4795.0711.0097.94
KNN2054.2191.1853.1790.9950.6189.3593.0467.5262.7684.7643.8494.9813.1097.62
KNN5054.5391.0553.3390.7941.9592.8293.0072.0660.7086.6842.4794.7417.3097.12
KNN10054.6590.8154.1290.5630.7991.9098.7247.1088.2452.1995.8989.3122.0096.58
Pre-train on ImageNet
-KNN1099.8046.4699.6826.5058.6590.6198.7246.4089.2152.4987.6791.3919.9097.25
KNN2099.8046.0299.6525.6957.3091.0198.7246.4688.8752.3090.4190.8721.7097.01
KNN5099.8045.4899.6124.7655.1691.5298.7646.6988.3352.1194.5289.9924.3096.62
KNN10099.8045.3399.6524.4354.8191.9098.7247.1088.2452.1995.8989.3128.8096.27
", + "image_path": "1b1752f689735deabd5b92180920f0866266f465367a3d1dc83a8f3255e9c4a5.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 791 + ], + "type": "text", + "content": "4998" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 292, + 791, + 303, + 801 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 292, + 791, + 303, + 801 + ], + "spans": [ + { + "bbox": [ + 292, + 791, + 303, + 801 + ], + "type": "text", + "content": "14" + } + ] + } + ], + "index": 3 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 25 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 71, + 89, + 523, + 543 + ], + "blocks": [ + { + "bbox": [ + 77, + 74, + 515, + 85 + ], + "lines": [ + { + "bbox": [ + 77, + 74, + 515, + 85 + ], + "spans": [ + { + "bbox": [ + 77, + 74, + 515, + 85 + ], + "type": "text", + "content": "Table 11: OOD detection performance for document classification (randomly select four categories as OOD)." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 71, + 89, + 523, + 543 + ], + "lines": [ + { + "bbox": [ + 71, + 89, + 523, + 543 + ], + "spans": [ + { + "bbox": [ + 71, + 89, + 523, + 543 + ], + "type": "table", + "html": "
RobERTaBaseID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
LetterHandwrittenAdvertisementMemoAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROC
Pre-train on pure-text data→ fine-tune on RVL-CDIP (ID)
88.86MSP70.2279.2150.1487.2484.6467.8091.4257.9974.1073.0695.3559.7594.3055.12
MaxLogit66.0487.5139.6592.5386.4777.0391.6771.8470.9682.23100.0077.8996.8071.96
Energy66.2087.5738.1992.5987.3577.0391.6771.8970.8582.27100.0077.9296.8071.96
KNN1062.6280.1960.9870.9075.6280.2485.8469.2071.2675.1394.1981.9990.4082.48
KNN2063.1880.1060.0771.1775.9080.0385.7268.8871.2275.0494.1981.7591.2081.89
KNN5063.7880.0057.3071.7076.3479.6785.8868.3870.8274.9494.1981.4591.8081.09
KNN10064.7779.9854.3371.9477.3779.3286.0867.8070.6474.7694.1981.2091.9080.47
Pre-train on pure-text data
-KNN1085.5359.9098.6121.7996.2156.7297.6958.3994.5149.2012.7998.0184.5065.73
KNN2085.4559.2798.7321.1996.2155.6397.9057.0594.5748.2812.7997.9186.1063.57
KNN5086.8057.9498.7720.4596.8954.1298.3055.3595.1946.9613.9597.6089.3059.64
KNN10088.4756.7198.8119.9796.8152.8998.1853.9395.5745.8813.9597.3891.1055.17
Pre-train on pure-text data→ fine-tune on RVL-CDIP (ID)
92.08MSP65.9669.5850.3877.9381.5260.8990.2154.2372.0265.6682.5660.1495.0050.90
MaxLogit62.1987.3544.6489.7979.9778.8488.3968.0868.8081.0280.2384.1994.3077.36
Energy61.2787.3543.6189.8179.1378.8588.1568.0868.0481.0280.2384.1994.3077.37
KNN1058.6579.5450.7771.8166.5683.4880.8775.1964.2177.5158.1492.7890.0077.76
KNN2057.8179.4351.4071.7267.0083.3581.1574.8664.3477.3458.1492.5789.7077.12
KNN5058.7779.3051.6071.6766.7283.1581.3174.3664.6077.1261.6392.2489.8076.17
KNN10061.3979.1652.7571.6167.8482.9381.7673.9165.9476.9062.7991.9989.8075.29
Pre-train on pure-text data
-KNN1099.4047.83100.0027.7598.2847.0393.2060.4097.7245.7546.5193.85100.0063.64
KNN2099.4447.33100.0027.4898.3246.4993.2460.2297.7545.3848.8493.70100.0062.79
KNN5099.4446.33100.0027.2398.4045.8593.4160.0597.8144.8651.1693.51100.0061.55
KNN10099.4445.67100.0027.3198.4445.2393.5359.9097.8544.5352.3393.40100.0060.31
Pre-train on ImageNet→ fine-tune on RVL-CDIP (ID)
87.80MSP70.5885.3555.2989.8864.2986.5471.1585.5865.3386.8454.7991.7077.2084.67
MaxLogit64.2587.4653.5990.7249.7090.6064.4588.7158.0089.3736.9995.1378.9086.86
Energy62.6687.6558.3390.3346.0091.2663.5689.0557.6489.5732.8895.6983.0087.05
KNN1090.9979.3756.3690.6472.4186.2089.1781.7477.2384.492.7499.3239.7093.70
KNN2092.1778.0047.4792.6168.2788.4290.8580.2374.6984.822.7499.2543.8093.08
KNN5094.3275.9628.4494.4965.6589.2792.7877.9170.3084.411.3798.9749.7092.09
KNN10095.5874.0227.2195.0760.4489.7894.2275.6369.3683.622.7498.6753.8091.10
Pre-train on ImageNet
-KNN1098.4642.2177.2981.4127.8791.1699.0843.4775.6864.5680.8289.9812.3098.17
KNN2098.6641.0076.7881.7029.2292.2799.0842.2975.9464.3283.5689.3014.1097.97
KNN5098.5839.5376.5881.8131.0192.0599.1240.8076.3263.5583.5688.5116.3097.61
KNN10098.6238.6277.1381.4932.6491.8499.1239.8676.8862.9583.5687.8019.5097.23
Pre-train on ImageNet→ fine-tune on RVL-CDIP (ID)
92.42MSP63.9687.0365.2188.1573.5679.7261.4088.4666.0385.8484.9374.3449.6092.49
MaxLogit56.4990.2275.3687.0072.6484.2644.2293.0162.1888.6272.6084.1629.1095.70
Energy57.4390.1177.0186.6073.4484.1743.7893.0662.9288.4873.9784.2528.0095.69
KNN1060.2790.1266.9090.7649.6689.1547.6792.6756.1290.6842.4794.287.2098.56
KNN2061.3290.0161.3791.3148.8390.3349.0092.5255.1391.0430.1495.568.8098.33
KNN5062.2289.7856.4491.5650.3489.5548.5292.3054.3890.8026.0395.7211.8097.97
KNN10062.6289.6054.9891.8550.7088.9347.6392.1853.9890.6430.1495.5413.9097.66
Pre-train on ImageNet
-KNN1099.1545.5786.0279.4432.4590.9899.5246.2079.2865.5524.6696.240.4099.78
KNN2099.1944.1186.8980.3533.4892.1999.6044.7979.7965.3627.4095.620.5099.73
KNN5099.2342.3987.9981.6636.7891.5999.6043.0780.9064.6843.8494.570.8099.63
KNN10099.1941.4689.0282.6340.6091.0599.6042.1482.1064.3252.0593.491.2099.53
Pre-train on ImageNet→ fine-tune on RVL-CDIP (ID)
91.03MSP69.6886.8169.6787.8872.2580.7869.3886.6170.2485.5267.1285.9758.5091.47
MaxLogit63.3589.2068.4088.5869.5884.3861.0889.9465.6088.0257.5389.4148.4093.04
Energy62.2289.2170.3488.4370.2684.3760.7590.0365.8988.0158.9089.4749.7093.03
KNN1068.1088.9954.9092.3053.4488.0558.1991.3458.6690.1738.3695.0222.9096.71
KNN2067.6188.9549.0192.8551.5389.2558.5991.1656.6890.5541.1094.4725.4096.35
KNN5067.2988.9142.5493.1553.9688.4358.7590.8855.6490.3442.4793.6029.9095.78
KNN10066.1988.9043.8093.1955.7187.7359.1190.6456.2090.1245.2192.8634.9095.27
Pre-train on ImageNet
-KNN1098.9041.9890.9677.1534.8790.6999.4041.2181.0362.7654.7994.2710.8098.47
KNN2098.9440.5491.6777.2036.8291.7199.4439.8581.7262.3264.3893.5712.7098.25
KNN5099.0738.7592.6176.9940.0091.1799.5238.1482.8061.2675.3492.4715.9097.87
KNN10099.1137.4393.2576.5643.3890.6899.5636.9383.8260.4082.1991.5218.9097.49
", + "image_path": "d21179af54df802164528cb4458c607a666cd4356a9b0dfd1da6f32220841944.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "type": "table", + "bbox": [ + 69, + 602, + 524, + 766 + ], + "blocks": [ + { + "bbox": [ + 69, + 558, + 524, + 593 + ], + "lines": [ + { + "bbox": [ + 69, + 558, + 524, + 593 + ], + "spans": [ + { + "bbox": [ + 69, + 558, + 524, + 593 + ], + "type": "text", + "content": "Table 12: OOD detection performance for document classification. All models are pre-trained on IIT-CDIP. For LayoutLM models, we adopt the checkpoints from the Huggingface model hub. For UDoc, we pre-train the model on our side. All models are fine-tuned on RVL-CDIP ID data." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 69, + 602, + 524, + 766 + ], + "lines": [ + { + "bbox": [ + 69, + 602, + 524, + 766 + ], + "spans": [ + { + "bbox": [ + 69, + 602, + 524, + 766 + ], + "type": "table", + "html": "
ID AccMethodOOD Dataset (In-domain)OOD Dataset (Out-domain)
Sci. ReportPresentationFormLetterAverageSci. PosterReceipt
FPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95AUROCFPR95
LayoutMv1Base97.28MSP47.4874.9159.7468.7266.4065.3658.8969.1258.1369.5343.0277.1572.40
MaxLogit27.0692.3837.9791.5245.6588.3635.9291.2236.6590.8724.4294.9657.30
Energy27.0692.4037.9791.5445.6588.3635.9291.2336.6590.8824.4294.9757.30
KNN1020.8296.0935.3293.8240.0691.3428.6594.8031.2194.0117.4497.0049.80
KNN2021.7495.9336.2093.7741.4291.1230.4494.6132.4593.8617.4496.8251.70
KNN5024.3495.5638.2593.4143.9390.6933.6494.1935.0493.4623.2696.4453.80
KNN10025.5495.3039.1393.2045.1790.3534.7893.9936.1693.2125.5896.2454.70
LayoutMv397.81MSP56.1670.8163.4467.1767.1665.3058.6069.5861.3468.2252.3372.7043.60
MaxLogit30.7089.1740.4288.1842.9884.0933.1288.2236.8087.4219.7794.5011.70
Energy30.7089.1840.4288.1842.9884.1033.1288.2336.8087.4219.7794.5111.70
KNN1021.7495.0335.6893.3832.8891.8618.5196.2627.2094.1311.6397.588.90
KNN2022.7494.9036.5693.2033.9691.6619.6496.1528.2293.9812.7997.4410.00
KNN5024.6294.6238.3792.7135.8391.3821.6395.9330.1193.6613.9597.2010.70
KNN10025.2294.3839.2992.3236.5591.0922.4895.7930.8893.4016.2897.0411.80
UDocNet5097.36MSP66.1365.7369.4364.0971.0363.2871.0663.2569.4164.0940.7078.4739.80
MaxLogit45.9682.1247.2186.3949.6483.1649.5983.1348.1083.702.3398.574.00
Energy45.9682.1247.2186.4049.6483.1649.5983.1348.1083.702.3398.604.00
KNN1030.0294.4741.2288.6641.9090.9936.6593.4837.4591.901.1699.135.50
KNN2031.1094.3641.9888.4442.1090.9038.0393.3538.3091.761.1699.046.90
KNN5033.9594.0743.3587.8944.0190.7240.7193.0640.5191.431.1698.847.40
KNN10034.8393.8443.7587.5145.0190.6141.9692.9041.3991.221.1698.728.30
", + "image_path": "a818ce65bc49fbbdc05638cbad98c0d341d7b4b8eaf06fd0ffa7635bf25db81f.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 790 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 790 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 790 + ], + "type": "text", + "content": "4999" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 292, + 791, + 302, + 801 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 292, + 791, + 302, + 801 + ], + "spans": [ + { + "bbox": [ + 292, + 791, + 302, + 801 + ], + "type": "text", + "content": "15" + } + ] + } + ], + "index": 5 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 26 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2023/A Dataset for Investigating the Impact of Context for Offensive Language Detection in Tweets/1ce470ab-e396-4125-bc86-502c385ac36b_content_list.json b/2023/A Dataset for Investigating the Impact of Context for Offensive Language Detection in Tweets/1ce470ab-e396-4125-bc86-502c385ac36b_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..312fc377fd695c9470120f06ee9a9b2e4366e683 --- /dev/null +++ b/2023/A Dataset for Investigating the Impact of Context for Offensive Language Detection in Tweets/1ce470ab-e396-4125-bc86-502c385ac36b_content_list.json @@ -0,0 +1,1174 @@ +[ + { + "type": "text", + "text": "A Dataset for Investigating the Impact of Context for Offensive Language Detection in Tweets", + "text_level": 1, + "bbox": [ + 119, + 89, + 878, + 129 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Musa Nuri İhtiyar, Ömer Özdemir, Mustafa Emre Erengül, Arzucan Özgü", + "bbox": [ + 174, + 153, + 826, + 172 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "{musa.ihtiyar, omer.ozdemir1, mustafa.erengul, 
arzucan.ozgur} @boun.edu.tr", + "bbox": [ + 186, + 173, + 816, + 187 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Department of Computer Engineering, Bogazici University", + "bbox": [ + 260, + 189, + 739, + 204 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 260, + 252, + 339, + 268 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Offensive language detection is crucial in natural language processing (NLP). We investigated the importance of context for detecting such language in reply tweets on Twitter, where the use of offensive language is widespread. We collected a Turkish tweet dataset where the target group was unvaccinated people during the Covid period. Tweets in the dataset were enriched with contextual information by adding the original tweet to which a particular tweet was posted as a reply. The dataset, which includes over 28,000 tweet-reply pairs, was manually labeled by human annotators and made publicly available. In addition, we compared the performance of different machine learning models with and without contextual information. Our results show that this type of contextual information was not very useful in improving the performance of the models in general, although it slightly increased the macroaveraged F1-score of certain models.", + "bbox": [ + 144, + 278, + 460, + 576 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 114, + 588, + 258, + 602 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Humans can communicate through language, which enables them to engage in many useful activities, yet language might also be used for destructive purposes. 
One of the most critical examples of this is offensive language, which can be defined as \"any utterance which is blasphemous, obscene, indecent, insulting, hurtful, disgusting, morally repugnant, or which breaches commonly accepted standards of decent and proper speech\" (Law-Insider, 2023).", + "bbox": [ + 112, + 613, + 489, + 758 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "The use of offensive language can occur on a variety of platforms, but is particularly common on online platforms such as Twitter. In recent years, several approaches have been proposed to automatically detect offensive language in tweets. Finetuning language models pre-trained with extensive data is considered the current state-of-the-art for detecting offensive language. BERT (Devlin et al., 2019) is one of the most prominent transformer-based pre-trained language models for English and", + "bbox": [ + 112, + 758, + 489, + 919 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "has also been shown to be very effective in detecting offensive language (Dai et al., 2020; Zampieri et al., 2020; Mozafari et al., 2020). A similar trend can be observed for other languages. For example, Mubarak et al. (2023) used AraBERT (Antoun et al., 2020), the Arabic version of BERT, for Arabic. Similarly, BERTurk (Schweter, 2020) has been successfully used to detect offensive language in Turkish tweets (Beyhan et al., 2022; Toraman et al., 2022; Arin et al., 2023).", + "bbox": [ + 507, + 253, + 885, + 413 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Annotated datasets are needed to train or fine-tune machine learning models for offensive language detection. A number of datasets have been prepared for different languages and domains and made publicly available (Basile et al., 2019; Zampieri et al., 2020; ElSherief et al., 2021). A limitation of these datasets is that generally each tweet is labeled individually without considering contextual information. 
There are few studies that consider contextual information. Mosca et al. (2021) investigate the relative contribution of user information features in machine learning models by using explainability techniques. Cécillon et al. (2021) propose a graph-based approach to represent dialog data from chat logs of an online game and use this representation for abusive language detection. Yu et al. (2022) define context as the previous comment in a Reddit conversation thread and show that such contextual information is useful for detecting hate speech.", + "bbox": [ + 507, + 417, + 885, + 738 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "We hypothesize that similar contextual information may be useful for offensive language detection in tweets. As a motivating example, consider a reply tweet that states, \"I fully agree.\" The category of this reply tweet (i.e., whether it is offensive or not) depends on the previous context, i.e., the tweet to which it was posted as a reply. To investigate the impact of such contextual information on commonly used machine learning-based offensive language detection models, we collected and manually annotated tweet-reply pairs in Turkish, a", + "bbox": [ + 507, + 741, + 885, + 919 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "1543", + "bbox": [ + 480, + 927, + 519, + 941 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 1543-1549", + "bbox": [ + 216, + 944, + 779, + 957 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "December 6-10, 2023 ©2023 Association for Computational Linguistics", + "bbox": [ + 277, + 957, + 719, + 971 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "low-resource language with limited datasets. One of the first tweet datasets for detecting offensive language in Turkish was developed by Çöltekin (2020). Recently, Beyhan et al. (2022) and Toraman et al. 
(2022) also released tweet datasets for Turkish. However, none of these datasets consider contextual information.", + "bbox": [ + 112, + 84, + 487, + 195 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "We chose our domain as the Covid-19 pandemic, which affected our lives in a number of different ways. Pandemics trigger fear and anger in most people, leading to increased use of offensive language. Sharif et al. (2021) studied the detection of hostile statements in the context of the Covid-19 pandemic, and Bor et al. (2023) showed that such offensive language occurred against unvaccinated people during this period. Therefore, we selected unvaccinated people as our target group.", + "bbox": [ + 110, + 198, + 487, + 357 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The main contributions of this paper are twofold: (i) We collect and manually annotate a Turkish tweet dataset specific to the Covid-19 period and containing contextual information in the form of the replied tweet. (ii) We investigate the impact of such contextual information on the performance of commonly used machine learning-based models for offensive language detection. The dataset and source code are made publicly available for future studies.", + "bbox": [ + 112, + 359, + 487, + 518 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The rest of the paper is organized as follows. While Section 2 examines the collection and annotation of the dataset, Section 3 focuses on the experiments conducted to compare the machine learning models with and without contextual information. Finally, Section 4 discusses the lessons learned.", + "bbox": [ + 112, + 520, + 489, + 631 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2 Dataset", + "text_level": 1, + "bbox": [ + 112, + 646, + 215, + 659 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "We collected a dataset containing replied and reply tweet pairs. 
A reply tweet is a tweet written in response to another tweet, while a replied tweet is a tweet to which another tweet has replied. Suppose a tweet $T1$ is posted and then another tweet $T2$ is posted in response to $T1$ . In this case, $T1$ is called a replied tweet and $T2$ is called a reply tweet.", "bbox": [ 112, 671, 487, 785 ], "page_idx": 1 }, { "type": "text", "text": "Our goal was to create a target group-specific dataset to enable the development of models capable of detecting offensive language towards a specific target group. We selected unvaccinated people in the Covid-19 pandemic as the target group for offensive language. We examined the period from", "bbox": [ 112, 785, 487, 883 ], "page_idx": 1 }, { "type": "text", "text": "March 2020, when the virus reached Türkiye, to September 2022, when the pandemic was no longer on the agenda for most people on the planet. We used search by keyword with 16 different queries such as \"aşısız\" (unvaccinated) and \"aşı olmak istemeyen\" (those who do not want to be vaccinated) to identify relevant tweets. The keywords are phrases meaning \"aşısız\" (unvaccinated) with different singular/plural forms or spellings due to the Turkish character related issues. The list of all keywords used in this study can be found in the Appendix.", "bbox": [ 507, 84, 884, 261 ], "page_idx": 1 }, { "type": "text", "text": "There were different options to search for the replied and reply tweet pairs. The first one was getting pairs where at least one of the 16 search keywords occurred in the reply tweet. We call this Dataset 1. Another possibility is that these keywords occur in the replied tweet. This case contains two subcases. The first case is to have at least one of these keywords in a replied tweet, which itself is a reply to another tweet. We refer to this case as Dataset 2. 
Finally, the last case is to have at least one of these keywords in a replied tweet that is not itself a reply to another tweet. This case is called Dataset 3. All three of these datasets were merged to obtain the final dataset.", + "bbox": [ + 507, + 266, + 884, + 491 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Although conversations on Twitter could be arbitrarily long, we only looked at the previous tweet (replied tweet) to avoid unnecessarily complicated data format. In other words, all of the samples in our dataset are a pair. Yet, we could capture any replied-reply couple related to unvaccinated people as long as at least one of the tweets contains one or more of the pre-determined keywords. During the search, we collected tweet ID and tweet text information for both the replied and reply tweets.", + "bbox": [ + 507, + 495, + 882, + 656 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Once the collection process was completed, we proceeded with labeling. The objective of the annotation was to obtain a binary label indicating whether or not the reply tweet contains offensive language against unvaccinated people. Making explanations about specific points is essential for this part. First of all, we decided to keep the task clear so that we could understand the impact of the context better, so using a binary label looked like the best option, and we only looked at offensive language against unvaccinated people; in other words, even if a reply tweet was offensive, against immigrants for instance, we labeled that as \"not offensive against unvaccinated people\" instead of \"offensive against unvaccinated people\". 
This was not because such offensive language was acceptable", + "bbox": [ + 507, + 662, + 884, + 917 + ], + "page_idx": 1 + }, + { + "type": "page_footnote", + "text": "$^{1}$ https://github.com/boun-tabi/CovidOffensiveLanguageUltimateDatasets", + "bbox": [ + 112, + 892, + 405, + 917 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "1544 2", + "bbox": [ + 482, + 928, + 519, + 952 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "but due to the fact that we wanted to have a single target group to make the problem more focused such that the effect of the context could be seen more directly. Solely focusing on the offensiveness of the reply tweet was done since the context is relevant only for the reply tweet. That is, a pair where the replied tweet was offensive against unvaccinated people, but the reply tweet was not offensive is categorized as \"not offensive\" since we are only interested in the reply tweet's behavior.", + "bbox": [ + 112, + 84, + 492, + 247 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Which cases to consider as offensive language is another crucial point to explain. Situations like swearing and insulting were the most obvious ones. In addition, provocative words like stating that there should be a punishment, such as not being able to go outside or get into closed areas, without stating any exception or an alternative option, for unvaccinated people are included in this label. Also, we want to express that quotations or simply stating an idea without using harmful language, like saying that \"not getting vaccinated is a wrong behavior,\" are not perceived as offensive language. Even if we determine criteria, as we mentioned, for when to consider a tweet as offensive, this field is inevitably subjective for specific examples. 
This is why at least two people annotated each pair in our dataset.", + "bbox": [ + 115, + 252, + 490, + 525 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The annotation process was carried out as follows. A general guideline for annotation was established and provided to the annotators (i.e., three of the authors of the paper) and a training was performed by using sample examples. Each tweet pair was annotated independently by two annotators and a third annotator was used to resolve inconsistencies. For each tweet pair, there were three label options, namely \"not offensive against unvaccinated people\", \"ambiguous\", and \"offensive against unvaccinated people\". Although it is stated that the goal was obtaining binary labels, three options were given in order to provide more flexibility to the annotators; however, the pairs whose final label is \"ambiguous\" were removed from the final dataset since this would make the primary goal of the study more difficult to interpret which was examining the effect of taking the replied tweet into account. While doing the annotation, totally unrelated cases in the dataset, such as unvaccinated fruits and vegetables owing to chosen keywords, were mostly cleaned even though a limited number of such cases might be still existing in the dataset. We wanted to measure inter-annotator agreement", + "bbox": [ + 115, + 533, + 490, + 919 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "for these labels, so we used the F1 and Cohen Kappa scores. We obtained $55.22\\%$ and $46.26\\%$ , respectively, for these metrics.", + "bbox": [ + 507, + 84, + 884, + 131 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "After obtaining the annotations by two annotators for each pair of tweets, the annotations were examined. If there is consistency, then this was chosen as the ultimate label. If the ultimate label is \"ambiguous\", it is removed; otherwise, it is added to the final dataset. 
If there is inconsistency in the form of one annotator choosing \"not offensive\" and the other choosing \"offensive\", these cases are ambiguous; consequently, these were removed as well. For the inconsistencies where one annotator chose \"ambiguous\", the third annotator looked at the tweet pair and determined the final decision. If a label other than \"ambiguous\" was chosen, then it is selected as the last label. If not, it was removed. After several hours of this procedure, pairs with binary labels were obtained. In total, we obtained 28808 pairs. While 13478 of them came from Dataset 1, Datasets 2 and 3 contributed with 1515 and 13815 pairs, respectively. The final binary dataset has 27219 examples that are not offensive against unvaccinated people, denoted with 0, and 1589 examples which are offensive against unvaccinated people, denoted with 2, since 1 was representing the ambiguous case. The dataset is inevitably imbalanced since $94.48\%$ of the pairs are labeled as 0. Inter-annotator agreement for the dataset's last version was measured using the F1 score and Cohen Kappa score. This time they were calculated as $95.21\%$ and $88.97\%$ , which is significantly better than the initial version of the dataset. The final version of the dataset containing the replied and reply tweet ids as well as the manual annotations is made publicly available for future studies.[2]", "bbox": [ 507, 135, 885, 682 ], "page_idx": 2 }, { "type": "text", "text": "3 Experiments and Results", "text_level": 1, "bbox": [ 509, 700, 761, 717 ], "page_idx": 2 }, { "type": "text", "text": "After completing the annotation of the dataset, we used it to train and evaluate various machine learning models to detect offensive language against unvaccinated people. We randomly selected $20\%$ of the dataset as the test set. For each algorithm we used, we examined two different scenarios. 
In the first, we used only the reply tweet, while in the second, we studied the impact of using the replied tweet in addition to the reply tweet on our models.", + "bbox": [ + 507, + 731, + 884, + 876 + ], + "page_idx": 2 + }, + { + "type": "page_footnote", + "text": "2https://github.com/boun-tabi/", + "bbox": [ + 507, + 891, + 761, + 904 + ], + "page_idx": 2 + }, + { + "type": "page_footnote", + "text": "CovidOffensiveLanguageUltimateDatasets", + "bbox": [ + 509, + 904, + 801, + 917 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "1545", + "bbox": [ + 482, + 927, + 519, + 938 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 492, + 940, + 504, + 951 + ], + "page_idx": 2 + }, + { + "type": "table", + "img_path": "images/d6107a1af23337bab38d975758aad73cb0e78e0e7269bcee018e4a91ee0249eb.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table><tr><td>Method</td><td>Prec</td><td>Rec</td><td>F1</td></tr>
<tr><td>KNN (1)</td><td>20.56</td><td>41.12</td><td>27.41</td></tr>
<tr><td>KNN (2)</td><td>20.84</td><td>40.79</td><td>27.59</td></tr>
<tr><td>LR (1)</td><td>50.00</td><td>39.80</td><td>44.32</td></tr>
<tr><td>LR (2)</td><td>44.72</td><td>41.78</td><td>43.20</td></tr>
<tr><td>MNB (1)</td><td>65.32</td><td>26.64</td><td>37.85</td></tr>
<tr><td>MNB (2)</td><td>45.65</td><td>34.54</td><td>39.32</td></tr>
<tr><td>SVM (1)</td><td>50.76</td><td>44.08</td><td>47.18</td></tr>
<tr><td>SVM (2)</td><td>51.46</td><td>34.87</td><td>41.57</td></tr>
<tr><td>RF (1)</td><td>38.51</td><td>39.14</td><td>38.82</td></tr>
<tr><td>RF (2)</td><td>43.25</td><td>35.85</td><td>39.21</td></tr></table>
", + "bbox": [ + 161, + 80, + 438, + 266 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.1 Traditional Machine Learning Models", + "text_level": 1, + "bbox": [ + 112, + 361, + 460, + 378 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Simple machine learning algorithms can perform quite well for certain tasks. Therefore, we started with simple algorithms such as Logistic Regression (LR), K-Nearest Neighbors (KNN), and Multinomial Naive Bayes (MNB). We then also used Support Vector Machines (SVM) and Random Forest (RF). Since our dataset was imbalanced, we used downsampling to improve the performance of our models. In other words, we randomly selected a subset of the not offensive class while using all samples of the offensive class, since the latter already had a limited number of samples. We had 1285 positive samples in the training set and reduced the not offensive class to 4500 samples, since a stronger reduction would cause a data scarcity problem. We used a tf-idf-based vector representation for the tweets. The performance of the commonly used traditional machine learning algorithms is given in Table 1 with the macro-averaged F1 score, precision, and recall.", + "bbox": [ + 112, + 382, + 489, + 703 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "We can make two main observations from these results. These simple models are not able to perform well on this task; even a majority classifier would obtain $50.0\\%$ recall, $47.24\\%$ precision, and $48.58\\%$ F1 score. 
The inclusion of information from the replied tweets does not have a significant impact on the performance of the models and behaves more like noise.", + "bbox": [ + 112, + 706, + 489, + 834 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2 Deep Learning Models", + "text_level": 1, + "bbox": [ + 112, + 848, + 339, + 865 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Deep learning models are the state of the art in natural language processing. In particular, transformer-based models (Vaswani et al., 2017) like BERT (De", + "bbox": [ + 112, + 871, + 489, + 917 + ], + "page_idx": 3 + }, + { + "type": "table", + "img_path": "images/90529b5c9b29f8bff9fcebf12b9eb9d09ac7256a0ea91d5cb4178383fdcc3523.jpg", + "table_caption": [ + "Table 1: Results for traditional models. For each model, (1) corresponds to the first scenario where only the reply tweet is used and (2) corresponds to the second scenario where both the reply and the replied tweet are used." + ], + "table_footnote": [], + "table_body": "
<table><tr><td>Method</td><td>Prec</td><td>Rec</td><td>F1</td></tr>
<tr><td>BERTurk (1)</td><td>65.73</td><td>82.68</td><td>70.28</td></tr>
<tr><td>BERTurk (2)</td><td>70.11</td><td>79.03</td><td>73.57</td></tr></table>
", + "bbox": [ + 541, + 80, + 852, + 134 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Table 2: Results for deep learning models. (1) corresponds to the first scenario where only the reply tweet is used and (2) corresponds to the second scenario where both the reply and the replied tweet are used.", + "bbox": [ + 507, + 143, + 882, + 202 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "vlin et al., 2019) have achieved remarkable success in recent years. Therefore, we decided to look at the performance of the Turkish version of the BERT model, called BERTurk (Schweter, 2020), with and without replied tweet information. For the single tweet setting, we followed the standard fine-tuning procedure, using binary cross-entropy loss and the Adam optimizer (Kingma and Ba, 2015) with a $5\\times10^{-5}$ learning rate. Hyperparameters were optimized based on the validation set F1 score. For the case of using two tweets (the reply and replied tweet), the only difference was creating a longer input string by combining the two tweets in the form \"Önceki tweet: replied tweet Cevap: reply tweet\" (in English, \"Previous tweet: replied tweet Reply: reply tweet\"). The results (macro-averaged scores) obtained on the test set are summarized for the two cases in Table 2.", + "bbox": [ + 507, + 247, + 884, + 537 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Interestingly, this time the model that uses both the reply and the replied tweet performed better in terms of F1 score, yet the effect of taking context into account is still limited. Even though precision improves, recall drops. The English translation of an example illustrating this phenomenon is provided below. In this example, the reply tweet is offensive, while the replied tweet is not offensive. 
In this case, including the replied tweet as contextual information to classify the reply tweet misleads the model.", + "bbox": [ + 507, + 549, + 882, + 725 + ], + "page_idx": 3 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Replied Tweet: \"Vaccination opponents misread the National Anthem.\"", + "- Reply Tweet: \"Go away the army of brainless people to your village, you can't live in the metropolis without a vaccine.\"" + ], + "bbox": [ + 507, + 736, + 884, + 827 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "For more example tweets where the inclusion of context (i.e., the replied tweet) is necessary for the correct classification of the reply tweet and where context could mislead the classifier, see the Appendix.", + "bbox": [ + 507, + 839, + 882, + 917 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "1546 4", + "bbox": [ + 480, + 928, + 519, + 952 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "4 Conclusion", + "text_level": 1, + "bbox": [ + 112, + 84, + 247, + 98 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We prepared an offensive language dataset for Turkish, where the number of such datasets is very limited. Unlike most other tweet datasets where each tweet is considered individually, we included the replied tweet as contextual information and investigated how this information affects the performance of commonly used machine learning models. Contrary to our expectation, our results showed that this resulted in only a slight improvement in the F1-score for some models and did not significantly improve the performance of the studied models for offensive language detection in general. In theory, the previous tweet appears to contain important information. However, in analyzing our dataset, we found that most reply tweets have only a weak relationship to the replied tweet in terms of meaning. 
Moreover, interaction with other tweets is dominated by the use of other features on Twitter, such as \"like\" or \"retweet.\" Consequently, the use of information about previous tweets did not contribute much to offensive language detection in this study. Nonetheless, attempting to develop models specifically designed to consider information about previous tweets could lead to better performance and represents a promising future research direction.", + "bbox": [ + 112, + 111, + 492, + 529 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Limitations", + "text_level": 1, + "bbox": [ + 112, + 543, + 218, + 558 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "While we tried various methods for detecting offensive language with and without the replied tweet, we did not focus on developing a specific model that best exploits the previous (i.e., replied) tweet. Our goal was to investigate the impact of contextual information on the performance of commonly used machine learning-based models. Therefore, even though we did not obtain significant improvements with contextual information, further research focusing on this subject is a promising direction to follow.", + "bbox": [ + 112, + 571, + 489, + 747 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We examined the use of the previous tweet for only a single target group and language due to the laborious nature of the manual annotation process and time limitations. The dataset can be expanded with other target groups and languages in the future.", + "bbox": [ + 112, + 749, + 489, + 829 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Ethics Statement", + "text_level": 1, + "bbox": [ + 112, + 843, + 265, + 858 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Offensive language detection systems could be very useful in real-life applications. 
Because machine learning-based models are guided mainly by the data they", + "bbox": [ + 112, + 871, + 489, + 919 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "use, the annotation of datasets is an essential step, which ought to be carried out responsibly. Although we used multiple annotators for the labeling process, developing better strategies is possible, since some examples of offensive language are highly subjective. The annotated data is shared in accordance with Twitter's terms of use.", + "bbox": [ + 507, + 84, + 884, + 197 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Acknowledgements", + "text_level": 1, + "bbox": [ + 509, + 210, + 682, + 225 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "This work is partially supported by the EU-funded project entitled \"Utilizing Digital Technology for Social Cohesion, Positive Messaging and Peace by Boosting Collaboration, Exchange and Solidarity\" and by the Boğaziçi University Research Fund under Grant Number 16903.", + "bbox": [ + 507, + 237, + 884, + 332 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 510, + 362, + 608, + 376 + ], + "page_idx": 4 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Wissam Antoun, Fady Baly, and Hazem Hajj. 2020. AraBERT: Transformer-based model for Arabic language understanding. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection, pages 9–15, Marseille, France. European Language Resources Association.", + "İnanç Arın, Zeynep Işık, Seçilay Kugal, Somaiyeh Dehghan, Arzucan Özgür, and Berrin Yanıkoğlu. 2023. SIU2023-NST - Hate Speech Detection Contest. 
In 31st Signal Processing and Communications Applications Conference (SIU), pages 1-4.", + "Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela Sanguinetti. 2019. SemEval-2019 Task 5: Multilingual detection of hate speech against immigrants and women in Twitter. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 54-63.", + "Fatih Beyhan, Buse Çarık, İnanç Arın, Ayşecan Terzioğlu, Berrin Yanıkoğlu, and Reyyan Yeniterzi. 2022. A Turkish hate speech dataset and detection system. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 4177-4185.", + "Alexander Bor, Frederik Jorgensen, and Michael Bang Petersen. 2023. Discriminatory attitudes against unvaccinated people during the pandemic. Nature, 613(7945):704-711.", + "Noé Cécillon, Vincent Labatut, Richard Dufour, and Georges Linares. 2021. Graph embeddings for abusive language detection. SN Computer Science, 2:1-15.", + "Çağrı Çöltekin. 2020. A corpus of Turkish offensive language on social media. In Proceedings of the" + ], + "bbox": [ + 509, + 385, + 885, + 919 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "1547 5", + "bbox": [ + 480, + 927, + 519, + 952 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Twelfth Language Resources and Evaluation Conference, pages 6174-6184, Marseille, France. European Language Resources Association.", + "bbox": [ + 131, + 85, + 489, + 124 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Wenliang Dai, Tiezheng Yu, Zihan Liu, and Pascale Fung. 2020. Kungfupanda at SemEval-2020 Task 12: BERT-Based Multi-Task Learning for Offensive Language Detection. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 2060-2066.", + "bbox": [ + 115, + 134, + 489, + 202 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. 
BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", + "bbox": [ + 114, + 210, + 489, + 329 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Mai ElSherief, Caleb Ziems, David Muchlinski, Vaishnavi Anupindi, Jordyn Seybolt, Munmun De Choudhury, and Diyi Yang. 2021. Latent hatred: A benchmark for understanding implicit hate speech. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 345-363.", + "bbox": [ + 114, + 338, + 489, + 418 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", + "bbox": [ + 114, + 426, + 489, + 494 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Law-Insider. 2023. Offensive language definition | law insider. Accessed on June 18, 2023.", + "bbox": [ + 114, + 502, + 487, + 529 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Edoardo Mosca, Maximilian Wich, and Georg Groh. 2021. Understanding and interpreting the impact of user context in hate speech detection. In Proceedings of the Ninth International Workshop on Natural Language Processing for Social Media, pages 91-102.", + "bbox": [ + 114, + 538, + 489, + 606 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Marzieh Mozafari, Reza Farahbakhsh, and Noel Crespi. 2020. A BERT-based transfer learning approach for hate speech detection in online social media. 
In Complex Networks and Their Applications VIII: Volume 1 Proceedings of the Eighth International Conference on Complex Networks and Their Applications COMPLEX NETWORKS 2019 8, pages 928-940. Springer.", + "bbox": [ + 114, + 614, + 489, + 708 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Hamdy Mubarak, Sabit Hassan, and Shammur Absar Chowdhury. 2023. Emojis as anchors to detect Arabic offensive language and hate speech. Natural Language Engineering, pages 1-22.", + "bbox": [ + 114, + 715, + 489, + 770 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Stefan Schweter. 2020. BERTurk - BERT models for Turkish.", + "bbox": [ + 114, + 778, + 487, + 806 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Omar Sharif, Eftekhar Hossain, and Mohammed Moshiul Hoque. 2021. Combating hostility: Covid-19 fake news and hostile post detection in social media. arXiv preprint arXiv:2101.03291.", + "bbox": [ + 114, + 815, + 489, + 869 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Cagri Toraman, Furkan Şahinuc, and Eyup Yilmaz. 2022. Large-scale hate speech detection with cross-domain transfer. In Proceedings of the Thirteenth
In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5918-5930, Seattle, United States. Association for Computational Linguistics.", + "bbox": [ + 509, + 209, + 884, + 315 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, and Çağrı Çöltekin. 2020. SemEval-2020 Task 12: Multilingual Offensive Language Identification in Social Media (OffensEval 2020). In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1425-1447. Association for Computational Linguistics.", + "bbox": [ + 509, + 324, + 884, + 430 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "A Tweet Pair Examples Regarding Context Information", + "text_level": 1, + "bbox": [ + 509, + 439, + 826, + 472 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "A.1 An example where context is necessary for correct classification of the reply tweet", + "text_level": 1, + "bbox": [ + 509, + 482, + 880, + 514 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "The English translation:", + "bbox": [ + 509, + 519, + 690, + 533 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Replied Tweet: If we are locked down at home again because of those who are not vaccinated, you will see curses that you have not seen so far in this account.", + "bbox": [ + 507, + 536, + 880, + 598 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Reply Tweet: +1", + "bbox": [ + 527, + 600, + 655, + 615 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "A.2 An example where context does not matter", + "text_level": 1, + "bbox": [ + 509, + 626, + 836, + 656 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "English translation:", + "bbox": [ + 509, + 663, + 658, + 678 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Replied Tweet: It may be 
thought that vaccination is unnecessary or that the mask is not protective; however, there is nothing humane about ganging up on a girl who works as a cashier under difficult conditions, entering a closed area without a mask, and causing fear and sadness.", + "bbox": [ + 507, + 678, + 880, + 774 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Reply Tweet: Those who are not vaccinated + those who do not wear masks. I seriously don't understand what's wrong with this team. This team is seriously a litmus test of intelligence.", + "bbox": [ + 507, + 776, + 882, + 840 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "A.3 An example where reply is not offensive but replied might mislead since it is offensive", + "text_level": 1, + "bbox": [ + 509, + 850, + 870, + 897 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "English translation:", + "bbox": [ + 509, + 903, + 658, + 917 + ], + "page_idx": 5 + }, + { + "type": "footer", + "text": "1548", + "bbox": [ + 480, + 928, + 519, + 939 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 492, + 940, + 504, + 951 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Replied Tweet: Prof. 
Bingür Sonmez: Those who say they will not get vaccinated are traitors, we will not allow them to get married with our girls", + "bbox": [ + 112, + 84, + 487, + 131 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Reply Tweet: At the point where the cardiovascular surgeon has come, we will not allow traitors who do not get vaccinated to get married with our girls.", + "bbox": [ + 112, + 133, + 487, + 195 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "A.4 An example where reply is offensive but replied might mislead since it is not offensive", + "text_level": 1, + "bbox": [ + 114, + 206, + 478, + 254 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "English translation:", + "bbox": [ + 114, + 261, + 262, + 274 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Replied Tweet: Vaccination opponents misread the National Anthem", + "bbox": [ + 112, + 277, + 485, + 306 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Reply Tweet: Go away the army of brainless people to your village, you can't live in the metropolis without a vaccine.", + "bbox": [ + 112, + 309, + 487, + 355 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "B Keywords used for Getting Related Tweets", + "text_level": 1, + "bbox": [ + 114, + 368, + 453, + 399 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "The following keywords were used in our search: aşısız, asısız, aşısızlar, asısızlar, aşı olmayan, ası olmayan, aşı olmayanlar, ası olmayanlar, aşı olmak istemeyen, ası olmak istemeyen, aşı olmak istemeyenler, ası olmak istemeyenler, aşı yaptırmayan, ası yaptırmayan, aşı yaptırmayanlar, ası yaptırmayanlar.", + "bbox": [ + 112, + 411, + 487, + 523 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "1549 7", + "bbox": [ + 480, + 928, + 519, + 951 + ], + "page_idx": 6 + } +] \ No newline at end of file diff --git a/2023/A Dataset for Investigating the Impact of Context for Offensive Language Detection in 
Tweets/1ce470ab-e396-4125-bc86-502c385ac36b_model.json b/2023/A Dataset for Investigating the Impact of Context for Offensive Language Detection in Tweets/1ce470ab-e396-4125-bc86-502c385ac36b_model.json new file mode 100644 index 0000000000000000000000000000000000000000..b6e2178409fadb7ea57d8b7cef3c7d02f5b1ebb9 --- /dev/null +++ b/2023/A Dataset for Investigating the Impact of Context for Offensive Language Detection in Tweets/1ce470ab-e396-4125-bc86-502c385ac36b_model.json @@ -0,0 +1,1259 @@ +[ + [ + { + "type": "title", + "bbox": [ + 0.12, + 0.09, + 0.88, + 0.13 + ], + "angle": 0, + "content": "A Dataset for Investigating the Impact of Context for Offensive Language Detection in Tweets" + }, + { + "type": "text", + "bbox": [ + 0.176, + 0.154, + 0.827, + 0.173 + ], + "angle": 0, + "content": "Musa Nuri İhtiyar, Ömer Özdemir, Mustafa Emre Erengül, Arzucan Özgür" + }, + { + "type": "text", + "bbox": [ + 0.187, + 0.174, + 0.817, + 0.189 + ], + "angle": 0, + "content": "{musa.ihtiyar, omer.ozdemir1, mustafa.erengul, arzucan.ozgur} @boun.edu.tr" + }, + { + "type": "text", + "bbox": [ + 0.262, + 0.19, + 0.74, + 0.205 + ], + "angle": 0, + "content": "Department of Computer Engineering, Bogazici University" + }, + { + "type": "title", + "bbox": [ + 0.261, + 0.253, + 0.341, + 0.269 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.145, + 0.279, + 0.461, + 0.577 + ], + "angle": 0, + "content": "Offensive language detection is crucial in natural language processing (NLP). We investigated the importance of context for detecting such language in reply tweets on Twitter, where the use of offensive language is widespread. We collected a Turkish tweet dataset where the target group was unvaccinated people during the Covid period. Tweets in the dataset were enriched with contextual information by adding the original tweet to which a particular tweet was posted as a reply. 
The dataset, which includes over 28,000 tweet-reply pairs, was manually labeled by human annotators and made publicly available. In addition, we compared the performance of different machine learning models with and without contextual information. Our results show that this type of contextual information was not very useful in improving the performance of the models in general, although it slightly increased the macro-averaged F1-score of certain models." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.589, + 0.26, + 0.604 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.614, + 0.49, + 0.759 + ], + "angle": 0, + "content": "Humans can communicate through language, which enables them to engage in many useful activities, yet language might also be used for destructive purposes. One of the most critical examples of this is offensive language, which can be defined as \"any utterance which is blasphemous, obscene, indecent, insulting, hurtful, disgusting, morally repugnant, or which breaches commonly accepted standards of decent and proper speech\" (Law-Insider, 2023)." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.759, + 0.49, + 0.92 + ], + "angle": 0, + "content": "The use of offensive language can occur on a variety of platforms, but is particularly common on online platforms such as Twitter. In recent years, several approaches have been proposed to automatically detect offensive language in tweets. Fine-tuning language models pre-trained with extensive data is considered the current state-of-the-art for detecting offensive language. BERT (Devlin et al., 2019) is one of the most prominent transformer-based pre-trained language models for English and
A similar trend can be observed for other languages. For example, Mubarak et al. (2023) used AraBERT (Antoun et al., 2020), the Arabic version of BERT, for Arabic. Similarly, BERTurk (Schweter, 2020) has been successfully used to detect offensive language in Turkish tweets (Beyhan et al., 2022; Toraman et al., 2022; Arin et al., 2023)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.418, + 0.886, + 0.739 + ], + "angle": 0, + "content": "Annotated datasets are needed to train or fine-tune machine learning models for offensive language detection. A number of datasets have been prepared for different languages and domains and made publicly available (Basile et al., 2019; Zampieri et al., 2020; ElSherief et al., 2021). A limitation of these datasets is that generally each tweet is labeled individually without considering contextual information. There are few studies that consider contextual information. Mosca et al. (2021) investigate the relative contribution of user information features in machine learning models by using explainability techniques. Cécillon et al. (2021) propose a graph-based approach to represent dialog data from chat logs of an online game and use this representation for abusive language detection. Yu et al. (2022) define context as the previous comment in a Reddit conversation thread and show that such contextual information is useful for detecting hate speech." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.743, + 0.886, + 0.92 + ], + "angle": 0, + "content": "We hypothesize that similar contextual information may be useful for offensive language detection in tweets. As a motivating example, consider a reply tweet that states, \"I fully agree.\" The category of this reply tweet (i.e., whether it is offensive or not) depends on the previous context, i.e., the tweet to which it was posted as a reply. 
To investigate the impact of such contextual information on commonly used machine learning-based offensive language detection models, we collected and manually annotated tweet-reply pairs in Turkish, a" + }, + { + "type": "page_number", + "bbox": [ + 0.482, + 0.928, + 0.521, + 0.942 + ], + "angle": 0, + "content": "1543" + }, + { + "type": "footer", + "bbox": [ + 0.218, + 0.945, + 0.78, + 0.958 + ], + "angle": 0, + "content": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 1543-1549" + }, + { + "type": "footer", + "bbox": [ + 0.278, + 0.958, + 0.72, + 0.972 + ], + "angle": 0, + "content": "December 6-10, 2023 ©2023 Association for Computational Linguistics" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.489, + 0.196 + ], + "angle": 0, + "content": "low-resource language with limited datasets. One of the first tweet datasets for detecting offensive language in Turkish was developed by Çöltekin (2020). Recently, Beyhan et al. (2022) and Toraman et al. (2022) also released tweet datasets for Turkish. However, none of these datasets consider contextual information." + }, + { + "type": "text", + "bbox": [ + 0.112, + 0.199, + 0.489, + 0.358 + ], + "angle": 0, + "content": "We chose our domain as the Covid-19 pandemic, which affected our lives in a number of different ways. Pandemics trigger fear and anger in most people, leading to increased use of offensive language. Sharif et al. (2021) studied the detection of hostile statements in the context of the Covid-19 pandemic, and Bor et al. (2023) showed that such offensive language occurred against unvaccinated people during this period. Therefore, we selected unvaccinated people as our target group." 
+ }, + { + "type": "text", + "bbox": [ + 0.113, + 0.36, + 0.489, + 0.519 + ], + "angle": 0, + "content": "The main contributions of this paper are twofold: (i) We collect and manually annotate a Turkish tweet dataset specific to the Covid-19 period and containing contextual information in the form of the replied tweet. (ii) We investigate the impact of such contextual information on the performance of commonly used machine learning-based models for offensive language detection. The dataset and source code are made publicly available for future studies." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.521, + 0.49, + 0.632 + ], + "angle": 0, + "content": "The rest of the paper is organized as follows. While Section 2 examines the collection and annotation of the dataset, Section 3 focuses on the experiments conducted to compare the machine learning models with and without contextual information. Finally, Section 4 discusses the lessons learned." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.647, + 0.216, + 0.661 + ], + "angle": 0, + "content": "2 Dataset" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.673, + 0.489, + 0.786 + ], + "angle": 0, + "content": "We collected a dataset containing replied and reply tweet pairs. A reply tweet is a tweet written in response to another tweet, while a replied tweet is a tweet to which another tweet has replied. Suppose a tweet \\( T1 \\) is posted and then another tweet \\( T2 \\) is posted in response to \\( T1 \\). In this case, \\( T1 \\) is called a replied tweet and \\( T2 \\) is called a reply tweet." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.787, + 0.489, + 0.884 + ], + "angle": 0, + "content": "Our goal was to create a target group-specific dataset to enable the development of models capable of detecting offensive language towards a specific target group. We selected unvaccinated people in the Covid-19 pandemic as the target group for offensive language. 
We examined the period from" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.885, + 0.262 + ], + "angle": 0, + "content": "March 2020, when the virus reached Türkiye, to September 2022, when the pandemic was no longer on the agenda for most people on the planet. We used search by keyword with 16 different queries such as \"aşırkız\" (unvaccinated) and \"aşı olmak istemeyen\" (those who do not want to be vaccinated) to identify relevant tweets. The keywords are phrases meaning \"aşırkız\" (unvaccinated) with different singular/plural forms or spellings due to the Turkish character related issues. The list of all keywords used in this study can be found in the Appendix." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.267, + 0.885, + 0.492 + ], + "angle": 0, + "content": "There were different options to search for the replied and reply tweet pairs. The first one was getting pairs where at least one of the 16 search keywords occurred in the reply tweet. We call this Dataset 1. Another possibility is that these keywords occur in the replied tweet. This case contains two subcases. The first case is to have at least one of these keywords in a replied tweet, which itself is a reply to another tweet. We refer to this case as Dataset 2. Finally, the last case is to have at least one of these keywords in a replied tweet that is not itself a reply to another tweet. This case is called Dataset 3. All three of these datasets were merged to obtain the final dataset." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.497, + 0.884, + 0.657 + ], + "angle": 0, + "content": "Although conversations on Twitter could be arbitrarily long, we only looked at the previous tweet (replied tweet) to avoid unnecessarily complicated data format. In other words, all of the samples in our dataset are a pair. Yet, we could capture any replied-reply couple related to unvaccinated people as long as at least one of the tweets contains one or more of the pre-determined keywords. 
During the search, we collected tweet ID and tweet text information for both the replied and reply tweets." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.663, + 0.885, + 0.919 + ], + "angle": 0, + "content": "Once the collection process was completed, we proceeded with labeling. The objective of the annotation was to obtain a binary label indicating whether or not the reply tweet contains offensive language against unvaccinated people. A few points require explanation here. First, we wanted to keep the task simple so that we could better isolate the impact of the context, so a binary label was the most suitable option. Second, we considered only offensive language against unvaccinated people; in other words, even if a reply tweet was offensive, against immigrants for instance, we labeled it as \"not offensive against unvaccinated people\" instead of \"offensive against unvaccinated people\". This was not because such offensive language was acceptable" + }, + { + "type": "page_footnote", + "bbox": [ + 0.113, + 0.893, + 0.406, + 0.918 + ], + "angle": 0, + "content": "\\(^{1}\\)https://github.com/boun-tabi/CovidOffensiveLanguageUltimateDatasets" + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.929, + 0.521, + 0.953 + ], + "angle": 0, + "content": "1544 2" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.493, + 0.248 + ], + "angle": 0, + "content": "but because we wanted to have a single target group to make the problem more focused, so that the effect of the context could be observed more directly. We focused solely on the offensiveness of the reply tweet, since the context is relevant only for the reply tweet. That is, a pair where the replied tweet was offensive against unvaccinated people but the reply tweet was not offensive is categorized as \"not offensive\", since we are only interested in the reply tweet's behavior." 
+ }, + { + "type": "text", + "bbox": [ + 0.117, + 0.253, + 0.492, + 0.526 + ], + "angle": 0, + "content": "Which cases to consider as offensive language is another crucial point to explain. Situations like swearing and insulting were the most obvious ones. In addition, provocative words like stating that there should be a punishment, such as not being able to go outside or get into closed areas, without stating any exception or an alternative option, for unvaccinated people are included in this label. Also, we want to express that quotations or simply stating an idea without using harmful language, like saying that \"not getting vaccinated is a wrong behavior,\" are not perceived as offensive language. Even if we determine criteria, as we mentioned, for when to consider a tweet as offensive, this field is inevitably subjective for specific examples. This is why at least two people annotated each pair in our dataset." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.534, + 0.492, + 0.92 + ], + "angle": 0, + "content": "The annotation process was carried out as follows. A general guideline for annotation was established and provided to the annotators (i.e., three of the authors of the paper) and a training was performed by using sample examples. Each tweet pair was annotated independently by two annotators and a third annotator was used to resolve inconsistencies. For each tweet pair, there were three label options, namely \"not offensive against unvaccinated people\", \"ambiguous\", and \"offensive against unvaccinated people\". Although it is stated that the goal was obtaining binary labels, three options were given in order to provide more flexibility to the annotators; however, the pairs whose final label is \"ambiguous\" were removed from the final dataset since this would make the primary goal of the study more difficult to interpret which was examining the effect of taking the replied tweet into account. 
During the annotation, completely unrelated cases in the dataset, such as tweets about unvaccinated fruits and vegetables matched by the chosen keywords, were mostly cleaned out, although a limited number of such cases may still exist in the dataset. We wanted to measure inter-annotator agreement" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.085, + 0.885, + 0.133 + ], + "angle": 0, + "content": "for these labels, so we used the F1 and Cohen's Kappa scores. We obtained \\(55.22\\%\\) and \\(46.26\\%\\), respectively, for these metrics." + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.136, + 0.887, + 0.683 + ], + "angle": 0, + "content": "After obtaining the annotations by two annotators for each pair of tweets, the annotations were examined. If they were consistent, their label was chosen as the final label. If the final label was \"ambiguous\", the pair was removed; otherwise, it was added to the final dataset. If there was an inconsistency in the form of one annotator choosing \"not offensive\" and the other choosing \"offensive\", the case was treated as ambiguous and was consequently removed as well. For the inconsistencies where one annotator chose \"ambiguous\", the third annotator examined the tweet pair and made the final decision: if a label other than \"ambiguous\" was chosen, it was selected as the final label; if not, the pair was removed. After several hours of this procedure, pairs with binary labels were obtained. In total, we obtained 28808 pairs. While 13478 of them came from Dataset 1, Datasets 2 and 3 contributed 1515 and 13815 pairs, respectively. The final binary dataset has 27219 examples that are not offensive against unvaccinated people, denoted with 0, and 1589 examples that are offensive against unvaccinated people, denoted with 2, since 1 represented the ambiguous case. The dataset is inevitably imbalanced, since \\(94.48\\%\\) of the pairs are labeled as 0. 
Inter-annotator agreement for the final version of the dataset was measured using the F1 score and Cohen's Kappa score. This time they were calculated as \\(95.21\\%\\) and \\(88.97\\%\\), respectively, which are significantly better than for the initial version of the dataset. The final version of the dataset, containing the replied and reply tweet IDs as well as the manual annotations, is made publicly available for future studies.\\(^{2}\\)" + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.701, + 0.762, + 0.718 + ], + "angle": 0, + "content": "3 Experiments and Results" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.732, + 0.885, + 0.877 + ], + "angle": 0, + "content": "After completing the annotation of the dataset, we used it to train and evaluate various machine learning models to detect offensive language against unvaccinated people. We randomly selected \\(20\\%\\) of the dataset as the test set. For each algorithm we used, we examined two different scenarios. In the first, we used only the reply tweet, while in the second, we studied the impact of using the replied tweet in addition to the reply tweet on our models." + }, + { + "type": "page_footnote", + "bbox": [ + 0.509, + 0.892, + 0.763, + 0.906 + ], + "angle": 0, + "content": "\\(^{2}\\)https://github.com/boun-tabi/" + }, + { + "type": "page_footnote", + "bbox": [ + 0.51, + 0.906, + 0.802, + 0.918 + ], + "angle": 0, + "content": "CovidOffensiveLanguageUltimateDatasets" + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.928, + 0.521, + 0.939 + ], + "angle": 0, + "content": "1545" + }, + { + "type": "page_number", + "bbox": [ + 0.493, + 0.941, + 0.505, + 0.952 + ], + "angle": 0, + "content": "3" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.163, + 0.082, + 0.44, + 0.267 + ], + "angle": 0, + "content": "
<table>
<tr><th>Method</th><th>Prec</th><th>Rec</th><th>F1</th></tr>
<tr><td>KNN (1)</td><td>20.56</td><td>41.12</td><td>27.41</td></tr>
<tr><td>KNN (2)</td><td>20.84</td><td>40.79</td><td>27.59</td></tr>
<tr><td>LR (1)</td><td>50.00</td><td>39.80</td><td>44.32</td></tr>
<tr><td>LR (2)</td><td>44.72</td><td>41.78</td><td>43.20</td></tr>
<tr><td>MNB (1)</td><td>65.32</td><td>26.64</td><td>37.85</td></tr>
<tr><td>MNB (2)</td><td>45.65</td><td>34.54</td><td>39.32</td></tr>
<tr><td>SVM (1)</td><td>50.76</td><td>44.08</td><td>47.18</td></tr>
<tr><td>SVM (2)</td><td>51.46</td><td>34.87</td><td>41.57</td></tr>
<tr><td>RF (1)</td><td>38.51</td><td>39.14</td><td>38.82</td></tr>
<tr><td>RF (2)</td><td>43.25</td><td>35.85</td><td>39.21</td></tr>
</table>
" + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.277, + 0.49, + 0.337 + ], + "angle": 0, + "content": "Table 1: Results for traditional models. For each model, (1) corresponds to the first scenario where only the reply tweet is used and (2) corresponds to the second scenario where both the reply and the replied tweet are used." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.362, + 0.461, + 0.379 + ], + "angle": 0, + "content": "3.1 Traditional Machine Learning Models" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.384, + 0.49, + 0.705 + ], + "angle": 0, + "content": "Simple machine learning algorithms might perform quite good for certain tasks. Therefore, we started with simple algorithms such as Logistic Regression (LR), K-Nearest Neighbors (KNN), and Multinomial Naive Bayes (MNB). Then we also used Support Vector Machines (SVM) and Random Forest (RF). Since our dataset was imbalanced, we used downsampling to increase the performance of our models. In other words, we randomly selected a subset for the not offensive class while using all samples for the offensive class since it already had a limited number of samples. We had 1285 positive samples in the training set, so we decreased the not offensive class to 4500 samples, since too much reduction would cause a data scarcity problem. We used a tfidf based vector representation for the tweets. The performance of the commonly used traditional machine learning algorithms is given in Table 1 with the macro-averaged F1 score, precision, and recall." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.707, + 0.49, + 0.835 + ], + "angle": 0, + "content": "There are two main observations we can make with these results. These simple models are not able to perform well on this task. Even if we had used a majority classifier, we would obtain \\(50.0\\%\\) recall, \\(47.24\\%\\) precision and \\(48.58\\%\\) F1 score. 
In addition, the inclusion of information from the replied tweets does not have a significant impact on the performance of the models and behaves more like noise." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.849, + 0.341, + 0.866 + ], + "angle": 0, + "content": "3.2 Deep Learning Models" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.872, + 0.49, + 0.919 + ], + "angle": 0, + "content": "Deep learning models dominate natural language processing. In particular, the transformer-based ones (Vaswani et al., 2017) like BERT (De" + }, + { + "type": "table", + "bbox": [ + 0.542, + 0.082, + 0.853, + 0.135 + ], + "angle": 0, + "content": "
<table>
<tr><th>Method</th><th>Prec</th><th>Rec</th><th>F1</th></tr>
<tr><td>BERTurk (1)</td><td>65.73</td><td>82.68</td><td>70.28</td></tr>
<tr><td>BERTurk (2)</td><td>70.11</td><td>79.03</td><td>73.57</td></tr>
</table>
" + }, + { + "type": "table_caption", + "bbox": [ + 0.508, + 0.145, + 0.884, + 0.203 + ], + "angle": 0, + "content": "Table 2: Results for deep learning models. (1) corresponds to the first scenario where only the reply tweet is used and (2) corresponds to the second scenario where both the reply and the replied tweet are used." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.248, + 0.885, + 0.538 + ], + "angle": 0, + "content": "vlin et al., 2019) obtained incredible success in the last years. Therefore, we decided to look at the performance of the Turkish version of the BERT model called BERTurk (Schweter, 2020) with and without replied tweet information. For the single tweet setting, we followed the classical procedure for fine-tuning where we used binary cross-entropy with Adam optimizer (Kingma and Ba, 2015) with \\(5x10^{-5}\\) learning rate. We did the hyperparameter optimization by looking at the validation set F1 score. For the case of using two tweets (the reply and replied tweet), the only difference was creating a longer input string by combining the two tweets in the form of \"Önceki tweet: replied tweet Cevap: reply tweet\" (in English, \"Previous tweet: replied tweet Reply: reply tweet\"). The results (macro-averaged scores) obtained on the test set are summarized for the two cases in Table 2." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.55, + 0.884, + 0.726 + ], + "angle": 0, + "content": "Interestingly, this time the model that uses both the reply and the replied tweet performed better in terms of F1 score, yet the effect of taking context into account is still limited. Even though precision improves, recall drops. The English translation of an example to explain this phenomenon is provided below. In this example, the reply tweet is offensive, while the replied tweet is not offensive. In this case, including the replied tweet as contextual information to classify the reply tweet misleads the model." 
+ }, + { + "type": "text", + "bbox": [ + 0.509, + 0.737, + 0.884, + 0.768 + ], + "angle": 0, + "content": "- Replied Tweet: \"Vaccination opponents misread the National Anthem.\"" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.78, + 0.885, + 0.828 + ], + "angle": 0, + "content": "- Reply Tweet: \"Go away the army of brainless people to your village, you can't live in the metropolis without a vaccine.\"" + }, + { + "type": "list", + "bbox": [ + 0.508, + 0.737, + 0.885, + 0.828 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.84, + 0.884, + 0.919 + ], + "angle": 0, + "content": "For more example tweets where the inclusion of context (i.e., the replied tweet) is necessary for the correct classification of the reply tweet and where context could mislead the classifier, see the Appendix." + }, + { + "type": "page_number", + "bbox": [ + 0.482, + 0.929, + 0.521, + 0.953 + ], + "angle": 0, + "content": "1546 4" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.114, + 0.085, + 0.248, + 0.099 + ], + "angle": 0, + "content": "4 Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.112, + 0.493, + 0.53 + ], + "angle": 0, + "content": "We prepared an offensive language dataset for Turkish, where the number of such datasets is very limited. Unlike most other tweet datasets where each tweet is considered individually, we included the replied tweet as contextual information and investigated how this information affects the performance of commonly used machine learning models. Contrary to our expectation, our results showed that this resulted in only a slight improvement in the F1-score for some models and did not significantly improve the performance of the studied models for offensive language detection in general. In theory, the previous tweet appears to contain important information. However, in analyzing our dataset, we found that most reply tweets have only a weak relationship to the replied tweet in terms of meaning. 
Moreover, interaction with other tweets is dominated by the use of other features on Twitter, such as \"like\" or \"retweet.\" Consequently, the use of information about previous tweets did not contribute much to offensive language detection in this study. Nonetheless, attempting to develop models specifically designed to consider information about previous tweets could lead to better performance and represents a promising future research direction." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.544, + 0.22, + 0.559 + ], + "angle": 0, + "content": "Limitations" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.572, + 0.49, + 0.748 + ], + "angle": 0, + "content": "While we tried various methods for detecting offensive language with and without the replied tweet, we have not focused on developing a specific model that makes the best use of the previous (i.e., replied) tweet. Our goal was to investigate the impact of contextual information on the performance of commonly used machine learning-based models. Therefore, even though we did not obtain significant improvements with contextual information, further research focusing on this subject is a promising direction to follow." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.75, + 0.49, + 0.83 + ], + "angle": 0, + "content": "We examined the use of the previous tweet for only a single target group and language due to the laborious nature of the manual annotation process and time limitations. The dataset can be expanded with other target groups and languages in the future." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.844, + 0.267, + 0.859 + ], + "angle": 0, + "content": "Ethics Statement" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.872, + 0.49, + 0.92 + ], + "angle": 0, + "content": "Offensive language detection systems could be very useful in real-life applications. 
Because machine learning-based models are guided mainly by the data they" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.885, + 0.198 + ], + "angle": 0, + "content": "use, the annotation of datasets is an essential step, which ought to be carried out responsibly. Although we used multiple annotators for the labeling process, better strategies could still be developed, since some examples of offensive language are highly subjective. The annotated data is shared based on Twitter's terms of use." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.211, + 0.683, + 0.227 + ], + "angle": 0, + "content": "Acknowledgements" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.238, + 0.885, + 0.334 + ], + "angle": 0, + "content": "This work is partially supported by the EU-funded project entitled \"Utilizing Digital Technology for Social Cohesion, Positive Messaging and Peace by Boosting Collaboration, Exchange and Solidarity\" and by the Boğaziçi University Research Fund under Grant Number 16903." + }, + { + "type": "title", + "bbox": [ + 0.511, + 0.363, + 0.61, + 0.377 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.386, + 0.887, + 0.479 + ], + "angle": 0, + "content": "Wissam Antoun, Fady Baly, and Hazem Hajj. 2020. AraBERT: Transformer-based model for Arabic language understanding. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection, pages 9–15, Marseille, France. European Language Resources Association." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.49, + 0.886, + 0.557 + ], + "angle": 0, + "content": "İnanç Arın, Zeynep Işık, Seçilay Kugal, Somaiyeh Dehghan, Arzucan Özgür, and Berrin Yanıkoğlu. 2023. SIU2023-NST - Hate Speech Detection Contest. In 31st Signal Processing and Communications Applications Conference (SIU), pages 1-4." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.568, + 0.886, + 0.661 + ], + "angle": 0, + "content": "Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela Sanguinetti. 2019. Semeval-2019 Task 5: Multilingual detection of hate speech against immigrants and women in Twitter. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 54-63." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.672, + 0.886, + 0.75 + ], + "angle": 0, + "content": "Fatih Beyhan, Buse Çarık, İnç Arın, Ayşecan Terzioglu, Berrin Yanıkoğlu, and Reyyan Yeniterzi. 2022. A Turkish hate speech dataset and detection system. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 4177-4185." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.763, + 0.886, + 0.815 + ], + "angle": 0, + "content": "Alexander Bor, Frederik Jorgensen, and Michael Bang Petersen. 2023. Discriminatory attitudes against unvaccinated people during the pandemic. Nature, 613(7945):704-711." + }, + { + "type": "ref_text", + "bbox": [ + 0.51, + 0.827, + 0.885, + 0.879 + ], + "angle": 0, + "content": "Noé Cécillon, Vincent Labatut, Richard Dufour, and Georges Linares. 2021. Graph embeddings for abusive language detection. SN Computer Science, 2:1-15." + }, + { + "type": "ref_text", + "bbox": [ + 0.51, + 0.892, + 0.885, + 0.92 + ], + "angle": 0, + "content": "Çagrı Çoltekin. 2020. A corpus of Turkish offensive language on social media. 
In Proceedings of the" + }, + { + "type": "list", + "bbox": [ + 0.51, + 0.386, + 0.887, + 0.92 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.482, + 0.928, + 0.521, + 0.953 + ], + "angle": 0, + "content": "1547 5" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.132, + 0.086, + 0.49, + 0.126 + ], + "angle": 0, + "content": "Twelfth Language Resources and Evaluation Conference, pages 6174-6184, Marseille, France. European Language Resources Association." + }, + { + "type": "text", + "bbox": [ + 0.116, + 0.135, + 0.49, + 0.203 + ], + "angle": 0, + "content": "Wenliang Dai, Tiezheng Yu, Zihan Liu, and Pascale Fung. 2020. Kungfupanda at SemEval-2020 Task 12: BERT-Based Multi-TaskLearning for Offensive Language Detection. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 2060-2066." + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.211, + 0.49, + 0.33 + ], + "angle": 0, + "content": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics." + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.339, + 0.49, + 0.419 + ], + "angle": 0, + "content": "Mai ElSherief, Caleb Ziems, David Muchlinski, Vaishnavi Anupindi, Jordyn Seybolt, Munmun De Choudhury, and Diyi Yang. 2021. Latent hatred: A benchmark for understanding implicit hate speech. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 345-363." + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.428, + 0.49, + 0.495 + ], + "angle": 0, + "content": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. 
In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings." + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.503, + 0.489, + 0.53 + ], + "angle": 0, + "content": "Law-Insider. 2023. Offensive language definition | law insider. Accessed on June 18, 2023." + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.539, + 0.49, + 0.607 + ], + "angle": 0, + "content": "Edoardo Mosca, Maximilian Wich, and Georg Groh. 2021. Understanding and interpreting the impact of user context in hate speech detection. In Proceedings of the Ninth International Workshop on Natural Language Processing for Social Media, pages 91-102." + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.615, + 0.49, + 0.709 + ], + "angle": 0, + "content": "Marzieh Mozafari, Reza Farahbakhsh, and Noel Crespi. 2020. A BERT-based transfer learning approach for hate speech detection in online social media. In Complex Networks and Their Applications VIII: Volume 1 Proceedings of the Eighth International Conference on Complex Networks and Their Applications COMPLEX NETWORKS 2019 8, pages 928-940. Springer." + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.717, + 0.49, + 0.771 + ], + "angle": 0, + "content": "Hamdy Mubarak, Sabit Hassan, and Shammur Absar Chowdhury. 2023. Emojis as anchors to detect Arabic offensive language and hate speech. Natural Language Engineering, pages 1-22." + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.779, + 0.489, + 0.807 + ], + "angle": 0, + "content": "Stefan Schweter. 2020. BERTurk - BERT models for Turkish." + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.816, + 0.49, + 0.87 + ], + "angle": 0, + "content": "Omar Sharif, Eftekhar Hossain, and Mohammed Moshiul Hoque. 2021. Combating hostility: Covid-19 fake news and hostile post detection in social media. arXiv preprint arXiv:2101.03291." 
+ }, + { + "type": "text", + "bbox": [ + 0.115, + 0.878, + 0.49, + 0.919 + ], + "angle": 0, + "content": "Cagri Toraman, Furkan Şahinuc, and Eyup Yilmaz. 2022. Large-scale hate speech detection with cross-domain transfer. In Proceedings of the Thirteenth" + }, + { + "type": "text", + "bbox": [ + 0.527, + 0.086, + 0.885, + 0.126 + ], + "angle": 0, + "content": "Language Resources and Evaluation Conference, pages 2215-2225, Marseille, France. European Language Resources Association." + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.135, + 0.885, + 0.202 + ], + "angle": 0, + "content": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30." + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.21, + 0.885, + 0.316 + ], + "angle": 0, + "content": "Xinchen Yu, Eduardo Blanco, and Lingzi Hong. 2022. Hate speech and counter speech detection: Conversational context does matter. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5918-5930, Seattle, United States. Association for Computational Linguistics." + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.325, + 0.885, + 0.431 + ], + "angle": 0, + "content": "Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, and Cagri Öltekin. 2020. SemEval-2020 Task 12: Multilingual Offensive Language Identification in Social Media (OffensEval 2020). In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1425-1447. Association for Computational Linguistics." 
+ }, + { + "type": "title", + "bbox": [ + 0.51, + 0.441, + 0.828, + 0.473 + ], + "angle": 0, + "content": "A Tweet Pair Examples Regarding Context Information" + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.483, + 0.882, + 0.515 + ], + "angle": 0, + "content": "A.1 An example where context is necessary for correct classification of the reply tweet" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.52, + 0.692, + 0.535 + ], + "angle": 0, + "content": "The English translation:" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.537, + 0.882, + 0.599 + ], + "angle": 0, + "content": "Replied Tweet: If we are closed at home again because of those who are not vaccinated, you will see curses that you have not seen so far in this account.." + }, + { + "type": "text", + "bbox": [ + 0.529, + 0.601, + 0.656, + 0.616 + ], + "angle": 0, + "content": "Reply Tweet: +1" + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.627, + 0.838, + 0.657 + ], + "angle": 0, + "content": "A.2 An example where context does not matter" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.664, + 0.66, + 0.679 + ], + "angle": 0, + "content": "English translation:" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.68, + 0.882, + 0.775 + ], + "angle": 0, + "content": "Replied Tweet: It may be against the necessity of vaccination, it may be thought that the mask is not protective; however, there is no human side of walking as a group on a girl who works as a cashier under difficult conditions, entering a closed area without a mask, and causing fear and sadness." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.777, + 0.884, + 0.841 + ], + "angle": 0, + "content": "Reply Tweet: Those who are not vaccinated + those who do not wear masks. I seriously don't understand what's wrong with this team. This team is seriously litmus of intelligence." 
+ }, + { + "type": "title", + "bbox": [ + 0.51, + 0.851, + 0.872, + 0.898 + ], + "angle": 0, + "content": "A.3 An example where reply is not offensive but replied might mislead since it is offensive" + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.904, + 0.659, + 0.919 + ], + "angle": 0, + "content": "English translation:" + }, + { + "type": "footer", + "bbox": [ + 0.482, + 0.929, + 0.521, + 0.94 + ], + "angle": 0, + "content": "1548" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.941, + 0.505, + 0.952 + ], + "angle": 0, + "content": "6" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.114, + 0.085, + 0.488, + 0.132 + ], + "angle": 0, + "content": "Replied Tweet: Prof. Bingür Sonmez: Those who say they will not get vaccinated are traitors, we will not allow them to get married with our girls" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.134, + 0.488, + 0.196 + ], + "angle": 0, + "content": "Reply Tweet: At the point where the cardiovascular surgeon has come, we will not allow traitors who do not get vaccinated to get married with our girls." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.208, + 0.479, + 0.255 + ], + "angle": 0, + "content": "A.4 An example where reply is offensive but replied might mislead since it is not offensive" + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.262, + 0.263, + 0.275 + ], + "angle": 0, + "content": "English translation:" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.278, + 0.486, + 0.307 + ], + "angle": 0, + "content": "Replied Tweet: Vaccination opponents misread the National Anthem" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.31, + 0.488, + 0.356 + ], + "angle": 0, + "content": "Reply Tweet: Go away the army of brainless people to your village, you can't live in the metropolis without a vaccine." 
+ }, + { + "type": "title", + "bbox": [ + 0.115, + 0.369, + 0.455, + 0.4 + ], + "angle": 0, + "content": "B Keywords used for Getting Related Tweets" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.412, + 0.489, + 0.524 + ], + "angle": 0, + "content": "The following keywords were used in our search: aşışiz, asışiz, aşışizlar, asışizlar, aş1 olmayan, as1 olmayan, aş1 olmayanlar, as1 olmayanlar, aş1 olmak istemeyen, as1 olmak istemeyen, aş1 olmak istemeyenler, as1 olmak istemeyenler, aş1 yaptır-mayan, as1 yaptır-mayan, aş1 yaptır-mayanlar, as1 yaptır-mayanlar." + }, + { + "type": "page_number", + "bbox": [ + 0.482, + 0.929, + 0.521, + 0.952 + ], + "angle": 0, + "content": "1549 7" + } + ] +] \ No newline at end of file diff --git a/2023/A Dataset for Investigating the Impact of Context for Offensive Language Detection in Tweets/1ce470ab-e396-4125-bc86-502c385ac36b_origin.pdf b/2023/A Dataset for Investigating the Impact of Context for Offensive Language Detection in Tweets/1ce470ab-e396-4125-bc86-502c385ac36b_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..552ec68a73615a42a247fff1efa81de50a564c83 --- /dev/null +++ b/2023/A Dataset for Investigating the Impact of Context for Offensive Language Detection in Tweets/1ce470ab-e396-4125-bc86-502c385ac36b_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e895820712841448e5ae89574a33e2d40a35e8b606a49a150643037dda6dd81d +size 175179 diff --git a/2023/A Dataset for Investigating the Impact of Context for Offensive Language Detection in Tweets/full.md b/2023/A Dataset for Investigating the Impact of Context for Offensive Language Detection in Tweets/full.md new file mode 100644 index 0000000000000000000000000000000000000000..8cceca4c665b9a3b9844ba653ab75a6554ee791a --- /dev/null +++ b/2023/A Dataset for Investigating the Impact of Context for Offensive Language Detection in Tweets/full.md @@ -0,0 +1,186 @@ +# A Dataset for Investigating the Impact of Context 
for Offensive Language Detection in Tweets

Musa Nuri İhtiyar, Ömer Özdemir, Mustafa Emre Erengül, Arzucan Özgür

{musa.ihtiyar, omer.ozdemir1, mustafa.erengul, arzucan.ozgur}@boun.edu.tr

Department of Computer Engineering, Boğaziçi University

# Abstract

Offensive language detection is crucial in natural language processing (NLP). We investigated the importance of context for detecting such language in reply tweets on Twitter, where the use of offensive language is widespread. We collected a Turkish tweet dataset where the target group was unvaccinated people during the Covid period. Tweets in the dataset were enriched with contextual information by adding the original tweet to which a particular tweet was posted as a reply. The dataset, which includes over 28,000 tweet-reply pairs, was manually labeled by human annotators and made publicly available. In addition, we compared the performance of different machine learning models with and without contextual information. Our results show that this type of contextual information was not very useful in improving the performance of the models in general, although it slightly increased the macro-averaged F1-score of certain models.

# 1 Introduction

Humans can communicate through language, which enables them to engage in many useful activities, yet language might also be used for destructive purposes. One of the most critical examples of this is offensive language, which can be defined as "any utterance which is blasphemous, obscene, indecent, insulting, hurtful, disgusting, morally repugnant, or which breaches commonly accepted standards of decent and proper speech" (Law-Insider, 2023).

The use of offensive language can occur on a variety of platforms, but is particularly common on online platforms such as Twitter. In recent years, several approaches have been proposed to automatically detect offensive language in tweets. 
Finetuning language models pre-trained with extensive data is considered the current state-of-the-art for detecting offensive language. BERT (Devlin et al., 2019) is one of the most prominent transformer-based pre-trained language models for English and + +has also been shown to be very effective in detecting offensive language (Dai et al., 2020; Zampieri et al., 2020; Mozafari et al., 2020). A similar trend can be observed for other languages. For example, Mubarak et al. (2023) used AraBERT (Antoun et al., 2020), the Arabic version of BERT, for Arabic. Similarly, BERTurk (Schweter, 2020) has been successfully used to detect offensive language in Turkish tweets (Beyhan et al., 2022; Toraman et al., 2022; Arin et al., 2023). + +Annotated datasets are needed to train or fine-tune machine learning models for offensive language detection. A number of datasets have been prepared for different languages and domains and made publicly available (Basile et al., 2019; Zampieri et al., 2020; ElSherief et al., 2021). A limitation of these datasets is that generally each tweet is labeled individually without considering contextual information. There are few studies that consider contextual information. Mosca et al. (2021) investigate the relative contribution of user information features in machine learning models by using explainability techniques. Cécillon et al. (2021) propose a graph-based approach to represent dialog data from chat logs of an online game and use this representation for abusive language detection. Yu et al. (2022) define context as the previous comment in a Reddit conversation thread and show that such contextual information is useful for detecting hate speech. + +We hypothesize that similar contextual information may be useful for offensive language detection in tweets. As a motivating example, consider a reply tweet that states, "I fully agree." 
The category of this reply tweet (i.e., whether it is offensive or not) depends on the previous context, i.e., the tweet to which it was posted as a reply. To investigate the impact of such contextual information on commonly used machine learning-based offensive language detection models, we collected and manually annotated tweet-reply pairs in Turkish, a low-resource language with limited datasets. One of the first tweet datasets for detecting offensive language in Turkish was developed by Çöltekin (2020). Recently, Beyhan et al. (2022) and Toraman et al. (2022) also released tweet datasets for Turkish. However, none of these datasets consider contextual information.

We chose the Covid-19 pandemic, which affected our lives in a number of different ways, as our domain. Pandemics trigger fear and anger in most people, leading to increased use of offensive language. Sharif et al. (2021) studied the detection of hostile statements in the context of the Covid-19 pandemic, and Bor et al. (2023) showed that such offensive language occurred against unvaccinated people during this period. Therefore, we selected unvaccinated people as our target group.

The main contributions of this paper are twofold: (i) We collect and manually annotate a Turkish tweet dataset that is specific to the Covid-19 period and contains contextual information in the form of the replied tweet. (ii) We investigate the impact of such contextual information on the performance of commonly used machine learning-based models for offensive language detection. The dataset and source code are made publicly available for future studies.

The rest of the paper is organized as follows. Section 2 describes the collection and annotation of the dataset, while Section 3 presents the experiments conducted to compare the machine learning models with and without contextual information. Finally, Section 4 discusses the lessons learned.
# 2 Dataset

We collected a dataset containing replied and reply tweet pairs. A reply tweet is a tweet written in response to another tweet, while a replied tweet is a tweet to which another tweet has replied. Suppose a tweet $T1$ is posted and then another tweet $T2$ is posted in response to $T1$. In this case, $T1$ is called a replied tweet and $T2$ is called a reply tweet.

Our goal was to create a target group-specific dataset to enable the development of models capable of detecting offensive language towards a specific target group. We selected unvaccinated people during the Covid-19 pandemic as the target group for offensive language. We examined the period from March 2020, when the virus reached Türkiye, to September 2022, when the pandemic was no longer on the agenda for most people. We used keyword search with 16 different queries, such as "aşısız" (unvaccinated) and "aşı olmak istemeyen" (those who do not want to be vaccinated), to identify relevant tweets. The keywords are phrases meaning "unvaccinated" in different singular/plural forms or spellings, the alternative spellings being due to Turkish character-related issues. The list of all keywords used in this study can be found in the Appendix.

There were different options for collecting the replied and reply tweet pairs. The first was getting pairs where at least one of the 16 search keywords occurred in the reply tweet; we call this Dataset 1. Another possibility is that these keywords occur in the replied tweet, which covers two subcases. The first subcase is that at least one of these keywords occurs in a replied tweet that is itself a reply to another tweet; we refer to this as Dataset 2. The last subcase is that at least one of these keywords occurs in a replied tweet that is not itself a reply to another tweet; we call this Dataset 3. All three of these datasets were merged to obtain the final dataset.
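The three collection cases above can be sketched as a small routine. This is a minimal illustration, not the actual collection code: the function and variable names are ours, and the keyword list shows only two of the 16 queries (with reconstructed Turkish spelling).

```python
# Sketch of the Dataset 1/2/3 assignment described above (hypothetical
# helper names; the real pipeline queried Twitter search with 16 keywords).

KEYWORDS = ["aşısız", "aşı olmayan"]  # illustrative subset of the queries

def contains_keyword(text):
    """True if the tweet text contains at least one search keyword."""
    lowered = text.lower()
    return any(kw in lowered for kw in KEYWORDS)

def categorize_pair(replied_text, reply_text, replied_is_itself_a_reply):
    """Assign a (replied, reply) pair to Dataset 1, 2, or 3, or discard it.

    Dataset 1: a keyword occurs in the reply tweet.
    Dataset 2: a keyword occurs in the replied tweet, which is itself a reply.
    Dataset 3: a keyword occurs in the replied tweet, which is not a reply.
    (Checking the reply tweet first is our choice; the paper does not
    specify a precedence for overlapping cases.)
    """
    if contains_keyword(reply_text):
        return 1
    if contains_keyword(replied_text):
        return 2 if replied_is_itself_a_reply else 3
    return None  # pair not matched by any keyword
```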
Although conversations on Twitter can be arbitrarily long, we only looked at the previous tweet (the replied tweet) to avoid an unnecessarily complicated data format. In other words, each sample in our dataset is a pair. Still, we could capture any replied-reply pair related to unvaccinated people as long as at least one of the tweets contained one or more of the pre-determined keywords. During the search, we collected the tweet ID and tweet text for both the replied and reply tweets.

Once the collection process was completed, we proceeded with labeling. The objective of the annotation was to obtain a binary label indicating whether or not the reply tweet contains offensive language against unvaccinated people. A few points about this step deserve explanation. First, we decided to keep the task simple so that we could better understand the impact of the context; this made a binary label the best option, and we only considered offensive language against unvaccinated people. In other words, even if a reply tweet was offensive against another group, immigrants for instance, we labeled it as "not offensive against unvaccinated people" rather than "offensive against unvaccinated people". This was not because such offensive language is acceptable, but because we wanted a single target group to keep the problem focused, so that the effect of the context could be seen more directly. We focused solely on the offensiveness of the reply tweet, since the context is relevant only for the reply tweet. That is, a pair where the replied tweet was offensive against unvaccinated people but the reply tweet was not offensive is categorized as "not offensive", since we are only interested in the reply tweet's behavior.

Which cases to consider as offensive language is another crucial point. Swearing and insults were the most obvious ones.
In addition, provocative statements, such as calling for punishments for unvaccinated people (e.g., not being allowed to go outside or enter closed areas) without stating any exception or alternative option, were included under this label. On the other hand, quotations, or simply stating an opinion without using harmful language (e.g., saying that "not getting vaccinated is a wrong behavior"), were not considered offensive language. Even with such criteria, the decision is inevitably subjective for specific examples, which is why at least two people annotated each pair in our dataset.

The annotation process was carried out as follows. A general annotation guideline was established and provided to the annotators (three of the authors of this paper), who were trained on sample examples. Each tweet pair was annotated independently by two annotators, and a third annotator was used to resolve inconsistencies. For each tweet pair, there were three label options, namely "not offensive against unvaccinated people", "ambiguous", and "offensive against unvaccinated people". Although the goal was to obtain binary labels, three options were given to provide more flexibility to the annotators; however, pairs whose final label was "ambiguous" were removed from the final dataset, since they would make the primary goal of the study, examining the effect of taking the replied tweet into account, more difficult to interpret. During annotation, totally unrelated cases caused by the chosen keywords, such as tweets about unvaccinated fruits and vegetables, were mostly cleaned out, although a limited number of such cases may still exist in the dataset. To measure inter-annotator agreement for these labels, we used the F1 and Cohen's Kappa scores, obtaining $55.22\%$ and $46.26\%$, respectively.
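The two-annotator scheme with a third tie-breaking annotator can be sketched as a small resolution routine. This is a minimal sketch under our own naming (`resolve` is not from the paper); the drop rules follow the adjudication procedure the paper describes, with labels 0 = not offensive against unvaccinated people, 1 = ambiguous, 2 = offensive against unvaccinated people.

```python
# Sketch of the label adjudication: 0 = not offensive, 1 = ambiguous,
# 2 = offensive. Hypothetical helper; mirrors the paper's stated rules.

AMBIGUOUS = 1

def resolve(label_a, label_b, third_label=None):
    """Return the final binary label (0 or 2), or None if the pair is dropped.

    - Agreement on a non-ambiguous label: keep that label.
    - Agreement on "ambiguous": drop the pair.
    - Direct 0 vs 2 disagreement: drop the pair.
    - One annotator chose "ambiguous": a third annotator decides; if the
      third label is also "ambiguous" (or missing), drop the pair.
    """
    if label_a == label_b:
        return None if label_a == AMBIGUOUS else label_a
    if AMBIGUOUS not in (label_a, label_b):
        return None  # 0 vs 2 conflict
    if third_label is None or third_label == AMBIGUOUS:
        return None
    return third_label
```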
After the annotations by the two annotators were obtained for each pair of tweets, they were examined. If the two labels agreed, the agreed label was chosen as the final one; if that label was "ambiguous", the pair was removed, and otherwise it was added to the final dataset. If the annotators disagreed, with one choosing "not offensive" and the other "offensive", the case was considered ambiguous and removed as well. For disagreements where one annotator chose "ambiguous", the third annotator examined the tweet pair and made the final decision: if a label other than "ambiguous" was chosen, it became the final label; otherwise, the pair was removed. After several hours of this procedure, pairs with binary labels were obtained. In total, we obtained 28808 pairs, of which 13478 came from Dataset 1, while Datasets 2 and 3 contributed 1515 and 13815 pairs, respectively. The final binary dataset has 27219 examples that are not offensive against unvaccinated people, denoted by 0, and 1589 examples that are offensive against unvaccinated people, denoted by 2, since 1 represented the ambiguous case. The dataset is inevitably imbalanced, since $94.48\%$ of the pairs are labeled as 0. Inter-annotator agreement for the final version of the dataset was again measured using the F1 and Cohen's Kappa scores, this time $95.21\%$ and $88.97\%$, respectively, which is significantly better than for the initial version. The final version of the dataset, containing the replied and reply tweet IDs as well as the manual annotations, is made publicly available for future studies.[2]

# 3 Experiments and Results

After completing the annotation of the dataset, we used it to train and evaluate various machine learning models for detecting offensive language against unvaccinated people. We randomly selected $20\%$ of the dataset as the test set.
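The random test split above, together with the majority-class downsampling used for the traditional models in Section 3.1 (all 1285 offensive training pairs kept, the not-offensive class reduced to 4500 samples), can be sketched as follows. The function and its signature are ours, not the paper's code.

```python
# Sketch of the class rebalancing for the traditional models: keep every
# minority-class example, sample the majority class down to a target size.
import random

def downsample(pairs, labels, majority_label=0, target_majority=4500, seed=42):
    """Return a rebalanced copy of (pairs, labels) with the majority class
    reduced to `target_majority` randomly chosen examples."""
    rng = random.Random(seed)
    majority = [i for i, y in enumerate(labels) if y == majority_label]
    minority = [i for i, y in enumerate(labels) if y != majority_label]
    kept = rng.sample(majority, min(target_majority, len(majority))) + minority
    rng.shuffle(kept)
    return [pairs[i] for i in kept], [labels[i] for i in kept]
```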
For each algorithm, we examined two different scenarios: in the first, we used only the reply tweet, while in the second, we used the replied tweet in addition to the reply tweet, in order to study the impact of this contextual information on our models.
| Method | Prec | Rec | F1 |
| --- | --- | --- | --- |
| KNN (1) | 20.56 | 41.12 | 27.41 |
| KNN (2) | 20.84 | 40.79 | 27.59 |
| LR (1) | 50.00 | 39.80 | 44.32 |
| LR (2) | 44.72 | 41.78 | 43.20 |
| MNB (1) | 65.32 | 26.64 | 37.85 |
| MNB (2) | 45.65 | 34.54 | 39.32 |
| SVM (1) | 50.76 | 44.08 | 47.18 |
| SVM (2) | 51.46 | 34.87 | 41.57 |
| RF (1) | 38.51 | 39.14 | 38.82 |
| RF (2) | 43.25 | 35.85 | 39.21 |
Table 1: Results for traditional models. For each model, (1) corresponds to the first scenario where only the reply tweet is used and (2) corresponds to the second scenario where both the reply and the replied tweet are used.

# 3.1 Traditional Machine Learning Models

Simple machine learning algorithms can perform quite well on certain tasks. Therefore, we started with simple algorithms such as Logistic Regression (LR), K-Nearest Neighbors (KNN), and Multinomial Naive Bayes (MNB). We also used Support Vector Machines (SVM) and Random Forest (RF). Since our dataset was imbalanced, we used downsampling to improve the performance of our models: we randomly selected a subset of the not offensive class while keeping all samples of the offensive class, since the latter already had a limited number of samples. With 1285 positive samples in the training set, we reduced the not offensive class to 4500 samples, since reducing it further would cause a data scarcity problem. We used a TF-IDF based vector representation for the tweets. The performance of the commonly used traditional machine learning algorithms is given in Table 1 with the macro-averaged precision, recall, and F1 score.

These results support two main observations. First, these simple models are not able to perform well on this task: even a majority classifier would obtain $50.00\%$ recall, $47.24\%$ precision, and a $48.58\%$ F1 score. Second, including information from the replied tweet does not have a significant impact on the performance of the models and behaves more like noise.

# 3.2 Deep Learning Models

Deep learning models are at the forefront of natural language processing; transformer-based models (Vaswani et al., 2017) such as BERT (Devlin et al., 2019) in particular have achieved remarkable success in recent years. Therefore, we evaluated the Turkish version of the BERT model, BERTurk (Schweter, 2020), with and without the replied tweet information. For the single-tweet setting, we followed the classical fine-tuning procedure, using binary cross-entropy with the Adam optimizer (Kingma and Ba, 2015) and a $5 \times 10^{-5}$ learning rate. Hyperparameters were optimized based on the validation set F1 score. For the case of using two tweets (the reply and the replied tweet), the only difference was creating a longer input string by combining the two tweets in the form "Önceki tweet: replied tweet Cevap: reply tweet" (in English, "Previous tweet: replied tweet Reply: reply tweet"). The results (macro-averaged scores) obtained on the test set are summarized for the two cases in Table 2.

| Method | Prec | Rec | F1 |
| --- | --- | --- | --- |
| BERTurk (1) | 65.73 | 82.68 | 70.28 |
| BERTurk (2) | 70.11 | 79.03 | 73.57 |

Table 2: Results for deep learning models. (1) corresponds to the first scenario where only the reply tweet is used and (2) corresponds to the second scenario where both the reply and the replied tweet are used.

Interestingly, this time the model that uses both the reply and the replied tweet performed better in terms of F1 score, yet the effect of taking context into account is still limited: precision improves, but recall drops. The English translation of an example illustrating this phenomenon is provided below. Here, the reply tweet is offensive while the replied tweet is not, so including the replied tweet as contextual information to classify the reply tweet misleads the model.

- Replied Tweet: "Vaccination opponents misread the National Anthem."
- Reply Tweet: "Go away the army of brainless people to your village, you can't live in the metropolis without a vaccine."
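The two input formats fed to the model can be sketched as follows. `build_input` is a hypothetical helper name, but the combined-string template for the contextual scenario is the one quoted in Section 3.2.

```python
def build_input(replied_text, reply_text, use_context):
    """Build the classifier input: the reply tweet alone (scenario 1), or
    the combined "Önceki tweet: ... Cevap: ..." string (scenario 2)."""
    if not use_context:
        return reply_text
    return f"Önceki tweet: {replied_text} Cevap: {reply_text}"
```

In the contextual setting, this single string would then be tokenized and fine-tuned on in the usual sequence-classification fashion; the paper reports no other architectural change between the two scenarios.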
For more example tweets where the inclusion of context (i.e., the replied tweet) is necessary for the correct classification of the reply tweet, and where context could mislead the classifier, see the Appendix.

# 4 Conclusion

We prepared an offensive language dataset for Turkish, a language for which the number of such datasets is very limited. Unlike most other tweet datasets, where each tweet is considered individually, we included the replied tweet as contextual information and investigated how this information affects the performance of commonly used machine learning models. Contrary to our expectation, our results showed that including the replied tweet yielded only a slight improvement in the F1-score for some models and did not significantly improve the performance of the studied models in general. In theory, the previous tweet appears to contain important information. However, in analyzing our dataset, we found that most reply tweets have only a weak relationship to the replied tweet in terms of meaning. Moreover, interaction with other tweets is dominated by the use of other features on Twitter, such as "like" or "retweet". Consequently, using information about previous tweets did not contribute much to offensive language detection in this study. Nonetheless, developing models specifically designed to exploit information about previous tweets could lead to better performance and represents a promising direction for future research.

# Limitations

While we tried various methods for detecting offensive language with and without the replied tweet, we have not focused on developing a specific model that benefits from the previous (i.e., replied) tweet in the best way. Our goal was to investigate the impact of contextual information on the performance of commonly used machine learning-based models.
Therefore, even though we were not able to obtain significant improvements with contextual information, further research focusing on this subject is a promising direction to follow.

We examined the use of the previous tweet for only a single target group and language, due to the laborious nature of the manual annotation process and time limitations. The dataset can be expanded with other target groups and languages in the future.

# Ethics Statement

Offensive language detection systems could be very useful in real-life applications. Because machine learning-based models are guided mainly by the data they use, the annotation of datasets is an essential step, which ought to be carried out responsibly. Although we used multiple annotators for the labeling process, better strategies could still be developed, since some examples of offensive language are very subjective. The annotated data is shared in accordance with Twitter's terms of use.

# Acknowledgements

This work is partially supported by the EU funded project entitled "Utilizing Digital Technology for Social Cohesion, Positive Messaging and Peace by Boosting Collaboration, Exchange and Solidarity" and by the Boğaziçi University Research Fund under Grant Number 16903.

# References

Wissam Antoun, Fady Baly, and Hazem Hajj. 2020. AraBERT: Transformer-based model for Arabic language understanding. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection, pages 9–15, Marseille, France. European Language Resource Association.
İnanç Arın, Zeynep Işık, Seçilay Kugal, Somaiyeh Dehghan, Arzucan Özgür, and Berrin Yanıkoğlu. 2023. SIU2023-NST - Hate Speech Detection Contest. In 31st Signal Processing and Communications Applications Conference (SIU), pages 1–4.
Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela Sanguinetti. 2019.
SemEval-2019 Task 5: Multilingual detection of hate speech against immigrants and women in Twitter. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 54–63.
Fatih Beyhan, Buse Çarık, İnanç Arın, Ayşecan Terzioğlu, Berrin Yanıkoğlu, and Reyyan Yeniterzi. 2022. A Turkish hate speech dataset and detection system. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 4177–4185.
Alexander Bor, Frederik Jørgensen, and Michael Bang Petersen. 2023. Discriminatory attitudes against unvaccinated people during the pandemic. Nature, 613(7945):704–711.
Noé Cécillon, Vincent Labatut, Richard Dufour, and Georges Linarès. 2021. Graph embeddings for abusive language detection. SN Computer Science, 2:1–15.
Çağrı Çöltekin. 2020. A corpus of Turkish offensive language on social media. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 6174–6184, Marseille, France. European Language Resources Association.

Wenliang Dai, Tiezheng Yu, Zihan Liu, and Pascale Fung. 2020. Kungfupanda at SemEval-2020 Task 12: BERT-based multi-task learning for offensive language detection. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 2060–2066.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Mai ElSherief, Caleb Ziems, David Muchlinski, Vaishnavi Anupindi, Jordyn Seybolt, Munmun De Choudhury, and Diyi Yang. 2021. Latent hatred: A benchmark for understanding implicit hate speech. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 345–363.

Diederik P.
Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.

Law-Insider. 2023. Offensive language definition | Law Insider. Accessed on June 18, 2023.

Edoardo Mosca, Maximilian Wich, and Georg Groh. 2021. Understanding and interpreting the impact of user context in hate speech detection. In Proceedings of the Ninth International Workshop on Natural Language Processing for Social Media, pages 91–102.

Marzieh Mozafari, Reza Farahbakhsh, and Noel Crespi. 2020. A BERT-based transfer learning approach for hate speech detection in online social media. In Complex Networks and Their Applications VIII: Volume 1 Proceedings of the Eighth International Conference on Complex Networks and Their Applications COMPLEX NETWORKS 2019, pages 928–940. Springer.

Hamdy Mubarak, Sabit Hassan, and Shammur Absar Chowdhury. 2023. Emojis as anchors to detect Arabic offensive language and hate speech. Natural Language Engineering, pages 1–22.

Stefan Schweter. 2020. BERTurk - BERT models for Turkish.

Omar Sharif, Eftekhar Hossain, and Mohammed Moshiul Hoque. 2021. Combating hostility: Covid-19 fake news and hostile post detection in social media. arXiv preprint arXiv:2101.03291.

Cagri Toraman, Furkan Şahinuç, and Eyup Yilmaz. 2022. Large-scale hate speech detection with cross-domain transfer. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 2215–2225, Marseille, France. European Language Resources Association.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.

Xinchen Yu, Eduardo Blanco, and Lingzi Hong. 2022. Hate speech and counter speech detection: Conversational context does matter.
In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5918–5930, Seattle, United States. Association for Computational Linguistics.

Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, and Çağrı Çöltekin. 2020. SemEval-2020 Task 12: Multilingual Offensive Language Identification in Social Media (OffensEval 2020). In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1425–1447. Association for Computational Linguistics.

# A Tweet Pair Examples Regarding Context Information

# A.1 An example where context is necessary for the correct classification of the reply tweet

English translation:

Replied Tweet: If we are locked at home again because of those who are not vaccinated, you will see curses on this account that you have not seen so far.

Reply Tweet: +1

# A.2 An example where context does not matter

English translation:

Replied Tweet: One may be against mandatory vaccination, or think that masks are not protective; however, there is nothing humane about descending as a group on a girl who works as a cashier under difficult conditions, entering a closed area without a mask, and causing fear and sadness.

Reply Tweet: Those who are not vaccinated + those who do not wear masks. I seriously don't understand what's wrong with these people. They are seriously a litmus test of intelligence.

# A.3 An example where the reply is not offensive but the replied tweet might mislead, since it is offensive

English translation:

Replied Tweet: Prof. Bingür Sönmez: Those who say they will not get vaccinated are traitors; we will not allow them to marry our girls.

Reply Tweet: Look at the point the cardiovascular surgeon has reached: we will not allow traitors who do not get vaccinated to marry our girls.
+ +# A.4 An example where reply is offensive but replied might mislead since it is not offensive + +English translation: + +Replied Tweet: Vaccination opponents misread the National Anthem + +Reply Tweet: Go away the army of brainless people to your village, you can't live in the metropolis without a vaccine. + +# B Keywords used for Getting Related Tweets + +The following keywords were used in our search: aşışiz, asışiz, aşışizlar, asışizlar, aş1 olmayan, as1 olmayan, aş1 olmayanlar, as1 olmayanlar, aş1 olmak istemeyen, as1 olmak istemeyen, aş1 olmak istemeyenler, as1 olmak istemeyenler, aş1 yaptır-mayan, as1 yaptır-mayan, aş1 yaptır-mayanlar, as1 yaptır-mayanlar. \ No newline at end of file diff --git a/2023/A Dataset for Investigating the Impact of Context for Offensive Language Detection in Tweets/images.zip b/2023/A Dataset for Investigating the Impact of Context for Offensive Language Detection in Tweets/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..3be5d7474139f527fa4b53cfcd714da5adaf5345 --- /dev/null +++ b/2023/A Dataset for Investigating the Impact of Context for Offensive Language Detection in Tweets/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a990b34095de12542ca8381e402209facd157822bf58bc0f91610b081b6895f5 +size 61737 diff --git a/2023/A Dataset for Investigating the Impact of Context for Offensive Language Detection in Tweets/layout.json b/2023/A Dataset for Investigating the Impact of Context for Offensive Language Detection in Tweets/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..92d7323425bc8a7ee37368fafbb65950bdc4d569 --- /dev/null +++ b/2023/A Dataset for Investigating the Impact of Context for Offensive Language Detection in Tweets/layout.json @@ -0,0 +1,4110 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 71, + 75, + 523, + 109 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 71, + 75, + 523, + 109 + ], + 
"spans": [ + { + "bbox": [ + 71, + 75, + 523, + 109 + ], + "type": "text", + "content": "A Dataset for Investigating the Impact of Context for Offensive Language Detection in Tweets" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 104, + 129, + 492, + 145 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 129, + 492, + 145 + ], + "spans": [ + { + "bbox": [ + 104, + 129, + 492, + 145 + ], + "type": "text", + "content": "Musa Nuri İhtiyar, Ömer Özdemir, Mustafa Emre Erengül, Arzucan Özgü" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 111, + 146, + 486, + 158 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 146, + 486, + 158 + ], + "spans": [ + { + "bbox": [ + 111, + 146, + 486, + 158 + ], + "type": "text", + "content": "{musa.ihtiyar, omer.ozdemir1, mustafa.erengul, arzucan.ozgur} @boun.edu.tr" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 155, + 159, + 440, + 172 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 155, + 159, + 440, + 172 + ], + "spans": [ + { + "bbox": [ + 155, + 159, + 440, + 172 + ], + "type": "text", + "content": "Department of Computer Engineering, Bogazici University" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 155, + 212, + 202, + 226 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 155, + 212, + 202, + 226 + ], + "spans": [ + { + "bbox": [ + 155, + 212, + 202, + 226 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 86, + 234, + 274, + 485 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 234, + 274, + 485 + ], + "spans": [ + { + "bbox": [ + 86, + 234, + 274, + 485 + ], + "type": "text", + "content": "Offensive language detection is crucial in natural language processing (NLP). We investigated the importance of context for detecting such language in reply tweets on Twitter, where the use of offensive language is widespread. 
We collected a Turkish tweet dataset where the target group was unvaccinated people during the Covid period. Tweets in the dataset were enriched with contextual information by adding the original tweet to which a particular tweet was posted as a reply. The dataset, which includes over 28,000 tweet-reply pairs, was manually labeled by human annotators and made publicly available. In addition, we compared the performance of different machine learning models with and without contextual information. Our results show that this type of contextual information was not very useful in improving the performance of the models in general, although it slightly increased the macroaveraged F1-score of certain models." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 68, + 495, + 154, + 507 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 495, + 154, + 507 + ], + "spans": [ + { + "bbox": [ + 68, + 495, + 154, + 507 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 516, + 291, + 638 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 516, + 291, + 638 + ], + "spans": [ + { + "bbox": [ + 67, + 516, + 291, + 638 + ], + "type": "text", + "content": "Humans can communicate through language, which enables them to engage in many useful activities, yet language might also be used for destructive purposes. One of the most critical examples of this is offensive language, which can be defined as \"any utterance which is blasphemous, obscene, indecent, insulting, hurtful, disgusting, morally repugnant, or which breaches commonly accepted standards of decent and proper speech\" (Law-Insider, 2023)." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 638, + 291, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 638, + 291, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 638, + 291, + 773 + ], + "type": "text", + "content": "The use of offensive language can occur on a variety of platforms, but is particularly common on online platforms such as Twitter. In recent years, several approaches have been proposed to automatically detect offensive language in tweets. Finetuning language models pre-trained with extensive data is considered the current state-of-the-art for detecting offensive language. BERT (Devlin et al., 2019) is one of the most prominent transformer-based pre-trained language models for English and" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 213, + 527, + 348 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 213, + 527, + 348 + ], + "spans": [ + { + "bbox": [ + 302, + 213, + 527, + 348 + ], + "type": "text", + "content": "has also been shown to be very effective in detecting offensive language (Dai et al., 2020; Zampieri et al., 2020; Mozafari et al., 2020). A similar trend can be observed for other languages. For example, Mubarak et al. (2023) used AraBERT (Antoun et al., 2020), the Arabic version of BERT, for Arabic. Similarly, BERTurk (Schweter, 2020) has been successfully used to detect offensive language in Turkish tweets (Beyhan et al., 2022; Toraman et al., 2022; Arin et al., 2023)." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 351, + 527, + 621 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 351, + 527, + 621 + ], + "spans": [ + { + "bbox": [ + 302, + 351, + 527, + 621 + ], + "type": "text", + "content": "Annotated datasets are needed to train or fine-tune machine learning models for offensive language detection. 
A number of datasets have been prepared for different languages and domains and made publicly available (Basile et al., 2019; Zampieri et al., 2020; ElSherief et al., 2021). A limitation of these datasets is that generally each tweet is labeled individually without considering contextual information. There are few studies that consider contextual information. Mosca et al. (2021) investigate the relative contribution of user information features in machine learning models by using explainability techniques. Cécillon et al. (2021) propose a graph-based approach to represent dialog data from chat logs of an online game and use this representation for abusive language detection. Yu et al. (2022) define context as the previous comment in a Reddit conversation thread and show that such contextual information is useful for detecting hate speech." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 624, + 527, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 624, + 527, + 773 + ], + "spans": [ + { + "bbox": [ + 302, + 624, + 527, + 773 + ], + "type": "text", + "content": "We hypothesize that similar contextual information may be useful for offensive language detection in tweets. As a motivating example, consider a reply tweet that states, \"I fully agree.\" The category of this reply tweet (i.e., whether it is offensive or not) depends on the previous context, i.e., the tweet to which it was posted as a reply. 
To investigate the impact of such contextual information on commonly used machine learning-based offensive language detection models, we collected and manually annotated tweet-reply pairs in Turkish, a" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 780, + 309, + 792 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 792 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 792 + ], + "type": "text", + "content": "1543" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 129, + 794, + 464, + 805 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 129, + 794, + 464, + 805 + ], + "spans": [ + { + "bbox": [ + 129, + 794, + 464, + 805 + ], + "type": "text", + "content": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 1543-1549" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 165, + 805, + 428, + 817 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 165, + 805, + 428, + 817 + ], + "spans": [ + { + "bbox": [ + 165, + 805, + 428, + 817 + ], + "type": "text", + "content": "December 6-10, 2023 ©2023 Association for Computational Linguistics" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 290, + 164 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 290, + 164 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 290, + 164 + ], + "type": "text", + "content": "low-resource language with limited datasets. One of the first tweet datasets for detecting offensive language in Turkish was developed by Çöltekin (2020). Recently, Beyhan et al. (2022) and Toraman et al. (2022) also released tweet datasets for Turkish. However, none of these datasets consider contextual information." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 66, + 167, + 290, + 301 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 167, + 290, + 301 + ], + "spans": [ + { + "bbox": [ + 66, + 167, + 290, + 301 + ], + "type": "text", + "content": "We chose the Covid-19 pandemic as our domain, which affected our lives in a number of different ways. Pandemics trigger fear and anger in most people, leading to increased use of offensive language. Sharif et al. (2021) studied the detection of hostile statements in the context of the Covid-19 pandemic, and Bor et al. (2023) showed that such offensive language occurred against unvaccinated people during this period. Therefore, we selected unvaccinated people as our target group." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 302, + 290, + 436 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 302, + 290, + 436 + ], + "spans": [ + { + "bbox": [ + 67, + 302, + 290, + 436 + ], + "type": "text", + "content": "The main contributions of this paper are twofold: (i) We collect and manually annotate a Turkish tweet dataset specific to the Covid-19 period and containing contextual information in the form of the replied tweet. (ii) We investigate the impact of such contextual information on the performance of commonly used machine learning-based models for offensive language detection. The dataset and source code are made publicly available for future studies." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 438, + 291, + 531 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 438, + 291, + 531 + ], + "spans": [ + { + "bbox": [ + 67, + 438, + 291, + 531 + ], + "type": "text", + "content": "The rest of the paper is organized as follows. Section 2 describes the collection and annotation of the dataset, and Section 3 presents the experiments conducted to compare the machine learning models with and without contextual information. 
Finally, Section 4 discusses the lessons learned." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 544, + 128, + 555 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 544, + 128, + 555 + ], + "spans": [ + { + "bbox": [ + 67, + 544, + 128, + 555 + ], + "type": "text", + "content": "2 Dataset" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 565, + 290, + 661 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 565, + 290, + 661 + ], + "spans": [ + { + "bbox": [ + 67, + 565, + 290, + 661 + ], + "type": "text", + "content": "We collected a dataset containing replied and reply tweet pairs. A reply tweet is a tweet written in response to another tweet, while a replied tweet is a tweet to which another tweet has replied. Suppose a tweet " + }, + { + "bbox": [ + 67, + 565, + 290, + 661 + ], + "type": "inline_equation", + "content": "T1" + }, + { + "bbox": [ + 67, + 565, + 290, + 661 + ], + "type": "text", + "content": " is posted and then another tweet " + }, + { + "bbox": [ + 67, + 565, + 290, + 661 + ], + "type": "inline_equation", + "content": "T2" + }, + { + "bbox": [ + 67, + 565, + 290, + 661 + ], + "type": "text", + "content": " is posted in response to " + }, + { + "bbox": [ + 67, + 565, + 290, + 661 + ], + "type": "inline_equation", + "content": "T1" + }, + { + "bbox": [ + 67, + 565, + 290, + 661 + ], + "type": "text", + "content": ". In this case, " + }, + { + "bbox": [ + 67, + 565, + 290, + 661 + ], + "type": "inline_equation", + "content": "T1" + }, + { + "bbox": [ + 67, + 565, + 290, + 661 + ], + "type": "text", + "content": " is called a replied tweet and " + }, + { + "bbox": [ + 67, + 565, + 290, + 661 + ], + "type": "inline_equation", + "content": "T2" + }, + { + "bbox": [ + 67, + 565, + 290, + 661 + ], + "type": "text", + "content": " is called a reply tweet."
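The replied/reply pair structure defined above can be sketched as a minimal data type (a sketch; the type and field names are ours, not from the paper's released code):

```python
from typing import NamedTuple

class TweetPair(NamedTuple):
    """One sample from the dataset: the context tweet and the tweet being classified."""
    replied: str  # T1, the earlier tweet that received a reply (the context)
    reply: str    # T2, the tweet posted in response to T1 (the tweet that is labeled)

pair = TweetPair(replied="T1 text", reply="T2 text")
```

Only the reply tweet receives an offensiveness label; the replied tweet serves purely as context.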
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 661, + 290, + 743 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 661, + 290, + 743 + ], + "spans": [ + { + "bbox": [ + 67, + 661, + 290, + 743 + ], + "type": "text", + "content": "Our goal was to create a target group-specific dataset to enable the development of models capable of detecting offensive language towards a specific target group. We selected unvaccinated people in the Covid-19 pandemic as the target group for offensive language. We examined the period from" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 220 + ], + "type": "text", + "content": "March 2020, when the virus reached Türkiye, to September 2022, when the pandemic was no longer on the agenda for most people on the planet. We used keyword search with 16 different queries such as \"aşısız\" (unvaccinated) and \"aşı olmak istemeyen\" (those who do not want to be vaccinated) to identify relevant tweets. The keywords are phrases meaning \"aşısız\" (unvaccinated) with different singular/plural forms or spellings due to the Turkish character-related issues. The list of all keywords used in this study can be found in the Appendix." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 224, + 526, + 413 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 224, + 526, + 413 + ], + "spans": [ + { + "bbox": [ + 302, + 224, + 526, + 413 + ], + "type": "text", + "content": "There were different options to search for the replied and reply tweet pairs. The first one was getting pairs where at least one of the 16 search keywords occurred in the reply tweet. We call this Dataset 1. Another possibility is that these keywords occur in the replied tweet. This case contains two subcases. 
The first case is to have at least one of these keywords in a replied tweet, which itself is a reply to another tweet. We refer to this case as Dataset 2. Finally, the last case is to have at least one of these keywords in a replied tweet that is not itself a reply to another tweet. This case is called Dataset 3. All three of these datasets were merged to obtain the final dataset." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 417, + 525, + 552 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 417, + 525, + 552 + ], + "spans": [ + { + "bbox": [ + 302, + 417, + 525, + 552 + ], + "type": "text", + "content": "Although conversations on Twitter could be arbitrarily long, we only looked at the previous tweet (replied tweet) to avoid an unnecessarily complicated data format. In other words, each sample in our dataset is a pair. Yet, we could capture any replied-reply pair related to unvaccinated people as long as at least one of the tweets contained one or more of the pre-determined keywords. During the search, we collected tweet ID and tweet text information for both the replied and reply tweets." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 557, + 526, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 557, + 526, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 557, + 526, + 772 + ], + "type": "text", + "content": "Once the collection process was completed, we proceeded with labeling. The objective of the annotation was to obtain a binary label indicating whether or not the reply tweet contains offensive language against unvaccinated people. Several specific points require explanation here. 
First of all, we decided to keep the task simple so that we could better understand the impact of the context; therefore, using a binary label looked like the best option. Moreover, we only considered offensive language against unvaccinated people; in other words, even if a reply tweet was offensive against another group, immigrants for instance, we labeled it as \"not offensive against unvaccinated people\" instead of \"offensive against unvaccinated people\". This was not because such offensive language was acceptable" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 67, + 751, + 241, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 751, + 241, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 751, + 241, + 772 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 67, + 751, + 241, + 772 + ], + "type": "text", + "content": "https://github.com/boun-tabi/CovidOffensiveLanguageUltimateDatasets" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 287, + 781, + 309, + 801 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 287, + 781, + 309, + 801 + ], + "spans": [ + { + "bbox": [ + 287, + 781, + 309, + 801 + ], + "type": "text", + "content": "1544 2" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 293, + 208 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 293, + 208 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 293, + 208 + ], + "type": "text", + "content": "but because we wanted to have a single target group so that the problem was more focused and the effect of the context could be seen more directly. We focused solely on the offensiveness of the reply tweet since the context is relevant only for the reply tweet. 
That is, a pair where the replied tweet was offensive against unvaccinated people but the reply tweet was not offensive is categorized as \"not offensive\", since we are only interested in the reply tweet's behavior." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 212, + 292, + 442 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 212, + 292, + 442 + ], + "spans": [ + { + "bbox": [ + 69, + 212, + 292, + 442 + ], + "type": "text", + "content": "Which cases to consider as offensive language is another crucial point to explain. Situations like swearing and insulting were the most obvious ones. In addition, provocative statements, such as saying that there should be a punishment for unvaccinated people (e.g., not being able to go outside or enter closed areas) without stating any exception or alternative option, are included in this label. Also, quotations or simply stating an idea without using harmful language, like saying that \"not getting vaccinated is a wrong behavior,\" are not considered offensive language. Even though we determined such criteria for when to consider a tweet offensive, this field is inevitably subjective for specific examples. This is why at least two people annotated each pair in our dataset." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 449, + 292, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 449, + 292, + 773 + ], + "spans": [ + { + "bbox": [ + 69, + 449, + 292, + 773 + ], + "type": "text", + "content": "The annotation process was carried out as follows. A general guideline for annotation was established and provided to the annotators (i.e., three of the authors of the paper), and training was performed using sample examples. Each tweet pair was annotated independently by two annotators, and a third annotator was used to resolve inconsistencies. 
For each tweet pair, there were three label options, namely \"not offensive against unvaccinated people\", \"ambiguous\", and \"offensive against unvaccinated people\". Although the goal was to obtain binary labels, three options were given in order to provide more flexibility to the annotators. However, the pairs whose final label was \"ambiguous\" were removed from the final dataset, since they would complicate the primary goal of the study, which was examining the effect of taking the replied tweet into account. While doing the annotation, totally unrelated cases in the dataset, such as unvaccinated fruits and vegetables matched by the chosen keywords, were mostly cleaned, even though a limited number of such cases may still exist in the dataset. We wanted to measure inter-annotator agreement" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 302, + 71, + 526, + 111 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 111 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 111 + ], + "type": "text", + "content": "for these labels, so we used the F1 and Cohen's Kappa scores. We obtained " + }, + { + "bbox": [ + 302, + 71, + 526, + 111 + ], + "type": "inline_equation", + "content": "55.22\\%" + }, + { + "bbox": [ + 302, + 71, + 526, + 111 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 302, + 71, + 526, + 111 + ], + "type": "inline_equation", + "content": "46.26\\%" + }, + { + "bbox": [ + 302, + 71, + 526, + 111 + ], + "type": "text", + "content": ", respectively, for these metrics." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 302, + 114, + 527, + 574 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 114, + 527, + 574 + ], + "spans": [ + { + "bbox": [ + 302, + 114, + 527, + 574 + ], + "type": "text", + "content": "After obtaining the annotations by two annotators for each pair of tweets, the annotations were examined. 
If the two annotations were consistent, this was chosen as the final label. If the final label was \"ambiguous\", the pair was removed; otherwise, it was added to the final dataset. If there was an inconsistency in the form of one annotator choosing \"not offensive\" and the other choosing \"offensive\", the case was considered ambiguous; consequently, such pairs were removed as well. For the inconsistencies where one annotator chose \"ambiguous\", the third annotator looked at the tweet pair and made the final decision. If a label other than \"ambiguous\" was chosen, it was selected as the final label; if not, the pair was removed. After several hours of this procedure, pairs with binary labels were obtained. In total, we obtained 28,808 pairs. While 13,478 of them came from Dataset 1, Datasets 2 and 3 contributed 1,515 and 13,815 pairs, respectively. The final binary dataset has 27,219 examples that are not offensive against unvaccinated people, denoted with 0, and 1,589 examples that are offensive against unvaccinated people, denoted with 2, since 1 represented the ambiguous case. The dataset is inevitably imbalanced since " + }, + { + "bbox": [ + 302, + 114, + 527, + 574 + ], + "type": "inline_equation", + "content": "94.48\\%" + }, + { + "bbox": [ + 302, + 114, + 527, + 574 + ], + "type": "text", + "content": " of the pairs are labeled as 0. Inter-annotator agreement for the last version of the dataset was measured using the F1 score and Cohen's Kappa score. This time they were calculated as " + }, + { + "bbox": [ + 302, + 114, + 527, + 574 + ], + "type": "inline_equation", + "content": "95.21\\%" + }, + { + "bbox": [ + 302, + 114, + 527, + 574 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 302, + 114, + 527, + 574 + ], + "type": "inline_equation", + "content": "88.97\\%" + }, + { + "bbox": [ + 302, + 114, + 527, + 574 + ], + "type": "text", + "content": ", which are significantly better than the values for the initial version of the dataset. 
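The label-resolution procedure described above can be sketched as a small function (a sketch; the function name is ours, using the paper's encoding 0 = not offensive, 1 = ambiguous, 2 = offensive, with `None` meaning the pair is dropped):

```python
NOT_OFFENSIVE, AMBIGUOUS, OFFENSIVE = 0, 1, 2

def resolve_label(first, second, third=None):
    """Combine two independent annotations (plus an optional third,
    tie-breaking annotation) into a final binary label, or None if
    the pair is removed from the dataset."""
    if first == second:
        # Agreement: keep the label unless both said "ambiguous".
        return None if first == AMBIGUOUS else first
    if AMBIGUOUS not in (first, second):
        # Hard disagreement (not offensive vs. offensive): treated as
        # ambiguous and removed.
        return None
    # One annotator chose "ambiguous": the third annotator decides;
    # if the third annotator also says "ambiguous", the pair is removed.
    return None if third in (None, AMBIGUOUS) else third
```

For example, `resolve_label(NOT_OFFENSIVE, AMBIGUOUS, third=OFFENSIVE)` keeps the pair as offensive, while `resolve_label(NOT_OFFENSIVE, OFFENSIVE)` drops it.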
The final version of the dataset containing the replied and reply tweet IDs as well as the manual annotations is made publicly available for future studies.[2]" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 303, + 589, + 453, + 603 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 589, + 453, + 603 + ], + "spans": [ + { + "bbox": [ + 303, + 589, + 453, + 603 + ], + "type": "text", + "content": "3 Experiments and Results" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 615, + 526, + 737 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 615, + 526, + 737 + ], + "spans": [ + { + "bbox": [ + 302, + 615, + 526, + 737 + ], + "type": "text", + "content": "After completing the annotation of the dataset, we used it to train and evaluate various machine learning models to detect offensive language against unvaccinated people. We randomly selected " + }, + { + "bbox": [ + 302, + 615, + 526, + 737 + ], + "type": "inline_equation", + "content": "20\\%" + }, + { + "bbox": [ + 302, + 615, + 526, + 737 + ], + "type": "text", + "content": " of the dataset as the test set. For each algorithm we used, we examined two different scenarios. In the first, we used only the reply tweet, while in the second, we studied the impact of using the replied tweet in addition to the reply tweet on our models."
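Given the class imbalance reported above (94.48% of pairs labeled 0), a useful sanity check for the experiments is the macro-averaged score of a trivial majority-class baseline. A short computation (a sketch; the function name is ours):

```python
def majority_baseline_macro(p_majority):
    """Macro-averaged (precision, recall, F1) of a classifier that always
    predicts the majority class, given that class's proportion in the test set."""
    # Majority class: precision equals the class proportion, recall is 1.0.
    prec_maj, rec_maj = p_majority, 1.0
    f1_maj = 2 * prec_maj * rec_maj / (prec_maj + rec_maj)
    # Minority class is never predicted, so its precision/recall/F1 are all 0.
    macro = lambda a, b: (a + b) / 2
    return macro(prec_maj, 0.0), macro(rec_maj, 0.0), macro(f1_maj, 0.0)

prec, rec, f1 = majority_baseline_macro(0.9448)
```

With a 94.48% majority class this gives roughly 47.2% macro precision, 50% macro recall, and 48.6% macro F1, which is the bar any trained model should clear.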
+ } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 750, + 453, + 761 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 750, + 453, + 761 + ], + "spans": [ + { + "bbox": [ + 302, + 750, + 453, + 761 + ], + "type": "text", + "content": "2https://github.com/boun-tabi/" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 303, + 761, + 477, + 772 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 761, + 477, + 772 + ], + "spans": [ + { + "bbox": [ + 303, + 761, + 477, + 772 + ], + "type": "text", + "content": "CovidOffensiveLanguageUltimateDatasets" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 287, + 780, + 309, + 789 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 287, + 780, + 309, + 789 + ], + "spans": [ + { + "bbox": [ + 287, + 780, + 309, + 789 + ], + "type": "text", + "content": "1545" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 293, + 791, + 300, + 800 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 791, + 300, + 800 + ], + "spans": [ + { + "bbox": [ + 293, + 791, + 300, + 800 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 96, + 68, + 261, + 224 + ], + "blocks": [ + { + "bbox": [ + 96, + 68, + 261, + 224 + ], + "lines": [ + { + "bbox": [ + 96, + 68, + 261, + 224 + ], + "spans": [ + { + "bbox": [ + 96, + 68, + 261, + 224 + ], + "type": "table", + "html": "
MethodPrecRecF1
KNN (1)20.5641.1227.41
KNN (2)20.8440.7927.59
LR (1)50.0039.8044.32
LR (2)44.7241.7843.20
MNB (1)65.3226.6437.85
MNB (2)45.6534.5439.32
SVM (1)50.7644.0847.18
SVM (2)51.4634.8741.57
RF (1)38.5139.1438.82
RF (2)43.2535.8539.21
", + "image_path": "d6107a1af23337bab38d975758aad73cb0e78e0e7269bcee018e4a91ee0249eb.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 304, + 274, + 318 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 304, + 274, + 318 + ], + "spans": [ + { + "bbox": [ + 67, + 304, + 274, + 318 + ], + "type": "text", + "content": "3.1 Traditional Machine Learning Models" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 322, + 291, + 592 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 322, + 291, + 592 + ], + "spans": [ + { + "bbox": [ + 67, + 322, + 291, + 592 + ], + "type": "text", + "content": "Simple machine learning algorithms might perform quite good for certain tasks. Therefore, we started with simple algorithms such as Logistic Regression (LR), K-Nearest Neighbors (KNN), and Multinomial Naive Bayes (MNB). Then we also used Support Vector Machines (SVM) and Random Forest (RF). Since our dataset was imbalanced, we used downsampling to increase the performance of our models. In other words, we randomly selected a subset for the not offensive class while using all samples for the offensive class since it already had a limited number of samples. We had 1285 positive samples in the training set, so we decreased the not offensive class to 4500 samples, since too much reduction would cause a data scarcity problem. We used a tfidf based vector representation for the tweets. The performance of the commonly used traditional machine learning algorithms is given in Table 1 with the macro-averaged F1 score, precision, and recall." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 594, + 291, + 702 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 594, + 291, + 702 + ], + "spans": [ + { + "bbox": [ + 67, + 594, + 291, + 702 + ], + "type": "text", + "content": "There are two main observations we can make with these results. 
First, these simple models are not able to perform well on this task. Even if we had used a majority classifier, we would obtain " + }, + { + "bbox": [ + 67, + 594, + 291, + 702 + ], + "type": "inline_equation", + "content": "50.0\\%" + }, + { + "bbox": [ + 67, + 594, + 291, + 702 + ], + "type": "text", + "content": " recall, " + }, + { + "bbox": [ + 67, + 594, + 291, + 702 + ], + "type": "inline_equation", + "content": "47.24\\%" + }, + { + "bbox": [ + 67, + 594, + 291, + 702 + ], + "type": "text", + "content": " precision and " + }, + { + "bbox": [ + 67, + 594, + 291, + 702 + ], + "type": "inline_equation", + "content": "48.58\\%" + }, + { + "bbox": [ + 67, + 594, + 291, + 702 + ], + "type": "text", + "content": " F1 score. Second, the inclusion of information from the replied tweets does not have a significant impact on the performance of the models and behaves more like noise." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 714, + 202, + 728 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 714, + 202, + 728 + ], + "spans": [ + { + "bbox": [ + 67, + 714, + 202, + 728 + ], + "type": "text", + "content": "3.2 Deep Learning Models" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 733, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 733, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 67, + 733, + 291, + 772 + ], + "type": "text", + "content": "Deep learning models are the dominant approach in natural language processing. Especially the transformer-based ones (Vaswani et al., 2017) like BERT (De" + } + ] + } + ], + "index": 6 + }, + { + "type": "table", + "bbox": [ + 322, + 68, + 507, + 113 + ], + "blocks": [ + { + "bbox": [ + 67, + 232, + 291, + 283 + ], + "lines": [ + { + "bbox": [ + 67, + 232, + 291, + 283 + ], + "spans": [ + { + "bbox": [ + 67, + 232, + 291, + 283 + ], + "type": "text", + "content": "Table 1: Results for traditional models. 
For each model, (1) corresponds to the first scenario where only the reply tweet is used and (2) corresponds to the second scenario where both the reply and the replied tweet are used." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 322, + 68, + 507, + 113 + ], + "lines": [ + { + "bbox": [ + 322, + 68, + 507, + 113 + ], + "spans": [ + { + "bbox": [ + 322, + 68, + 507, + 113 + ], + "type": "table", + "html": "
MethodPrecRecF1
BERTurk (1)65.7382.6870.28
BERTurk (2)70.1179.0373.57
", + "image_path": "90529b5c9b29f8bff9fcebf12b9eb9d09ac7256a0ea91d5cb4178383fdcc3523.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_body" + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 121, + 525, + 170 + ], + "lines": [ + { + "bbox": [ + 302, + 121, + 525, + 170 + ], + "spans": [ + { + "bbox": [ + 302, + 121, + 525, + 170 + ], + "type": "text", + "content": "Table 2: Results for deep learning models. (1) corresponds to the first scenario where only the reply tweet is used and (2) corresponds to the second scenario where both the reply and the replied tweet are used." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 302, + 208, + 526, + 452 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 208, + 526, + 452 + ], + "spans": [ + { + "bbox": [ + 302, + 208, + 526, + 452 + ], + "type": "text", + "content": "vlin et al., 2019) obtained incredible success in the last years. Therefore, we decided to look at the performance of the Turkish version of the BERT model called BERTurk (Schweter, 2020) with and without replied tweet information. For the single tweet setting, we followed the classical procedure for fine-tuning where we used binary cross-entropy with Adam optimizer (Kingma and Ba, 2015) with " + }, + { + "bbox": [ + 302, + 208, + 526, + 452 + ], + "type": "inline_equation", + "content": "5x10^{-5}" + }, + { + "bbox": [ + 302, + 208, + 526, + 452 + ], + "type": "text", + "content": " learning rate. We did the hyperparameter optimization by looking at the validation set F1 score. For the case of using two tweets (the reply and replied tweet), the only difference was creating a longer input string by combining the two tweets in the form of \"Önceki tweet: replied tweet Cevap: reply tweet\" (in English, \"Previous tweet: replied tweet Reply: reply tweet\"). The results (macro-averaged scores) obtained on the test set are summarized for the two cases in Table 2." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 462, + 525, + 610 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 462, + 525, + 610 + ], + "spans": [ + { + "bbox": [ + 302, + 462, + 525, + 610 + ], + "type": "text", + "content": "Interestingly, this time the model that uses both the reply and the replied tweet performed better in terms of F1 score, yet the effect of taking context into account is still limited. Even though precision improves, recall drops. The English translation of an example to explain this phenomenon is provided below. In this example, the reply tweet is offensive, while the replied tweet is not offensive. In this case, including the replied tweet as contextual information to classify the reply tweet misleads the model." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 619, + 526, + 696 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 302, + 619, + 525, + 645 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 619, + 525, + 645 + ], + "spans": [ + { + "bbox": [ + 302, + 619, + 525, + 645 + ], + "type": "text", + "content": "- Replied Tweet: \"Vaccination opponents misread the National Anthem.\"" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 655, + 526, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 655, + 526, + 696 + ], + "spans": [ + { + "bbox": [ + 302, + 655, + 526, + 696 + ], + "type": "text", + "content": "- Reply Tweet: \"Go away the army of brainless people to your village, you can't live in the metropolis without a vaccine.\"" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 302, + 706, + 525, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 706, + 525, + 772 + ], + "spans": [ + { + "bbox": [ + 302, + 706, + 525, + 772 + ], + "type": "text", + "content": "For more example tweets where the inclusion of context (i.e., 
the replied tweet) is necessary for the correct classification of the reply tweet and where context could mislead the classifier, see the Appendix." + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 801 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 801 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 801 + ], + "type": "text", + "content": "1546 4" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 147, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 147, + 83 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 147, + 83 + ], + "type": "text", + "content": "4 Conclusion" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 94, + 293, + 445 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 94, + 293, + 445 + ], + "spans": [ + { + "bbox": [ + 67, + 94, + 293, + 445 + ], + "type": "text", + "content": "We prepared an offensive language dataset for Turkish, where the number of such datasets is very limited. Unlike most other tweet datasets where each tweet is considered individually, we included the replied tweet as contextual information and investigated how this information affects the performance of commonly used machine learning models. Contrary to our expectation, our results showed that this resulted in only a slight improvement in the F1-score for some models and did not significantly improve the performance of the studied models for offensive language detection in general. In theory, the previous tweet appears to contain important information. However, in analyzing our dataset, we found that most reply tweets have only a weak relationship to the replied tweet in terms of meaning. 
Moreover, interaction with other tweets is dominated by the use of other features on Twitter, such as \"like\" or \"retweet.\" Consequently, the use of information about previous tweets did not provide much contribution for offensive language detection in this study. Nonetheless, attempting to develop models specifically designed to consider information about previous tweets could lead to better performance and represents a promising future research direction." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 457, + 130, + 470 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 457, + 130, + 470 + ], + "spans": [ + { + "bbox": [ + 67, + 457, + 130, + 470 + ], + "type": "text", + "content": "Limitations" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 481, + 291, + 629 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 481, + 291, + 629 + ], + "spans": [ + { + "bbox": [ + 67, + 481, + 291, + 629 + ], + "type": "text", + "content": "While we tried various methods for detecting offensive language with and without replied tweet, we have not focused on developing a specific model which benefits from the previous (i.e., replied) tweet in the best way. Our goal was to investigate the impact of contextual information on the performance of commonly used machine learning-based models. Therefore, even though we were not able to get significant improvements with contextual information, further research focusing on this subject is a promising direction to follow." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 630, + 291, + 698 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 630, + 291, + 698 + ], + "spans": [ + { + "bbox": [ + 67, + 630, + 291, + 698 + ], + "type": "text", + "content": "We examined the use of previous tweet for only single target group and language due to the laborious nature of the manual annotation process and the time limitations. 
The dataset can be expanded with other target groups and languages in the future." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 709, + 158, + 722 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 709, + 158, + 722 + ], + "spans": [ + { + "bbox": [ + 67, + 709, + 158, + 722 + ], + "type": "text", + "content": "Ethics Statement" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 733, + 291, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 733, + 291, + 773 + ], + "spans": [ + { + "bbox": [ + 67, + 733, + 291, + 773 + ], + "type": "text", + "content": "Offensive language detection systems could be very useful for real-life uses. Because machine learning-based models are guided mainly by the data they" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 71, + 526, + 166 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 166 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 166 + ], + "type": "text", + "content": "use, the annotation of datasets is an essential step, which ought to be carried out responsibly. Despite the fact that we tried to use multiple annotators for the labeling process, developing better strategies is possible since some examples regarding offensive language are very subjective. The annotated data is shared based on Twitter's terms of use." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 303, + 177, + 406, + 190 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 177, + 406, + 190 + ], + "spans": [ + { + "bbox": [ + 303, + 177, + 406, + 190 + ], + "type": "text", + "content": "Acknowledgements" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 200, + 526, + 280 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 200, + 526, + 280 + ], + "spans": [ + { + "bbox": [ + 302, + 200, + 526, + 280 + ], + "type": "text", + "content": "This work is partially supported by the EU funded project entitled \"Utilizing Digital Technology for Social Cohesion, Positive Messaging and Peace by Boosting Collaboration, Exchange and Solidarity\" and by the Boğaziçi University Research Fund under the Grant Number 16903." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 304, + 305, + 362, + 317 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 305, + 362, + 317 + ], + "spans": [ + { + "bbox": [ + 304, + 305, + 362, + 317 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 303, + 324, + 527, + 773 + ], + "type": "list", + "angle": 0, + "index": 18, + "blocks": [ + { + "bbox": [ + 304, + 324, + 527, + 402 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 324, + 527, + 402 + ], + "spans": [ + { + "bbox": [ + 304, + 324, + 527, + 402 + ], + "type": "text", + "content": "Wissam Antoun, Fady Baly, and Hazem Hajj. 2020. AraBERT: Transformer-based model for Arabic language understanding. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection, pages 9–15, Marseille, France. European Language Resource Association." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 304, + 412, + 527, + 468 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 412, + 527, + 468 + ], + "spans": [ + { + "bbox": [ + 304, + 412, + 527, + 468 + ], + "type": "text", + "content": "Inanç Arin, Zeynep Işık, Seçilay Kugal, Somaiyeh Dehghan, Arzucan Özgür, and Berrin Yanıkoğlu. 2023. SIU2023-NST - Hate Speech Detection Contest. In 31st Signal Processing and Communications Applications Conference (SIU), pages 1-4." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 477, + 527, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 477, + 527, + 555 + ], + "spans": [ + { + "bbox": [ + 304, + 477, + 527, + 555 + ], + "type": "text", + "content": "Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela Sanguinetti. 2019. Semeval-2019 Task 5: Multilingual detection of hate speech against immigrants and women in Twitter. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 54-63." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 565, + 527, + 630 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 565, + 527, + 630 + ], + "spans": [ + { + "bbox": [ + 304, + 565, + 527, + 630 + ], + "type": "text", + "content": "Fatih Beyhan, Buse Çarık, Inanç Arın, Ayşecan Terzioglu, Berrin Yanıkoğlu, and Reyyan Yeniterzi. 2022. A Turkish hate speech dataset and detection system. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 4177-4185." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 641, + 527, + 685 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 641, + 527, + 685 + ], + "spans": [ + { + "bbox": [ + 304, + 641, + 527, + 685 + ], + "type": "text", + "content": "Alexander Bor, Frederik Jorgensen, and Michael Bang Petersen. 2023. 
Discriminatory attitudes against unvaccinated people during the pandemic. Nature, 613(7945):704-711." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 303, + 695, + 526, + 739 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 695, + 526, + 739 + ], + "spans": [ + { + "bbox": [ + 303, + 695, + 526, + 739 + ], + "type": "text", + "content": "Noé Cécillon, Vincent Labatut, Richard Dufour, and Georges Linares. 2021. Graph embeddings for abusive language detection. SN Computer Science, 2:1-15." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 303, + 750, + 526, + 773 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 750, + 526, + 773 + ], + "spans": [ + { + "bbox": [ + 303, + 750, + 526, + 773 + ], + "type": "text", + "content": "Çagrı Çoltekin. 2020. A corpus of Turkish offensive language on social media. In Proceedings of the" + } + ] + } + ], + "index": 17 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 780, + 309, + 801 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 780, + 309, + 801 + ], + "spans": [ + { + "bbox": [ + 286, + 780, + 309, + 801 + ], + "type": "text", + "content": "1547 5" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "bbox": [ + 78, + 72, + 291, + 105 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 78, + 72, + 291, + 105 + ], + "spans": [ + { + "bbox": [ + 78, + 72, + 291, + 105 + ], + "type": "text", + "content": "Twelfth Language Resources and Evaluation Conference, pages 6174-6184, Marseille, France. European Language Resources Association." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 113, + 291, + 170 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 113, + 291, + 170 + ], + "spans": [ + { + "bbox": [ + 69, + 113, + 291, + 170 + ], + "type": "text", + "content": "Wenliang Dai, Tiezheng Yu, Zihan Liu, and Pascale Fung. 2020. Kungfupanda at SemEval-2020 Task 12: BERT-Based Multi-Task Learning for Offensive Language Detection. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 2060-2066." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 68, + 177, + 291, + 277 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 177, + 291, + 277 + ], + "spans": [ + { + "bbox": [ + 68, + 177, + 291, + 277 + ], + "type": "text", + "content": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 68, + 285, + 291, + 352 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 285, + 291, + 352 + ], + "spans": [ + { + "bbox": [ + 68, + 285, + 291, + 352 + ], + "type": "text", + "content": "Mai ElSherief, Caleb Ziems, David Muchlinski, Vaishnavi Anupindi, Jordyn Seybolt, Munmun De Choudhury, and Diyi Yang. 2021. Latent hatred: A benchmark for understanding implicit hate speech. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 345-363." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 68, + 359, + 291, + 416 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 359, + 291, + 416 + ], + "spans": [ + { + "bbox": [ + 68, + 359, + 291, + 416 + ], + "type": "text", + "content": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 68, + 423, + 290, + 445 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 423, + 290, + 445 + ], + "spans": [ + { + "bbox": [ + 68, + 423, + 290, + 445 + ], + "type": "text", + "content": "Law-Insider. 2023. Offensive language definition | law insider. Accessed on June 18, 2023." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 68, + 453, + 291, + 510 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 453, + 291, + 510 + ], + "spans": [ + { + "bbox": [ + 68, + 453, + 291, + 510 + ], + "type": "text", + "content": "Edoardo Mosca, Maximilian Wich, and Georg Groh. 2021. Understanding and interpreting the impact of user context in hate speech detection. In Proceedings of the Ninth International Workshop on Natural Language Processing for Social Media, pages 91-102." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 68, + 517, + 291, + 596 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 517, + 291, + 596 + ], + "spans": [ + { + "bbox": [ + 68, + 517, + 291, + 596 + ], + "type": "text", + "content": "Marzieh Mozafari, Reza Farahbakhsh, and Noel Crespi. 2020. A BERT-based transfer learning approach for hate speech detection in online social media. In Complex Networks and Their Applications VIII: Volume 1 Proceedings of the Eighth International Conference on Complex Networks and Their Applications COMPLEX NETWORKS 2019 8, pages 928-940. Springer." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 68, + 602, + 291, + 648 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 602, + 291, + 648 + ], + "spans": [ + { + "bbox": [ + 68, + 602, + 291, + 648 + ], + "type": "text", + "content": "Hamdy Mubarak, Sabit Hassan, and Shammur Absar Chowdhury. 2023. Emojis as anchors to detect Arabic offensive language and hate speech. Natural Language Engineering, page 1-22." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 68, + 655, + 290, + 678 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 655, + 290, + 678 + ], + "spans": [ + { + "bbox": [ + 68, + 655, + 290, + 678 + ], + "type": "text", + "content": "Stefan Schweter. 2020. BERTurk - BERT models for Turkish." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 68, + 686, + 291, + 731 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 686, + 291, + 731 + ], + "spans": [ + { + "bbox": [ + 68, + 686, + 291, + 731 + ], + "type": "text", + "content": "Omar Sharif, Eftekhar Hossain, and Mohammed Moshiul Hoque. 2021. Combating hostility: Covid-19 fake news and hostile post detection in social media. arXiv preprint arXiv:2101.03291." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 68, + 738, + 291, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 738, + 291, + 772 + ], + "spans": [ + { + "bbox": [ + 68, + 738, + 291, + 772 + ], + "type": "text", + "content": "Cagri Toraman, Furkan Şahinuc, and Eyup Yilmaz. 2022. Large-scale hate speech detection with cross-domain transfer. In Proceedings of the Thirteenth" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 72, + 526, + 105 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 526, + 105 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 526, + 105 + ], + "type": "text", + "content": "Language Resources and Evaluation Conference, pages 2215-2225, Marseille, France. 
European Language Resources Association." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 113, + 526, + 169 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 113, + 526, + 169 + ], + "spans": [ + { + "bbox": [ + 304, + 113, + 526, + 169 + ], + "type": "text", + "content": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 303, + 176, + 526, + 265 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 176, + 526, + 265 + ], + "spans": [ + { + "bbox": [ + 303, + 176, + 526, + 265 + ], + "type": "text", + "content": "Xinchen Yu, Eduardo Blanco, and Lingzi Hong. 2022. Hate speech and counter speech detection: Conversational context does matter. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5918-5930, Seattle, United States. Association for Computational Linguistics." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 303, + 273, + 526, + 362 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 273, + 526, + 362 + ], + "spans": [ + { + "bbox": [ + 303, + 273, + 526, + 362 + ], + "type": "text", + "content": "Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, and Çagrı Çoltekin. 2020. SemEval-2020 Task 12: Multilingual Offensive Language Identification in Social Media (OffensEval 2020). In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1425-1447. Association for Computational Linguistics." 
+ } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 303, + 370, + 492, + 397 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 370, + 492, + 397 + ], + "spans": [ + { + "bbox": [ + 303, + 370, + 492, + 397 + ], + "type": "text", + "content": "A Tweet Pair Examples Regarding Context Information" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 303, + 406, + 524, + 433 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 406, + 524, + 433 + ], + "spans": [ + { + "bbox": [ + 303, + 406, + 524, + 433 + ], + "type": "text", + "content": "A.1 An example where context is necessary for correct classification of the reply tweet" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 303, + 437, + 411, + 449 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 437, + 411, + 449 + ], + "spans": [ + { + "bbox": [ + 303, + 437, + 411, + 449 + ], + "type": "text", + "content": "The English translation:" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 302, + 451, + 524, + 503 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 451, + 524, + 503 + ], + "spans": [ + { + "bbox": [ + 302, + 451, + 524, + 503 + ], + "type": "text", + "content": "Replied Tweet: If we are closed at home again because of those who are not vaccinated, you will see curses that you have not seen so far in this account.." 
+ } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 314, + 505, + 390, + 518 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 505, + 390, + 518 + ], + "spans": [ + { + "bbox": [ + 314, + 505, + 390, + 518 + ], + "type": "text", + "content": "Reply Tweet: +1" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 303, + 527, + 498, + 552 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 527, + 498, + 552 + ], + "spans": [ + { + "bbox": [ + 303, + 527, + 498, + 552 + ], + "type": "text", + "content": "A.2 An example where context does not matter" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 303, + 558, + 392, + 571 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 558, + 392, + 571 + ], + "spans": [ + { + "bbox": [ + 303, + 558, + 392, + 571 + ], + "type": "text", + "content": "English translation:" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 302, + 571, + 524, + 651 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 571, + 524, + 651 + ], + "spans": [ + { + "bbox": [ + 302, + 571, + 524, + 651 + ], + "type": "text", + "content": "Replied Tweet: It may be against the necessity of vaccination, it may be thought that the mask is not protective; however, there is no human side of walking as a group on a girl who works as a cashier under difficult conditions, entering a closed area without a mask, and causing fear and sadness." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 302, + 653, + 525, + 707 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 653, + 525, + 707 + ], + "spans": [ + { + "bbox": [ + 302, + 653, + 525, + 707 + ], + "type": "text", + "content": "Reply Tweet: Those who are not vaccinated + those who do not wear masks. I seriously don't understand what's wrong with this team. This team is seriously litmus of intelligence." 
+ } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 303, + 715, + 518, + 755 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 715, + 518, + 755 + ], + "spans": [ + { + "bbox": [ + 303, + 715, + 518, + 755 + ], + "type": "text", + "content": "A.3 An example where reply is not offensive but replied might mislead since it is offensive" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 303, + 760, + 392, + 772 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 760, + 392, + 772 + ], + "spans": [ + { + "bbox": [ + 303, + 760, + 392, + 772 + ], + "type": "text", + "content": "English translation:" + } + ] + } + ], + "index": 26 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 790 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 790 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 790 + ], + "type": "text", + "content": "1548" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 293, + 791, + 300, + 800 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 791, + 300, + 800 + ], + "spans": [ + { + "bbox": [ + 293, + 791, + 300, + 800 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 290, + 111 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 290, + 111 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 290, + 111 + ], + "type": "text", + "content": "Replied Tweet: Prof. 
Bingür Sonmez: Those who say they will not get vaccinated are traitors, we will not allow them to get married with our girls" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 112, + 290, + 164 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 112, + 290, + 164 + ], + "spans": [ + { + "bbox": [ + 67, + 112, + 290, + 164 + ], + "type": "text", + "content": "Reply Tweet: At the point where the cardiovascular surgeon has come, we will not allow traitors who do not get vaccinated to get married with our girls." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 68, + 174, + 285, + 214 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 174, + 285, + 214 + ], + "spans": [ + { + "bbox": [ + 68, + 174, + 285, + 214 + ], + "type": "text", + "content": "A.4 An example where reply is offensive but replied might mislead since it is not offensive" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 68, + 220, + 156, + 231 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 220, + 156, + 231 + ], + "spans": [ + { + "bbox": [ + 68, + 220, + 156, + 231 + ], + "type": "text", + "content": "English translation:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 233, + 289, + 258 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 233, + 289, + 258 + ], + "spans": [ + { + "bbox": [ + 67, + 233, + 289, + 258 + ], + "type": "text", + "content": "Replied Tweet: Vaccination opponents misread the National Anthem" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 260, + 290, + 299 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 260, + 290, + 299 + ], + "spans": [ + { + "bbox": [ + 67, + 260, + 290, + 299 + ], + "type": "text", + "content": "Reply Tweet: Go away the army of brainless people to your village, you can't live in the metropolis without a vaccine." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 68, + 310, + 270, + 336 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 310, + 270, + 336 + ], + "spans": [ + { + "bbox": [ + 68, + 310, + 270, + 336 + ], + "type": "text", + "content": "B Keywords used for Getting Related Tweets" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 346, + 290, + 440 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 346, + 290, + 440 + ], + "spans": [ + { + "bbox": [ + 67, + 346, + 290, + 440 + ], + "type": "text", + "content": "The following keywords were used in our search: aşışiz, asışiz, aşışizlar, asışizlar, aş1 olmayan, as1 olmayan, aş1 olmayanlar, as1 olmayanlar, aş1 olmak istemeyen, as1 olmak istemeyen, aş1 olmak istemeyenler, as1 olmak istemeyenler, aş1 yaptır-mayan, as1 yaptır-mayan, aş1 yaptır-mayanlar, as1 yaptır-mayanlar." + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 286, + 781, + 309, + 800 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 781, + 309, + 800 + ], + "spans": [ + { + "bbox": [ + 286, + 781, + 309, + 800 + ], + "type": "text", + "content": "1549 7" + } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 6 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2023/A Framework for Bidirectional Decoding_ Case Study in Morphological Inflection/911ebe92-f987-4f30-ba8a-6ececcac30ec_content_list.json b/2023/A Framework for Bidirectional Decoding_ Case Study in Morphological Inflection/911ebe92-f987-4f30-ba8a-6ececcac30ec_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..51455ccd95296c58ea7b8f295565ec2e3c235179 --- /dev/null +++ b/2023/A Framework for Bidirectional Decoding_ Case Study in Morphological Inflection/911ebe92-f987-4f30-ba8a-6ececcac30ec_content_list.json @@ -0,0 +1,2985 @@ +[ + { + "type": "text", + "text": "A 
Framework for Bidirectional Decoding: Case Study in Morphological Inflection", + "text_level": 1, + "bbox": [ + 127, + 89, + 870, + 129 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Marc E. Canby and Julia Hockenmaier", + "bbox": [ + 324, + 155, + 673, + 171 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "University of Illinois at Urbana-Champaign", + "bbox": [ + 324, + 172, + 677, + 187 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "{marcec2,juliahmr}@illinois.edu", + "bbox": [ + 344, + 189, + 657, + 204 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 260, + 252, + 339, + 268 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Transformer-based encoder-decoder models that generate outputs in a left-to-right fashion have become standard for sequence-to-sequence tasks. In this paper, we propose a framework for decoding that produces sequences from the \"outside-in\": at each step, the model chooses to generate a token on the left, on the right, or join the left and right sequences. We argue that this is more principled than prior bidirectional decoders. Our proposal supports a variety of model architectures and includes several training methods, such as a dynamic programming algorithm that marginalizes out the latent ordering variable. Our model sets state-of-the-art (SOTA) on the 2022 and 2023 shared tasks, beating the next best systems by over 4.7 and 2.7 points in average accuracy respectively. 
The model performs particularly well on long sequences, can implicitly learn the split point of words composed of stem and affix, and performs better relative to the baseline on datasets that have fewer unique lemmas (but more examples per lemma).", + "bbox": [ + 144, + 279, + 460, + 605 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 114, + 618, + 258, + 633 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Transformer-based encoder-decoder architectures (Bahdanau et al., 2014; Vaswani et al., 2017) that decode sequences from left to right have become dominant for sequence-to-sequence tasks. While this approach is quite straightforward and intuitive, some research has shown that models suffer from this arbitrary constraint. For example, models that decode left-to-right are often more likely to miss tokens near the end of the sequence, while right-to-left models are more prone to making mistakes near the beginning (Zhang et al., 2019; Zhou et al., 2019a). This is a result of the \"snowballing\" effect, whereby the model's use of its own incorrect predictions can lead future predictions to be incorrect (Bengio et al., 2015; Liu et al., 2016).", + "bbox": [ + 115, + 643, + 489, + 883 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "We explore this issue for the task of morphological inflection, where the goal is to learn a mapping from a word's lexeme (e.g. the lemma walk) to a particular form (e.g. walked) specified by a set of morphosyntactic tags (e.g. V;V.PTCP;PST). This has been the focus of recent shared tasks (Cotterell et al., 2016, 2017, 2018; McCarthy et al., 2019; Vylomova et al., 2020; Pimentel et al., 2021; Kodner et al., 2022; Goldman et al., 2023). Most approaches use neural encoder-decoder architectures, e.g recurrent neural networks (RNNs) (Aharoni and Goldberg, 2017; Wu and Cotterell, 2019) or transformers (Wu et al., 2021). To our knowledge, Canby et al. 
(2020) is the only model that uses bidirectional decoding for inflection; it decodes the sequence in both directions simultaneously and returns the one with higher probability.", + "bbox": [ + 507, + 253, + 884, + 526 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "In this paper, we propose a novel framework for bidirectional decoding that supports a variety of model architectures. Unlike previous work (§2), at each step the model chooses to generate a token on the left, generate a token on the right, or join the left and right sequences.", + "bbox": [ + 507, + 527, + 882, + 623 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "This proposal is appealing for several reasons. As a general framework, this approach supports a wide variety of model architectures that may be task-specific. Further, it generalizes L2R and R2L decoders, as the model can choose to generate sequences in a purely unidirectional fashion. Finally, the model is able to decide which generation order is best for each sequence, and can even produce parts of a sequence from each direction. This is particularly appropriate for a task like inflection, where many words are naturally split into stem and affix. 
For example, when producing the form walked, the model may choose to generate the stem", "bbox": [ 507, 625, 884, 834 ], "page_idx": 0 }, { "type": "page_footnote", "text": "$^{2}$ Orthogonal to the concerns in this paper, various data augmentation schemes such as heuristic alignment or rule-based methods (Kann and Schütze, 2017; Anastasopoulos and Neubig, 2019) or the use of multilingual data (Bergmanis et al., 2017; McCarthy et al., 2019) have been proposed to improve these standard architectures.", "bbox": [ 507, 845, 882, 917 ], "page_idx": 0 }, { "type": "page_footnote", "text": "Our code is available at https://github.com/marccanby/bidi_decoding/tree/main.", "bbox": [ 112, 891, 487, 917 ], "page_idx": 0 }, { "type": "page_number", "text": "4485", "bbox": [ 480, 927, 519, 940 ], "page_idx": 0 }, { "type": "footer", "text": "Findings of the Association for Computational Linguistics: EMNLP 2023, pages 4485-4507 December 6-10, 2023 ©2023 Association for Computational Linguistics", "bbox": [ 216, 945, 779, 972 ], "page_idx": 0 }, { "type": "text", "text": "walk from the left and the suffix ed from the right.", "bbox": [ 112, 84, 489, 99 ], "page_idx": 1 }, { "type": "text", "text": "We explore several methods for training models under this framework, and find that they are highly effective on the 2023 SIGMORPHON shared task on inflection (Goldman et al., 2023). Our method improves by over 4 points in average accuracy over a typical L2R model, and one of our loss functions is particularly adept at learning split points for words with a clear affix. 
We also set SOTA on both the 2022 and 2023 shared tasks (Kodner et al., 2022), which have very different data distributions.", + "bbox": [ + 112, + 99, + 489, + 261 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2 Prior Bidirectional Decoders", + "text_level": 1, + "bbox": [ + 112, + 273, + 396, + 288 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Various bidirectional decoding approaches have been proposed for tasks such as machine translation and abstractive summarization, including ones that use some form of regularization to encourage the outputs from both directions to agree (Liu et al., 2016; Zhang et al., 2019; Shan et al., 2019), or algorithms where the model first decodes the entire sequence in the R2L direction and then conditions on that sequence when decoding in the L2R direction (Zhang et al., 2018; Al-Sabahi et al., 2018). Still more methods utilize synchronous decoding, where the model decodes both directions at the same time and either meet in the center (Zhou et al., 2019b; Imamura and Sumita, 2020) or proceed until each direction's hypothesis is complete (Zhou et al., 2019a; Xu and Yvon, 2021). Lawrence et al. 
(2019) allow the model to look into the future by filling in placeholder tokens at each timestep.", + "bbox": [ + 112, + 298, + 489, + 589 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "3 A Bidirectional Decoding Framework", + "text_level": 1, + "bbox": [ + 112, + 600, + 473, + 615 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The following sections present a general framework for training and decoding models with bidirectional decoding that is irrespective of model architecture, subject to the constraints discussed in §3.3.", + "bbox": [ + 112, + 625, + 487, + 689 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "3.1 Probability Factorization", + "text_level": 1, + "bbox": [ + 112, + 700, + 359, + 715 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "For unidirectional models, the probability of an L2R sequence $\vec{y} = y_1\cdots y_n$ or an R2L sequence $\overleftarrow{y} = y_n\cdots y_1$ given an input $\mathbf{x}$ is defined as", + "bbox": [ + 112, + 721, + 485, + 769 + ], + "page_idx": 1 + }, + { + "type": "equation", + "text": "\n$$\nP (\vec {\boldsymbol {y}} | \boldsymbol {x}) = \prod_ {i = 1} ^ {| \boldsymbol {y} |} P (\vec {\boldsymbol {y}} _ {i} | \vec {\boldsymbol {y}} _ {< i}, \boldsymbol {x}) \tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 181, + 781, + 487, + 825 + ], + "page_idx": 1 + }, + { + "type": "equation", + "text": "\n$$\nP \left(\overleftarrow {\mathbf {y}} | \boldsymbol {x}\right) = \prod_ {j = 1} ^ {| \boldsymbol {y} |} P \left(\overleftarrow {\mathbf {y}} _ {j} | \overleftarrow {\mathbf {y}} _ {< j}, \boldsymbol {x}\right) \tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 181, + 828, + 487, + 875 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "where $\overrightarrow{y}_i = y_i$ or $\overleftarrow{y}_j = y_{n - j + 1}$ is the $i$th or $j$th token in a particular direction. 
Generation begins", + "bbox": [ + 112, + 885, + 487, + 919 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "with a start-of-sentence token; at each step a token is chosen based on those preceding, and the process halts once an end-of-sentence token is predicted.", + "bbox": [ + 507, + 84, + 880, + 131 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In contrast, our bidirectional scheme starts with an empty prefix $\\$$ and suffix #. At each timestep, the model chooses to generate the next token of either the prefix or the suffix, and then whether or not to join the prefix and suffix. If a join is predicted, then generation is complete.", + "bbox": [ + 507, + 131, + 882, + 228 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "We define an ordering $\\mathbf{o} = o^{(1)} \\cdots o^{(n)}$ as a sequence of left and right decisions: that is, $o^{(t)} \\in \\{L, R\\}$ . We use $y^{(t)}$ to refer to the token generated at time $t$ under a particular ordering, and $\\overrightarrow{\\mathbf{y}}^{(\\leq t)}$ and $\\overleftarrow{\\mathbf{y}}^{(\\leq t)}$ to refer to the prefix and suffix generated up to (and including) time $t$ . An example derivation of the word walked is shown below:", + "bbox": [ + 507, + 228, + 882, + 341 + ], + "page_idx": 1 + }, + { + "type": "table", + "img_path": "images/4db9c17ef7a20b382a043dcd235f3ff3c8099691b64e3467ed0ce5005228b479.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
o(t)      | y(t)      | →y(≤t) | ←y(≤t)
          |           | $      | #
o(1) = L  | y(1) = w  | $w     | #
o(2) = L  | y(2) = a  | $wa    | #
o(3) = R  | y(3) = d  | $wa    | d#
o(4) = L  | y(4) = l  | $wal   | d#
o(5) = R  | y(5) = e  | $wal   | ed#
o(6) = L  | y(6) = k  | $walk  | ed#
", + "bbox": [ + 566, + 353, + 826, + 470 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Dropping the dependence on $\\pmb{x}$ for notational convenience, we define the joint probability of output sequence $\\pmb{y}$ and ordering $\\pmb{o}$ as", + "bbox": [ + 507, + 489, + 882, + 537 + ], + "page_idx": 1 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} P (\\boldsymbol {y}, \\boldsymbol {o}) = \\prod_ {t = 1} ^ {| \\boldsymbol {y} |} P (o ^ {(t)} | \\vec {\\boldsymbol {y}} ^ {(< t)}, \\overleftarrow {\\boldsymbol {y}} ^ {(< t)}) \\cdot \\\\ P \\left(y ^ {(t)} \\mid o ^ {(t)}, \\overrightarrow {\\mathbf {y}} ^ {(< t)}, \\overleftarrow {\\mathbf {y}} ^ {(< t)}\\right) \\cdot Q ^ {(t)} \\tag {3} \\\\ \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 521, + 558, + 880, + 613 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "where $Q^{(t)}$ is the probability of joining (or not joining) the prefix and suffix:", + "bbox": [ + 507, + 625, + 880, + 657 + ], + "page_idx": 1 + }, + { + "type": "equation", + "text": "\n$$\nQ ^ {(t)} = \\left\\{ \\begin{array}{l l} P (j o i n \\mid \\overrightarrow {\\boldsymbol {y}} ^ {(\\leq t)}, \\overleftarrow {\\boldsymbol {y}} ^ {(\\leq t)}) & \\text {i f} t = | \\boldsymbol {y} | \\\\ 1 - P (j o i n \\mid \\overrightarrow {\\boldsymbol {y}} ^ {(\\leq t)}, \\overleftarrow {\\boldsymbol {y}} ^ {(\\leq t)}) & \\text {o t h e r w i s e} \\end{array} \\right.\n$$\n", + "text_format": "latex", + "bbox": [ + 524, + 678, + 848, + 712 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "3.2 Likelihood and MAP Inference", + "text_level": 1, + "bbox": [ + 507, + 724, + 800, + 738 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To compute the likelihood of a particular sequence $\\pmb{y}$ , we need to marginalize over all orderings: $P(\\pmb{y}|\\pmb{x}) = \\sum_{o} P(\\pmb{y}, o|\\pmb{x})$ . 
Since we cannot enumerate all $2^{|y|}$ orderings, we have developed an exact $O(|\pmb{y}|^2)$ dynamic programming algorithm, reminiscent of the forward algorithm for HMMs.", + "bbox": [ + 507, + 745, + 882, + 840 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To simplify notation, let $P_L(\overrightarrow{y}_i \mid \overrightarrow{y}_{< i}, \overleftarrow{y}_{< j})$ (or $P_{R}(\overleftarrow{y}_{j} \mid \overrightarrow{y}_{< i}, \overleftarrow{y}_{< j})$) be the probability of", + "bbox": [ + 507, + 839, + 882, + 873 + ], + "page_idx": 1 + }, + { + "type": "page_footnote", + "text": "$^{3}$ We use superscripts to refer to timesteps, and subscripts for sequence positions. Note that if, at a particular timestep $t$, we have prefix $\overrightarrow{y}_{\leq i}$ and suffix $\overleftarrow{y}_{\leq j}$, then $i + j = t$.", + "bbox": [ + 507, + 881, + 882, + 919 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "4486", + "bbox": [ + 480, + 928, + 519, + 940 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "generating the $i$th token from the left (or the $j$th token from the right), conditioned on $\overrightarrow{\pmb{y}}_{< i}$ and $\overleftarrow{\pmb{y}}_{< j}$, the prefix and suffix generated thus far:", + "bbox": [ + 112, + 84, + 485, + 130 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\begin{array}{l} P_{L}(\overrightarrow{y}_{i} \mid \overrightarrow{y}_{< i}, \overleftarrow{y}_{< j}) = P(L \mid \overrightarrow{y}_{< i}, \overleftarrow{y}_{< j}) \cdot P(\overrightarrow{y}_{i} \mid L, \overrightarrow{y}_{< i}, \overleftarrow{y}_{< j}) \\ P_{R}(\overleftarrow{y}_{j} \mid \overrightarrow{y}_{< i}, \overleftarrow{y}_{< j}) = P(R \mid \overrightarrow{y}_{< i}, \overleftarrow{y}_{< j}) \cdot P(\overleftarrow{y}_{j} \mid R, \overrightarrow{y}_{< i}, \overleftarrow{y}_{< j}) \end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 
115, + 131, + 485, + 167 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Let $Q_{ij}$ be the join probability for $\overrightarrow{\pmb{y}}_{\leq i}$ and $\overleftarrow{\pmb{y}}_{\leq j}$:", + "bbox": [ + 114, + 168, + 485, + 189 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\nQ_{ij} = \left\{ \begin{array}{ll} P(\textit{join} \mid \overrightarrow{\boldsymbol{y}}_{\leq i}, \overleftarrow{\boldsymbol{y}}_{\leq j}) & \text{if } i + j = |\boldsymbol{y}| \\ 1 - P(\textit{join} \mid \overrightarrow{\boldsymbol{y}}_{\leq i}, \overleftarrow{\boldsymbol{y}}_{\leq j}) & \text{otherwise} \end{array} \right. \tag {4}\n$$\n", + "text_format": "latex", + "bbox": [ + 132, + 193, + 487, + 225 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Finally, denote the joint probability of a prefix $\overrightarrow{\pmb{y}}_{\leq i}$ and suffix $\overleftarrow{\pmb{y}}_{\leq j}$ by $f[i,j]$.", + "bbox": [ + 112, + 231, + 485, + 265 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "We set the probability of an empty prefix and suffix (the base case) to 1:", + "bbox": [ + 112, + 266, + 485, + 296 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\nf [ 0, 0 ] = 1\n$$\n", + "text_format": "latex", + "bbox": [ + 257, + 303, + 342, + 319 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The probability of a non-empty prefix $\overrightarrow{y}_{\leq i}$ and empty suffix $\epsilon$ can be computed by multiplying $f[i - 1,0]$ (the probability of prefix $\overrightarrow{y}_{< i}$ and empty suffix $\epsilon$) by $P_L(\overrightarrow{y}_i \mid \overrightarrow{y}_{< i}, \epsilon)$ (the probability of generating $\overrightarrow{y}_i$) and the join probability $Q_{i0}$:", + "bbox": [ + 112, + 325, + 485, + 406 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\nf[i, 0] = f[i - 1, 0] \cdot P_{L}(\overrightarrow{y}_{i} \mid \overrightarrow{y}_{< i}, \epsilon) \cdot Q_{i0}\n$$\n", + "text_format": "latex", + "bbox": [ + 139, + 411, + 460, + 429 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Analogously, we define", + "bbox": [ + 114, + 435, + 290, + 450 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\nf[0, j] = f[0, j - 1] \cdot P_{R}(\overleftarrow{y}_{j} \mid \epsilon, \overleftarrow{y}_{< j}) \cdot Q_{0j}\n$$\n", + "text_format": "latex", + "bbox": [ + 132, + 455, + 465, + 475 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Finally, $f[i,j]$ represents the case where both prefix $\overrightarrow{y}_{\leq i}$ and suffix $\overleftarrow{y}_{\leq j}$ are non-empty. This prefix-suffix pair can be produced either by appending $\vec{y}_i$ to the prefix $\vec{y}_{
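The joint probability of a sequence and one ordering, Eq. (3), can be sketched in a few lines of Python. This is illustrative code, not the paper's released implementation; `p_left`, `p_right`, and `p_join` are hypothetical callables standing in for the trained model's direction-plus-token and join conditionals.

```python
def ordering_joint_probability(y, o, p_left, p_right, p_join):
    """Compute P(y, o) as in Eq. (3) for one ordering o of L/R decisions.

    Hypothetical model stand-ins (not the authors' API):
      p_left(prefix, suffix, ch)  ~ P(L | state) * P(ch | L, state)
      p_right(prefix, suffix, ch) ~ P(R | state) * P(ch | R, state)
      p_join(prefix, suffix)      ~ P(join | state)
    """
    n = len(y)
    i = j = 0          # current prefix / suffix lengths
    prob = 1.0
    for side in o:     # o is a string such as "LLRLRL"
        prefix, suffix = y[:i], y[n - j:]
        if side == "L":
            prob *= p_left(prefix, suffix, y[i])
            i += 1
        else:
            prob *= p_right(prefix, suffix, y[n - j - 1])
            j += 1
        # Q^{(t)}: join exactly when prefix and suffix cover the whole word.
        q = p_join(y[:i], y[n - j:])
        prob *= q if i + j == n else 1.0 - q
    return prob
```

For the derivation of walked shown in the table, the ordering is o = "LLRLRL": the prefix grows to $walk while the suffix grows to ed#, and the join is predicted at the final timestep.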
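The $O(|\pmb{y}|^2)$ marginalization of §3.2 can likewise be sketched as a forward-style dynamic program over prefix/suffix lengths $(i, j)$, since the state $(i, j)$ fully determines the prefix and suffix for a fixed $\pmb{y}$. Again, `p_left`, `p_right`, and `p_join` are hypothetical stand-ins for a trained model, and `f[i][j]` mirrors the $f[i,j]$ table in the text; this is a sketch under those assumptions, not the authors' code.

```python
def marginal_likelihood(y, p_left, p_right, p_join):
    """Exact O(|y|^2) computation of P(y) = sum over all 2^|y| orderings,
    analogous to the forward algorithm for HMMs.

    f[i][j] is the joint probability of prefix y_{<=i} and the suffix
    consisting of the last j characters of y. p_left/p_right/p_join are
    hypothetical model callables with the same signatures as above.
    """
    n = len(y)

    def q(i, j):
        # Join probability Q_{ij} of Eq. (4).
        pj = p_join(y[:i], y[n - j:])
        return pj if i + j == n else 1.0 - pj

    f = [[0.0] * (n + 1) for _ in range(n + 1)]
    f[0][0] = 1.0  # base case: empty prefix and empty suffix
    for i in range(n + 1):
        for j in range(n + 1 - i):   # only states with i + j <= |y|
            if i == 0 and j == 0:
                continue
            total = 0.0
            if i > 0:  # last step appended y_i to the prefix
                total += f[i - 1][j] * p_left(y[:i - 1], y[n - j:], y[i - 1])
            if j > 0:  # last step added the j-th token from the right
                total += f[i][j - 1] * p_right(y[:i], y[n - j + 1:], y[n - j])
            f[i][j] = total * q(i, j)

    # Complete analyses are exactly the states with i + j = |y|.
    return sum(f[i][n - i] for i in range(n + 1))
```

As a sanity check: with a model that assigns probability 0.5 to the correct character on each side and joins deterministically when the word is complete, every one of the $2^{|y|}$ orderings contributes $(1/2)^{|y|}$, so the marginal likelihood is exactly 1.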